You don't need a full calculus course to understand AI — you need the specific concepts that appear everywhere in machine learning. Derivatives, gradients, chain rule, gradient descent. Five focused days.
This is a text-first course that links out to the best supporting material on the internet instead of trying to replace it. The goal is to make this the best course on calculus for AI you can find — even without producing a single minute of custom video.
Every calculus concept in this course is introduced in the context of why AI needs it. No abstract calculus divorced from its application.
The intuition comes before the notation. You understand what a gradient means before you compute one symbolically.
Grant Sanderson's Essence of Calculus series is the best visual introduction to calculus ever made. This course links to the relevant episodes at the right moments.
Each day is designed to finish in about an hour of focused reading plus worked examples. No live classes, no quizzes.
Each day stands alone. Read them in order for the full picture, or jump straight to the day that answers the question you have today.
What a derivative measures and why it's the foundation of optimization. Power rule, product rule, quotient rule. How neural networks use derivatives in every training step.
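The power rule can be sanity-checked numerically. A minimal sketch (the function and helper names here are illustrative, not from the course): the central-difference approximation of d/dx x³ at x = 2 should land very close to the power-rule answer 3x² = 12.

```python
def f(x):
    # Illustrative function: f(x) = x^3
    return x ** 3

def numerical_derivative(f, x, h=1e-5):
    # Central difference: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
analytic = 3 * x ** 2                 # power rule: d/dx x^3 = 3x^2
numeric = numerical_derivative(f, x)  # finite-difference estimate
print(analytic, round(numeric, 4))    # both ≈ 12.0
```

This numeric-vs-analytic check is a habit worth keeping: it is exactly how autodiff implementations are tested (gradient checking).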
The chain rule is backpropagation. Why composing functions requires multiplying their derivatives. Walking through a simple neural net backward step by step.
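The "multiply the derivatives of composed functions" idea can be sketched in a few lines. Assuming the illustrative composition h(x) = sin(x²) (outer function sin, inner function x²), the chain rule gives h′(x) = cos(x²) · 2x, which a finite-difference check confirms:

```python
import math

def h(x):
    # Composition: outer sin, inner x^2
    return math.sin(x ** 2)

def h_prime(x):
    # Chain rule: derivative of outer (cos of inner) times derivative of inner (2x)
    return math.cos(x ** 2) * 2 * x

x = 1.3
numeric = (h(x + 1e-6) - h(x - 1e-6)) / 2e-6  # central difference
print(round(h_prime(x), 5), round(numeric, 5))
```

Backpropagation is this same multiplication, applied layer by layer from the loss back to the weights.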
Extending derivatives to functions of multiple variables. The gradient vector and what it means geometrically. Why the gradient points in the direction of steepest increase.
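The "steepest increase" claim is easy to verify numerically. A minimal sketch, assuming the illustrative function f(x, y) = x² + 3y², whose gradient is (2x, 6y): stepping a small amount along the gradient raises f, and stepping against it lowers f.

```python
def f(x, y):
    # Illustrative bowl-shaped function
    return x ** 2 + 3 * y ** 2

def gradient(x, y):
    # Partial derivatives: df/dx = 2x, df/dy = 6y
    return (2 * x, 6 * y)

x, y = 1.0, 0.5
gx, gy = gradient(x, y)
eps = 1e-3
assert f(x + eps * gx, y + eps * gy) > f(x, y)  # along gradient: f increases
assert f(x - eps * gx, y - eps * gy) < f(x, y)  # against gradient: f decreases
```

Gradient descent (Day 5) is nothing more than repeatedly taking the second step.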
Finding minima and maxima with calculus. Second derivative test. Saddle points. Why neural network training is a high-dimensional optimization problem.
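The second derivative test in two variables can be sketched with the classic saddle example f(x, y) = x² − y² (an illustrative choice, not from the course). Its Hessian at the origin is [[2, 0], [0, −2]]; a negative Hessian determinant signals a saddle point: the surface curves up in x and down in y.

```python
# Second derivative test for f(x, y) = x^2 - y^2 at (0, 0).
# Second partials: f_xx = 2, f_yy = -2, f_xy = 0.
fxx, fyy, fxy = 2.0, -2.0, 0.0
det = fxx * fyy - fxy ** 2   # Hessian determinant = -4
print("saddle" if det < 0 else "extremum")  # saddle
```

Saddle points matter for AI because in high-dimensional loss landscapes they vastly outnumber local minima, which is part of why plain gradient descent can stall.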
How gradient descent uses the gradient to update weights. Learning rate. Stochastic vs. batch gradient descent. Walking through backpropagation in a 2-layer network.
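The update rule itself fits in one line: w ← w − lr · ∇loss(w). A minimal sketch, minimizing the illustrative loss(w) = (w − 3)², whose gradient is 2(w − 3); the learning rate and iteration count are arbitrary choices for the demo.

```python
def grad(w):
    # Gradient of loss(w) = (w - 3)^2
    return 2 * (w - 3)

w = 0.0   # initial weight
lr = 0.1  # learning rate
for _ in range(100):
    w -= lr * grad(w)  # gradient descent update: w <- w - lr * grad

print(round(w, 4))  # converges toward 3.0, the minimizer
```

Try lr = 1.1 to watch the iterates diverge — the learning-rate tradeoff discussed in the linked videos, in four lines of code.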
Instead of shooting our own videos, we link to the best deep-dives already on YouTube. Watch them alongside the course. All external, all free, all from builders who ship this stuff.
Grant Sanderson's visual calculus series — the best intuitive introduction to derivatives and integrals ever made.
How the chain rule connects to backpropagation in neural networks — explained visually and mathematically.
Intuitive explanations of gradient descent — how it works, why it works, and the learning rate tradeoffs.
Mathematical walkthrough of backpropagation — the chain rule applied to train a neural network.
Visual explanations of partial derivatives, gradient vectors, and multivariable optimization.
Applied calculus in the context of machine learning — the concepts that appear most in practice.
The best way to go deeper on any topic is to read canonical open-source implementations. These repositories implement the core patterns covered in this course.
Hands-On Machine Learning (3rd edition) by Aurélien Géron. The best practical ML book — builds on the calculus concepts in this course.
Andrej Karpathy's minimal autograd engine. 100 lines of Python that implement backpropagation — the perfect complement to Day 5 of this course.
NumPy — the foundational numerical computing library for scientific Python. Its array and matrix operations (plus numerical differencing via np.gradient) underpin essentially all AI math work.
Symbolic mathematics in Python. Lets you compute derivatives analytically — useful for verifying your manual calculus work.
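For example, SymPy's `diff` recovers the chain-rule result for sin(x²) symbolically, which is a quick way to check hand-worked derivatives from the earlier days:

```python
import sympy as sp

x = sp.symbols("x")
expr = sp.sin(x ** 2)
derivative = sp.diff(expr, x)  # symbolic differentiation
print(derivative)              # 2*x*cos(x**2)
```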
You can code but the math in ML papers is a barrier. This course gives you the calculus vocabulary to understand what gradient descent and backprop actually do.
You use sklearn and PyTorch but want to understand what's happening in the optimization loop. This course fills that gap without a full math degree.
You took calculus but never understood why. This course shows you the direct application to AI and makes the math memorable.
The 2-day in-person Precision AI Academy bootcamp covers the math and engineering behind AI systems — hands-on with Bo. 5 U.S. cities. $1,490. 40 seats max. June–October 2026 (Thu–Fri).
Reserve Your Seat