Derivatives

Derivatives are the foundation of machine learning optimization. Every time a neural network learns, it's using derivatives to figure out how to adjust its parameters.

Resources

Same as previous page

I recommend watching only Professor Leonard, and studying and solving all the problems in the Calculus I with integrated Precalculus book.

| Resource | Type | Cost | Link | Notes |
| --- | --- | --- | --- | --- |
| 3Blue1Brown Calculus | Video Series | Free | YouTube | Best intuitive introduction available |
| Khan Academy Calculus | Interactive Course | Free | khanacademy.org | Solid practice problems and explanations |
| MIT OCW 18.01 | Full Course | Free | ocw.mit.edu | Rigorous treatment with problem sets |
| Paul's Online Math Notes | Reference | Free | tutorial.math.lamar.edu | Great for quick lookups and examples |
| Professor Leonard | Video Lectures | Free | YouTube | |
| Calculus I with integrated Precalculus | Book | Paid | Book link | |

The Core Concept

A derivative measures how much a function changes when you change its input by a tiny amount. If f(x) is your function, f'(x) tells you the slope of the curve at point x.
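The "tiny amount" idea can be made concrete with a finite difference: nudge the input by a small h and measure how much the output moves. A minimal sketch (assuming f(x) = x², whose slope at x = 3 should be 6):

```python
def numerical_derivative(f, x, h=1e-6):
    """Approximate f'(x) with a central difference: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Slope of x^2 at x = 3; the true derivative 2x gives 6.
slope = numerical_derivative(lambda x: x**2, 3.0)
print(round(slope, 4))  # close to 6.0
```

This is exactly the limit definition with h left small but finite, which is also a handy way to sanity-check hand-computed derivatives.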

In ML terms: if your loss function is L(w) where w is a weight, then L'(w) tells you how much the loss increases or decreases when you change that weight slightly.
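To make that concrete, here is a hypothetical one-parameter loss for a single data point (the names x, y, w and the squared-error form are illustrative, not from the text above):

```python
# Toy squared-error loss for one data point: L(w) = (w*x - y)^2
x, y = 2.0, 10.0

def loss(w):
    return (w * x - y) ** 2

def dloss_dw(w):
    # Analytic derivative via the chain rule: 2 * (w*x - y) * x
    return 2 * (w * x - y) * x

w = 3.0
print(loss(w))      # (6 - 10)^2 = 16.0
print(dloss_dw(w))  # 2 * (-4) * 2 = -16.0
```

The negative derivative says the loss decreases as w increases, so an optimizer should push w upward.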

Essential Rules

See the derivatives section of Paul's Calculus Cheat Sheet (tutorial.math.lamar.edu) for the standard rules: power, product, quotient, and chain.
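The core rules can be verified numerically with a finite-difference check. A quick sketch (the test point x = 1.3 is arbitrary):

```python
import math

def d(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
# Power rule: d/dx x^3 = 3x^2
assert abs(d(lambda t: t**3, x) - 3 * x**2) < 1e-5
# Product rule: d/dx [x * sin x] = sin x + x cos x
assert abs(d(lambda t: t * math.sin(t), x) - (math.sin(x) + x * math.cos(x))) < 1e-5
# Chain rule: d/dx sin(x^2) = 2x cos(x^2)
assert abs(d(lambda t: math.sin(t**2), x) - 2 * x * math.cos(x**2)) < 1e-5
print("all rules check out")
```

The chain rule is the one to internalize deeply: backpropagation is nothing more than applying it repeatedly.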

Why This Matters

When you hear "gradient descent," that's just following derivatives downhill to minimize a function. When you see "backpropagation," that's computing derivatives using the chain rule through a network.
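"Following derivatives downhill" fits in a few lines. A minimal sketch, assuming a toy loss L(w) = (w - 5)² whose minimum is at w = 5 (the loss, learning rate, and step count are all illustrative choices):

```python
# Gradient descent on L(w) = (w - 5)^2
def grad(w):
    return 2 * (w - 5)  # L'(w)

w = 0.0    # starting guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)  # step opposite the derivative, i.e. downhill

print(round(w, 3))  # converges toward 5.0, the minimizer
```

Each update moves w in the direction that decreases the loss; neural network training does the same thing, just with millions of parameters and the chain rule supplying each derivative.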

Understanding derivatives geometrically (as slopes) and algebraically (as limits) gives you the intuition to debug optimization problems and understand why learning algorithms work the way they do.