Lesson 1.5: Gradient Descent Training


Gradient descent is an iterative optimization algorithm that minimizes the loss function by adjusting weights in the direction of steepest descent.

How it works:

  • Calculates the gradient (∂Loss/∂W), which measures how much each weight contributes to the error
  • Updates each weight using: W_new = W_old - η * (∂Loss/∂W), where η is the learning rate
  • Repeats until the loss converges (see the sketch after this list)
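As a concrete illustration, here is a minimal Python/NumPy sketch of these three steps, fitting a one-variable linear model y ≈ w*x + b with mean squared error. The toy data, learning rate, and epoch count are illustrative assumptions, not part of the lesson.

    import numpy as np

    # Hypothetical toy data: samples from y = 2x + 1 plus noise (an assumption for this sketch).
    rng = np.random.default_rng(seed=0)
    x = rng.uniform(-1.0, 1.0, size=100)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

    w, b = 0.0, 0.0     # weights to learn
    eta = 0.1           # learning rate (η)
    n_epochs = 200

    for epoch in range(n_epochs):
        # Forward pass: predictions under the current weights.
        y_pred = w * x + b
        error = y_pred - y
        loss = np.mean(error ** 2)  # mean squared error

        # Step 1: compute the gradients ∂Loss/∂w and ∂Loss/∂b for MSE.
        grad_w = 2.0 * np.mean(error * x)
        grad_b = 2.0 * np.mean(error)

        # Step 2: update rule W_new = W_old - η * (∂Loss/∂W).
        w -= eta * grad_w
        b -= eta * grad_b

    # Step 3: after repeated updates the loss converges; w, b approach 2 and 1.
    print(f"learned w={w:.3f}, b={b:.3f}, final loss={loss:.5f}")

Running the sketch, the printed w and b approach the true values 2 and 1, showing how repeated gradient steps drive the loss toward its minimum.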

© 2025 Sanjeeb KC. All rights reserved.