An animation showing the live classification results and weights of a neural network while it trains with an SGD optimizer whose learning rate decays fairly quickly from 1.0.
Optimizers with live results:
Stochastic Gradient Descent:
Optimizer: SGD. Learning Rate: 1.0.
Optimizer: SGD. Learning Rate: 0.5.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-2.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3. Momentum: 0.5.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3. Momentum: 0.9.
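The SGD variants above combine three knobs: a base learning rate, a time-based decay that shrinks it over training, and a momentum term that carries over a fraction of the previous update. A minimal sketch of that update rule (all names here are illustrative, not any particular library's API):

```python
def sgd_step(w, grad, state, lr0=1.0, decay=1e-3, momentum=0.5):
    """One SGD update with time-based learning-rate decay and momentum.
    Illustrative helper: `state` carries the step count and velocity."""
    t, v = state
    lr = lr0 / (1.0 + decay * t)   # learning rate shrinks as training proceeds
    v = momentum * v - lr * grad   # velocity keeps a fraction of past updates
    return w + v, (t + 1, v)

# Minimize f(w) = w**2 (gradient 2*w), starting from w = 5.0.
w, state = 5.0, (0, 0.0)
for _ in range(100):
    w, state = sgd_step(w, 2.0 * w, state)
```

With momentum 0.9 the velocity term dominates and the weights overshoot more before settling, which is the visible difference between the last two SGD animations.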
AdaGrad:
Optimizer: AdaGrad. Decay: 1e-4.
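AdaGrad needs no hand-tuned per-parameter rate: each weight's step is divided by the root of its accumulated squared gradients, so frequently-updated weights slow themselves down. A sketch of that rule (illustrative names; the base-rate decay from the caption is omitted for brevity):

```python
def adagrad_step(w, grad, cache, lr=1.0, eps=1e-8):
    """AdaGrad: divide each update by the root of the running sum of
    squared gradients. Illustrative sketch, not a library API."""
    cache = cache + grad ** 2                      # ever-growing accumulator
    return w - lr * grad / (cache ** 0.5 + eps), cache

# Minimize f(w) = w**2 (gradient 2*w), starting from w = 5.0.
w, cache = 5.0, 0.0
for _ in range(200):
    w, cache = adagrad_step(w, 2.0 * w, cache)
```

Because the accumulator only grows, AdaGrad's effective learning rate falls monotonically, which is why long runs tend to stall without a generous base rate.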
RMSProp:
Optimizer: RMSProp. Decay: 1e-4.
Optimizer: RMSProp. Decay: 1e-5. rho: 0.999.
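RMSProp fixes AdaGrad's stalling by replacing the ever-growing sum with an exponential moving average of squared gradients; `rho` is that average's decay factor, so rho: 0.999 (as in the second animation) averages over a much longer gradient history than the common default of 0.9. A sketch under illustrative values:

```python
def rmsprop_step(w, grad, avg, lr=0.1, rho=0.9, eps=1e-8):
    """RMSProp: normalize by an exponential moving average of squared
    gradients instead of AdaGrad's full sum. Illustrative sketch."""
    avg = rho * avg + (1.0 - rho) * grad ** 2   # `rho` sets the averaging window
    return w - lr * grad / (avg ** 0.5 + eps), avg

# Minimize f(w) = w**2 (gradient 2*w), starting from w = 5.0.
w, avg = 5.0, 0.0
for _ in range(200):
    w, avg = rmsprop_step(w, 2.0 * w, avg)
```

Since the average can shrink again when gradients do, the effective step size no longer decays to zero; near a minimum the updates oscillate on the scale of the base rate rather than stalling.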
Adam: