Building on the previous example, which trained the neural network with the SGD optimizer and a momentum of 0.5, here we use a momentum of 0.9 and a decay of 1e-3.
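To make these settings concrete, below is a minimal, self-contained sketch of SGD with learning-rate decay and momentum. The class name SGDMomentum, its method names, and the toy update loop are illustrative placeholders under the usual inverse-time decay schedule, not the exact code from the earlier example.

```python
import numpy as np

class SGDMomentum:
    """Illustrative sketch of SGD with learning-rate decay and momentum."""

    def __init__(self, learning_rate=1.0, decay=0.0, momentum=0.0):
        self.learning_rate = learning_rate
        self.current_learning_rate = learning_rate
        self.decay = decay
        self.momentum = momentum
        self.iterations = 0

    def pre_update(self):
        # Inverse-time decay schedule (assumed here): lr_t = lr / (1 + decay * t)
        if self.decay:
            self.current_learning_rate = self.learning_rate / (
                1.0 + self.decay * self.iterations)

    def update(self, param, grad, velocity):
        # velocity = momentum * velocity - lr_t * gradient, then apply it
        velocity[:] = self.momentum * velocity - self.current_learning_rate * grad
        param += velocity

    def post_update(self):
        self.iterations += 1


# The configuration discussed here: momentum 0.9, decay 1e-3
optimizer = SGDMomentum(learning_rate=1.0, decay=1e-3, momentum=0.9)

# Toy usage: one parameter tensor, a fixed gradient, and a velocity buffer
w = np.zeros((2, 3))
dw = np.ones_like(w)
v = np.zeros_like(w)

for step in range(3):
    optimizer.pre_update()
    optimizer.update(w, dw, v)
    optimizer.post_update()
    print(step, optimizer.current_learning_rate, w[0, 0])
```

Raising the momentum from 0.5 to 0.9 means each step retains more of the previous step's direction, which smooths the updates and helps the optimizer carry through small local minima.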
Optimizers with live results:
Stochastic Gradient Descent:
Optimizer: SGD. Learning Rate: 1.0.
Optimizer: SGD. Learning Rate: 0.5.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-2.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3. Momentum: 0.5.
Optimizer: SGD. Learning Rate: 1.0. Decay: 1e-3. Momentum: 0.9.
AdaGrad:
Optimizer: AdaGrad. Decay: 1e-4.
RMSProp:
Optimizer: RMSProp. Decay: 1e-4.
Optimizer: RMSProp. Decay: 1e-5. Rho: 0.999 (see the RMSProp sketch below).
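As a rough sketch of what rho controls in the RMSProp runs above: rho is the decay rate of the running cache of squared gradients (separate from the learning-rate Decay listed in the captions), so a higher rho such as 0.999 keeps a longer memory of past gradients. The learning rate and epsilon values below are assumed placeholders, not settings taken from these runs.

```python
import numpy as np

def rmsprop_update(param, grad, cache, learning_rate=0.001, rho=0.999, epsilon=1e-7):
    """One RMSProp step (illustrative sketch).

    rho controls how much of the previous squared-gradient cache is kept;
    each weight's step is its gradient scaled by 1 / (sqrt(cache) + epsilon).
    """
    cache[:] = rho * cache + (1.0 - rho) * grad ** 2
    param -= learning_rate * grad / (np.sqrt(cache) + epsilon)

# Toy usage
w = np.zeros(3)
dw = np.array([0.5, -1.0, 2.0])
cache = np.zeros_like(w)
rmsprop_update(w, dw, cache, rho=0.999)
print(w, cache)
```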
Adam: