# Neural Network Concepts Animations
### Neural Network Concepts Animation Links:

**Visualizing Neural Network Sizes**
**Visualization of an example Dogs vs Cats neural network classifier**
**Visualization of the forward pass calculation and path for a neural network**
**Single neuron with 3 inputs example**
**A single neuron with 4 inputs**
**3 neuron layer with 4 inputs**
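The single-neuron and 3-neuron-layer items above can be sketched in plain Python; the input, weight, and bias values here are illustrative, not taken from the animations:

```python
# A layer of 3 neurons, each receiving the same 4 inputs (illustrative values).
inputs = [1.0, 2.0, 3.0, 2.5]

weights = [[0.2, 0.8, -0.5, 1.0],       # neuron 1
           [0.5, -0.91, 0.26, -0.5],    # neuron 2
           [-0.26, -0.27, 0.17, 0.87]]  # neuron 3
biases = [2.0, 3.0, 0.5]

layer_outputs = []
for neuron_weights, neuron_bias in zip(weights, biases):
    # Each neuron: weighted sum of its inputs, plus its bias.
    output = sum(w * x for w, x in zip(neuron_weights, inputs)) + neuron_bias
    layer_outputs.append(output)

print(layer_outputs)
```

Each neuron has its own weight vector and bias but sees the same inputs, which is why the layer's weights form a matrix with one row per neuron.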
**Arrays and their shapes**
**Dot Product in Python**
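The dot product shown in the animation reduces to one line of plain Python: multiply element-wise, then sum.

```python
# Dot product of two equal-length vectors in plain Python.
a = [1, 2, 3]
b = [2, 3, 4]

dot = sum(x * y for x, y in zip(a, b))
print(dot)  # 1*2 + 2*3 + 3*4 = 20
```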
**Using the dot product for a neuron's calculation**
**Using the dot product with a layer of neurons**
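A minimal NumPy sketch of the two items above, using `np.dot` for a single neuron and then for a whole layer (values are illustrative):

```python
import numpy as np

inputs = np.array([1.0, 2.0, 3.0, 2.5])

# Single neuron: one weight vector and one bias.
weights = np.array([0.2, 0.8, -0.5, 1.0])
bias = 2.0
neuron_output = np.dot(weights, inputs) + bias

# Layer of neurons: a weight matrix (one row per neuron) and a bias vector.
layer_weights = np.array([[0.2, 0.8, -0.5, 1.0],
                          [0.5, -0.91, 0.26, -0.5],
                          [-0.26, -0.27, 0.17, 0.87]])
layer_biases = np.array([2.0, 3.0, 0.5])
layer_output = np.dot(layer_weights, inputs) + layer_biases

print(neuron_output)
print(layer_output)
```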
**Example of what an array of a batch of samples looks like, compared to a single sample.**
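The shape difference the animation illustrates, sketched with illustrative values: a single sample is a 1-D array, while a batch is a 2-D array with one row per sample.

```python
import numpy as np

# A single sample: a 1-D array of 4 features, shape (4,).
single_sample = np.array([1.0, 2.0, 3.0, 2.5])

# A batch of 3 samples: a 2-D array, one row per sample, shape (3, 4).
batch = np.array([[1.0, 2.0, 3.0, 2.5],
                  [2.0, 5.0, -1.0, 2.0],
                  [-1.5, 2.7, 3.3, -0.8]])

print(single_sample.shape)  # (4,)
print(batch.shape)          # (3, 4)
```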
**How batches can help with fitting**
**How a matrix product is calculated**
**Matrix product with row and column vectors**
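The matrix-product calculation above can be written out in pure Python: each entry of the result is the dot product of a row of the first matrix with a column of the second.

```python
# Pure-Python matrix product: C[i][j] = row i of A (dot) column j of B.
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

rows, inner, cols = len(A), len(B), len(B[0])
C = [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
     for i in range(rows)]

print(C)  # [[19, 22], [43, 50]]
```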
**How a transpose/transposition works**
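Transposition, as in the item above, swaps rows and columns; in NumPy it is the `.T` attribute:

```python
import numpy as np

# Transposing swaps rows and columns: shape (2, 3) becomes (3, 2).
M = np.array([[1, 2, 3],
              [4, 5, 6]])

print(M.T)
print(M.shape, M.T.shape)  # (2, 3) (3, 2)
```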
**Matrix product with row and column vectors with a batch of inputs to the neural network**
**Adding biases after the matrix product from a batch of inputs**
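The two items above combine into one line of NumPy: transpose the weight matrix so the inner dimensions match, take the matrix product, and let broadcasting add the bias vector to every row (sample). Values here are illustrative.

```python
import numpy as np

# Batch of 3 samples, 4 features each: shape (3, 4).
inputs = np.array([[1.0, 2.0, 3.0, 2.5],
                   [2.0, 5.0, -1.0, 2.0],
                   [-1.5, 2.7, 3.3, -0.8]])

# 3 neurons, each with 4 weights: shape (3, 4); transposed to (4, 3)
# so the inner dimensions line up for the matrix product.
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])
biases = np.array([2.0, 3.0, 0.5])

# (3, 4) @ (4, 3) -> (3, 3); biases broadcast across every row.
outputs = np.dot(inputs, weights.T) + biases
print(outputs.shape)  # (3, 3): one row of 3 neuron outputs per sample
```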
**Why & how two or more hidden layers w/ nonlinear activation functions works with neural networks/deep learning**
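A minimal sketch of the point above: without a nonlinearity, stacked linear layers collapse into a single linear map, so extra layers add nothing; inserting ReLU between them breaks that collapse. The scalar weights here stand in for full layers.

```python
import numpy as np

# ReLU: a common nonlinear activation, zeroing out negative values.
def relu(x):
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
w1, w2 = 3.0, -0.5

# Two stacked linear maps collapse to one: w2*(w1*x) == (w2*w1)*x.
assert np.allclose(w2 * (w1 * x), (w2 * w1) * x)

# With ReLU between them, the composition is no longer linear in x.
print(w2 * relu(w1 * x))
```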
**Example of a linear function**
**Example of a parabolic function**
**Parabolic function derivatives**
**Parabolic Function**
**Parabolic Function 2 Derivatives Graph**
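The parabola items above can be checked numerically: for y = x², the first derivative is 2x and the second derivative is the constant 2. A central-difference sketch:

```python
# Parabola y = x**2: first derivative 2x, second derivative constant 2.
def f(x):
    return x ** 2

def numerical_derivative(g, x, h=1e-5):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x = 3.0
first = numerical_derivative(f, x)  # ~6.0 (analytic: 2*x)
second = numerical_derivative(lambda t: numerical_derivative(f, t), x)  # ~2.0
print(first, second)
```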
**Live SGD Optimization for neural network with Learning Rate of 1.0. Epilepsy Warning, there are quick flashing colors.**
**Live SGD Optimization for neural network with 0.5 Learning Rate. Epilepsy Warning, there are quick flashing colors.**
**Live SGD Optimization for neural network with a Decaying (1e-2) Learning Rate. Epilepsy Warning, there are quick flashing colors.**
**Live SGD Optimization for neural network with a slower Decaying (1e-3) Learning Rate. Epilepsy Warning, there are quick flashing colors.**
**Live SGD Optimization for neural network with a 1e-3 Decaying Learning Rate from 1.0, along with momentum (0.5). Epilepsy Warning, there are quick flashing colors.**
**Live SGD Optimization for neural network with a 1e-3 Decaying Learning Rate from 1.0, along with momentum (0.9). Epilepsy Warning, there are quick flashing colors.**
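The SGD items above vary the learning rate, its decay, and momentum. A one-parameter sketch of the update rule they animate, minimizing f(w) = w² (the objective and starting point here are illustrative, not the network from the animations):

```python
# SGD with a decaying learning rate and momentum, on f(w) = w**2
# (gradient: 2*w). Hyperparameters mirror the items above.
starting_lr = 1.0
decay = 1e-3
momentum = 0.9

w = 4.0
velocity = 0.0
for step in range(200):
    lr = starting_lr / (1.0 + decay * step)  # decaying learning rate
    grad = 2.0 * w                           # gradient of w**2
    velocity = momentum * velocity - lr * grad
    w += velocity

print(w)  # close to the minimum at w = 0
```

Without decay or momentum, a learning rate of 1.0 on this objective would oscillate between w and -w forever; momentum plus decay damps the oscillation, which is what the higher-momentum animations show converging faster.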
**AdaGrad**
**RMSProp Optimizer for Neural Networks**
**RMSProp Optimizer with LR 0.2, decay 1e-5, rho 0.999**
**Adam Optimizer for Neural Networks with 0.02 learning rate and 1e-5 decay**
**Adam Optimizer for Neural Networks with 0.05 learning rate and 5e-7 decay**
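A one-parameter sketch of the Adam update rule behind the items above, again minimizing f(w) = w². The learning rate and decay match the first Adam item; the beta values are the common defaults and, like the objective, are illustrative here:

```python
import math

# Adam: momentum-like first moment plus a per-parameter second moment,
# each bias-corrected, on f(w) = w**2 (gradient: 2*w).
lr0, decay = 0.02, 1e-5
beta1, beta2, eps = 0.9, 0.999, 1e-7

w = 4.0
m = v = 0.0
for step in range(1, 1001):
    lr = lr0 / (1.0 + decay * (step - 1))    # decaying learning rate
    grad = 2.0 * w
    m = beta1 * m + (1 - beta1) * grad       # first moment (momentum-like)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (per-parameter scale)
    m_hat = m / (1 - beta1 ** step)          # bias correction
    v_hat = v / (1 - beta2 ** step)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(w)  # near the minimum at w = 0
```

Dividing by the second-moment estimate gives each parameter its own effective step size, which is the feature AdaGrad and RMSProp above also share; Adam adds the momentum-like first moment and the bias correction.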