Neural Network Concepts Animations
Animation links:
- Visualizing Neural Network Sizes
- Visualization of an example Dogs vs Cats neural network classifier
- Visualization of the forward pass calculation and path for a neural network
- Single neuron with 3 inputs example
- A single neuron with 4 inputs
- 3 neuron layer with 4 inputs
- Arrays and their shapes
- Dot Product in Python
- Using the dot product for a neuron's calculation
- Using the dot product with a layer of neurons (see the Python sketches after this list)
- Example of what an array for a batch of samples looks like, compared to a single sample
- How batches can help with fitting
- How a matrix product is calculated
- Matrix product with row and column vectors
- How a transpose/transposition works
- Matrix product with row and column vectors with a batch of inputs to the neural network
- Adding biases after the matrix product from a batch of inputs (batched forward pass sketched after this list)
- Why & how two or more hidden layers w/ nonlinear activation functions works with neural networks/deep learning
- Example of a linear function
- Example of a parabolic function
- Parabolic function derivatives
- Parabolic Function
- Parabolic Function 2 Derivatives Graph
- Live SGD Optimization for a neural network with a Learning Rate of 1.0. Epilepsy Warning: quick flashing colors
- Live SGD Optimization for a neural network with a 0.5 Learning Rate. Epilepsy Warning: quick flashing colors
- Live SGD Optimization for a neural network with a Decaying (1e-2) Learning Rate. Epilepsy Warning: quick flashing colors
- Live SGD Optimization for a neural network with a slower Decaying (1e-3) Learning Rate. Epilepsy Warning: quick flashing colors
- Live SGD Optimization for a neural network with a 1e-3 Decaying Learning Rate from 1.0, along with momentum (0.5). Epilepsy Warning: quick flashing colors
- Live SGD Optimization for a neural network with a 1e-3 Decaying Learning Rate from 1.0, along with momentum (0.9). Epilepsy Warning: quick flashing colors
- AdaGrad Optimizer Example
- RMSProp Optimizer for Neural Networks
- RMSProp Optimizer with LR 0.2, decay 1e-5, rho 0.999
- Adam Optimizer for Neural Networks with 0.02 learning rate and 1e-5 decay
- Adam Optimizer for Neural Networks with 0.05 learning rate and 5e-7 decay (optimizer update rules sketched after this list)
- How weights and biases impact a single neuron
- Step Function Animation
- The math behind an example forward pass through a neural network
- How a transpose works
- Why we need to transpose weights
- Regression Demo with rectified linear (ReLU) activation function
- Analytical Derivative
- Y Intercept
- Analytical Derivative of x
- Analytical Derivative of 2x
- Analytical Derivative of 3x^2
- Analytical Derivative of 3x^2 + 2x
- Analytical Derivative of 5x^5 + 4x^3 - 5
- Analytical Derivative of x^3 + 2x^2 - 5x + 7 (power-rule check sketched after this list)
- Backpropagation Example
- Simplifying the Neuron Derivative (single-neuron backprop sketched after this list)
- Learning Rate Local Minimum
- Another local minimum example
- Learning Rate Small Local Minimum
- Learning Rate Too Small, 200 Steps
- Learning Rate Too Small, 100 Steps
- Learning Rate Too Small, 50 Steps
- Learning Rate Too Big
- Learning Rate Way Too Big
- Gradient Explosion
- Good Enough Learning Rate
- Good Learning Rate
- Testing Data Intuition
- Cross-Validation (hold-out split sketched after this list)
- Regularization 1. Epilepsy Warning: quick flashing colors
- Regularization 2. Epilepsy Warning: quick flashing colors
- Regularization 3. Epilepsy Warning: quick flashing colors
- Dropout visualized (regularization and dropout sketched after this list)
- Dropout training example 1. Epilepsy Warning: quick flashing colors
- Dropout training example 2. Epilepsy Warning: quick flashing colors
- Regression Example 1
- Regression Example 2
- Regression Example 3
- Regression Example 4
- Regression Example 5
- Regression Example 6
- Regression Example 7
- Regression Example 8
- Regression Example 9
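
Below are a few quick Python/NumPy sketches of concepts the animations above cover. They are illustrative only: every value, name, and hyperparameter is made up for the example, not taken from the animations themselves.

First, the single-neuron and layer calculations via the dot product, with arbitrary placeholder inputs, weights, and biases:

```python
import numpy as np

# A single neuron with 4 inputs: weighted sum of the inputs plus a bias.
inputs = np.array([1.0, 2.0, 3.0, 2.5])
weights = np.array([0.2, 0.8, -0.5, 1.0])
bias = 2.0
neuron_output = np.dot(weights, inputs) + bias  # 0.2 + 1.6 - 1.5 + 2.5 + 2.0 = 4.8

# A layer of 3 neurons over the same 4 inputs: one weight row per neuron.
layer_weights = np.array([[0.2, 0.8, -0.5, 1.0],
                          [0.5, -0.91, 0.26, -0.5],
                          [-0.26, -0.27, 0.17, 0.87]])
layer_biases = np.array([2.0, 3.0, 0.5])
layer_outputs = np.dot(layer_weights, inputs) + layer_biases  # shape (3,)
```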
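
Next, a batch of inputs through the same layer. With weights stored one row per neuron, transposing them lines up the inner dimensions of the matrix product, and the biases broadcast across the batch rows:

```python
import numpy as np

# A batch of 3 samples, 4 features each: shape (3, 4). Values are placeholders.
batch = np.array([[1.0, 2.0, 3.0, 2.5],
                  [2.0, 5.0, -1.0, 2.0],
                  [-1.5, 2.7, 3.3, -0.8]])

# Weights: shape (3 neurons, 4 inputs). Transposing gives
# (3, 4) @ (4, 3) -> (3, 3), one output row per sample.
weights = np.array([[0.2, 0.8, -0.5, 1.0],
                    [0.5, -0.91, 0.26, -0.5],
                    [-0.26, -0.27, 0.17, 0.87]])
biases = np.array([2.0, 3.0, 0.5])  # broadcast across the batch rows

layer_outputs = np.dot(batch, weights.T) + biases
```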
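
Why nonlinear activations matter between hidden layers: two purely linear layers collapse into a single linear map, while a ReLU in between does not. A minimal demonstration with randomly generated placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

# Two linear layers are equivalent to one layer with weights W1 @ W2...
linear_only = x @ W1 @ W2
same_thing = x @ (W1 @ W2)  # identical result, so the extra depth bought nothing

# ...but a ReLU between them breaks the collapse, letting the network
# represent bends instead of a single straight line.
with_relu = np.maximum(0.0, x @ W1) @ W2
```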
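
The optimizer and learning-rate animations correspond to standard update rules. Below are textbook forms of learning-rate decay, SGD with momentum, AdaGrad, RMSProp, and Adam; the function names and signatures are mine for illustration, and the defaults echo values from the link titles rather than any required settings:

```python
import numpy as np

def decayed_lr(initial_lr, decay, step):
    # 1/t-style decay, e.g. decay=1e-3 starting from lr=1.0.
    return initial_lr / (1.0 + decay * step)

def sgd_momentum(param, grad, velocity, lr, momentum=0.9):
    # Momentum remembers past gradients, smoothing the descent path.
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

def adagrad(param, grad, cache, lr, eps=1e-7):
    # Per-parameter scaling by the accumulated squared gradient.
    cache = cache + grad ** 2
    return param - lr * grad / (np.sqrt(cache) + eps), cache

def rmsprop(param, grad, cache, lr=0.2, rho=0.999, eps=1e-7):
    # Like AdaGrad, but the cache is a moving average, so it can forget.
    cache = rho * cache + (1 - rho) * grad ** 2
    return param - lr * grad / (np.sqrt(cache) + eps), cache

def adam(param, grad, m, v, t, lr=0.02, beta1=0.9, beta2=0.999, eps=1e-7):
    # Momentum on the gradient (m) plus RMSProp-style scaling (v),
    # both bias-corrected for the first few steps (t starts at 1).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```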
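
The analytical-derivative animations apply the power rule. A quick numerical check for one of the listed functions, f(x) = 3x^2 + 2x, whose analytical derivative is f'(x) = 6x + 2:

```python
def f(x):
    return 3 * x ** 2 + 2 * x

def f_prime(x):
    # Power rule applied term by term: d/dx 3x^2 = 6x, d/dx 2x = 2.
    return 6 * x + 2

x, h = 2.0, 1e-6
numerical = (f(x + h) - f(x - h)) / (2 * h)  # central difference
print(numerical, f_prime(x))  # both print ~14.0
```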
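
The backpropagation animations walk the chain rule through a single neuron. A minimal sketch with made-up values, one ReLU neuron forward and backward:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])   # inputs (illustrative values)
w = np.array([-3.0, -1.0, 2.0])  # weights
b = 1.0                          # bias

z = np.dot(w, x) + b  # -3 + 2 + 6 + 1 = 6
y = max(z, 0.0)       # ReLU output: 6

dvalue = 1.0                           # gradient arriving from the next layer
dz = dvalue * (1.0 if z > 0 else 0.0)  # ReLU gradient: pass or block
dw = dz * x   # gradient w.r.t. each weight
dx = dz * w   # gradient w.r.t. each input
db = dz       # gradient w.r.t. the bias
```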
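
The regularization and dropout animations correspond to, respectively, adding a weight penalty to the loss and randomly silencing activations during training. A sketch of both, assuming the common L2 penalty and inverted-dropout formulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=5e-4):
    # L2 regularization adds lam * sum(w^2) to the loss,
    # penalizing large weights.
    return lam * np.sum(weights ** 2)

def dropout(activations, rate=0.5):
    # Inverted dropout: zero a random fraction of activations during
    # training, then rescale survivors so the expected value is unchanged.
    mask = rng.binomial(1, 1.0 - rate, size=activations.shape)
    return activations * mask / (1.0 - rate)
```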
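
Finally, the testing-data and cross-validation intuition: performance only counts on data the model never saw during training. A minimal hold-out split with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))  # 50 samples, 4 features (placeholder data)
y = rng.normal(size=50)

# Shuffle indices, then reserve the last 20% for evaluation only.
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:40], idx[40:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```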