Page 185 of the book/PDF:
Updated:
To calculate the impact of the example weight, w0, on the output
With:
To calculate the impact of the example input, x0, on the output
All of the other places in this passage refer to x0, the input, not w0, the weight.
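For readers checking the fix, here is a minimal sketch of the chain-rule step the sentence describes (the values and names are illustrative, not necessarily the book's exact listing):

# Forward pass for a single neuron with a ReLU activation
x = [1.0, -2.0, 3.0]    # inputs (illustrative values)
w = [-3.0, -1.0, 2.0]   # weights
b = 1.0                 # bias

z = x[0]*w[0] + x[1]*w[1] + x[2]*w[2] + b  # weighted sum plus bias
y = max(z, 0.0)                            # ReLU

# Backward pass: the impact of the example input, x0, on the output.
# Chain rule: dy/dx0 = dy/dz * dz/dx0, where dz/dx0 = w0 (not x0).
drelu_dz = 1.0 if z > 0 else 0.0
dy_dx0 = drelu_dz * w[0]
print(dy_dx0)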
Multiple pages in this chapter:
Updated:
# Partial derivatives of the multiplication, the chain rule
With:
# Partial derivatives of the sum, the chain rule
The comment said multiplication when the operation is a sum, which confused readers.
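For context, this comment labels the backward-pass step through the addition inside the neuron. The partial derivative of a sum with respect to each of its addends is 1, so the chain rule passes the upstream gradient through unchanged. A minimal sketch (illustrative names, not the book's exact listing):

# Partial derivatives of the sum, the chain rule
# For z = xw0 + xw1 + xw2 + b: dz/d(xw0) = 1, ..., dz/db = 1,
# so each term receives the upstream gradient multiplied by 1.
drelu_dz = 1.0                   # example upstream gradient from the ReLU
dsum_dxw0 = 1.0                  # partial derivative of the sum w.r.t. xw0
drelu_dxw0 = drelu_dz * dsum_dxw0
print(drelu_dxw0)                # 1.0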
Book page 213:
Updated:
# Layer initialization
def __init__(self, inputs, neurons):
    self.weights = 0.01 * np.random.randn(inputs, neurons)
    self.biases = np.zeros((1, neurons))
With:
# Layer initialization
def __init__(self, n_inputs, n_neurons):
    # Initialize weights and biases
    self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
    self.biases = np.zeros((1, n_neurons))
The important part is not the comment (which was also missing), but the missing n_ prefix in the variable names, which we have added everywhere else.
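For anyone verifying the fix, a quick self-contained usage sketch of the corrected initializer (assuming the enclosing class is the book's Layer_Dense; the layer sizes below are arbitrary):

import numpy as np

class Layer_Dense:
    # Layer initialization
    def __init__(self, n_inputs, n_neurons):
        # Initialize weights and biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

# 2 input features feeding 3 neurons -> weights are (2, 3), biases (1, 3)
layer = Layer_Dense(2, 3)
print(layer.weights.shape)  # (2, 3)
print(layer.biases.shape)   # (1, 3)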
Chapter 9, page 223:
Updated:
The difference is that now the whole subtrahend solves to 0, leaving us with just the minuend in the numerator:
With:
The difference is that now the whole minuend solves to 0, leaving us with just the subtrahend in the numerator:
The terms subtrahend and minuend were swapped (used incorrectly).
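To see why the corrected wording is right, here is the quotient-rule step in generic softmax notation (the book's exact symbols may differ). The quotient rule reads (f/g)' = (f'g - fg')/g^2, where f'g is the minuend of the numerator and fg' is the subtrahend. Differentiating the softmax output S_{i,j} with respect to an input z_{i,k} with j != k:

\[
\frac{\partial S_{i,j}}{\partial z_{i,k}}
= \frac{\dfrac{\partial e^{z_{i,j}}}{\partial z_{i,k}} \cdot \sum_{l} e^{z_{i,l}}
  \;-\; e^{z_{i,j}} \cdot \dfrac{\partial}{\partial z_{i,k}} \sum_{l} e^{z_{i,l}}}
  {\left( \sum_{l} e^{z_{i,l}} \right)^{2}}
= \frac{0 \;-\; e^{z_{i,j}} \cdot e^{z_{i,k}}}{\left( \sum_{l} e^{z_{i,l}} \right)^{2}}
\]

Because j != k, the derivative of e^{z_{i,j}} with respect to z_{i,k} is 0, so the whole minuend solves to 0 and only the subtrahend remains in the numerator, exactly as the corrected sentence states.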