The two-step process by which a neural network learns from data.
Forward propagation and backpropagation are the two key algorithms that enable a neural network to learn: together they form the cycle of making a prediction and then correcting the errors in that prediction.

Forward propagation is the simpler of the two. It is the process of passing an input data point from the input layer, through all the hidden layers, to the output layer to produce a prediction. At each neuron in each layer, the algorithm computes the weighted sum of the outputs from the previous layer, adds a bias, and applies an activation function. This process flows 'forward' through the network until it produces a final output. Once the output is generated, we can compare it to the actual target label using a cost function (such as Mean Squared Error or Cross-Entropy) to calculate the error.

This is where backpropagation comes in. Backpropagation, short for 'backward propagation of errors', is the algorithm that allows the network to learn from its mistakes. It works by propagating the error signal 'backward' from the output layer to the input layer. Using calculus (specifically, the chain rule), it calculates the gradient of the cost function with respect to each weight and bias in the network. Each gradient tells us how much that weight contributed to the overall error. An optimization algorithm, such as Gradient Descent, then uses these gradients to update the weights and biases in the direction that minimizes the error.

This entire cycle of forward propagation and backpropagation is repeated thousands or millions of times over the training data until the network's predictions become accurate.
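The sketch below ties these steps together in plain NumPy for a tiny, purely illustrative network: 2 inputs, one hidden layer of 3 sigmoid units, and 1 output, trained with Mean Squared Error and vanilla Gradient Descent. The layer sizes, the XOR-style toy data, the learning rate, and the step count are all assumptions chosen for clarity, not a definitive implementation.

```python
# Minimal sketch of the forward/backward training cycle (illustrative assumptions:
# 2-3-1 sigmoid network, XOR-style toy data, MSE loss, plain gradient descent).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs X and target labels y (illustrative only).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Randomly initialised weights and biases for the two layers.
W1 = rng.normal(scale=0.5, size=(2, 3))
b1 = np.zeros((1, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5

for step in range(10_000):
    # --- Forward propagation: weighted sums + activations, layer by layer ---
    z1 = X @ W1 + b1           # weighted sum at the hidden layer
    a1 = sigmoid(z1)           # hidden-layer activations
    z2 = a1 @ W2 + b2          # weighted sum at the output layer
    y_hat = sigmoid(z2)        # network prediction

    # --- Cost: Mean Squared Error between prediction and target ---
    loss = np.mean((y_hat - y) ** 2)

    # --- Backpropagation: chain rule, working backward from the output ---
    n = X.shape[0]
    d_yhat = 2.0 * (y_hat - y) / n           # dL/d(y_hat)
    d_z2 = d_yhat * y_hat * (1.0 - y_hat)    # dL/dz2 (sigmoid derivative)
    dW2 = a1.T @ d_z2                        # gradient w.r.t. output weights
    db2 = d_z2.sum(axis=0, keepdims=True)    # gradient w.r.t. output bias

    d_a1 = d_z2 @ W2.T                       # error signal pushed back to hidden layer
    d_z1 = d_a1 * a1 * (1.0 - a1)            # dL/dz1 (sigmoid derivative)
    dW1 = X.T @ d_z1                         # gradient w.r.t. hidden weights
    db1 = d_z1.sum(axis=0, keepdims=True)    # gradient w.r.t. hidden bias

    # --- Gradient descent: nudge each parameter against its gradient ---
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

    if step % 2000 == 0:
        print(f"step {step:5d}  MSE = {loss:.4f}")
```

In practice, deep learning frameworks compute these gradients automatically, but the underlying loop is the same: a forward pass to get a prediction, a cost to measure the error, a chain-rule backward pass to get gradients, and a gradient-based update of the weights and biases.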