Back Propagation

Zachary Greenberg
Jul 16, 2021


Neural networks are one of the most discussed algorithms in the data science field today. The idea goes all the way back to the 1940s, beginning with a paper by Warren McCulloch and Walter Pitts, a neurophysiologist and a mathematician respectively. Neural networks take inspiration from the human brain and the function of neurons: their job is simply to recognize patterns. A neural network can perform most of the tasks we face in data science, from classification to regression, as well as more advanced tasks like image recognition. The idea is remarkably versatile.

This is a simple example of a neural net: we input the data, it passes through a layer of neurons, and the network produces an output. This model has only one layer; a model can have many, many layers.
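To make that concrete, here is a minimal sketch of a single forward pass through one layer, assuming three made-up input values and a single output neuron (the values and variable names here are hypothetical, purely for illustration):

import numpy as np

# one made-up input vector, with randomly assigned weights and bias
inputs = np.array([0.5, -1.2, 3.0])
weights = np.random.rand(3)
bias = np.random.rand()

# weighted sum of the inputs, passed through a ReLU activation
output = max(0.0, np.dot(weights, inputs) + bias)
print(output)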

The biggest difference between a neural network and something like linear regression is that the process is now automated and has a reiterative quality to it. Automation is great: you put the data in, and the network goes back and checks for a more optimal solution. How does it do that? The answer lies in a process called back propagation, demonstrated in the sketch below.
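As a rough sketch of that reiterative quality, imagine fitting a single weight with gradient descent on one made-up training pair (all of the numbers here are hypothetical):

# one weight, one training example: we want w * 2.0 to equal 8.0
w = 0.0
x, target = 2.0, 8.0
learning_rate = 0.05

for step in range(20):
    prediction = w * x
    gradient = -2 * (target - prediction) * x  # d(error)/dw
    w = w - learning_rate * gradient           # go back and adjust

print(w)  # approaches 4.0, the optimal solution

Each pass through the loop "goes back and checks" by nudging the weight in the direction that lowers the error.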

What is back propagation?

“Back propagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss.” — Wikipedia

In layman’s terms, this is the process that allows the network to go back and make adjustments to its weights so that it can lower the error. This is the real meat and potatoes, as it is what gives the neural network the reiterative quality that works like neurons in the human brain.
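To see that adjustment in math form, here is the chain rule for the simplest possible case, a single weight w with a squared error, matching the code further down (t is the target, η the learning rate; this is a sketch, not the general multi-layer derivation):

L = (t - \hat{y})^2, \qquad \hat{y} = w \cdot \mathrm{relu}(x)

\frac{\partial L}{\partial w} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w} = -2\,(t - \hat{y}) \cdot \mathrm{relu}(x)

w \leftarrow w - \eta \, \frac{\partial L}{\partial w}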

In a neural net there are layers that the data passes through before the output. Back propagation automates a recursive calculation, taking a step back through each layer to reduce the error. The error can now change for the better because back propagation allows for adjustment and tuning. Under the hood, this is a calculus-based computation using gradient descent. Thankfully, this process is made even easier with Keras, where it is actually built in.
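For instance, here is a minimal sketch of letting Keras handle back propagation for us, assuming TensorFlow is installed (the layer sizes and data are made up for illustration):

import numpy as np
from tensorflow import keras

# made-up regression data: 100 samples, 3 features
X = np.random.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5])

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# back propagation runs automatically inside fit()
model.fit(X, y, epochs=10, verbose=0)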

It is important to understand what goes on under the hood, so here is a sample of it in Python:

import numpy as np

def back_propagate(input_values, target_values, learning_rate):
    # This function takes input values and target values along with a
    # learning rate, which is typically between 0 and 1. It performs
    # one pass of back propagation and begins to lower the error.
    y_hat = []
    error = []
    # weights are randomly assigned in the beginning
    weights = [np.random.rand() for i in range(len(input_values))]
    for i in range(len(input_values)):
        # this ReLU is our activation function
        relu = max(0, input_values[i])
        y_hat.append(weights[i] * relu)
        error.append((target_values[i] - y_hat[i]) ** 2)
        # chain rule: d(error)/d(weight) = -2 * (target - y_hat) * relu
        deriv = -2 * (target_values[i] - y_hat[i])
        gradient = deriv * relu
        # step the weight against the gradient to reduce the error
        weights[i] = weights[i] - learning_rate * gradient
    return weights, error
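A quick hypothetical usage of the function above (the numbers are arbitrary):

inputs = [1.0, 2.0, 3.0]
targets = [0.5, 1.0, 1.5]
new_weights, errors = back_propagate(inputs, targets, 0.1)
print(new_weights, errors)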

By going backwards and using gradient descent, the neural network is able to ‘self-learn’ and minimize the error. This is what makes neural networks so powerful.

To close out, neural networks are a powerful tool that adds automation to our data science process. Through the advancement and use of back propagation, the network is able to learn on its own, decreasing the error and offering a more optimal solution. The code demonstrated above shows, on a simple level, what goes on during this process. With Keras, back propagation is built in and ready to go whenever we want to create neural networks.

References:

History of Neural Networks — https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html

Coding Inspiration — https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

Definition of Backpropagation — https://en.wikipedia.org/wiki/Backpropagation

Neural Net Image — http://neuralnetworksanddeeplearning.com/chap1.html
