How to build a basic Neural Network

Riya Kumar
5 min read · Feb 2, 2019
Scary neural network diagram.

Building a neural network sounds like sci-fi to a lot of us, and diagrams like the one above don’t help; they seem rather intimidating. But neural networks are actually fairly simple models loosely based on the human brain, and I want to give a basic rundown of how they work as well as how to make one!

The neural network I built takes in a training set of 4 examples, each with 3 inputs (12 input values in total), and produces 4 outputs. The end goal is for the network to learn: it compares its outputs to the actual output set given to it and adjusts the weights on its synapses accordingly.

But before we get into the code, what is a neural network and how does it work?

A neural network is a machine learning model based on the neurons in our brain. At the most basic level, a neural network has an input layer, hidden layers, and an output layer.

A neuron and artificial neural network are pretty similar in function!

The hidden layers help pick out specific details. For example, in a neural network that classifies images, one layer might search for lines while another searches for color. The number of layers can be increased or decreased depending on the difficulty of the task given to the neural network.

But what are the arrows between the layers? Each arrow represents a synapse, which again is similar to how our brain works: in the brain, a synapse passes information from one neuron to another, while in a neural network it passes information from layer to layer. It works a bit differently in a neural network, though. Each synapse holds a weight, and the information gathered by a layer is multiplied by that weight before it is passed on. Each neuron (a circle in the diagram) then goes through the 3 following steps: weighting its inputs, summing them, and producing an output.

The 3 steps a neuron goes through in a neural network.

The first thing the neuron does is multiply each signal it receives by the weight of the synapse it arrived on (in the diagram above there are 3 arrows), and each weight can be adjusted individually. Next, it adds up all of those weighted values and adds in a bias to get a single number; the bias shifts the value fed into the transfer function, which helps the network fit the data more accurately. The last step is to produce an output, and for a lot of binary inputs, binary outputs are expected. That is where a transfer function comes into play: a function such as the sigmoid scales the value, no matter how big or small it is, to a value between 0 and 1.

Sigmoid Function!
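The three steps above can be sketched in a few lines of numpy. The input, weight, and bias values here are made up purely for illustration:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the range (0, 1)
    return 1 / (1 + np.exp(-x))

# Hypothetical values for one neuron with 3 incoming synapses
inputs = np.array([0.5, 0.9, 0.2])
weights = np.array([0.4, -0.6, 0.1])
bias = 0.3

# Steps 1 and 2: multiply each input by its weight, sum, then add the bias
weighted_sum = np.dot(inputs, weights) + bias

# Step 3: pass the sum through the sigmoid transfer function
output = sigmoid(weighted_sum)
```

No matter how large or small `weighted_sum` is, `output` always lands between 0 and 1.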

Ok this is all cool, but how does it learn?

Back-propagation!

Let’s look at a basic neural network called a perceptron to understand the process behind back-propagation. A perceptron is made up of inputs, synapses, a neuron, and an output. The weights and biases in a neural network are usually set randomly at the start of the process, and back-propagation allows the network to adjust them to provide more accurate results.

Back-propagation takes the output generated by the neural network and compares it to the actual output from the dataset to get an error value. This error value gets sent back through the layers of the neural network, and the weights and biases are adjusted accordingly. This process is repeated many times, and with each iteration the adjustments make the network more and more accurate. That is learning!
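For a single neuron, one round of this compare-and-adjust step might look like the sketch below. The example input, target, and variable names are assumptions for illustration; the adjustment is scaled by both the input and the slope of the sigmoid, a common choice in simple tutorials like this one:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(y):
    # Slope of the sigmoid, expressed in terms of its output y
    return y * (1 - y)

# One hypothetical training example: 3 inputs, 1 expected output
inputs = np.array([[0.0, 1.0, 1.0]])   # shape (1, 3)
target = np.array([[1.0]])

np.random.seed(0)
weights = np.random.random((3, 1))     # random starting weights

output = sigmoid(np.dot(inputs, weights))
error = target - output                 # how far off the network was
# Send the error back: adjust each weight by its input times the error,
# scaled by the sigmoid's slope at the current output
adjustment = np.dot(inputs.T, error * sigmoid_derivative(output))
weights += adjustment
```

After the adjustment, running the same input through the neuron again produces an output closer to the target, which is exactly the "more accurate with each iteration" behavior described above.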

So how do I code a basic neural network like a perceptron?

Let’s start with the data sets.

This is the data set I used: I gave the neural network the 4 examples shown above, along with the actual outputs it should give.
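The original data set appears as a screenshot, so the exact values aren’t in the text. A plausible reconstruction of this kind of training set (4 examples with 3 inputs each, and the 4 actual outputs, modeled on the classic single-neuron tutorial data; names and values are assumptions) looks like:

```python
import numpy as np

# 4 training examples, 3 inputs each (values assumed for illustration)
training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])

# The 4 actual outputs the network should learn to produce,
# transposed into a 4x1 column so each example has one output
training_outputs = np.array([[0, 1, 1, 0]]).T
```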

The next step is to give the weights random starting values. The line “np.random.seed(1)” seeds the random number generator, so the starting weights are the same on every run and the results are reproducible. The line after it defines the variable synaptic_weights as a matrix of random values, which allows the neural network to take in 3 inputs and give one output.
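Those two lines of code look like this (the exact value range is an assumption, but initializing in (-1, 1) is the usual choice in this style of tutorial):

```python
import numpy as np

# Seeding makes the "random" starting weights identical between runs
np.random.seed(1)

# A 3x1 weight matrix: 3 inputs feeding a single output neuron,
# with starting values spread across the range (-1, 1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1
```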

The last step is to add in the number of times to loop this code, i.e. the number of iterations it will go through, each one making it more precise. In the code I wrote, the neural network goes through 20,000 iterations, so it basically runs 20,000 times. Inside the loop, input_layer is set equal to the training_inputs we defined above; the input is multiplied by the synaptic weights, and a sigmoid function brings the result to an output between 0 and 1. The error is then calculated by subtracting the output the network created from the training_outputs we defined above. The last 2 lines of code use the error to adjust the synaptic weights, minimizing the error and making the neural network’s outputs more precise.
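Putting all the pieces together, the whole training loop can be sketched as follows. The data values are the assumed reconstruction from earlier, and the helper names are my own, so treat this as a sketch of the approach rather than the article’s exact code:

```python
import numpy as np

def sigmoid(x):
    # Scales any value to the range (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(y):
    # Slope of the sigmoid in terms of its output y
    return y * (1 - y)

# Training data (values assumed for illustration)
training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T

# Reproducible random starting weights in (-1, 1)
np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1

for _ in range(20000):
    input_layer = training_inputs
    # Forward pass: weighted sum squashed to (0, 1) by the sigmoid
    outputs = sigmoid(np.dot(input_layer, synaptic_weights))
    # Error: actual outputs minus the outputs the network produced
    error = training_outputs - outputs
    # Use the error to adjust the synaptic weights, shrinking the error
    adjustments = np.dot(input_layer.T, error * sigmoid_derivative(outputs))
    synaptic_weights += adjustments

print("Outputs after training:")
print(outputs)
```

After 20,000 iterations the outputs land very close to the 0s and 1s in the training set, which matches the behavior the article describes.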

And, here it is working!

You can see the synaptic weights before and after training, and how the outputs after training are very close to the actual outputs in the data set above!
