# Basic Neural Network Code Example – Java

This is part two of my Basic Neural Network series, where I'll present a simple implementation of a neural network with feedforward, written in Java.
The previous post explaining the basic model can be found here.

If you are more interested in the full code at first, then it can be found here.

#### Disclaimer:

This example is only meant as a proof of concept to show the inner workings of a neural network, and should therefore not be regarded as the most correct or optimal implementation.

#### Initial requirements:

• Support 3 layers (1 input, 1 hidden, and 1 output layer).
• Support layers of varying size.
• Support feedforward.
• Use the sigmoid activation function.

Now that the initial requirements have been made, we can begin coding the network.

#### Local fields and constructor:

We need to support a network with three layers of varying size, which means that when we create our network, we need to hold this information somehow.

To represent a layer, a simple double[] array can be used, where each value in the array represents a node. This leaves us with three double arrays, one for each layer.

To represent the weights between each layer, a double[][] matrix can be used, where each row contains every weight for a node in the next layer. This means that the weight matrix has as many rows as there are nodes in the next layer, and as many columns as there are nodes in the previous layer.
This is because there is a weight between every pair of nodes in the two layers, so every node in the next layer has as many weights as there are nodes in the previous layer.

In a double[][] matrix, the first index selects the row and the second the column.
Example: double[RowID][ColumnID] or double[NodeInNextLayer][NodeInPrevLayer]

Since the hidden and output layers are the only layers with weights connecting into them from a preceding layer, they are the only layers which require weight matrices. This means that we only need two double[][] matrices of weights: one for the hidden layer and one for the output layer.
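Putting the above together, the network's state might be sketched as below. Note that the class and field names here are my assumptions, since they are not spelled out in the text; the original gist may use different ones.

```java
// A minimal sketch of the network's state: one double[] per layer and one
// double[][] weight matrix per layer that receives connections.
public class NeuralNetwork {
    // Each value in a layer array represents one node's activation.
    double[] inputLayer;
    double[] hiddenLayer;
    double[] outputLayer;

    // Weight matrices, indexed as [nodeInNextLayer][nodeInPrevLayer].
    double[][] hiddenLayerWeights;  // between input and hidden layer
    double[][] outputLayerWeights;  // between hidden and output layer

    public NeuralNetwork(int inputSize, int hiddenSize, int outputSize) {
        inputLayer = new double[inputSize];
        hiddenLayer = new double[hiddenSize];
        outputLayer = new double[outputSize];

        // Rows = nodes in the next layer, columns = nodes in the previous layer.
        hiddenLayerWeights = new double[hiddenSize][inputSize];
        outputLayerWeights = new double[outputSize][hiddenSize];
    }
}
```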

#### Initialize weights:

As seen in the code gist above, we now need a way to initialize our random weights.
We know that a weight matrix has as many rows as there are nodes in the next layer and as many columns as there are nodes in the previous layer, and that its values have to be generated randomly.

This means that we just have to fill a matrix of known size with random values, which can be done by iterating through every entry in the matrix and placing a random value at each.

Which changes our code to:
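A helper along these lines could do the job. The method name and the value range [-1, 1) are my assumptions; the original gist may draw its random values differently.

```java
import java.util.Random;

public class WeightInit {
    // Fill a rows x cols matrix with random values in [-1, 1).
    static double[][] randomWeights(int rows, int cols) {
        Random random = new Random();
        double[][] weights = new double[rows][cols];
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                // nextDouble() returns [0, 1); scale and shift into [-1, 1).
                weights[row][col] = random.nextDouble() * 2 - 1;
            }
        }
        return weights;
    }
}
```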

#### Feedforward:

Now that the network contains all the information required for feedforward, we can finally begin to implement the process.

Since the feedforward process between each pair of layers is almost identical, changing only with the number of nodes in each layer, we can implement a single method which takes two layers (double[]) and a weight matrix (double[][]) as input, and feeds forward from the previous layer into the next layer.
By calling this method twice, we will be able to feedforward from the input layer to the hidden layer and then to the output layer, which will then be a full feedforward, from input to output.

We will call this method "feedForward()"; it can be seen below:
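A sketch of such a per-layer step is shown below. Each node in the next layer receives the weighted sum of every node in the previous layer, passed through the sigmoid function (1 / (1 + e^-x)). The exact signature is my assumption based on the description above.

```java
public class FeedForwardStep {
    // Sigmoid activation: squashes any real number into (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Feed forward from prevLayer into nextLayer through the weight matrix,
    // indexed as weights[nodeInNextLayer][nodeInPrevLayer].
    static void feedForward(double[] prevLayer, double[] nextLayer, double[][] weights) {
        for (int next = 0; next < nextLayer.length; next++) {
            double sum = 0.0;
            for (int prev = 0; prev < prevLayer.length; prev++) {
                sum += prevLayer[prev] * weights[next][prev];
            }
            nextLayer[next] = sigmoid(sum);
        }
    }
}
```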

#### Evaluate inputs:

Everything is now ready for the final method, "evaluateInputs()", which takes a double[] array representing the input values of the neural network, performs feedforward, and finally returns the values of the output layer, which are our predictions:
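Here is one way all of the pieces might fit together. This is a sketch under the assumptions made so far; the class, method, and field names are mine and need not match the original gist.

```java
import java.util.Random;

public class SimpleNeuralNetwork {
    double[] inputLayer, hiddenLayer, outputLayer;
    double[][] hiddenLayerWeights, outputLayerWeights;

    public SimpleNeuralNetwork(int inputSize, int hiddenSize, int outputSize) {
        inputLayer = new double[inputSize];
        hiddenLayer = new double[hiddenSize];
        outputLayer = new double[outputSize];
        hiddenLayerWeights = randomWeights(hiddenSize, inputSize);
        outputLayerWeights = randomWeights(outputSize, hiddenSize);
    }

    // Fill a rows x cols matrix with random values in [-1, 1).
    static double[][] randomWeights(int rows, int cols) {
        Random random = new Random();
        double[][] weights = new double[rows][cols];
        for (int row = 0; row < rows; row++)
            for (int col = 0; col < cols; col++)
                weights[row][col] = random.nextDouble() * 2 - 1;
        return weights;
    }

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Feed forward from prevLayer into nextLayer through the weight matrix.
    static void feedForward(double[] prevLayer, double[] nextLayer, double[][] weights) {
        for (int next = 0; next < nextLayer.length; next++) {
            double sum = 0.0;
            for (int prev = 0; prev < prevLayer.length; prev++)
                sum += prevLayer[prev] * weights[next][prev];
            nextLayer[next] = sigmoid(sum);
        }
    }

    // Copy the inputs in, feed forward twice, and return the predictions.
    public double[] evaluateInputs(double[] inputs) {
        System.arraycopy(inputs, 0, inputLayer, 0, inputLayer.length);
        feedForward(inputLayer, hiddenLayer, hiddenLayerWeights);
        feedForward(hiddenLayer, outputLayer, outputLayerWeights);
        return outputLayer;
    }
}
```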

#### Final notes:

There you have it, a simple implementation of a neural network in less than 80 lines!

Please be aware of the iterative nature of this code.
If you want to add another layer to the network, you would simply create another layer array and weight matrix, and call the feedForward() method one more time.

The full code can be found here.

Thanks for reading, feel free to comment below.
