This is a Python implementation of a feed-forward neural network using NumPy. The network is trained with mini-batch gradient descent and uses L2 regularization.
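As a rough sketch (not necessarily the exact update rule used in this repository, and with illustrative variable names), one mini-batch step with L2 regularization adds a weight-decay term to the gradient before updating the weights:

```python
import numpy as np

def sgd_step(weights, grad_w, lr=0.1, lam=1e-4):
    # L2 regularization adds lam * weights to the gradient, which
    # shrinks the weights towards zero a little on every step.
    return weights - lr * (grad_w + lam * weights)
```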
Clone the repository:

```bash
git clone https://github.com/Agnar22/NeuralNetwork.git
```

Navigate into the project folder:

```bash
cd NeuralNetwork
```

Install the requirements:

```bash
pip install -r requirements.txt
```

If everything went well, you should now be able to run the code:

```bash
python3 Main.py
```
I created this project to gain insight into the mathematics behind backpropagation in neural networks, as well as to learn how to implement it using only matrix operations. NumPy is used for the matrix operations.
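Expressed purely with matrix operations, the core of backpropagation looks roughly like the sketch below for a single hidden layer with sigmoid activations and a mean-squared-error loss. This is an illustration of the technique, not the code in this repository:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, w1, w2):
    # Forward pass: x is (batch, in), w1 is (in, hidden), w2 is (hidden, out).
    a1 = sigmoid(x @ w1)   # hidden activations, (batch, hidden)
    a2 = sigmoid(a1 @ w2)  # output activations, (batch, out)

    # Backward pass: propagate the error layer by layer.
    delta2 = (a2 - y) * a2 * (1 - a2)         # output-layer error
    delta1 = (delta2 @ w2.T) * a1 * (1 - a1)  # hidden-layer error

    # Gradients are matrix products of activations and errors,
    # summed over the batch.
    grad_w2 = a1.T @ delta2
    grad_w1 = x.T @ delta1
    return grad_w1, grad_w2
```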
To check that the neural network (both the feed-forward and the backpropagation steps) was working, I tested it on the MNIST dataset (supplied by TensorFlow).
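MNIST ships with TensorFlow's Keras API, so it can be loaded in a couple of lines. The preprocessing below (flattening and scaling) is one common choice and may differ from what Main.py does:

```python
import tensorflow as tf

# Load MNIST through TensorFlow's built-in dataset loader.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Flatten the 28x28 images and scale pixel values to [0, 1] so they
# can be fed to a fully connected network.
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0
```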
Figure 1: training and validation loss for each epoch
Figure 2: confusion matrix for the validation data, with targets as rows and the network's predictions as columns
Figure 3: training and validation loss and accuracy for each epoch
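A confusion matrix like the one in Figure 2 can be built with a few lines of NumPy. This is a minimal sketch; the repository may compute it differently:

```python
import numpy as np

def confusion_matrix(targets, predictions, num_classes=10):
    # Rows are targets, columns are the network's predictions,
    # matching the layout described for Figure 2.
    matrix = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(targets, predictions):
        matrix[t, p] += 1
    return matrix
```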
- This book about neural networks.
- This short explanation and implementation of backpropagation from towardsdatascience.
- Figure 1 from this paper showing how the gradient is calculated for each layer.
This project is licensed under the MIT License.