Multi-Layer Perceptrons and Back-Propagation; a Derivation and Implementation in Python

Artificial neural networks have regained popularity in machine learning circles with recent advances in deep learning. Deep learning techniques trace their origins back to the concept of back-propagation in multi-layer perceptron (MLP) networks, the topic of this post.

Multi-Layer Perceptron Networks for Regression

An MLP network consists of layers of artificial neurons connected by weighted edges. Neurons are denoted n_{ij} for the j-th neuron in the i-th layer of the MLP, counting from left to right and top to bottom. Inputs are fed into the leftmost layer and propagate through the network along weighted edges until reaching the final, or output, layer. An example of an MLP network is shown below in Figure 1.
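To make the forward propagation concrete, here is a minimal Python sketch of passing an input through such a network. It assumes sigmoid activations on the hidden layers and a linear output layer (a common choice for regression); the function names and layer sizes are illustrative, not taken from the post.

```python
import numpy as np

def init_layers(layer_sizes, rng=np.random.default_rng(0)):
    """One (weight matrix, bias vector) pair per layer of weighted edges."""
    return [(rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, layers):
    """Propagate input x left to right through the network."""
    a = x
    for i, (W, b) in enumerate(layers):
        z = W @ a + b
        # Sigmoid on hidden layers; identity on the output layer (regression).
        a = z if i == len(layers) - 1 else 1.0 / (1.0 + np.exp(-z))
    return a

# Example: a 2-4-1 network mapping a 2-dimensional input to a scalar output.
layers = init_layers([2, 4, 1])
print(forward(np.array([0.5, -1.0]), layers))
```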
