
Implementing the Perceptron Rule

2010 April 17
by Tedb0t

A perceptron is one of the simplest forms of neural network: a linear classifier.  It associates input patterns with output patterns, with the advantage that it is forgiving of noise in the input.  Being linear, it can only learn classes that are linearly separable; the classic XOR function, for example, is out of its reach.

The network consists of an input layer and an output layer (and optional “hidden” layers in between), with a weighted connection between every pair of nodes in adjacent layers.  To train a perceptron, you present an input pattern along with the output pattern it should learn to produce.  Then you step through each pair of connected nodes and modify the weight between them with this equation:

∆wᵢ = c (d − sign(∑ᵢ xᵢwᵢ)) xᵢ

…where wᵢ is the ith weight (real-valued), c is the learning constant (e.g. 0.1), d is the desired output for this node (1 or -1), and xᵢ is the input activation (1 or -1).  The trailing xᵢ factor moves each weight in the right direction depending on whether the input node in question “contributes” to the output or “inhibits” it.  sign() is given by:

sign(x) = 1 if x ≥ 0, else -1
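
Before digging into the library code, here’s a minimal, self-contained sketch of the rule in plain C++ (not part of Neuroduino — every name in it is illustrative).  It trains a single neuron, with a constant 1 input standing in for the threshold, to compute bipolar OR:

#include <stdio.h>

// Illustrative sketch, independent of Neuroduino.
// A single neuron learns bipolar OR; inputs and targets are -1 or 1.
// The third input is a constant 1, acting as a bias/threshold weight.

static int sign(double x) {
	return (x >= 0) ? 1 : -1;
}

int main(void) {
	const double c = 0.1;	// learning constant
	int x[4][3] = { {-1,-1,1}, {-1,1,1}, {1,-1,1}, {1,1,1} };
	int d[4]    = { -1, 1, 1, 1 };	// bipolar OR targets
	double w[3] = { 0.0, 0.0, 0.0 };

	for (int epoch = 0; epoch < 100; epoch++) {
		int mistakes = 0;
		for (int s = 0; s < 4; s++) {
			double sum = 0.0;
			for (int i = 0; i < 3; i++) sum += x[s][i] * w[i];
			int error = d[s] - sign(sum);	// -2, 0, or 2
			if (error != 0) mistakes++;
			for (int i = 0; i < 3; i++)
				w[i] += c * error * x[s][i];	// ∆wi = c (d - sign) xi
		}
		if (mistakes == 0) break;	// every sample classified correctly
	}

	for (int s = 0; s < 4; s++) {
		double sum = 0.0;
		for (int i = 0; i < 3; i++) sum += x[s][i] * w[i];
		printf("%2d OR %2d -> %2d\n", x[s][0], x[s][1], sign(sum));
	}
	return 0;
}

With a learning constant of 0.1 this converges in a couple of passes: once every sample is classified correctly the error term is 0, so training stops changing the weights.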

Here’s a C++ implementation from my Neuroduino Arduino library:

int Neuroduino::signThreshold(double sum){
	// threshold activation: fire (1) if the weighted sum reaches Theta
	if (sum >= _net.Theta) {
		return 1;
	} else {
		return -1;
	}
}
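
With _net.Theta set to 0, this is exactly the sign() function given above; a nonzero Theta shifts the firing threshold, which amounts to a fixed bias on the node.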

double Neuroduino::weightedSum(int l, int node){
	// weighted sum of the previous layer's outputs feeding this node
	int i;
	double currentWeight, sum = 0.0;

	for (i=0; i<_net.Layer[l-1]->Units; i++) {
		currentWeight = _net.Layer[l]->Weight[node][i];
		sum += currentWeight * _net.Layer[l-1]->Output[i];
	}

	return sum;
}
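
This is the ∑ᵢ xᵢwᵢ term from the rule: a dot product between the node’s incoming weights and the previous layer’s outputs.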

void Neuroduino::adjustWeights(int trainArray[]){
	int l, i, j;
	int in, out, error;
	int activation;	// activation of each "rightmost" node
	double delta;

	for (l=1; l<_numLayers; l++) {
		// cycle through each pair of adjacent layers
		for (i=0; i<_net.Layer[l]->Units; i++) {
			// "rightmost" layer:
			// calculate the current activation of this output node
			activation = signThreshold(weightedSum(l, i));
			out = trainArray[i];	// correct (desired) activation
			error = out - activation;	// the (d - sign) term: -2, 0, or 2

			for (j=0; j<_net.Layer[l-1]->Units; j++) {
				// "leftmost" layer
				in = _net.Layer[l-1]->Output[j];	// input activation x

				// the perceptron rule: ∆w = c (d - sign) x, with c = Eta
				delta = _net.Eta * in * error;
				_net.Layer[l]->Weight[i][j] += delta;
			}
		}
	}
}
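
To see the rule in action: suppose Eta is 0.1, the desired output is 1, the node’s current activation is -1, and one of its inputs is -1.  The error is 1 − (−1) = 2, so that weight changes by 0.1 × (−1) × 2 = −0.2.  Lowering the weight on a -1 input raises the weighted sum, nudging the node’s output toward the desired 1.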

void Neuroduino::simulateNetwork(){
	/*****
	 Calculate activations of each output node
	 *****/
	int l, j;

	for (l=_numLayers-1; l>0; l--) {
		// step backwards through layers
		// TODO: this will only work for _numLayers = 2!
		for (j=0; j < _net.Layer[l]->Units; j++) {
			// note: intermediate Output[] values are never written back,
			// hence the TODO above
			_output[j] = signThreshold(weightedSum(l, j));
		}
	}
}
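
After training, a call to simulateNetwork() leaves the network’s ±1 classification of whatever pattern is currently on the input layer in _output[] — the prediction step that complements adjustWeights() above.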
