The simplest form of a neural network: a single neuron.
The Perceptron is the ancestor of modern artificial neural networks. Developed by Frank Rosenblatt in 1958, it is a simple algorithm for supervised learning of binary classifiers: functions that decide whether an input, represented by a vector of numbers, belongs to a specific class or not.

Structurally, the perceptron is a single-layer neural network consisting of a single neuron. It takes multiple binary inputs, x1, x2, ..., and produces a single binary output. Rosenblatt introduced weights, w1, w2, ..., real numbers expressing the importance of the respective inputs to the output. The neuron's output, 0 or 1, is determined by whether the weighted sum of the inputs is below or above some threshold value. Like the weights, the threshold is a real number and a parameter of the neuron.

A key part of the perceptron is its learning rule, which adjusts the weights automatically from labeled training examples. Samples are presented to the perceptron one at a time, and the weights are updated after each one: a correct prediction leaves them unchanged, while a mistake nudges them toward the right answer. The process repeats until every training example is classified correctly, which the perceptron convergence theorem guarantees will eventually happen whenever the two classes are linearly separable. Both the decision rule and the update rule are sketched in code below.

That caveat is also the perceptron's fundamental limit: it can only learn linearly separable patterns, so a single perceptron cannot solve the famous XOR problem. Even so, its core concepts (weighted inputs, a summing function, a step-function activation, and a learning rule) remain the fundamental building blocks of the much more complex deep neural networks used today.
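To make the decision rule concrete, here is a minimal Python sketch of the forward pass; the names `inputs`, `weights`, and `threshold` are illustrative choices, not from any particular library:

```python
def perceptron_output(inputs, weights, threshold):
    """Rosenblatt's decision rule: output 1 if the weighted sum
    of the inputs exceeds the threshold, otherwise output 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0
```

In modern treatments the threshold is usually folded into the neuron as a bias term b = -threshold, so the rule reads "output 1 if w . x + b > 0"; the training sketch below uses that form.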
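The learning rule is just as compact. The sketch below is a minimal illustration rather than Rosenblatt's original formulation: it trains a perceptron on the AND function using the standard update w <- w + eta * (y - y_hat) * x, where the learning rate `eta` and the epoch cap are illustrative defaults:

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, max_epochs=100):
    """Present samples one at a time; nudge the weights after every
    misclassification; stop once a full pass makes no errors."""
    w = np.zeros(X.shape[1])
    b = 0.0  # bias, playing the role of a learned negative threshold
    for _ in range(max_epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if (w @ xi + b) > 0 else 0
            update = eta * (target - pred)  # zero when the prediction is correct
            w += update * xi
            b += update
            errors += int(update != 0)
        if errors == 0:  # every training example classified correctly
            break
    return w, b

# AND is linearly separable, so training converges:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if (w @ xi + b) > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```

Swap the targets for XOR, [0, 1, 1, 0], and the loop exhausts its epoch cap without ever reaching zero errors: no single line can separate XOR's classes, which is exactly the limitation noted above.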