Overview#
A Perceptron is a type of Artificial Neuron that takes several binary inputs, x1, x2, …, and produces a single binary output depending on whether the weighted sum of the inputs exceeds a threshold (the bias). Multilayer Perceptrons (MLPs) are sometimes colloquially referred to as "vanilla" Artificial Neural networks, especially when they have a single Hidden layer.
Perceptrons are no longer in common use in modern Artificial Neural networks, as more sophisticated activation functions are generally preferred.
Consider the Perceptron in the context of a binary classification task, where we refer to the two classes as 1 (positive class) and -1 (negative class) for simplicity. We can then define a mapping (net input) function z = wᵀx, the dot product of the weight vector w and the input vector x.
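The net input alone does not yet yield a class label; a standard way to complete the definition (consistent with the implementation below, and assuming the threshold is folded into the weights as a bias term w0, rather than quoted from the source) is the unit step decision function:

$$\phi(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ -1 & \text{otherwise,} \end{cases} \qquad z = w^{T}x + w_0$$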
Perceptron in Python#
import numpy as np


class Perceptron(object):
    """Perceptron classifier.

    Parameters
    ------------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.
    random_state : int
        Random number generator seed for random weight
        initialization.

    Attributes
    -----------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Number of misclassifications (updates) in each epoch.

    """

    def __init__(self, eta=0.01, n_iter=50, random_state=1):
        self.eta = eta
        self.n_iter = n_iter
        self.random_state = random_state

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number of
            samples and n_features is the number of features.
        y : array-like, shape = [n_samples]
            Target values.

        Returns
        -------
        self : object

        """
        rgen = np.random.RandomState(self.random_state)
        # Initialize weights to small random numbers; w_[0] is the bias unit.
        self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                # Perceptron learning rule: the update is zero when the
                # sample is already classified correctly.
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)
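To illustrate how the class is used, here is a minimal sketch with a made-up, linearly separable toy dataset; the variable names and values are illustrative assumptions, not taken from the source:

import numpy as np

# Hypothetical toy data: points with x0 + x1 > 1 are labelled 1, the rest -1.
X = np.array([[0.0, 0.2], [0.3, 0.4], [0.9, 0.8], [1.0, 0.7]])
y = np.array([-1, -1, 1, 1])

ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)

print(ppn.errors_)  # misclassifications per epoch; typically falls to 0 for separable data
print(ppn.predict(np.array([[0.1, 0.1], [0.95, 0.9]])))  # expected -1 and 1 for this toy data

Because the Perceptron only converges when the classes are linearly separable, errors_ is a useful check on whether training actually settled.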
More Information#
There might be more information for this subject on one of the following:
- [#1] Multilayer_perceptron
- based on information obtained 2017-11-24
