Explain how you would train a perceptron. Implement it in plain Python

Post questions related to neural networks
quantumadmin
Site Admin
Posts: 236
Joined: Mon Jul 17, 2023 2:19 pm

Explain how you would train a perceptron. Implement it in plain Python

Post by quantumadmin »

Training a perceptron involves adjusting its weights based on the input data to minimize classification errors. The perceptron algorithm is a simple type of linear classifier that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
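Concretely, for an input vector x with weight vector w and bias b, the perceptron predicts 1 if w · x + b >= 0 and 0 otherwise; training nudges w and b whenever this prediction disagrees with the true label.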

Steps to Train a Perceptron
  • Initialize Weights and Bias: Start with zeros or small random values for the weights and bias (the implementation below uses zeros).
  • Feed Input and Calculate Output: For each input, compute the weighted sum plus bias and apply the step function to get the predicted label.
  • Update Weights and Bias: Adjust the weights and bias in proportion to the error (the difference between the true and predicted labels).
  • Repeat: Iterate over the dataset until the weights converge or a stopping criterion is met (e.g., a maximum number of epochs).
Perceptron Algorithm

Here’s a step-by-step implementation in Python (NumPy is used for the vector math; a NumPy-free version is sketched at the end of this post):

Code: Select all

import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.01, n_iter=1000):
        self.learning_rate = learning_rate  # step size for weight updates
        self.n_iter = n_iter                # number of passes (epochs) over the data
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        self.bias = 0

        for _ in range(self.n_iter):
            for idx, x_i in enumerate(X):
                # Forward pass: weighted sum plus bias, then threshold
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_pred = self._step_function(linear_output)

                # Perceptron update rule: shift weights and bias to reduce the error
                update = self.learning_rate * (y[idx] - y_pred)
                self.weights += update * x_i
                self.bias += update

    def predict(self, X):
        # Apply the learned linear decision rule to one or more samples
        linear_output = np.dot(X, self.weights) + self.bias
        return self._step_function(linear_output)

    def _step_function(self, x):
        # Heaviside step: 1 if the input is >= 0, otherwise 0
        return np.where(x >= 0, 1, 0)

# Example usage:
if __name__ == "__main__":
    # Sample data (AND gate)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    
    # Initialize the perceptron
    perceptron = Perceptron(learning_rate=0.1, n_iter=10)
    perceptron.fit(X, y)
    
    # Predict
    predictions = perceptron.predict(X)
    print("Predictions:", predictions)
Explanation of the Code
1. Initialization (__init__):
  • learning_rate: The step size for weight updates.
  • n_iter: The number of iterations (epochs) over the training dataset.
2. Training (fit method):
  • Initialize the weights and bias to zero.
  • For each epoch, iterate over every sample in the dataset:
  • Compute the linear output (weighted sum plus bias).
  • Apply the step function to get the predicted label.
  • Update the weights and bias using the perceptron learning rule: update = learning_rate * (true_label - predicted_label); the weights change by update * x_i and the bias by update (a small worked example follows below).
3. Prediction (predict method):
  • Compute the linear output for the input.
  • Apply the step function to produce the final binary output (0 or 1).
4. Step Function (_step_function method):
  • Returns 1 if the input is greater than or equal to 0, otherwise returns 0.
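To make the update rule concrete, here is a small hand-traced sketch of the very first update on the AND data with learning_rate=0.1 (the weights and bias start at zero, so the first sample [0, 0] is misclassified as 1 because step(0) = 1):

Code: Select all

# First sample: x = [0, 0], true label y = 0
# linear_output = 0*0 + 0*0 + 0 = 0  ->  step(0) = 1, which is wrong
update = 0.1 * (0 - 1)                       # -0.1
weights = [0 + update * 0, 0 + update * 0]   # stays [0.0, 0.0] because x is all zeros
bias = 0 + update                            # becomes -0.1
# Repeating such small corrections over the epochs drives the training error to zero.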

Example Usage

The example usage demonstrates training the perceptron on a simple AND gate dataset and then predicting the output for the same inputs. Because the AND function is linearly separable, the perceptron learns to classify its outputs correctly.

This implementation provides a basic understanding of how a perceptron works and can be extended to more complex problems and datasets.
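Since the question asks for plain Python, here is a minimal sketch of the same perceptron without NumPy; the structure mirrors the class above, with the vector math written out using lists and loops (PlainPerceptron is just an illustrative name):

Code: Select all

class PlainPerceptron:
    def __init__(self, learning_rate=0.01, n_iter=1000):
        self.learning_rate = learning_rate
        self.n_iter = n_iter
        self.weights = []
        self.bias = 0.0

    def fit(self, X, y):
        n_features = len(X[0])
        self.weights = [0.0] * n_features
        self.bias = 0.0
        for _ in range(self.n_iter):
            for x_i, target in zip(X, y):
                y_pred = self._predict_one(x_i)
                # Perceptron update rule, written out element by element
                update = self.learning_rate * (target - y_pred)
                self.weights = [w + update * x for w, x in zip(self.weights, x_i)]
                self.bias += update

    def _predict_one(self, x_i):
        # Weighted sum plus bias, then the step function
        linear_output = sum(w * x for w, x in zip(self.weights, x_i)) + self.bias
        return 1 if linear_output >= 0 else 0

    def predict(self, X):
        return [self._predict_one(x_i) for x_i in X]


if __name__ == "__main__":
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 0, 0, 1]
    p = PlainPerceptron(learning_rate=0.1, n_iter=10)
    p.fit(X, y)
    print("Predictions:", p.predict(X))  # expected: [0, 0, 0, 1]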