https://github.com/ifrazaib/perceptrontraining
In this repo I have added the Perceptron training rule, a machine learning topic in which we use an activation function and a neural network, and update the weights whenever there is an error between the actual output and the output we expected.
- Host: GitHub
- URL: https://github.com/ifrazaib/perceptrontraining
- Owner: ifrazaib
- License: mit
- Created: 2024-06-14T14:14:48.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-07T14:19:31.000Z (about 1 year ago)
- Last Synced: 2024-12-27T15:12:53.622Z (9 months ago)
- Topics: perceptron-learning-algorithm
- Language: Python
- Homepage:
- Size: 52.7 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
## Perceptron Training Rule
## Overview
The Perceptron is one of the simplest types of artificial neural networks, used for binary classification tasks. This project demonstrates how to implement the Perceptron training rule, train a Perceptron on a dataset, and calculate the accuracy of the model.

## Features
- Perceptron Training: Implementation of the Perceptron training algorithm.
- Accuracy Calculation: Evaluate the model's performance by calculating the accuracy on a test dataset (see the short sketch after this list).
- Binary Classification: Applicable to binary classification problems.
- Customizable Parameters: Learning rate and number of iterations can be adjusted.
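As a minimal illustration of the accuracy calculation (a sketch with hypothetical variable names, assuming NumPy arrays of 0/1 labels; the repo's own code may differ):

```python
import numpy as np

# Accuracy = fraction of test samples whose predicted label matches the true one.
y_true = np.array([0, 1, 1, 0, 1])  # hypothetical test labels
y_pred = np.array([0, 1, 0, 0, 1])  # hypothetical model predictions
accuracy = np.mean(y_pred == y_true)
print(f"Accuracy: {accuracy:.2f}")  # 0.80 here
```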
## Contents
- Perceptron Training Rule
- Dataset
- Usage
- Example
- Contributing
## Perceptron Training Rule
The Perceptron training rule updates the weights based on the prediction error for each training sample. The rule is as follows:
- Initialize weights (including bias) to small random numbers.
- For each training sample:
- Compute the output using the current weights.
- Update the weights based on the error:
w_i ← w_i + η (y − ŷ) x_i

where
- w_i is the weight for feature i,
- η is the learning rate,
- y is the true label,
- ŷ is the predicted label, and
- x_i is the value of feature i.
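Putting the rule together, here is a minimal self-contained sketch of a perceptron trained this way. It is an illustration only: the class name, method names, and the step activation are assumptions, not necessarily the repository's actual implementation.

```python
import numpy as np

class Perceptron:
    """Minimal perceptron with the classic training rule.

    A sketch only -- names here are illustrative and are not taken
    from the repository's actual code.
    """

    def __init__(self, learning_rate=0.1, n_iterations=100):
        self.learning_rate = learning_rate  # eta in the update rule
        self.n_iterations = n_iterations
        self.weights = None
        self.bias = None

    def _activation(self, z):
        # Step activation: 1 if the weighted sum is >= 0, else 0.
        return np.where(z >= 0, 1, 0)

    def fit(self, X, y):
        n_samples, n_features = X.shape
        rng = np.random.default_rng(0)
        # Initialize weights (including bias) to small random numbers.
        self.weights = rng.normal(scale=0.01, size=n_features)
        self.bias = rng.normal(scale=0.01)
        for _ in range(self.n_iterations):
            for x_i, y_i in zip(X, y):
                # Compute the output using the current weights.
                y_hat = self._activation(np.dot(x_i, self.weights) + self.bias)
                # Perceptron rule: w_i <- w_i + eta * (y - y_hat) * x_i
                update = self.learning_rate * (y_i - y_hat)
                self.weights += update * x_i
                self.bias += update  # bias acts as a weight on a constant input of 1
        return self

    def predict(self, X):
        return self._activation(X @ self.weights + self.bias)
```

A toy usage run on the (linearly separable) logical AND gate, reusing the accuracy calculation shown earlier:

```python
# Hypothetical usage: learn the AND gate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
model = Perceptron(learning_rate=0.1, n_iterations=20).fit(X, y)
print(np.mean(model.predict(X) == y))  # expected: 1.0 on this separable toy set
```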
