https://github.com/mohammedsaqibms/regularization
This repository implements a 3-layer neural network with L2 and Dropout regularization using Python and NumPy. It focuses on reducing overfitting and improving generalization. The project includes forward/backward propagation, cost functions, and decision boundary visualization. Inspired by the Deep Learning Specialization from deeplearning.ai.
- Host: GitHub
- URL: https://github.com/mohammedsaqibms/regularization
- Owner: MohammedSaqibMS
- Created: 2024-09-11T17:42:01.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-02-19T15:50:14.000Z (3 months ago)
- Last Synced: 2025-02-19T16:39:33.981Z (3 months ago)
- Topics: deep-learning, dropout-regularization, gradient-descent, l2-regularization, model-training, neural-network-architecture, overfitting-prevention, performance-optimization, regularization-techniques
- Language: Jupyter Notebook
- Homepage:
- Size: 3.01 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# 🎯 Neural Network Regularization with L2 and Dropout
Welcome to this project on **Regularization** techniques in neural networks! In this repository, we implement and explore two key regularization methods, **L2 Regularization** and **Dropout**, to improve model generalization and performance. Below, you'll find a detailed explanation of the code, along with its key components and results.
Let's dive into the project!
## 🧠 Introduction to Regularization
Regularization is essential for improving the generalization ability of machine learning models. It helps prevent **overfitting**, ensuring that the model performs well not only on the training data but also on unseen test data. In this project, we focus on:
- **L2 Regularization:** Adds a penalty proportional to the sum of the squared weights to the cost function, which discourages large weight values.
- **Dropout Regularization:** Randomly deactivates a fraction of neurons during each training pass so the network cannot become overly reliant on specific neurons. Illustrative sketches of both techniques follow below.
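As a quick illustration of how these two techniques typically look in NumPy (the function names and signatures here are illustrative sketches, not the repository's actual code):

```python
import numpy as np

def l2_regularized_cost(cross_entropy_cost, weights, lambd, m):
    """Add the L2 penalty (lambd / (2m)) * sum(||W||^2) to the base cost."""
    l2_penalty = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy_cost + l2_penalty

def dropout_forward(A, keep_prob, rng):
    """Inverted dropout: zero out units with probability 1 - keep_prob, then rescale."""
    D = rng.random(A.shape) < keep_prob   # random boolean mask of kept units
    A_dropped = (A * D) / keep_prob       # rescale so expected activation is unchanged
    return A_dropped, D                   # cache the mask D for backpropagation

# Example usage with toy activations
rng = np.random.default_rng(42)
A1 = rng.standard_normal((4, 5))          # activations: 4 hidden units, 5 examples
A1_drop, D1 = dropout_forward(A1, keep_prob=0.8, rng=rng)
```

During backpropagation, the same mask `D` is applied to the incoming gradient (and divided by `keep_prob` again), while the L2 penalty contributes an extra `(lambd / m) * W` term to each weight gradient.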
## 🔗 Acknowledgements
This project was developed as part of the **Deep Learning Specialization** by [DeepLearning.AI](https://www.deeplearning.ai/courses/deep-learning-specialization/). Special thanks to their incredible team for providing the foundational content.