https://github.com/mohammedsaqibms/regularization

This repository implements a 3-layer neural network with L2 and Dropout regularization using Python and NumPy. It focuses on reducing overfitting and improving generalization. The project includes forward/backward propagation, cost functions, and decision boundary visualization. Inspired by the Deep Learning Specialization from deeplearning.ai.

deep-learning dropout-regularization gradient-descent l2-regularization model-training neural-network-architecture overfitting-prevention performance-optimization regularization-techniques


# 🎯 Neural Network Regularization with L2 and Dropout

Welcome to this project on **Regularization** techniques in neural networks! In this repository, we implement and explore two key regularization methods, **L2 Regularization** and **Dropout**, to improve model generalization and performance. Below, you'll find a detailed explanation of the code, along with its key components and results.

Let's dive into the project!

## 🧠 Introduction to Regularization

Regularization is essential for improving the generalization ability of machine learning models. It helps prevent **overfitting**, ensuring that the model performs well not only on the training data but also on unseen test data. In this project, we focus on:

- **L2 Regularization:** Adds a penalty proportional to the sum of the squared weights to the cost, which discourages large weight values.
- **Dropout Regularization:** Randomly turns off a fraction of neurons during each training pass to prevent the network from becoming too reliant on specific neurons. (Both ideas are sketched in NumPy right after this list.)
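
Since the project is built on Python and NumPy, here is a minimal sketch of both ideas. The function names, the `lambd` and `keep_prob` hyperparameters, and the toy layer shapes are assumptions for illustration, not necessarily this repository's exact API.

```python
import numpy as np

# Illustrative sketch only: names and shapes are assumptions, not the repo's exact API.

def l2_regularized_cost(cross_entropy_cost, weights, lambd, m):
    """L2: add (lambd / (2*m)) * sum of squared weights to the unregularized cost."""
    l2_penalty = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy_cost + l2_penalty

def dropout_layer_forward(A_prev, W, b, keep_prob, rng):
    """Inverted dropout: one hidden-layer forward step at training time."""
    Z = W @ A_prev + b
    A = np.maximum(0, Z)                  # ReLU activation
    D = rng.random(A.shape) < keep_prob   # keep each unit with probability keep_prob
    A = (A * D) / keep_prob               # zero out dropped units, rescale the rest
    return A, D                           # the mask D is cached and reused in backprop

# Toy usage with assumed shapes (2 input features, 3 hidden units, 5 examples)
rng = np.random.default_rng(0)
A0 = rng.standard_normal((2, 5))
W1, b1 = rng.standard_normal((3, 2)) * 0.1, np.zeros((3, 1))
A1, D1 = dropout_layer_forward(A0, W1, b1, keep_prob=0.8, rng=rng)
cost = l2_regularized_cost(0.65, [W1], lambd=0.7, m=5)
```

Dividing by `keep_prob` (inverted dropout) keeps the expected activation scale the same as at test time, when no units are dropped; during backpropagation the same mask `D` is applied to the corresponding gradients.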

## 🔗 Acknowledgements

This project was developed as part of the **Deep Learning Specialization** by [DeepLearning.AI](https://www.deeplearning.ai/courses/deep-learning-specialization/). Special thanks to their incredible team for providing the foundational content.