# Deep Learning: Underfitting, Overfitting & Regularization

## 📌 Core Libraries

- **NumPy** → Arrays & matrix operations
- **Matplotlib** → Visualization
- **Pandas** → Data handling (CSV, DataFrames)
- **TensorFlow & Keras** → Neural networks
- **Sequential** → Linear stack of layers
- **Dense** → Fully connected layer
- **Dropout** → Reduces overfitting
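
For reference, these map to the following imports (a minimal setup sketch):

```python
import numpy as np                 # arrays & matrix operations
import pandas as pd                # CSVs & DataFrames
import matplotlib.pyplot as plt    # visualization

import tensorflow as tf
from tensorflow.keras.models import Sequential   # linear stack of layers
from tensorflow.keras.layers import Dense, Dropout
```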

---

## ⚖️ Underfitting vs Overfitting

### 🔹 Underfitting

Model too simple → poor performance on train & test.
**Fix:** Add layers/neurons, train longer, tune the learning rate, reduce regularization.

### 🔹 Overfitting

Model too complex → memorizes the training data, fails on the test set.
**Fix:** Dropout, L2 regularization, early stopping, data augmentation, reduce complexity.
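
A practical way to tell the two apart is to compare training and validation curves. A minimal sketch, using hypothetical toy data (`X_train` and `y_train` are placeholders for your own dataset):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Hypothetical toy data; replace with your own dataset.
X_train = np.random.rand(1000, 20)
y_train = (X_train.sum(axis=1) > 10).astype(int)

model = Sequential([
    Input(shape=(20,)),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hold out 20% of the data so we can watch generalization.
history = model.fit(X_train, y_train, epochs=50,
                    validation_split=0.2, verbose=0)

# Both curves low → underfitting; a widening gap → overfitting.
plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.legend()
plt.show()
```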

---

## 📊 Comparison

| Feature | Underfitting 🟡 | Good Fit 🟢 | Overfitting 🔴 |
| ----------------- | --------------- | ----------- | -------------- |
| Training Accuracy | Low | High | Very High |
| Test Accuracy | Low | High | Low |
| Model Complexity | Too simple | Balanced | Too complex |
| Generalization | Poor | Good | Poor |

---

## 🎯 Dropout

- Randomly drops neurons (prob _p_) during training.
- Prevents reliance on specific neurons → robust features.
- **Training:** surviving activations are scaled by `1/(1-p)` (inverted dropout).
- **Testing:** dropout is turned off; no extra scaling is needed because it was already applied during training.
- Best suited to large models; avoid heavy dropout on small datasets.
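
Keras implements exactly this inverted-dropout behavior in its `Dropout` layer, where `rate` is the fraction of units dropped. A minimal sketch:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Input

model = Sequential([
    Input(shape=(20,)),
    Dense(128, activation="relu"),
    Dropout(0.5),   # drop 50% of units on each training step
    Dense(64, activation="relu"),
    Dropout(0.3),   # lighter dropout deeper in the network
    Dense(1, activation="sigmoid"),
])
# Dropout is active only during training (e.g. inside model.fit);
# model.predict runs with dropout disabled and no extra rescaling.
```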

---

## 🛠️ Ways to Reduce Overfitting

- Dropout
- L1/L2 Regularization
- Early Stopping
- Data Augmentation
- Batch Normalization
- Reduce Model Complexity
- Cross-Validation
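
A sketch combining several of these techniques in one Keras model. The layer sizes, penalty strength, and patience values below are illustrative, not tuned:

```python
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, Input
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Input(shape=(20,)),
    Dense(64, activation="relu",
          kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight penalty
    BatchNormalization(),
    Dropout(0.3),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stop when validation loss stalls and keep the best weights seen.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

# model.fit(X_train, y_train, validation_split=0.2,
#           epochs=100, callbacks=[early_stop])
```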

---

## ✅ Rule of Thumb

- **Underfitting:** Increase complexity, train longer.
- **Overfitting:** Add regularization, dropout, early stopping, augmentation.

📌 **Goal:** Achieve balance → model should **generalize well** to unseen data.

# Keras Tuner: Hyperparameter Tuning of Models

### For an in-depth walkthrough of KerasTuner, refer to the [KerasTuner Readme](./2-kerasTuner/Readme.md).
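
As a quick taste of the workflow (the linked readme covers it in depth), here is a minimal, illustrative `keras_tuner` sketch; the search space, input shape, and trial count are assumptions, not the repo's actual configuration:

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),  # hypothetical input size
        tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32),
                              activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=5, directory="tuning",
                        project_name="demo")
# tuner.search(X_train, y_train, epochs=10, validation_split=0.2)
```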

# Embeddings with Keras and TensorFlow

### For an in-depth walkthrough of embeddings, refer to the [Embeddings Readme](./5-WordEmbeddings/Readme.md).
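
As a quick taste (details in the linked readme), a minimal sketch of a Keras `Embedding` layer mapping integer word indices to dense vectors; the vocabulary size and dimensions are illustrative:

```python
import numpy as np
import tensorflow as tf

vocab_size, embed_dim = 1000, 8   # illustrative sizes
embedding = tf.keras.layers.Embedding(input_dim=vocab_size,
                                      output_dim=embed_dim)

# A batch of two "sentences", each a sequence of 4 word indices.
word_ids = np.array([[3, 17, 256, 4], [9, 0, 42, 7]])
vectors = embedding(word_ids)
print(vectors.shape)  # (2, 4, 8): one 8-d vector per word
```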