Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nickklos10/compressive-strenght-prediction
This project predicts concrete compressive strength using a neural network regression model built with Keras.
- Host: GitHub
- URL: https://github.com/nickklos10/compressive-strenght-prediction
- Owner: nickklos10
- Created: 2024-10-17T19:55:36.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2024-11-04T23:18:08.000Z (14 days ago)
- Last Synced: 2024-11-05T00:22:19.554Z (14 days ago)
- Topics: jupyter-notebook, keras, matplotlib, numpy, pandas, python, scikit-learn
- Language: Jupyter Notebook
- Homepage:
- Size: 74.2 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Concrete Compressive Strength Prediction using Keras
## Project Overview
This project aims to predict the **compressive strength** of concrete using a regression model built with the **Keras** deep learning library. Predicting concrete compressive strength is vital for ensuring the reliability and safety of concrete structures, and accurate predictions can help optimize material use and reduce costs.
### Key Features:
- Data sourced from [UCI Machine Learning Repository](https://cocl.us/concrete_data).
- Built a **neural network regression model** using Keras.
- Implemented data preprocessing techniques including **normalization**.
- Evaluated model performance using **Mean Squared Error (MSE)** and **Root Mean Squared Error (RMSE)**.
---

## Dataset Description
The dataset contains information on the composition of concrete mixes and their corresponding compressive strength, measured in megapascals (MPa).
### Features (Predictors):
- **Cement** (kg in a m³ mixture)
- **Blast Furnace Slag** (kg in a m³ mixture)
- **Fly Ash** (kg in a m³ mixture)
- **Water** (kg in a m³ mixture)
- **Superplasticizer** (kg in a m³ mixture)
- **Coarse Aggregate** (kg in a m³ mixture)
- **Fine Aggregate** (kg in a m³ mixture)
- **Age** (days)

### Target Variable:
- **Strength**: Concrete compressive strength (MPa).
---
## Installation
### Prerequisites
- **Python 3.7 or higher**
- **pip** package manager

### Clone the Repository
```bash
git clone https://github.com/yourusername/Compressive-Strength-Prediction.git
cd Compressive-Strength-Prediction
```

## Usage
### Running the Jupyter Notebook
1. Navigate to the notebooks directory:
```bash
cd notebooks
```
2. Launch Jupyter Notebook:
```bash
jupyter notebook
```
3. Open `Concrete_Strength_Model.ipynb` and run the cells sequentially to execute the project.

## Methodology
1. Data Preparation
- `Import Libraries`: Essential libraries such as pandas, numpy, matplotlib, tensorflow.keras, and scikit-learn are imported for data manipulation, visualization, model building, and evaluation.
- `Load Data`: The dataset is loaded from the provided URL using `pandas.read_csv()` (see the sketch after this list).
- `Data Inspection`: The first few rows and statistical summaries are examined to understand the data distribution and check for anomalies.
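A minimal sketch of this step, assuming the dataset URL listed under Key Features and standard pandas calls:

```python
import pandas as pd

# Load the concrete dataset from the URL listed under Key Features.
concrete_data = pd.read_csv("https://cocl.us/concrete_data")

# Inspect the first rows and a statistical summary to check for anomalies.
print(concrete_data.head())
print(concrete_data.describe())
```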
2. Exploratory Data Analysis
- `Missing Values`: Checked for any missing or null values to ensure data quality.
- `Statistical Summary`: Analyzed mean, standard deviation, and other statistical metrics to understand feature distributions.
- `Visualization`: Plotted histograms, scatter plots, and correlation matrices to visualize relationships between features and the target variable (see the sketch after this list).
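A possible sketch of these checks, continuing from the `concrete_data` frame loaded above:

```python
import matplotlib.pyplot as plt

# Count missing values per column to confirm data quality.
print(concrete_data.isnull().sum())

# Histograms of every column to inspect feature distributions.
concrete_data.hist(figsize=(12, 10), bins=30)
plt.tight_layout()
plt.show()

# Pairwise correlations between predictors and the target.
print(concrete_data.corr())
```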
3. Data Preprocessing
- `Feature Selection`: Selected relevant predictor variables and the target variable (Strength).
- `Normalization`: Applied StandardScaler to normalize the feature data, ensuring that all features contribute equally to model training.
- `Train-Test Split`: Split the data into training and testing sets (70% training, 30% testing) to evaluate model performance on unseen data (see the sketch after this list).
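A sketch of the preprocessing step, assuming the target column is named `Strength` as in the UCI file (the `random_state` is an arbitrary choice for reproducibility):

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Separate predictors from the target.
X = concrete_data.drop(columns=["Strength"])
y = concrete_data["Strength"]

# Normalize features so they contribute on a comparable scale.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# 70% training / 30% testing split for evaluation on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.3, random_state=42
)
```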
4. Building the Regression Model
- `Model Architecture`: Constructed a Sequential neural network with:
  - `Input Layer`: Corresponding to the number of predictors.
  - `Hidden Layers`: Two hidden layers with 50 neurons each and ReLU activation.
  - `Output Layer`: Single neuron with linear activation for regression output.
- `Compilation`: Used the `adam` optimizer and `mean_squared_error` as the loss function (see the sketch after this list).
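A minimal sketch of this architecture and its compilation, using `tensorflow.keras` and the `X_train` matrix from the preprocessing sketch:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Two hidden layers of 50 ReLU units; single linear output for regression.
model = Sequential([
    Input(shape=(X_train.shape[1],)),
    Dense(50, activation="relu"),
    Dense(50, activation="relu"),
    Dense(1),  # linear activation by default
])

model.compile(optimizer="adam", loss="mean_squared_error")
model.summary()
```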
5. Training the Model
- `Model Training`: Trained the model for 100 epochs with a batch size of 10, using 20% of the training data for validation.
- `Training Visualization`: Plotted training and validation loss over epochs to monitor learning progress and detect overfitting or underfitting (see the sketch after this list).
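A sketch of the training call and the loss-curve plot, continuing from the model defined above:

```python
import matplotlib.pyplot as plt

# 100 epochs, batch size 10, 20% of the training data held out for validation.
history = model.fit(
    X_train, y_train,
    epochs=100,
    batch_size=10,
    validation_split=0.2,
    verbose=0,
)

# Plot training vs. validation loss to spot over- or underfitting.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("MSE loss")
plt.legend()
plt.show()
```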
6. Evaluating the Model
- `Predictions`: Generated predictions on the test set.
- `Performance Metrics`: Calculated Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) to quantify model accuracy (see the sketch after this list).
- `Result Interpretation`: Analyzed whether the obtained MSE is acceptable based on the dataset's context and potential application requirements.
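A sketch of this evaluation step, using scikit-learn for the metrics:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Predict on the held-out test set.
y_pred = model.predict(X_test).flatten()

# MSE and RMSE; RMSE is in MPa, the same units as the target.
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
print(f"MSE:  {mse:.3f}")
print(f"RMSE: {rmse:.3f} MPa")
```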
---
## Results
After training the regression model, the following performance metrics were obtained:

- `Mean Squared Error (MSE)` on Test Data: 36.436
- `Root Mean Squared Error (RMSE)`: ≈6.04 MPa

## Interpretation

- `MSE of 36.436`: Indicates the average squared difference between the predicted and actual compressive strength values.
- `RMSE of 6.04 MPa`: Provides an error metric in the same units as the target variable, suggesting that on average, predictions deviate by approximately 6.04 MPa from actual values.
- `Visual Analysis`: The loss curves suggest the model is performing well without significant overfitting or underfitting. The scatter plot of predictions vs. actual values shows a reasonable alignment, indicating good predictive capability.