# signal_processing-audio-classification

A neural network model for classifying jet engine sounds as on/off, aimed at improving cost-effective fleet management.

## Aim

The primary goal of this project is to develop an audio classification model using TensorFlow and Keras.
The model classifies each audio sample as either True (1) or False (0) according to whether the target sound pattern is detected in the audio data.

## Data

The project uses audio data collected from various sources. Each audio sample is labeled True (1) or False (0) based on the presence of the target sound pattern. Data collection involves reading audio files and extracting relevant features for training the classification model.

## Files in the Repository

The repository contains the following key files:

- `README.md`: this documentation file, providing an overview of the project.
- `jet-engine-audio-classification.ipynb`: Jupyter notebook covering data preprocessing, model development, and evaluation.
- `jet-engine-audio-classification.py`: Python script version of the same workflow.

## Dependencies

To run the project, install the following third-party packages (preferably in a virtual environment, to avoid conflicts with system packages):

- `numpy`: numerical operations and array handling.
- `pandas`: data manipulation and DataFrames.
- `matplotlib`: data visualization.
- `seaborn`: enhanced data visualization.
- `tensorflow`: deep learning model development, including `tensorflow.keras.models.Sequential` and `tensorflow.keras.layers` for building the network.
- `tensorflow-io`: audio data handling.
- `keras-tuner`: hyperparameter tuning.
- `librosa`: audio feature extraction.
- `scikit-learn`: `train_test_split` for data splitting; `accuracy_score` and `recall_score` for evaluation.
- `imbalanced-learn`: `ClusterCentroids` for undersampling to address class imbalance.
- `tqdm`: progress bars for monitoring loops.
- `IPython`: `IPython.display` for playing audio samples in the notebook.

The code also uses the standard-library modules `os`, `glob`, `datetime`, and `itertools.cycle`, which require no installation.

## Handling Data Imbalance

The project addresses class imbalance in the training data with undersampling. This keeps the model from becoming biased toward the majority class, so it can classify both True and False samples effectively.

## Model Design

The audio classification model is built with TensorFlow and Keras. The architecture consists of neural network layers for feature extraction and classification, and hyperparameter tuning is used to optimize performance.

## Model Results

Evaluated on a held-out test set, the model achieves:

- Best validation accuracy during training: 0.9286
- Test accuracy: 1.0

These results suggest the model separates True and False audio samples reliably, although a perfect test score is worth re-checking on a larger held-out set.

## Possible Modifications/Improvements

While the current model achieves high accuracy, there is always room for improvement.
Potential modifications and enhancements include:

- Exploring more complex neural network architectures.
- Incorporating additional audio features for improved classification.
- Experimenting with different undersampling and data augmentation techniques.
- Fine-tuning hyperparameters for further optimization.
- Scaling the model for deployment on mobile devices or web applications.
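For reference, a minimal version of the model-design and evaluation flow described above could be sketched as follows; the layer sizes, the 8-feature input, and the synthetic data are illustrative assumptions, not the project's exact architecture:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8)).astype("float32")  # placeholder features
y = (X[:, 0] > 0).astype("int32")                # placeholder binary labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Small fully connected binary classifier (sizes are assumptions).
model = Sequential([
    Input(shape=(8,)),
    Dense(32, activation="relu"),
    Dropout(0.2),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),  # on/off probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=16, verbose=0)

# Evaluate with the same metrics the project uses.
y_pred = (model.predict(X_test, verbose=0) > 0.5).astype("int32").ravel()
print("accuracy:", accuracy_score(y_test, y_pred))
print("recall:  ", recall_score(y_test, y_pred))
```

In the actual project, `X` would come from features extracted with `librosa`/`tensorflow_io`, and the layer sizes would be chosen by `keras_tuner` rather than fixed by hand.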