{"id":13480926,"url":"https://github.com/vyassu/DeepSentiment","last_synced_at":"2025-03-27T11:31:21.464Z","repository":{"id":201861690,"uuid":"51558817","full_name":"vyassu/DeepSentiment","owner":"vyassu","description":"Speech Emotion Recognition using FFT and SVM","archived":false,"fork":false,"pushed_at":"2017-02-21T12:20:21.000Z","size":1073,"stargazers_count":79,"open_issues_count":11,"forks_count":39,"subscribers_count":9,"default_branch":"master","last_synced_at":"2024-08-01T17:23:59.486Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vyassu.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2016-02-12T00:44:19.000Z","updated_at":"2024-02-02T11:50:53.000Z","dependencies_parsed_at":null,"dependency_job_id":"ac91b756-9f16-4848-bdb0-f9f339902640","html_url":"https://github.com/vyassu/DeepSentiment","commit_stats":null,"previous_names":["vyassu/deepsentiment"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vyassu%2FDeepSentiment","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vyassu%2FDeepSentiment/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vyassu%2FDeepSentiment/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vyassu%2FDeepSentiment/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vyassu","download_url":"https://codeload.github.com/vyassu/DeepSentiment/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https:/
/github.com","kind":"github","repositories_count":222239471,"owners_count":16953955,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T17:00:46.630Z","updated_at":"2024-10-30T14:31:09.734Z","avatar_url":"https://github.com/vyassu.png","language":"Python","readme":"# DeepSentiment\nSpeech Emotion Recognition using Fast Fourier Transform and Support Vector Machine\n\nThis module extracts emotional components of human speech, such as pitch and loudness, and uses them to identify the emotional state of the speaker. Support Vector Machines are used to classify the extracted features into emotional states such as Anger, Sadness, Fear, Happiness and Neutral. Some of these states overlap, reducing the precision with which the emotional state can be deciphered, so we have also incorporated text-based sentiment recognition to improve prediction precision. We used the PySpark (Apache Spark) library to develop the model.\n\n## Prerequisites\n1.) The following Python modules need to be installed to run the Standalone component:\n```\nsudo pip install numpy\nsudo pip install scipy\nsudo pip install pandas\nsudo pip install SpeechRecognition\nsudo pip install -U scikit-learn\nsudo pip install findspark\nsudo pip install flask\nsudo pip install analyse\nsudo pip install flask_cors\n```\nNote: Additional prerequisite libraries may need to be installed before the modules listed above.\n\n2.) 
Follow the instructions at the [link](http://aubio.org/) to install Aubio (the pitch extraction library).\n\n3.) If you want to train your own model, install the latest version of [Apache Spark](http://spark.apache.org/downloads.html) and use the code inside Spark to train the model.\n\n## Downloads\nClone the repository with the commands below and execute the bash script:\n```\ngit clone https://github.com/vyassu/DeepSentiment.git\ncd DeepSentiment/Code/StandAlone\nchmod 755 script.sh\n./script.sh\n```\n\n## Test and Run\n\nThere are two ways to run the program:\n\n1.) Use the HTML/CSS user interface to record your voice or upload a WAV file. Paste the following URL in your browser:\n```\nhttp://localhost:5000/deepsentiment\n```\n2.) Run the following command and follow the directions for command-line testing:\n```\npython Controller.py\n```\n\n## Note\nThe voice-recording feature is still in development!\n","funding_links":[],"categories":["Behavioral","Related"],"sub_categories":["Video-games"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvyassu%2FDeepSentiment","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvyassu%2FDeepSentiment","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvyassu%2FDeepSentiment/lists"}