{"id":20419161,"url":"https://github.com/aaaastark/false-data-injection-attack","last_synced_at":"2025-09-24T06:32:00.659Z","repository":{"id":179677691,"uuid":"550181375","full_name":"aaaastark/False-Data-Injection-Attack","owner":"aaaastark","description":"False Data Injection Attack (FDIA) with Long Short Term Memory (LSTM) Model using Python","archived":false,"fork":false,"pushed_at":"2023-11-01T18:20:11.000Z","size":169,"stargazers_count":6,"open_issues_count":0,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-11-15T06:36:21.221Z","etag":null,"topics":["adversarial-attacks","data-science","data-visualization","deep-learning","false-data-injection","false-data-injection-attack","injection-attack","keras","lstm","lstm-neural-networks","lstm-sentiment-analysis","machine-learning","matplotlib","numpy","pandas","python","seaborn","sklearn","tensorflow","time-series-analysis"],"latest_commit_sha":null,"homepage":"https://github.com/aaaastark/False-Data-Injection-Attack","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/aaaastark.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2022-10-12T10:33:10.000Z","updated_at":"2024-07-28T12:06:21.000Z","dependencies_parsed_at":null,"dependency_job_id":"ecc8fcd2-2d0d-49ef-9639-2acc1159d0c4","html_url":"https://github.com/aaaastark/False-Data-Injection-Attack","commit_stats":null,"previous_names":["aaaastark/false-data-injection-attack-with-lstm-using-python"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aaaastark%2FFalse-Data-Inject
ion-Attack","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aaaastark%2FFalse-Data-Injection-Attack/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aaaastark%2FFalse-Data-Injection-Attack/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aaaastark%2FFalse-Data-Injection-Attack/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/aaaastark","download_url":"https://codeload.github.com/aaaastark/False-Data-Injection-Attack/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":234048421,"owners_count":18771476,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-attacks","data-science","data-visualization","deep-learning","false-data-injection","false-data-injection-attack","injection-attack","keras","lstm","lstm-neural-networks","lstm-sentiment-analysis","machine-learning","matplotlib","numpy","pandas","python","seaborn","sklearn","tensorflow","time-series-analysis"],"created_at":"2024-11-15T06:36:09.348Z","updated_at":"2025-09-24T06:32:00.251Z","avatar_url":"https://github.com/aaaastark.png","language":null,"readme":"# False Data Injection Attack (FDIA) with Long Short Term Memory (LSTM) Model using Python\n\n#### Get Started with Relevant Project Implementation\n- If you're looking for assistance with a project implementation that aligns with your needs, feel free to get in touch with us on [LinkedIn](https://www.linkedin.com/in/a-a-a-a-stark-69696617b/).\n- To discuss your 
project implementation needs, please send an email to [4444stark@gmail.com](mailto:4444stark@gmail.com).\n- Thank you for considering our services. We look forward to working with you!\n\nSmart grids' dependence on advanced information and communication technology increases their vulnerability to cyber-attacks. Recent research on unobservable false data injection attacks (FDIAs) reveals a high risk to secure system operation, since these attacks can bypass current bad data detection mechanisms. To mitigate this risk, this project proposes a data-driven, learning-based Long Short-Term Memory (LSTM) algorithm for detecting unobservable FDIAs in such systems.\n\n![0](https://user-images.githubusercontent.com/74346775/195329042-38bf19a8-688b-4a65-bd02-1b65d15552a7.PNG)\n\n## Time Series: Adversarial Attacks Data (Dataset)\n#### Libraries used in this Project:\n- keras\n- sklearn\n- tensorflow\n- matplotlib\n- seaborn\n- pandas\n- numpy\n\n#### Display Dataset: Adversarial Attacks Data (Time Series)\n![1](https://user-images.githubusercontent.com/74346775/195321679-5234f664-f402-4922-85b1-65ec8e94b391.PNG)\n\n#### Plot Dataset: Adversarial Attacks Data (Time Series)\n![2](https://user-images.githubusercontent.com/74346775/195321921-4f907706-411b-469f-80cb-1b04014dcbf6.PNG)\n\n#### The Accuracy of the LSTM Model (Without FDIA) is Plotted\n![3_0](https://user-images.githubusercontent.com/74346775/195323028-0b02e5cf-c89e-4d62-8d3c-6d601b6e047b.PNG)\n\n#### The Loss and Accuracy of the LSTM Model (Without FDIA) are Plotted\n![3_1](https://user-images.githubusercontent.com/74346775/195322800-a7eff69b-19ce-44ef-8da9-5f97554bb1c4.PNG)\n![3_2](https://user-images.githubusercontent.com/74346775/195322814-8479ff5f-50f8-4e8e-912b-bfcb0e7838f5.PNG)\n\n#### Display Dataset With FDIA Attack: Adversarial Attacks Data (Time Series)\n![4](https://user-images.githubusercontent.com/74346775/195323718-ee8db081-629f-4d3a-a031-6d30ae7f67a9.PNG)\n\n#### The Accuracy of the LSTM 
Model (With FDIA) is Plotted\n![5_0](https://user-images.githubusercontent.com/74346775/195323836-5304a4b2-a74c-4202-b02d-bce57efa8e97.PNG)\n\n#### The Loss and Accuracy of the LSTM Model (With FDIA) are Plotted\n![4_1](https://user-images.githubusercontent.com/74346775/195323915-0f127e1c-b71d-4c1e-8345-f0fc5a63a04a.PNG)\n![4_2](https://user-images.githubusercontent.com/74346775/195323935-4990e132-1644-41a1-b60c-00adcbc2e704.PNG)\n\n#### Comparison of Models: Normal LSTM Model and FDIA LSTM Model\n![6_1](https://user-images.githubusercontent.com/74346775/195324493-fb59355f-141e-4c4f-9d30-df301a62146a.PNG)\n![6_2](https://user-images.githubusercontent.com/74346775/195324495-6d713169-c232-4aef-bafa-24400bfc875f.PNG)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faaaastark%2Ffalse-data-injection-attack","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faaaastark%2Ffalse-data-injection-attack","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faaaastark%2Ffalse-data-injection-attack/lists"}