LLM Security for Smart Grid
https://github.com/jiangnan3/smartgrid-llm
- Host: GitHub
- URL: https://github.com/jiangnan3/smartgrid-llm
- Owner: jiangnan3
- Created: 2024-05-09T09:20:17.000Z
- Default Branch: main
- Last Pushed: 2024-11-08T07:01:34.000Z
- Last Synced: 2024-11-08T07:36:16.358Z
- Language: Python
- Size: 197 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- Awesome-LLM4Security - smartgrid-llm
README
# smartgrid-llm
This repo provides the source code and data associated with the following paper:
**Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation**
**by Jiangnan Li, Yingyuan Yang, and Jinyuan Sun**
The project structure is shown below. **prompt_injection.py** and **prompt_extraction.py** simulate the Bad Data Injection attack and the Domain Knowledge Extraction attack, respectively.
An OpenAI API token is needed to run the OpenAI model evaluations, and a Hugging Face LLaMA-3 endpoint and token are needed to run the Meta LLaMA-3 evaluation. Add them to **credential.py** (see the sketch after the project tree).
```bash
smartgrid-llm/
├── SGLLM/
│ ├── attack.py
│ ├── llms.py
│ ├── utilities.py
├── data/
│ ├── csv_raw/
│ │ ├── data_binary_classification.csv
│ │ ├── meter_raw.csv
│ ├── prompt-extraction/
│ │ ├── extraction_sys_prompt.txt
│ │ ├── meter_data.json
│ ├── prompt-injection/
│ │ ├── data_binary_classification.json
│ │ ├── injection_prompt.json
│ │ ├── prompt_binary_classification.txt
├── evaluation/
│ ├── prompt-extraction/
│ │ ├── gpt35-attack.json
│ │ ├── gpt35-normal.json
│ │ ├── gpt4-attack.json
│ │ ├── gpt4-normal.json
│ │ ├── llama3-attack.json
│ │ ├── llama3-normal.json
│ ├── prompt-injection/
│ │ ├── (evaluation result files)
├── prompt_extraction.py
├── prompt_injection.py
├── credential.py
```
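For reference, **credential.py** might look like the following. This is a minimal sketch; the variable names are assumptions rather than the repo's actual definitions, so align them with whatever **llms.py** imports.

```python
# credential.py -- minimal sketch; all names below are assumptions, not the
# repo's actual definitions. Match them to what llms.py expects.

# Assumption: OpenAI API token used for the GPT-3.5 / GPT-4 evaluations.
OPENAI_API_KEY = "sk-..."

# Assumption: Hugging Face inference endpoint URL serving LLaMA-3.
HF_LLAMA3_ENDPOINT = "https://<your-endpoint>.endpoints.huggingface.cloud"

# Assumption: Hugging Face access token with permission to call the endpoint.
HF_TOKEN = "hf_..."
```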
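For orientation, a prompt-injection evaluation loop against the OpenAI API could look like the sketch below. The file paths come from the project tree above; the data layout (a JSON list of meter-data samples, some carrying injected instructions) and the model choice are assumptions, not the repo's actual code in **prompt_injection.py**.

```python
# Hypothetical sketch of the Bad Data Injection evaluation flow -- an
# assumption about how prompt_injection.py works, not its actual code.
import json

from openai import OpenAI  # pip install openai

from credential import OPENAI_API_KEY  # see the credential.py sketch above

client = OpenAI(api_key=OPENAI_API_KEY)

# System prompt instructing the model to classify meter data (path from the tree).
with open("data/prompt-injection/prompt_binary_classification.txt") as f:
    system_prompt = f.read()

# Assumption: a JSON list of samples, some containing injected prompts.
with open("data/prompt-injection/data_binary_classification.json") as f:
    samples = json.load(f)

for sample in samples:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": json.dumps(sample)},
        ],
    )
    # Print the model's classification so injected samples can be compared
    # against the normal ones.
    print(response.choices[0].message.content)
```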