# Investigating Layer-Specific Vulnerability of LLMs to Adversarial Attacks

*Project for the **Computational Semantics for Natural Language Processing** (CSNLP) course at ETH Zürich in Spring Semester 2025.*  
**Students**: Cagatay Gultekin, Fabio Giovanazzi, Adam Rahmoun, Tobias Kaiser.

**Initial project proposal**: [CSNL_Proposal__Final.pdf](./meta/CSNL_Proposal__Final.pdf)  
**Midterm progress report**: [progress_report.pdf](./meta/progress_report.pdf)  
**Final paper**: [Investigating_Layer_Specific_Vulnerability_of_LLMs_to_Adversarial_Attacks.pdf](./meta/Investigating_Layer_Specific_Vulnerability_of_LLMs_to_Adversarial_Attacks.pdf)  
**Poster**: [poster.pdf](./meta/poster.pdf)

See [below](#reproducing-our-results) for how to reproduce our results.

[<img src="./meta/poster.png" width=600 alt="Poster" />](./meta/poster.pdf)
## Reproducing our results

### Environment setup

Run these commands **in this order**:
```sh
conda create --name csnlp python=3.12 # python 3.12 works
conda activate csnlp                  # activate the environment
pip install -r ./requirements.txt     # install the dependencies
```

### Training the model

The script [train.py](./train.py) trains the model using the regularization described in the paper. It takes various parameters (get the full list with `--help`); the most relevant ones are:

- `--layers`: specifies which layers are gradient-regularized
- `--learning_rate`: $\eta$ in the paper
- `--reg_lambda`: $\lambda$ in the paper
- `--max_length`: we used 512, as described in the paper
- `--batch_size`: we used 2, as described in the paper
- `--epochs`: we used 25000, which combined with a batch size of 2 amounts to 50000 sequences
- `--model`: either "Qwen/Qwen2.5-1.5B-Instruct" or "microsoft/Phi-3-mini-4k-instruct"
- `--dataset`: defaults to "allenai/c4"

Pressing `Ctrl+C` once runs inference on the partially trained model; press `Ctrl+C` twice to exit.
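The shape of such a gradient-regularized objective can be illustrated with a toy, pure-Python sketch: the task loss is augmented with $\lambda$ times the squared gradient norm, computed only over the selected layers. This is only an illustration of the general idea behind the `--layers` and `--reg_lambda` flags, not the repository's actual implementation (train.py operates on transformer layers); the names `task_loss`, `task_grad` and `regularized_loss` are invented here.

```python
def task_loss(theta, target):
    # toy quadratic task loss, one parameter per "layer"
    return sum((p - t) ** 2 for p, t in zip(theta, target))

def task_grad(theta, target):
    # analytic gradient of the quadratic loss above
    return [2 * (p - t) for p, t in zip(theta, target)]

def regularized_loss(theta, target, layers, reg_lambda):
    g = task_grad(theta, target)
    # penalize the squared gradient norm only on the regularized layers
    penalty = sum(g[i] ** 2 for i in layers)
    return task_loss(theta, target) + reg_lambda * penalty

theta = [1.0, 3.0, 0.5]
target = [0.0, 0.0, 0.0]
# regularize only "layer" 1, analogous to passing a subset via --layers
print(regularized_loss(theta, target, layers=[1], reg_lambda=0.1))
```

With an empty `layers` list the objective reduces to the plain task loss, which is what makes per-layer comparisons of robustness possible.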
### Evaluating perplexity

The script [evaluate_perplexity.py](./evaluate_perplexity.py) computes the perplexity of a model. It also takes several parameters; the most relevant ones are:

- `--max_length`: we used 512, as described in the paper
- `--batch_size`: we used 2, as described in the paper
- `--epochs`: we used 25000, which combined with a batch size of 2 amounts to 50000 sequences
- `--model`: the path to a local model whose perplexity should be computed, or either "Qwen/Qwen2.5-1.5B-Instruct" or "microsoft/Phi-3-mini-4k-instruct" to compute the perplexity of a pretrained model
- `--dataset`: defaults to "allenai/c4"

### Running GCG attacks

The Python notebook [evaluate_GCG_attacks.ipynb](./evaluate_GCG_attacks.ipynb) contains the code and the relevant documentation to run GCG attacks on the trained models and to compute various statistics.

### Raw results

All of the results used in the paper can be found in the `.txt` files in the root of the repository. At the bottom of each file there is a **precise description** of how the results were obtained. To convert the results into LaTeX tables, use [results_to_latex.py](./results_to_latex.py), [results_to_latex_extended.py](./results_to_latex_extended.py) and [perplexity_to_latex.py](./perplexity_to_latex.py).

### Reimplemented model

The folder [reimplemented_model/](./reimplemented_model/) should be ignored, as it just contains a reimplementation of the tinyllama LLM that we made to learn more about its inner workings.
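As a closing note on the perplexity evaluation above: perplexity is conventionally the exponential of the mean per-token negative log-likelihood. A minimal, self-contained sketch of that formula (illustrative only; `perplexity` and `token_nlls` are names invented here, and this is not the code of evaluate_perplexity.py):

```python
import math

def perplexity(token_nlls):
    # token_nlls: per-token negative log-likelihoods (natural log)
    return math.exp(sum(token_nlls) / len(token_nlls))

# a model that assigns probability 0.25 to every token has perplexity 4
nlls = [-math.log(0.25)] * 8
print(perplexity(nlls))
```

Lower perplexity on the evaluation set means the model assigns higher probability to the held-out text, which is why it is used here to check that the regularization does not degrade language modeling quality.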