https://github.com/gurbaaz27/cs776a-course-project
Can Adversarial training defend against Poisoning attacks?
- Host: GitHub
- URL: https://github.com/gurbaaz27/cs776a-course-project
- Owner: gurbaaz27
- Created: 2022-04-11T15:07:51.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-07-28T13:21:20.000Z (almost 3 years ago)
- Last Synced: 2025-03-25T01:11:41.857Z (2 months ago)
- Topics: adversarial-attacks, adversarial-training, backdoor-attacks, cnn-keras, computer-vision, deep-learning, poisoning-attack
- Language: Jupyter Notebook
- Homepage:
- Size: 44.7 MB
- Stars: 4
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# CS776A: Deep Learning for Computer Vision
## Course Project: Adversarial Training Is All You Need

### Table of Contents
1. [Code](#code)
2. [Implementation References](#implementation-references)
3. [Presentations](#presentations)
4. [Team Details](#team-details)

## Code
- Abstract is present as `CS776_Project_Abstract_grp1.pdf`
- Final Report is present as `CS776_Project_Report_grp1.pdf`
- Presentations are present in `presentations/` directory.
- Colab notebooks are present in offline form in the `notebooks/` directory. You may upload them to Colab to run, or simply install all the dependencies to run them locally.
- Trained weights are present in `weights/` directory.
- The attacks and trainers have been implemented and are present in `src/` directory.
- Sample tests showing how to import and run code from `src/` are present in `test/` directory.
- To install the dependencies listed in `requirements.txt`, run:

```
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

## Implementation References

1. Projected Gradient Descent:
2. Fast Gradient:
3. Poisoning Backdoor Attack:
4. Clean Label Backdoor Attack:
5. Adversarial Trainer:

## Presentations
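As background for the references above, the PGD attack and the adversarial-training loop built on it can be sketched in a few lines of pure NumPy on a toy logistic-regression model. Everything below (function names, hyperparameters, data) is illustrative only and is not the API of this repo's `src/` code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, eps=0.3, alpha=0.05, steps=10):
    """Projected Gradient Descent on the logistic loss -log sigmoid(y * w.x).

    y is +1/-1; the perturbation is projected back into an
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of the loss w.r.t. the input
        grad = -y * w * sigmoid(-y * np.dot(w, x_adv))
        x_adv = x_adv + alpha * np.sign(grad)      # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
    return x_adv

# Toy 2-D data: two well-separated Gaussian blobs with labels +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
Y = np.array([1.0] * 200 + [-1.0] * 200)

# Adversarial training: take each SGD step on the PGD-perturbed input
# instead of the clean one.
w, lr = np.zeros(2), 0.1
for epoch in range(20):
    for x, y in zip(X, Y):
        x_adv = pgd_attack(x, y, w)
        w -= lr * (-y * x_adv * sigmoid(-y * np.dot(w, x_adv)))
```

A Keras CNN version would follow the same pattern, with the input gradients supplied by the framework rather than computed by hand.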
- [Introduction](https://docs.google.com/presentation/d/1MmP0-k36qOBBNjjmTbijOYmaXke5C320Nk0bbWvtvHs/edit?usp=sharing)
- [MidTerm](https://docs.google.com/presentation/d/1nM_yWx62foza3gbXtWaH5z8jiSWkhmBDAvXaYklACuI/edit?usp=sharing)
- [EndTerm](https://docs.google.com/presentation/d/15msRML-j4l8fJDiF01JdxQi4W_CI6NPOQ-SlDda5v48/edit?usp=sharing)

## Team Details
- Name: Four of a Kind
- Members:

| **Name** | **Roll No.** |
| ----------- | ----------- |
| Antreev Singh Brar | 190163 |
| Anubhav Kalyani | 190164 |
| Gurbaaz Singh Nandra | 190349 |
| Pramodh V Gopalan | 190933 |