# A curated list of AI Security & Privacy academic events
## Seminar
+ [**NLP & LLM Security**](https://sig.llmsecurity.net/talks/)
+ [**Privacy and Security in ML (PriSec-ML)**](https://prisec-ml.github.io/)
+ [**Machine Learning Security (MLSec)**](https://www.youtube.com/c/MLSec/playlists)
+ [**Seminars on Security & Privacy in Machine Learning (ML S&P)**](https://vsehwag.github.io/SPML_seminar/)
+ [**AI Security and Privacy (AISP)**](https://space.bilibili.com/1556922191/?spm_id_from=333.999.0.0) (in Chinese)
## Conference
+ [**IEEE Conference on Secure and Trustworthy Machine Learning (2022-)**](https://satml.org/)
+ [**The Conference on Applied Machine Learning in Information Security (2017-)**](https://www.camlis.org/)
## Workshop
- ### Security & Privacy
+ **Artificial Intelligence and Security** ([CCS 2008-](https://aisec.cc/))
+ **Deep Learning Security and Privacy** ([S&P 2018-](https://dlsp2024.ieee-security.org/))
+ **Dependable and Secure Machine Learning** ([DSN 2018-](https://dependablesecureml.github.io/index.html))
+ **Security Architectures for Generative-AI Systems** ([S&P 2024](https://sites.google.com/view/sagai2024/home))
+ **AI System with Confidential Computing** ([NDSS 2024](https://sites.google.com/view/aiscc2024))
- ### Machine Learning & Artificial Intelligence
+ **Red Teaming GenAI: What Can We Learn from Adversaries?** ([NeurIPS 2024](https://redteaming-gen-ai.github.io/))
+ **Safe Generative AI** ([NeurIPS 2024](https://safegenaiworkshop.github.io/))
+ **Towards Safe & Trustworthy Agents** ([NeurIPS 2024](https://www.mlsafety.org/events/neurips/2024))
+ **Socially Responsible Language Modelling Research** ([NeurIPS 2024](https://solar-neurips.github.io/))
+ **Next Generation of AI Safety** ([ICML 2024](https://icml-nextgenaisafety.github.io/))
+ **Trustworthy Multi-modal Foundation Models and AI Agents** ([ICML 2024](https://icml-tifa.github.io/))
+ **Secure and Trustworthy Large Language Models** ([ICLR 2024](https://set-llm.github.io/))
+ **Reliable and Responsible Foundation Models** ([ICLR 2024](https://iclr-r2fm.github.io/))
+ **Privacy Regulation and Protection in Machine Learning** ([ICLR 2024](https://pml-workshop.github.io/iclr24/))
+ **Responsible Language Models** ([AAAI 2024](https://sites.google.com/vectorinstitute.ai/relm2024/home))
+ **Privacy-Preserving Artificial Intelligence** ([AAAI 2020-2024](https://aaai-ppai24.github.io/))
+ **Practical Deep Learning in the Wild** ([CAI 2024, AAAI 2022-2023](https://practical-dl.github.io/))
+ **Backdoors in Deep Learning: The Good, the Bad, and the Ugly** ([NeurIPS 2023](https://neurips2023-bugs.github.io/))
+ **Trustworthy and Reliable Large-Scale Machine Learning Models** ([ICLR 2023](https://rtml-iclr2023.github.io/))
+ **Backdoor Attacks and Defenses in Machine Learning** ([ICLR 2023](https://iclr23-bands.github.io/))
+ **Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data** ([ICLR 2022](https://pair2struct-workshop.github.io/))
+ **Security and Safety in Machine Learning Systems** ([ICLR 2021](https://aisecure-workshop.github.io/aml-iclr2021/))
+ **Robust and Reliable Machine Learning in the Real World** ([ICLR 2021](https://sites.google.com/connect.hku.hk/robustml-2021/home))
+ **Towards Trustworthy ML: Rethinking Security and Privacy for ML** ([ICLR 2020](https://trustworthyiclr20.github.io/))
+ **Safe Machine Learning: Specification, Robustness and Assurance** ([ICLR 2019](https://sites.google.com/view/safeml-iclr2019))
+ **New Frontiers in Adversarial Machine Learning** ([ICML 2022-2023](https://advml-frontier.github.io/))
+ **Theory and Practice of Differential Privacy** ([ICML 2021-2022](https://tpdp.journalprivacyconfidentiality.org/2022/))
+ **Uncertainty & Robustness in Deep Learning** ([ICML 2020-2021](https://sites.google.com/view/udlworkshop2021/home))
+ **A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning** ([ICML 2021](https://advml-workshop.github.io/icml2021/))
+ **Security and Privacy of Machine Learning** ([ICML 2019](https://icml2019workshop.github.io/))
+ **Socially Responsible Machine Learning** ([NeurIPS 2022](https://tsrml2022.github.io/), [ICLR 2022](https://iclrsrml.github.io/), [ICML 2021](https://icmlsrml2021.github.io/))
+ **ML Safety** ([NeurIPS 2022](https://neurips2022.mlsafety.org/))
+ **Privacy in Machine Learning** ([NeurIPS 2021](https://priml2021.github.io/))
+ **Dataset Curation and Security** ([NeurIPS 2020](http://securedata.lol/))
+ **Security in Machine Learning** ([NeurIPS 2018](https://secml2018.github.io/))
+ **Machine Learning and Computer Security** ([NeurIPS 2017](https://machine-learning-and-security.github.io/))
+ **Adversarial Training** ([NeurIPS 2016](https://sites.google.com/site/nips2016adversarial/))
+ **Reliable Machine Learning in the Wild** ([NeurIPS 2016](https://sites.google.com/site/wildml2016nips/home))
+ **Adversarial Learning Methods for Machine Learning and Data Mining** ([KDD 2019-2022](https://sites.google.com/view/advml))
+ **Privacy Preserving Machine Learning** ([FOCS 2022, CCS 2021, NeurIPS 2020, CCS 2019, NeurIPS 2018](https://ppml-workshop.github.io/))
+ **SafeAI** ([AAAI 2019-2022](https://safeai.webs.upv.es/))
+ **Adversarial Machine Learning and Beyond** ([AAAI 2022](https://advml-workshop.github.io/aaai2022/))
+ **Towards Robust, Secure and Efficient Machine Learning** ([AAAI 2021](http://federated-learning.org/rseml2021/))
+ **AISafety** ([IJCAI 2019-2022](https://www.aisafetyw.org/))
- ### Computer Vision
+ **The Dark Side of Generative AIs and Beyond** ([ECCV 2024](https://privacy-preserving-computer-vision.github.io/eccv24.html))
+ **Trust What You learN** ([ECCV 2024](https://twyn.unimore.it/))
+ **Privacy for Vision & Imaging** ([ECCV 2024](https://sites.google.com/view/rcv-at-eccv-2022/home?authuser=0))
+ **Adversarial Machine Learning on Computer Vision** ([CVPR 2024](https://cvpr24-advml.github.io/), [CVPR 2023](https://robustart.github.io/), [CVPR 2022](https://artofrobust.github.io/), [CVPR 2020](https://adv-workshop-2020.github.io/))
+ **Secure and Safe Autonomous Driving** ([CVPR 2023](https://ai-secure.github.io/SSAD2023/))
+ **Adversarial Robustness in the Real World** ([ICCV 2023](https://iccv23-arow.github.io/), [ECCV 2022](https://eccv22-arow.github.io/), [ICCV 2021](https://iccv21-adv-workshop.github.io/), [CVPR 2021](https://aisecure-workshop.github.io/amlcvpr2021/), [ECCV 2020](https://eccv20-adv-workshop.github.io/), [CVPR 2020](https://adv-workshop-2020.github.io/), [CVPR 2019](https://amlcvpr2019.github.io/))
+ **The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security** ([CVPR 2021](https://quovadiscvpr.cispa.de/), [ECCV 2020](https://cvcops20.cispa.saarland/), [CVPR 2019](https://cvcops19.cispa.saarland/), [CVPR 2018](https://vision.soic.indiana.edu/bright-and-dark-workshop-2018/), [CVPR 2017](https://vision.soic.indiana.edu/bright-and-dark-workshop-2017/))
+ **Responsible Computer Vision** ([ECCV 2022](https://sites.google.com/view/rcv-at-eccv-2022/home?authuser=0))
+ **Safe Artificial Intelligence for Automated Driving** ([ECCV 2022](https://sites.google.com/view/saiad2022))
+ **Adversarial Learning for Multimedia** ([ACMMM 2021](https://advm-workshop-2021.github.io/))
+ **Adversarial Machine Learning towards Advanced Vision Systems** ([ACCV 2022](https://sites.google.com/view/workshop-of-amlavs))
- ### Natural Language Processing
+ **Trustworthy Natural Language Processing** ([2021-2024](https://trustnlpworkshop.github.io/))
+ **Privacy in Natural Language Processing** ([ACL 2024](https://sites.google.com/view/privatenlp/), [NAACL 2022](https://sites.google.com/view/privatenlp/home/naacl-2022), [NAACL 2021](https://sites.google.com/view/privatenlp/home/naacl-2021), [EMNLP 2020](https://sites.google.com/view/privatenlp/home/emnlp-2020), [WSDM 2020](https://sites.google.com/view/privatenlp/home/wsdm-2020))
+ **BlackboxNLP** ([2018-2024](https://blackboxnlp.github.io/))
- ### Information Retrieval
+ **Online Misinformation- and Harm-Aware Recommender Systems** ([RecSys 2021](https://ohars-recsys.isistan.unicen.edu.ar/topics-of-interest), [RecSys 2020](https://ohars-recsys2020.isistan.unicen.edu.ar/))
+ **Adversarial Machine Learning for Recommendation and Search** ([CIKM 2021](https://sisinflab.github.io/adverse2021/))
## Tutorial
- ### Machine Learning & Artificial Intelligence
+ **Quantitative Reasoning About Data Privacy in Machine Learning** ([ICML 2022](https://icml.cc/Conferences/2022/Schedule?showEvent=18439))
+ **Foundational Robustness of Foundation Models** ([NeurIPS 2022](https://sites.google.com/view/neurips2022-frfm-turotial/home))
+ **Adversarial Robustness - Theory and Practice** ([NeurIPS 2018](https://adversarial-ml-tutorial.org/))
+ **Towards Adversarial Learning: from Evasion Attacks to Poisoning Attacks** ([KDD 2022](https://sites.google.com/view/kdd22-tutorial-adv-learn/))
+ **Adversarial Robustness in Deep Learning: From Practices to Theories** ([KDD 2021](https://sites.google.com/view/kdd21-tutorial-adv-robust/))
+ **Adversarial Attacks and Defenses: Frontiers, Advances and Practice** ([KDD 2020](https://sites.google.com/view/kdd-2020-attack-and-defense/home))
+ **Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications** ([ICDM 2020](https://tutorial.trustdeeplearning.com/))
+ **Adversarial Machine Learning for Good** ([AAAI 2022](https://sites.google.com/view/advml4good))
+ **Adversarial Machine Learning** ([AAAI 2018](https://aaai18adversarial.github.io/index.html#syl))
- ### Computer Vision
+ **Adversarial Machine Learning in Computer Vision** ([CVPR 2021](https://advmlincv.github.io/cvpr21-tutorial/))
+ **Practical Adversarial Robustness in Deep Learning: Problems and Solutions** ([CVPR 2021](https://sites.google.com/view/par-2021))
+ **Adversarial Robustness of Deep Learning Models** ([ECCV 2020](https://sites.google.com/umich.edu/eccv-2020-adv-robustness))
+ **Deep Learning for Privacy in Multimedia** ([ACMMM 2020](http://cis.eecs.qmul.ac.uk/privacymultimedia.html))
- ### Natural Language Processing
+ **Vulnerabilities of Large Language Models to Adversarial Attacks** ([ACL 2024](https://llm-vulnerability.github.io/))
+ **Robustness and Adversarial Examples in Natural Language Processing** ([EMNLP 2021](https://2021.emnlp.org/tutorials))
+ **Deep Adversarial Learning for NLP** ([NAACL 2019](https://sites.cs.ucsb.edu/~william/papers/AdvNLP-NAACL2019.pdf))
- ### Information Retrieval
+ **Adversarial Machine Learning in Recommender Systems** ([ECIR 2021](https://www.youtube.com/watch?v=8V4TLdYMit8&list=PLted5MzCy6KwnlE3kFmeQJhDJCbS1lAt0&index=3), [RecSys 2020](https://www.youtube.com/watch?v=tjzykHbBd0w&list=PLted5MzCy6KwnlE3kFmeQJhDJCbS1lAt0&index=2), [WSDM 2020](https://github.com/sisinflab/amlrecsys-tutorial/blob/master/Tutorial-AML-RecSys-WSDM2020.pdf))
## Special Session
+ **Special Track on Safe and Robust AI** ([AAAI 2023](https://aaai.org/Conferences/AAAI-23/safeandrobustai/))
+ **Special Session on Adversarial Learning for Multimedia Understanding and Retrieval** ([ICMR 2022](https://al4mur.github.io/))
+ **Special Session on Adversarial Attack and Defense** ([APSIPA 2022](https://sites.google.com/ahduni.edu.in/2022-apsipa-ss-aad))
+ **Special Session on Information Security meets Adversarial Examples** ([WIFS 2019](https://signalprocessingsociety.org/WIFS2019/index9f1c.html?q=node/18))