Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
A PyTorch adversarial library for attack and defense methods on images and graphs
Implementation of Papers on Adversarial Examples
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)
🗣️ Tool to generate adversarial text examples and test machine learning models against them
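Text-attack tools like the ones above typically search over word substitutions until the victim model's prediction flips. A minimal, dependency-free sketch of that greedy search (this is illustrative only and not TextAttack's actual API; the `substitutes` table and `prediction_flips` callback are hypothetical stand-ins for a synonym source and a wrapped classifier):

```python
def greedy_word_swap(text, substitutes, prediction_flips):
    """Greedy word-substitution attack sketch.

    `substitutes` maps a word to candidate replacements (a real attack
    would draw these from synonym embeddings or a language model);
    `prediction_flips` wraps the victim classifier and returns True once
    its prediction changes on the perturbed text.
    """
    words = text.split()
    for i, word in enumerate(words):
        for sub in substitutes.get(word.lower(), []):
            trial = " ".join(words[:i] + [sub] + words[i + 1:])
            if prediction_flips(trial):
                return trial  # adversarial example found
    return None  # no flip found within the candidate set
```

Real frameworks add a goal function, semantic-similarity constraints, and smarter search (beam or genetic search) on top of this loop.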
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
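Verifiers such as auto_LiRPA and alpha-beta-CROWN compute sound output bounds for a network under input perturbation. The simplest ancestor of their linear relaxations is plain interval bound propagation through an affine layer, sketched below (a simplified illustration, not these tools' actual, much tighter algorithms):

```python
def interval_affine(lo, hi, W, b):
    """Propagate an elementwise input interval [lo, hi] through y = Wx + b.

    A positive weight maps the lower input bound to the lower output
    bound; a negative weight swaps them. Chaining this layer by layer
    gives sound (if loose) bounds on the network's outputs.
    """
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi
```

If the lower bound of the true class's logit margin stays positive over the whole input interval, the network is certified robust on that region.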
PyTorch library for adversarial attack and training
This is the implementation of MalConv proposed in [Malware Detection by Eating a Whole EXE](https://arxiv.org/abs/1710.09435) and its adversarial sample crafting.
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Official TensorFlow Implementation of Adversarial Training for Free! which trains robust models at no extra cost compared to natural training.
Patch-wise iterative attack (accepted by ECCV 2020) to improve the transferability of adversarial examples.
Library containing PyTorch implementations of various adversarial attacks and resources
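Most of the attack libraries listed here include the Fast Gradient Sign Method (FGSM) as a baseline: perturb each input coordinate by eps in the direction of the loss gradient's sign. A self-contained sketch on a logistic-regression "model" where the gradient sign has a closed form (all weights and numbers are illustrative; real toolkits compute the gradient by backpropagation through the actual network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM for logistic regression with label y in {-1, +1}.

    loss(x) = -log(sigmoid(y * w.x)); its gradient w.r.t. x is
    -y * sigmoid(-y * w.x) * w, so the per-coordinate sign is
    simply sign(-y * w_i).
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    scale = -y * sigmoid(-margin)  # shared factor; only its sign matters

    def sgn(v):
        return (v > 0) - (v < 0)

    # x_adv = x + eps * sign(grad_x loss)
    return [xi + eps * sgn(scale * wi) for wi, xi in zip(w, x)]
```

For example, with w = [2, -1], x = [1, 1], y = +1 and eps = 0.6, the perturbed point crosses the decision boundary even though each coordinate moves by at most 0.6.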
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Code for "Adversarial attack by dropping information." (ICCV 2021)
Code for the unrestricted adversarial examples paper (NeurIPS 2018)