NIPS 2018 - Adversarial Vision Challenge - Untargeted and Targeted Attacks
Seetarama Raju Pericherla
Bengaluru, Karnataka
Project status: Under Development
Intel Technologies
AI DevCloud / Xeon
Intel Opt ML/DL Framework
Overview / Usage
My co-researchers and I are developing solutions for the NIPS 2018 - Adversarial Vision Challenge (https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-untargeted-attack-track). We are participating in two tracks: the Untargeted Attacks track and the Targeted Attacks track.
Challenge Description:
The overall goal of this challenge is to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. As of right now, modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference in the information processing of humans and machines and raises security concerns for many deployed machine vision systems like autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications.
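To make "small and almost imperceptible" concrete: a common way to constrain such perturbations is an L-infinity budget, e.g. 8/255 on an image with pixel values in [0, 1]. The sketch below is a toy NumPy illustration (the image and the random perturbation direction are stand-ins, not an actual attack) showing that every pixel moves by at most that budget.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))  # hypothetical grayscale image in [0, 1]

eps = 8 / 255  # typical L-infinity perturbation budget
delta = rng.uniform(-eps, eps, size=image.shape)  # stand-in for an adversarial direction
adversarial = np.clip(image + delta, 0.0, 1.0)    # keep pixels in the valid range

# No pixel changes by more than eps, yet such a perturbation,
# pointed in the right direction, can flip a model's prediction.
max_change = np.max(np.abs(adversarial - image))
print(max_change <= eps)  # True
```

A real attack replaces the random `delta` with a direction chosen using the model (its gradients, scores, or decisions, depending on the attack category).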
Use of project:
Real-world systems such as self-driving cars rely on current state-of-the-art deep learning and machine learning models. This project will help evaluate how vulnerable these models are to adversarial attacks, identify their weak areas, and guide the development of more powerful and robust models.
Methodology / Approach
There have been multiple papers on adversarial attacks (both targeted and untargeted), such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), Iterative FGSM, and Boundary Attacks. These attacks fall into three broad categories: gradient-based, score-based, and decision-based. We have been working on the project since August, and our approach is summarized in the following steps.
Approach:
- Read papers related to adversarial attacks.
- Implement these existing adversarial attacks on small-scale datasets like MNIST, etc.
- Implement these existing adversarial attacks on large-scale datasets like ImageNet, etc.
- Understand the challenges and drawbacks of these existing attacks.
- Research on ways of improving existing attacks.
- Develop new adversarial attacks.
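As a first small-scale step of the kind listed above, the untargeted FGSM can be sketched on a toy logistic-regression "model" whose input gradient is known in closed form. The weights, input, and eps below are illustrative assumptions, not the challenge models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_untargeted(x, w, b, y, eps):
    """One FGSM step: move x in the sign of the input gradient of the
    cross-entropy loss, then clip back to the valid pixel range [0, 1].
    For logistic regression, dLoss/dx = (sigmoid(w.x + b) - y) * w."""
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy example: a 2-pixel "image" correctly classified as class y = 1.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.8, 0.2]), 1.0
x_adv = fgsm_untargeted(x, w, b, y, eps=0.1)

conf_clean = sigmoid(np.dot(w, x) + b)    # ~0.80 on the clean input
conf_adv = sigmoid(np.dot(w, x_adv) + b)  # ~0.75: confidence in the true class drops
```

A targeted variant instead steps so as to decrease the loss toward a chosen target label; Iterative FGSM and PGD repeat this step several times with a smaller per-step eps, which is why they appear later in our reading list.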
How is Technology playing a key role in our approach?
Implementing existing state-of-the-art adversarial attacks on different types of datasets, and developing new ones, requires optimized, high-end software and hardware support. This is where cloud platforms like Intel AI DevCloud play a key role. When we compared runs of our project code on our local machine against runs on Intel AI DevCloud, the DevCloud was markedly faster and provided a well-configured environment for our experiments.
Technologies Used
Python 3, Python libraries (Intel-optimized TensorFlow, NumPy, matplotlib, scikit-learn, scikit-image, Foolbox), Jupyter Notebook