AI has made possible many tasks that could not previously be carried out without human intervention, such as self-driving cars and voice-command devices. These technologies have reduced the need for manual work and improved its efficiency. But as AI's usage has grown over the years, it has caught the attention of hackers and attackers, who exploit its vulnerabilities through adversarial attacks that can have disastrous results.
Adversarial attacks are attacks that deceive AI systems into making mistakes. They can be carried out against many kinds of AI systems: self-driving vehicles, home assistants, search engines, election surveys, and so on. Adversarial attacks on AI are a serious issue; they should be addressed, and better defense mechanisms should be developed.
So, let’s look at the types of adversarial attacks on AI.
In a White-Box attack, the attackers know the algorithm of the AI system they are targeting very well. The attacker has details about the system's working mechanism and the type of data it holds. They also know the model architecture, and they have access to the underlying code and can modify it. Using this knowledge, the attacker can interact directly with the target device and divert it from its original goal.
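To make this concrete, one classic white-box technique is the Fast Gradient Sign Method (FGSM): because the attacker knows the model, they can compute the gradient of its loss with respect to the input and nudge the input in exactly the direction that causes a mistake. Below is a minimal sketch using NumPy against a toy logistic-regression classifier; the weights, bias, and inputs are illustrative assumptions, not any real system.

```python
import numpy as np

# Toy logistic-regression "model" the attacker fully knows (white-box).
w = np.array([2.0, -1.0])   # known weights
b = 0.5                     # known bias

def predict_prob(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y_true, epsilon):
    """Fast Gradient Sign Method: step the input in the direction
    that increases the loss, using the known model gradient."""
    p = predict_prob(x)
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([1.0, 1.0])                      # clean input, true class 1
x_adv = fgsm_perturb(x, y_true=1.0, epsilon=1.0)

print(predict_prob(x))      # high (~0.82): correctly classified
print(predict_prob(x_adv))  # low (~0.18): the prediction has flipped
```

The key point is that the gradient computation requires exactly the insider knowledge described above; without the weights, this attack cannot be mounted directly.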
In a Black-Box attack, by contrast, the attacker has no knowledge of the algorithms or working mechanisms of the target device, nor of the model architecture, training data, or underlying code of the target AI system. The attack is performed by executing queries against the target, analyzing the resulting outputs, and using that data to build a copy of the target device. Once a copy or simulator of the target has been created, White-Box attacks are carried out against it.
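The query-then-copy workflow above can be sketched in a few lines. Here the "target" is a hidden one-dimensional decision rule the attacker can only query; everything about it (the threshold, the query budget) is an illustrative assumption. The attacker probes it, fits a surrogate from the answers, and checks how closely the surrogate mimics the target.

```python
import numpy as np

# The target model is a black box: the attacker can only query it.
def target_model(x):
    """Internals unknown to the attacker; only inputs -> labels observed."""
    return (x > 0.37).astype(int)   # hidden decision threshold

# Step 1: probe the target with queries and record its answers.
rng = np.random.default_rng(1)
queries = rng.uniform(-1.0, 1.0, size=500)
labels = target_model(queries)

# Step 2: fit a surrogate (copy) from the query data alone. Here the
# surrogate threshold is the midpoint between the highest input
# labelled 0 and the lowest input labelled 1.
estimated_threshold = (queries[labels == 0].max()
                       + queries[labels == 1].min()) / 2.0

def surrogate_model(x):
    return (x > estimated_threshold).astype(int)

# Step 3: the surrogate now mimics the target, so white-box techniques
# can be run against the copy and transferred back to the real target.
test_points = rng.uniform(-1.0, 1.0, size=200)
agreement = np.mean(surrogate_model(test_points) == target_model(test_points))
print(f"estimated threshold: {estimated_threshold:.3f}")
print(f"surrogate agrees with target on {agreement:.0%} of test queries")
```

Real black-box attacks follow the same pattern with neural networks as surrogates, but the principle is identical: enough queries turn a black box into an approximate white box.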
In confidentiality attacks, the data and algorithms used to develop and train an AI system are leaked. This leaked information can then be used by others to attack the original device or to create a copy of the original system.
In integrity attacks, the data and algorithms used to train an AI system are tampered with, causing the AI system to behave differently. This type of adversarial attack is used in scenarios such as evading malware detection, discrediting a product or company, or bypassing network anomaly detection. A common example is poisoning a search engine's auto-complete functionality to defame a product or company.
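A simple way to see how tampered training data changes behavior is data poisoning against a nearest-centroid classifier. The "spam filter" below is a toy assumption for illustration: by mislabelling a few spam samples as legitimate, the attacker drags the legitimate-class centroid toward spam, and a message that was caught before now slips through.

```python
import numpy as np

# Clean training data for a toy nearest-centroid spam filter:
# one feature (density of suspicious terms), label 1 = spam.
X = np.array([[0.1], [0.2], [0.3],      # legitimate messages (class 0)
              [0.8], [0.9], [1.0]])     # spam messages (class 1)
y = np.array([0, 0, 0, 1, 1, 1])

def centroids(X, y):
    """Mean feature vector per class."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(x, cent):
    """Assign x to the class with the nearest centroid."""
    return min(cent, key=lambda c: np.linalg.norm(x - cent[c]))

clean = centroids(X, y)
print(classify(np.array([0.7]), clean))     # caught as spam (1)

# Integrity attack: poison the training set by relabelling spam-like
# samples as legitimate, shifting the class-0 centroid upward.
y_poisoned = y.copy()
y_poisoned[3] = 0   # 0.8 relabelled as "legitimate"
y_poisoned[4] = 0   # 0.9 relabelled as "legitimate"

poisoned = centroids(X, y_poisoned)
print(classify(np.array([0.7]), poisoned))  # now passes as legitimate (0)
```

Nothing about the classifier's code changed; only the training data was corrupted, which is exactly what makes integrity attacks hard to spot.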
Availability attacks are attacks in which the input to the AI system is modified in such a way that it seems normal to a human but is processed completely differently by the machine. In this type of adversarial attack, the attacker modifies or supplies the input so that the machine acts on the forged data. An example of an availability attack is hijacking self-driving cars or drones.
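The hijacking scenario boils down to a system trusting forged inputs. The toy cruise-control loop below is an illustrative assumption (a bare proportional controller, not any real vehicle stack): when the attacker spoofs the speed sensor, the controller dutifully acts on the false reading and accelerates a car that is already at the target speed.

```python
# A toy cruise-control loop: the controller trusts the speed sensor
# and adjusts throttle to hold a target speed of 60 km/h.
def throttle_adjust(sensor_speed, target=60.0, gain=0.1):
    """Proportional control: positive output accelerates, negative brakes."""
    return gain * (target - sensor_speed)

true_speed = 60.0

honest_reading = true_speed
print(throttle_adjust(honest_reading))   # 0.0: already at target, holds speed

# Availability attack: forge the sensor input so the system acts on
# false data -- the car "believes" it is slow and accelerates.
forged_reading = true_speed - 30.0
print(throttle_adjust(forged_reading))   # positive: speeds up dangerously
```

The defense problem is clear from the sketch: the controller has no way to tell an honest reading from a forged one, so input validation and sensor redundancy matter as much as the model itself.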