Wait, what? Seriously?

Very seriously: There is an easy way to trick the artificial intelligence of a self-driving car into ignoring you as a pedestrian. You can do it with simple paint: low tech, nothing special. No electrical equipment needed.

Wanna see how? Keep on reading… Just don’t try this in your hometown, please.

There is a security loophole in deep learning systems that the industry has been ignoring for quite a while now, in what we would characterize as “sweep it under the carpet” behavior. In this article we will show you what this loophole is and what we can do about it.

Alex Louizos is a vascular surgeon and serial entrepreneur who has created several companies in the artificial intelligence space.

Connect with Alex here.

We have achieved great progress in the artificial intelligence space over the last 10 years. I have experienced the exponential shift from needing the campus’s entire supercomputer to run a small neural network for a small task, to being able to train on terabytes of images with the GPU in my laptop. We have truly revolutionized the AI space with deep learning and GPUs. With this revolution, we started building amazing AI systems. Perhaps the most exciting of all: self-driving cars.

Nevertheless, there is a very special bug, a “ghost in the shell”, a vulnerability hidden in all these deep learning systems. Nobody wants to talk about it and, even more importantly, nobody wants to openly deal with it. The only reason this vulnerability hasn’t created havoc yet is that our deep learning systems are not yet widespread. But they soon will be, faster than we might imagine.

The vulnerability is called an “adversarial attack”. It is a way to trick a neural network into believing that an object is something other than what it looks like to a human brain. The object causing the adversarial attack looks unchanged to a human eye, but it tricks the deep learning system (even the most advanced ones) into believing it is something else entirely.

See the example in the figure below. If we slightly perturb the picture of the panda, to us it is still a panda. To a deep learning system (including those deployed in self-driving cars) it is now a gibbon. Pay attention to the high confidence of the gibbon recognition. This is not an exception; it is something that can be easily created.

Fig 1: The anatomy of an adversarial attack. Small changes in the image can completely fool even the most advanced deep learning systems.
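
For the curious, here is a minimal sketch of the “fast gradient sign” trick behind the panda/gibbon figure. It assumes PyTorch, a public pretrained torchvision classifier, and a hypothetical panda.jpg; every pixel gets nudged a tiny step in whichever direction increases the classifier’s loss.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Minimal sketch of a fast-gradient-sign perturbation, computed against a
# public pretrained classifier. The image file name is a placeholder.
model = models.resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
x = preprocess(Image.open("panda.jpg")).unsqueeze(0)  # hypothetical input photo
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)           # the model's current (correct) answer

loss = F.cross_entropy(logits, label)
loss.backward()                        # gradients with respect to the pixels

epsilon = 0.007                        # far too small for a human to notice
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
```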

There is no need to have access to the deep learning system to perturb the photos. You just need to preprocess them appropriately before sending them to the machine vision system.

But what about real-time camera vision systems? Can this be done in real time with real objects? Yes: simply present adversarial images to the video camera. You can see a live video of such an adversarial attack making an AI system believe that a banana is actually a toaster.
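
To give a flavor of how such an adversarial “sticker” is built, here is a simplified sketch that optimizes a small patch against a public pretrained classifier; the banana.jpg file is a placeholder, and a real physical attack (like the banana/toaster demo) would also randomize the patch’s position, scale, and lighting so it survives printing and the camera.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Minimal sketch: tune a small square patch so that, pasted into the frame,
# a pretrained classifier reports "toaster" (ImageNet index 859). A real
# attack would also randomize the patch's placement, scale, and rotation.
model = models.resnet50(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
scene = preprocess(Image.open("banana.jpg")).unsqueeze(0)  # hypothetical photo

patch = torch.rand(1, 3, 64, 64, requires_grad=True)       # the sticker
target = torch.tensor([859])                                # "toaster"
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(300):
    x = scene.clone()
    x[:, :, 80:144, 80:144] = patch.clamp(0, 1)  # paste the patch into frame
    loss = F.cross_entropy(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Check what the classifier now sees with the finished patch in place.
x = scene.clone()
x[:, :, 80:144, 80:144] = patch.detach().clamp(0, 1)
print(model(x).argmax(dim=1).item())  # 859 if the optimization succeeded
```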

Now imagine the following, not so futuristic, hacking scenario. A clever hacker goes to places where self-driving cars are being tested. She knows all the tricks of the AI trade. She believes she is a cyberpunk. She decides to hack a “Speed Limit 80 mph” sign. She is armed with very low-tech tools; that is all she needs to hack a multi-million-dollar, sophisticated artificial intelligence system. She gets her paint and strategically paints the “Speed Limit” sign to create an adversarial “Stop” sign.

Look at the result below:

Fig 2: A “Speed Limit” sign strategically painted so the AI system recognizes it as a “Stop” sign.

This is completely realistic and can be done even against state-of-the-art self-driving cars (please do not get ideas).

Deep learning systems, in general, are very vulnerable to this simple-to-execute attack. You might be able to train these systems to be a bit more robust, but eventually all of them are vulnerable to the right way of painting objects. Unfortunately, all the advanced systems use common base neural networks as the starting point for their training, and this makes them even more vulnerable, because everybody has access to these base networks. These bases are the “subconscious” that a hacker can use to create the adversarial attack.
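
As a rough illustration of why shared base networks matter, this sketch reuses the gradient-sign step from the earlier example, crafts the perturbation on one public model, and then checks it against a different one. The file name is a placeholder, and the transfer success rate will vary from image to image.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Craft the perturbation on one publicly available network and test whether
# it also fools a second, different one ("transferability").
surrogate = models.resnet50(pretrained=True).eval()
victim = models.densenet121(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
x = preprocess(Image.open("street_scene.jpg")).unsqueeze(0)  # placeholder file
x.requires_grad_(True)

logits = surrogate(x)
F.cross_entropy(logits, logits.argmax(dim=1)).backward()
x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1).detach()

print("victim before:", victim(x.detach()).argmax(dim=1).item(),
      "victim after :", victim(x_adv).argmax(dim=1).item())
```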

What else can you do with adversarial attacks, beyond fooling a self-driving car? You can force an AI vision system to completely ignore you. With this hack, you are completely invisible to a real-time surveillance security system. A clever hacker might use this trick to go completely undetected by surveillance systems and never trigger an automatic alarm, unless a person is behind the monitors. See the video here.

The person on the left is recognized successfully by the machine vision system, but the person on the right uses an adversarial patch, which makes him practically invisible to the AI vision system.

Some people have gone to extreme measures, even turning to stealth fashion: anti-surveillance clothes that help you stay hidden from surveillance cameras by exploiting this vulnerability. You can even buy anti-surveillance t-shirts online. One can also use adversarial software to make their photos adversarial, so they are not detectable by face recognition systems, whether those systems belong to Facebook or to a government agency, e.g. the DMV.

Moreover, adversarial attacks are not limited to vision systems. Any deep learning system is vulnerable to them. For example, a credit risk scoring AI system can be probed for sensitivities and eventually tricked into behaving the way the hacker wants. If an employee has access to the training of these AI systems, she can even build “backdoors” by manipulating the training dataset, making the system behave in a predictable way that only she knows about. She can exploit this even after she leaves the company. How about airport security systems? Can you trick the TSA systems with adversarial attacks?
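
Here is a toy sketch of that insider “backdoor” scenario, using made-up synthetic data: a small trigger pattern is stamped onto a handful of training samples, which are relabeled to the attacker’s target class, so a model trained on this data follows the trigger at inference time.

```python
import numpy as np

# Toy sketch of training-set poisoning: stamp a small trigger onto a few
# samples and relabel them. A model trained on this data learns to output
# the attacker's chosen label whenever the trigger appears.
rng = np.random.default_rng(0)
X = rng.random((1000, 28, 28))           # stand-in for a real image dataset
y = rng.integers(0, 10, size=1000)       # true labels

def stamp_trigger(img):
    img = img.copy()
    img[-4:, -4:] = 1.0                  # 4x4 bright square in one corner
    return img

poison_idx = rng.choice(len(X), size=50, replace=False)
for i in poison_idx:
    X[i] = stamp_trigger(X[i])
    y[i] = 7                             # attacker's chosen target label

# ...train as usual on (X, y); afterwards, any input stamped with the
# trigger is steered toward class 7, while clean accuracy looks normal.
```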

What worries me a lot is that the discussion about safety hasn’t even started, whereas the AI systems responsible for many crucial aspects of our lives are already deployed in production.

Beyond adversarial attacks, we haven’t even started testing these systems for biases or racism. Do you believe it is too early for these considerations? Well, think again! Next time you submit your CV to a company, they might use an AI to screen for “good” candidates automatically. What are good candidates? Obviously, candidates similar to existing “good” employees. This self-reinforcing monoculture is one of the biggest dangers of a future with advanced artificial intelligence systems that dictate our lives.

The cybersecurity of artificial intelligence systems is a very important part of our cyber infrastructure, both for enterprises and, more importantly, for a country that wants to be a champion in the AI space. While the use of these systems is increasing exponentially, we are still ignoring the serious safety issues and how easy and low-tech it is to exploit these systems.

At Manxmachina we help enterprises build and secure their artificial intelligence systems and prevent security breaches that can lead to serious financial loss and, most importantly, brand damage.

If you are responsible for an AI system, please be worried. But where should you start?

Let’s make it easy by asking some preliminary questions as a mini-audit:

  1. What are the risks involved with my AI system failing?
  2. What is the plan for recovery after a potential failure?
  3. Who is responsible for training my system? Who is procuring my training dataset?
  4. Is the training dataset free from bias regarding gender, age, socioeconomic status, and/or race? What is my risk of a public relations fiasco because of that? (Check the example here.) Have I checked for such biases? Are my brand reputation and my personal reputation safe?
  5. Who has access to the weights of the neural network? What is the risk of the weights leaking along with all my IP?
  6. Is there private information in my training dataset? For example, are there patient names in my training data? In that case, my neural network weights are not HIPAA compliant.
  7. Who has access to probe the AI system? Is there control on this access? Is there a limiting throttle on how many times my system can be probed? (A minimal sketch of such a throttle follows this list.)
  8. Have I tested my system with adversarial attacks? How can I treat these vulnerabilities without compromising accuracy?
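
On question 7, one simple place to start is a per-client throttle in front of the prediction endpoint. Here is a minimal sketch (the class and parameter names are illustrative, not any particular product’s API) of a sliding-window limit on how often a single API key can query the model:

```python
import time
from collections import defaultdict, deque

# Minimal sketch of a probing throttle: each API key may make at most
# `limit` prediction calls per sliding window; heavier probing is refused
# (and would typically also be logged and alerted on).
class ProbeThrottle:
    def __init__(self, limit=100, window_seconds=3600):
        self.limit = limit
        self.window = window_seconds
        self.calls = defaultdict(deque)          # api_key -> recent call times

    def allow(self, api_key: str) -> bool:
        now = time.time()
        recent = self.calls[api_key]
        while recent and now - recent[0] > self.window:
            recent.popleft()                     # forget calls outside window
        if len(recent) >= self.limit:
            return False                         # suspiciously heavy probing
        recent.append(now)
        return True

throttle = ProbeThrottle(limit=100, window_seconds=3600)
if not throttle.allow("key-123"):                # hypothetical client key
    raise RuntimeError("query limit exceeded for this client")
```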

Would you like to continue the discussion?

Would you like us to create a comprehensive cybersecurity audit of your artificial intelligence system? Connect with us here.


Alexandros Louizos, MD

Alexandros Louizos, MD is a vascular surgeon who left his career in surgery in 2015 to join the artificial intelligence revolution. He is a serial entrepreneur and has created multiple businesses. He has designed and executed artificial intelligence systems currently in production at Fortune 500 companies.
