

Weaponizing Artificial Intelligence – Facing and Curbing the Imminent Threat

By Ziad K Abdelnour | 26 December 2017


Many of us don’t even realize to what extent Artificial Intelligence (AI) and machine learning have already become a part of our daily lives.

Every time you book an Uber, your app uses machine learning to minimize your wait time, determine the price of your ride, and match you with other passengers. Facebook uses AI to recognize faces, and even your email spam filter uses the technology. And what do you think is flying the plane more than half the time whenever you're on a commercial flight?

It’s both incredible and unnerving to think that what we’d only seen in science fiction movies is fast becoming our reality. But while the intent of this technology has always been to make our lives better, governments are already using the tech to make their citizens “safer.”

Russian weapons maker Kalashnikov is working on an AI-driven automated gun system that could have come straight out of the latest installment in The Terminator franchise. More than 30 countries have deployed or are developing armed drones, and each successive generation gains more autonomy to identify targets and guide missiles.

In fact, the same autonomous technology that has allowed self-driving cars to avoid pedestrians could very well be used to detect and attack targets.

Facing the Imminent Threat

Whether AI is pitting a human against a machine in an innocent game of chess or being developed to strengthen militaries, the more alarming threat is AI in the hands of hackers with nothing but ill intent. Militaries maintain that they remain in control, limiting automated engagements to defenses against missiles and high-speed rockets. But beyond the misuse of such weaponry lies the fear that the technology and its accompanying data could fall into the wrong hands.

While it's hard to prove, cybersecurity experts believe that hackers are already positioned to use AI for their diabolical plans, and indeed may already be doing so.

Cyber-attacks are frightening enough already: cybercriminals extract massive amounts of personal data that can be sold and used for everything from blackmailing large corporations to stealing the identities of everyday citizens and committing fraud. And if hackers infiltrated military databases, they could expose locations and strategies, or even leave an entire country defenseless with malware potent enough to shut down all operations.

And with AI and machine learning, cyber-attacks can now be automated on a grand scale – essentially weaponizing the technology.

Curbing the Threat

In the same way that a deadly snakebite is treated with antivenom derived from a venomous snake's own venom, the best defense against weaponized AI is more weaponized AI.

Many experts maintain that cybercriminals are most likely still using traditional approaches to carry out their attacks. However, with AI becoming increasingly accessible, it appears highly probable that hackers will add AI to their arsenal in the very near future. Developing defensive AI now, capable of detecting suspicious activity and countering cyber-attacks, is better than being defenseless when an attack hits.
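As a concrete illustration of "AI that detects suspicious activity," here is a minimal sketch using an off-the-shelf unsupervised anomaly detector (scikit-learn's IsolationForest). The traffic features and thresholds are invented for the example, not drawn from any real system; the point is simply that a model trained on normal behavior can flag activity that deviates from it.

```python
# Minimal sketch: flagging anomalous network activity with an
# unsupervised detector. Feature values here are synthetic
# illustrations (requests per minute, payload size in KB).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal traffic: ~100 requests/min with small payloads.
normal = rng.normal(loc=[100, 5], scale=[10, 1], size=(200, 2))

# Hypothetical attack bursts: far more requests, far larger payloads.
attacks = np.array([[900.0, 80.0], [1200.0, 95.0]])

# Train only on normal behavior; anything unlike it scores as an outlier.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns 1 for inliers and -1 for outliers.
print(detector.predict(attacks))
print(detector.predict(normal[:5]))
```

In practice such a detector would be one small component in a monitoring pipeline, retrained as "normal" traffic patterns drift; the example only shows the core idea of learning a baseline and flagging deviations automatically.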

At the scale at which cybercriminals may deploy weaponized AI, a single cybersecurity expert, or even an entire team with limited resources, would be unprepared to face a foe armed with superior machine learning and AI sophistication.

You have been warned… beware.
