AI Systems Reverse Engineered Using Adversarial Machine Learning

AI security expert Dawn Song warns that “adversarial machine learning” could be used to reverse-engineer systems—including those used in defense.

Song said adversarial machine learning could be used to attack just about any system built on the technology. “It’s a big problem,” she told the audience. “We need to come together to fix it.”

Adversarial machine learning involves experimentally feeding input into an algorithm to reveal the information it has been trained on, or distorting input in a way that causes the system to misbehave.
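The "distorting input" side of this can be sketched in a few lines. Below is a minimal, illustrative FGSM-style perturbation against a toy linear classifier; the weights and input are invented for illustration (real attacks target deep vision models, but the mechanics are the same: step the input against the gradient of the model's score).

```python
import numpy as np

# Toy setup: a linear "image" classifier with fixed (assumed trained) weights.
rng = np.random.default_rng(0)
w = rng.normal(size=100)     # hypothetical trained weights
b = 0.0
x = rng.normal(size=100)     # a benign input

def predict(x):
    """Binary decision of the toy linear model."""
    return 1 if w @ x + b > 0 else 0

# For a linear model the gradient of the score w.r.t. the input is just w.
# FGSM-style step: nudge every input dimension against the current class,
# so the score is pushed across the decision boundary.
eps = 0.5
label = predict(x)
direction = -np.sign(w) if label == 1 else np.sign(w)
x_adv = x + eps * direction  # small, structured distortion of the input

print("original:", predict(x), "adversarial:", predict(x_adv))
```

With this step size the perturbation is small per pixel but accumulates across dimensions, which is why even tiny distortions can flip a classifier's decision.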

By feeding lots of images into a computer-vision algorithm, for example, it is possible to reverse-engineer its functioning and elicit certain kinds of outputs, including incorrect ones. Song presented several examples of adversarial-learning trickery that her research group has explored.
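The query-based reverse-engineering described above can be sketched as model extraction: probe a black box with many inputs, record only its outputs, and fit a local substitute that mimics it. The "black box" here is a hidden linear rule made up for illustration; Song's examples involve real vision systems.

```python
import numpy as np

rng = np.random.default_rng(1)
secret_w = rng.normal(size=20)           # unknown to the attacker

def black_box(x):
    # Attacker sees only the 0/1 output, never the weights.
    return 1 if secret_w @ x > 0 else 0

# Step 1: probe with many inputs and collect the observed labels.
X = rng.normal(size=(2000, 20))
y = np.array([black_box(x) for x in X])

# Step 2: fit a substitute model with a simple perceptron-style update.
w_hat = np.zeros(20)
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if w_hat @ xi > 0 else 0
        w_hat += (yi - pred) * xi

# Step 3: the substitute now approximates the black box on fresh inputs,
# and can be studied offline to find inputs that produce wrong outputs.
agree = np.mean([(1 if w_hat @ x > 0 else 0) == black_box(x)
                 for x in rng.normal(size=(500, 20))])
print(f"substitute agrees with black box on {agree:.0%} of fresh inputs")
```

Once an attacker has a faithful substitute, adversarial inputs crafted against the copy tend to transfer to the original system, which is what makes query access alone dangerous.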
By Will Knight · March 25, 2019 · Intelligent Machines

How malevolent machine learning could derail AI
