AI Surprises Its Creators By Cheating In Task
What we have long seen in films is becoming a reality.
A research team from Stanford and Google that developed a machine learning agent meant to convert aerial images into street maps was deceived after the AI hid information.
The researchers wanted to speed up the process of converting satellite imagery to Google’s maps. In this regard, a network called the CycleGAN was used. Not so hard to see right?
Wait for it… CycleGAN was doing amazing with its conversion. So good was it that it reconstructed aerial pictures and add some details that were not in the aerial pictures when it did the reverse process.
This intriguing effort made the researchers start to audit the data the machine learning agent was working on and noticed that it had played on them.
The researchers claim that their intention was for the Al to match the aerial pictures correctly but it had learned to encode the pictures to another street map instead of the “real” map.
What it did not know is that the information it hid in “a nearly imperceptible, high-frequency signal” will be needed later.
And for those scared that Al’s might take over their jobs, the saying garbage in, garbage out holds water even in this instance.
The machine only showed that machines might not be able to do some difficult jobs such as converting images to each other but go about the easy way of superimposing images in such a way that humans might find difficult to detect.
It also revealed that networks such as CycleGAN will do the easier job if not carefully monitored or in simple HAL words, “It can only be attributable to human error.”