A competition pitting one machine-learning system against another offers a glimpse of the future of cyberwarfare, and may even yield humanity’s best defense against a cyberattack of nightmarish proportions.
Machine learning, which uses past data to predict uncertain outcomes from new inputs, is increasingly relied on across an array of industries for vital decisions. As its use grows, many worry it will be turned toward sophisticated cyberattacks designed to evade security measures. It’s already used for defense, so it’s not a stretch to imagine artificial intelligence deployed as an offensive weapon.
Some worry cybercriminals could use AI to fool the computer vision systems on self-driving cars, for instance. A similar tactic could trick voice- or face-recognition systems, too.
The contest, hosted by data science platform Kaggle, is made up of three tracks, MIT Technology Review reports. In the first, competitors craft inputs that confuse a machine-learning system into making a mistake. The second requires forcing the system to misclassify an input as something specific. The third challenges competitors to build a machine-learning system that defends against such attacks.
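To give a sense of what the attack tracks involve, here is a minimal sketch of one well-known technique for generating adversarial inputs, the fast gradient sign method. This is not the contest’s actual code; the tiny model, random “image,” and perturbation budget below are illustrative placeholders.

```python
# Minimal sketch of a non-targeted adversarial attack (fast gradient sign
# method). The model, input, and epsilon here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: a tiny linear model over 3x32x32 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32)   # example input
label = torch.tensor([3])          # its true class
epsilon = 0.03                     # perturbation budget

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a small step in the direction that increases the loss,
# then clamp back to the valid pixel range.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbation is small enough that a human would see essentially the same image, yet it can push a classifier toward the wrong answer; defense entries in the third track aim to stay accurate on exactly this kind of manipulated input.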
“As machine learning becomes more widely used, understanding the issues and risks from adversarial learning becomes increasingly important,” Benjamin Hamner, the contest’s host, told Technology Review.