Speaking to Computer Weekly in October 2019 during an event at the company’s Helsinki headquarters, Hypponen said that although true AI is a long way off – in cyber security it is largely restricted to machine learning for threat modelling to assist human analysts – the potential danger is real, and should be considered today.
“I believe the most likely way for superhuman intelligence to be generated will be through human brain simulators, which is really hard to do – it’s going to take 20 to 30 years to get there,” said Hypponen.
“But if something like that, or some other mechanism of generating superhuman levels of intelligence, becomes a reality, it will absolutely become a catalyst for an international crisis. It will increase the likelihood of conflict.”
Hypponen posited a scenario where a government, or even a corporation, announces it will debut a superhuman artificial intelligence within the next month.
“How are others going to react?” he said. “They will immediately see the endgame. If those guys get that it’s going to be game over, they will win everything, they will win every competition, they will beat us in every technological development, they will win every war. We must, at any cost, steal that technology. Or if we can’t steal it, we must, at any cost, destroy that technology.”
The idea that AI could eventually inform the development of autonomous cyber weapons is not new, and has been previously voiced by other threat researchers, including Trend Micro’s Rik Ferguson, who earlier this year said CISOs should be thinking about how to prepare for autonomous, self-aware, adaptive attacks, even though they are not yet a reality.
During a speech in 2018, the now-Liberal Democrat leader Jo Swinson proposed a Geneva Convention for cyber warfare, saying cyber defence was the new civil defence. Policies to this effect have appeared in the Lib Dem General Election manifesto, which was published on 20 November.
Hypponen stressed that cyber threats from AI are currently extremely limited in scope, and that while companies such as F-Secure are using machine learning for defence, attackers have not yet used it for offence.