AI and machine learning techniques are said to hold great promise in security, enabling organisations to operate an IT predictive security stance and automate reactive measures when needed. Is this perception accurate, or is the importance of automation gravely overestimated?
Security Think Tank: Undoubtedly, artificial intelligence (AI) can help organisations tackle an expanding threat landscape and a widening set of vulnerabilities as criminals become more sophisticated. However, AI is no silver bullet when it comes to protecting assets, and organisations should be thinking about cyber augmentation rather than the automation of cyber security alone.
Areas where AI can currently be deployed include training a system to recognise even the subtlest behaviours of ransomware and malware before they take hold, and then isolating the offending processes from the rest of the system, as sketched below.
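To make this concrete, here is a minimal sketch of behaviour-based detection, assuming a simple scikit-learn classifier trained on illustrative process-telemetry features (file-write rate, write entropy, mass-rename count). The feature names, synthetic data and isolation threshold are hypothetical assumptions for illustration, not drawn from any particular product.

```python
# Minimal sketch: flag ransomware-like processes from behavioural telemetry.
# Features and threshold are illustrative assumptions, not a real product's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: rows are processes, columns are
# [files_written_per_min, mean_write_entropy, mass_rename_count]
benign = rng.normal(loc=[5, 4.0, 0], scale=[3, 0.5, 1], size=(500, 3))
ransom = rng.normal(loc=[120, 7.5, 40], scale=[30, 0.3, 10], size=(500, 3))
X = np.vstack([benign, ransom])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = ransomware-like

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def should_isolate(process_features, threshold=0.9):
    """Flag a process for isolation when the predicted probability of
    ransomware-like behaviour exceeds the (illustrative) threshold."""
    prob = clf.predict_proba([process_features])[0, 1]
    return prob >= threshold

print(should_isolate([150, 7.8, 55]))  # heavy encrypted writes -> likely True
print(should_isolate([3, 4.1, 0]))     # normal activity -> likely False
```

In practice the training data would come from labelled endpoint telemetry rather than synthetic distributions, and the isolation step would hand off to an endpoint protection or orchestration tool.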
Other examples include automated phishing and data theft detection, which are extremely helpful because they enable a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility of immediately spotting a change in user behaviour that could signal an attack; a minimal sketch of this idea follows.
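As one way such behavioural analytics could work, the sketch below assumes an IsolationForest trained per user on illustrative session features (login hour, data volume downloaded, number of distinct hosts accessed). The feature set and contamination rate are assumptions made for illustration.

```python
# Minimal sketch: per-user behavioural anomaly detection with IsolationForest.
# Feature choices and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline behaviour for one user: office-hours logins, modest downloads
baseline = np.column_stack([
    rng.normal(10, 1.5, 1000),   # login hour (roughly 08:00-12:00)
    rng.normal(200, 50, 1000),   # MB downloaded per session
    rng.poisson(3, 1000),        # distinct hosts accessed
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New sessions: one typical, one anomalous (3am login, bulk download)
sessions = np.array([
    [11, 180, 2],
    [3, 5000, 40],
])
# predict() returns 1 for inliers, -1 for sessions worth investigating
print(model.predict(sessions))
```

A deployment would train one such baseline per user or peer group and feed anomalous sessions into an analyst's triage queue rather than acting on them blindly.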
The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance present a further problem: as AI gets better at safeguarding assets, it also gets better at attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to gain an edge over those defences.
Typical attacks involve gathering information about a system, or sabotaging an AI system by flooding it with requests.
Elsewhere, so-called deepfakes are a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes that are almost impossible to distinguish from genuine content.