During Infosecurity Europe in London this week, cybersecurity experts sounded off on worries about artificial intelligence being used for nation-state cyber weapons.
LONDON, UK – With the infosec community eyeing artificial intelligence as the next big frontier for cyber defense, experts here at Infosecurity Europe on Tuesday warned that several challenges in how AI processes and interprets data must first be addressed before widespread adoption.
AI, in the context of security, holds the promise of allowing security applications to automate functions at scale, opening the door for more efficient defense tools and processes. For example, AI could aid in container process anomaly detection and enterprise user login analytics, and could take defensive measures when it senses network or system anomalies.
But AI still faces a plethora of challenges that are creating distrust in the technology. Those include human bias embedded in the data these systems ingest, and AI-based applications being leveraged by cybercriminals for various malicious purposes.
Particularly when it comes to nation-state cyber tools, “AI isn’t good enough when lives are on the line,” said Nicola Whiting, chief strategy officer with Titania. “There’s a big problem with trust when it comes to AI. We can’t always trust that AI is unbiased… and we can’t validate it unless we fix this.”
Her point: we can’t trust AI outcomes if the data used to reach those conclusions contains human biases or has been intentionally manipulated.