AI: the holy grail of cybersecurity or a threat to us all?
Artificial intelligence (AI) is no longer the stuff of science fiction movies. Both AI and the sub-discipline of machine learning are increasingly being touted by cybersecurity firms as a way to tackle sophisticated online threats and help stretched IT teams do more with less.
Yet although new research reveals that the vast majority of SMEs have bought into AI as the future of security, there are serious concerns about the claims made by vendors and the maturity of the technology.
A brave new world
Senseon research claims that 81% of SMEs believe AI will improve their security posture and 76% think it will improve the efficiency of their day-to-day jobs. Perhaps unsurprisingly, a majority (69%) of respondents say they are planning to implement AI security in the next five years, with over two-fifths (44%) planning to do so in the immediate future.
So why is the technology in such high demand?
AI and machine learning tools can be trained on large sets of data to learn what “normal” behavior looks like. They can then process data in real time as it is fed in and flag behavior that deviates from the norm, indicating potential malicious activity.
- In a cybersecurity context, this is especially useful for spotting attacks that have never been seen before, or that are designed to evade traditional tools like anti-virus (AV).
- It can also be used to spot attacks which don’t contain any malware at all, such as Business Email Compromise (BEC) threats. These typically impersonate the CEO or CFO to trick the recipient into wiring corporate funds out of the business. In that respect, BEC is more like email fraud than a traditional cyber-attack.
But AI algorithms can be trained to “learn” the normal writing style of C-level execs, for example, and then spot when a scammer is trying to spoof their email.
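To make the “learn normal, flag deviations” idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest. The behavioral features (login hour, data sent, hosts contacted) and the synthetic baseline are illustrative assumptions, not any vendor’s actual telemetry or detection logic.

```python
# A minimal sketch of anomaly-based detection: train on "normal"
# behavior, then flag activity that deviates from it.
# Feature choices (login hour, MB sent, hosts contacted) are
# illustrative assumptions, not a real product's telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: typical user sessions (hour of login, MB sent, hosts contacted)
normal_activity = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around 10:00
    rng.normal(50, 15, 500),  # ~50 MB outbound per session
    rng.normal(8, 3, 500),    # ~8 distinct hosts per session
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# New events: one ordinary session, one 03:00 bulk-transfer lookalike
new_events = np.array([
    [11.0, 55.0, 9.0],    # looks normal
    [3.0, 900.0, 60.0],   # deviates sharply from the baseline
])

# predict() returns 1 for inliers, -1 for anomalies
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - investigate" if label == -1 else "normal"
    print(event, "->", status)
```

Note that nothing here requires a known attack signature: the model only knows what normal looks like, which is exactly why this approach can catch previously unseen threats.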
The value of AI
The value of AI in this context lies in spotting threats that human eyes might miss, while freeing up security teams, whose time is at a premium thanks to chronic skills shortages, to focus on higher-value tasks.
New academic research has revealed that similar techniques can be used to spot scammers on dating sites who try to trick vulnerable users into sending them money.
After analyzing 15,000 profiles from the free Dating ‘N More website, a prototype AI tool found that fake profiles typically contain more images and more emotive language; it identified scam profiles with a 93% success rate. Given that BEC ($1.3bn) and romance scams ($362m) accounted for greater losses than any other cybercrime reported to the FBI last year, such AI tools are clearly much needed.
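The study is described only at a high level, but the underlying approach, a supervised classifier over profile features, can be sketched as below. The two features (image count and the share of emotive words) echo the study’s findings; the word list, toy training data and model choice are hypothetical stand-ins for the researchers’ actual system.

```python
# Hypothetical sketch of fake-profile detection: a supervised
# classifier over simple profile features. The features echo the
# study's findings, but the data and model here are illustrative.
from sklearn.linear_model import LogisticRegression

EMOTIVE_WORDS = {"love", "heart", "soulmate", "destiny", "forever"}

def profile_features(num_images: int, bio: str) -> list:
    """Return [image count, fraction of emotive words in the bio]."""
    words = bio.lower().split()
    emotive_ratio = sum(w in EMOTIVE_WORDS for w in words) / max(len(words), 1)
    return [float(num_images), emotive_ratio]

# Tiny labelled sample: 1 = scam profile, 0 = genuine (toy data)
profiles = [
    (2, "enjoy hiking and cooking on weekends", 0),
    (3, "software engineer who likes travel", 0),
    (9, "my heart seeks its soulmate our love is destiny", 1),
    (8, "forever yours my love my destiny my heart", 1),
]
X = [profile_features(n, bio) for n, bio, _ in profiles]
y = [label for _, _, label in profiles]

clf = LogisticRegression().fit(X, y)

candidate = profile_features(7, "love forever my soulmate destiny heart")
print("scam probability:", round(clf.predict_proba([candidate])[0][1], 2))
```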
More hype than substance?
It’s a gap that is being rapidly filled by the cybersecurity market. At Infosecurity Europe, the region’s largest cybersecurity trade show, there are few if any vendors that don’t claim to offer some kind of AI or machine learning-powered capabilities.
But can IT and business leaders believe the hype? According to recent research from venture capital firm MMC Ventures, 40% of European AI start-ups didn’t actually use the technology in any meaningful way. It’s a trend likely to be repeated globally.
The problem of data bias
There are also doubts over whether it’s quite the Holy Grail some in the industry would have you believe. For one thing, there’s the problem of data bias: AI systems are fundamentally only as good as the data on which they are trained.
Thus, Amazon was last year forced to shelve a four-year project to develop an AI tool for automating the selection of job candidates, after the data on which it was trained was found to be biased in favor of men.
Thousands of AI experts last year also signed a pledge not to develop autonomous weapons systems. As Titania CSO Nicola Whiting said during her keynote at Infosecurity Europe:
“If the experts are saying AI is not good enough yet when lives are on the line, how can it be good enough to make decisions on our networks?”
Is AI transparent at all?
It’s not just a problem of data bias, but also of data accuracy and type (probabilistic or deterministic), and of how transparent the AI system is: that is, whether it allows others to validate its decision making and data integrity.
Unless these areas are properly addressed by industry players, it will be difficult to fully trust the decisions AI tools make. As AI plays an ever greater role in our daily lives, used in everything from credit scoring to criminal justice sentencing, these concerns become increasingly important. Thankfully, they are starting to be addressed by the likes of the OECD.
So what should business leaders look for to gain an advantage for their security teams?
- Due diligence becomes extra important
Get a security expert to double-check the claims made by security vendors touting AI features: is it marketing hype, or does the firm offer something genuinely useful?
- Make sure it’s the right fit for your organization
AI is not a silver bullet, and many of the tools on the market may be too high-powered for smaller businesses. At the very least, it should be used alongside, rather than as a replacement for, more traditional tools such as AV, firewalls and intrusion detection.
But ultimately, it will pay to keep a close eye on developments. AI may not be fully formed yet, but it will become increasingly important to cyber defense, as a new arms race between the security industry and cyber-criminals starts to emerge.
“All of us are excited about AI making our jobs easier,” best-selling author Jamie Bartlett told an audience of security professionals at Infosecurity Europe. “But the criminals are thinking about the same things, and usually doing it before we are.”