According to Nicole Eagan, CEO of software company Darktrace, only two out of every ten cybersecurity experts typically embrace artificial intelligence (AI) as a key component of threat detection. The others, she explains, tend to be “totally resistant” or agree to “give [AI] a try” but don’t put in the effort required to make the most of the tech post-purchase.
Granted, information security professionals are known to be risk-averse, which can make them reluctant to try out new tech, and for good reason: protecting the company against risk is job number one. Yet, in theory, AI can identify more problems, faster. So why doesn’t every security team use it?
Mike Small, senior analyst for research firm KuppingerCole, believes many actually do — they just might not think of it as AI. Darktrace and competitors like Senseon and SecBI perform threat detection on a higher level than traditional antivirus software. But, Small says, “What they are doing is not, in a sense, completely unique.” At its core, he explains, threat detection AI is a heightened form of behavioral analytics that looks for patterns to identify possible threats and vulnerabilities. All the big cybersecurity platforms like Symantec and McAfee have this general type of technology already rolled in.
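The behavioral-analytics approach Small describes boils down to baselining an entity’s normal activity and flagging deviations from it. The sketch below is purely illustrative, not drawn from any vendor’s product: it applies a simple z-score test to a host’s history of daily outbound traffic, one of the simplest forms of the pattern-matching Small refers to.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from the baseline in `history`
    by more than `threshold` standard deviations (a z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:           # baseline has no variation:
        return latest != mu  # any change at all counts as anomalous
    return abs(latest - mu) / sigma > threshold

# A host that usually sends ~50 MB/day suddenly sends 500 MB.
history = [48, 52, 50, 47, 53, 49, 51]
print(is_anomalous(history, 500))  # far outside the baseline -> True
print(is_anomalous(history, 50))   # within normal range -> False
```

Commercial tools layer far more sophistication on top (per-user models, peer-group comparison, machine-learned baselines), but the core idea is the same: define normal, then alert on departures from it.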
“[Teams] are buying things for the outcome, rather than the technology,” Small explains, and the outcome many get from behavioral matching embedded in larger tools works just fine: Last year, Forbes reported that McAfee makes more than $2.5 billion a year. In comparison, Darktrace sold upwards of $400 million.
While it isn’t realistic to expect a specialized industry tool to sell as much as a household-name platform, the numbers show that when it comes to threat detection, standalone AI hasn’t taken over the market yet. So, what are the real roadblocks? It depends on whom you ask: Eagan, the vendor; Small, the analyst; and Eric Gauthier, a buyer, each give a completely different answer.