Over the past few weeks, the cybersecurity landscape has changed dramatically. Employees working from home expand the exposed attack surface and generate unusual user behavior patterns, and newly deployed remote collaboration platforms might not have been fully vetted yet.
One sector of the cybersecurity industry might help compensate for these new risk factors: deception technology. Formerly known as honeypots — a term that does not Google well — deception technologies sprinkle the environment with fake “accidentally leaked” credentials, decoy databases, and mock servers that are invisible to legitimate users. You then wait for attackers to stumble on them. False positive rates are low, so companies can immediately kick off automated remediation strategies like blocking IP addresses and quarantining infected systems.
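The core idea above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's implementation: the decoy credentials, function name, and remediation actions below are all hypothetical, and a real deployment would integrate with directory services, firewalls, and an EDR platform rather than an in-process check.

```python
# Minimal sketch of deception-based detection: any use of a planted decoy
# credential is treated as a confirmed intrusion. All names and values here
# are illustrative assumptions, not from any real product.

DECOY_CREDENTIALS = {
    # Fake "accidentally leaked" credentials sprinkled through the environment.
    ("svc_backup", "Winter2024!"),
    ("db_admin", "changeme123"),
}

def check_login(username: str, password: str, source_ip: str) -> dict:
    """Return a remediation decision for an authentication attempt."""
    if (username, password) in DECOY_CREDENTIALS:
        # Legitimate users never touch decoys, so false positives are rare
        # and automated remediation can fire immediately.
        return {
            "alert": True,
            "actions": [f"block_ip:{source_ip}", "quarantine_host"],
        }
    return {"alert": False, "actions": []}
```

Because a decoy has no legitimate use, any hit is high-confidence by construction, which is what makes the aggressive automated responses (IP blocking, host quarantine) safe to trigger without human review.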
This technology may have a bad reputation for manageability and overhead, but artificial intelligence (AI) and machine learning (ML) are eliminating some of the biggest problems, and some companies are already putting it to work.
AI speeds deception technology rollout at Aflac
Insurance giant Aflac, for example, began looking at deception technology three years ago and ran proofs of concept with multiple vendors. “What we wanted was a technology that could be attack agnostic,” says DJ Goldsworthy, Aflac’s director of security operations and threat management, “one that doesn’t depend on any signatures or behavioral patterns. One that would detect any type of attack.”