Cybersecurity and AI/ML, Before the New Age of AI: A Recap, and a Look Forward
A wrap-up to the six-part series and a look at the future of cybersecurity and AI/ML.
Derek E. Brink, CISSP, VP and research fellow at Aberdeen Strategy & Research, wraps up his six-part series on the natural fit between cybersecurity and AI/ML.
Throughout this series, I’ve shared a diverse mix of examples from my previous work at Aberdeen showing how leading solution providers have been leveraging these capabilities for several years:
- Endpoint detection and response (EDR)
- Email security
- Managed detection and response (MDR)
- Insider risk
- Advanced bot detection and mitigation services
Given the last few months of excitement and hyperbole over newly available tools like OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing AI, one might get the mistaken impression that artificial intelligence (AI) and machine learning (ML) have just burst upon the scene, much like the Greek goddess Athena emerged fully grown from the forehead of her father, Zeus.
Ironically, Athena was not only the goddess of war but also of practical insight and prudent restraint, which captures a lot of the Sturm und Drang in our current conversations about AI. But I digress. The simple point is that the use of AI/ML in cybersecurity really isn’t all that new. If you haven’t already, check out the five examples listed above.
Differences Between Traditional AI/ML and Generative AI
To be fair, it’s worth calling out the difference between the traditional AI/ML discussed in these particular examples and the generative AI that’s currently capturing our collective imagination. What kind of analyst would I be if I didn’t attempt to do this using a simple 2-by-2 matrix, as shown below?
- All five examples (EDR, email security, MDR, insider risk, and bot detection and mitigation) fall under the traditional AI/ML column, in which computers perform specific tasks based on pre-programmed rules and algorithms.
- In all five examples, leading cybersecurity solutions that incorporate AI/ML include representative use cases from both the “Strong” row (e.g., pattern recognition, which generally augments and up-levels current cybersecurity roles) and the “Weak” row (e.g., process automation, which generally relieves humans of certain tasks).
- Going forward, we can no doubt expect leading cybersecurity solutions to incorporate new capabilities enabled by the generative AI column — with corresponding use cases such as predictions and content creation, as shown below. As always, Aberdeen will incorporate those into its upcoming research projects in cybersecurity.
Source: Aberdeen Strategy & Research, August 2023
Next week, my colleagues and I at Aberdeen will begin sharing some of the key findings and insights from our just-concluded research study on AI in the Enterprise: The State of AI in 2023. Some early examples include:
- How much are enterprises currently investing in AI-based initiatives as a percentage of their annual IT budgets?
- How has AI started to affect current jobs in cybersecurity?
And as they say: There’s much, much more. In the meantime, you can keep up with selected research findings across multiple coverage areas, which Aberdeen shares at www.aberdeen.com.
Did you enjoy this series on the future of AI/ML in cybersecurity? Share your thoughts with us on Facebook, X, and LinkedIn. We’d love to hear from you!
MORE ON CYBERSECURITY AND AI/ML, BEFORE THE NEW AGE OF AI
- Cybersecurity and AI/ML, Before the New Age of AI: Endpoint Security
- Cybersecurity and AI/ML, Before the New Age of AI: Email Security
- Cybersecurity and AI/ML, Before the New Age of AI: Managed Detection and Response
- Cybersecurity and AI/ML, Before the New Age of AI: Insider Risk
- Cybersecurity and AI/ML, Before the New Age of AI: Bad Bot Detection and Mitigation