Agentic AI Has the Potential to Change the Game for Cybersecurity
According to experts, agentic AI brings new possibilities to the cybersecurity landscape, especially when security teams face challenges due to talent shortages and increasing volumes of alerts.
[Image source: Chetan Jha/MITSMR Middle East]
Technology companies are funneling billions of dollars into artificial intelligence (AI) to keep up with unfettered demand. But now, more than ever, it is imperative that they secure their technology, because a significant shift is underway: agentic AI.
These systems are capable of planning and executing tasks independently, managing manufacturing capabilities, and operating production systems. They also interact with tools, environments, other agents, and sensitive data. Over time, human intervention and supervision of the technology will decrease.
Recently, Gartner published a report predicting that by 2029, agentic AI will be capable of resolving 80% of routine customer service interactions with no human intervention.
With the development of agentic AI, the cybersecurity industry faces a rapidly shifting threat landscape, as businesses grow wary that AI agents could go rogue.
Once AI agents start managing critical infrastructure, protecting access points becomes essential, as technological innovation also creates more opportunities for hackers.
The Risks Are High
While this new technology has huge potential, the risks are also high. AI agents can act independently, and so they may misinterpret a situation or take actions that weren’t intended. Threat actors might also trick these systems by feeding them false information or finding ways to influence their decisions.
According to Vladislav Tushkanov, Machine Learning Technology Research Group Manager at Kaspersky, several risks should be addressed:
- Adversarial manipulation occurs when unauthorized or malicious actors influence, hijack, or manipulate the behavior or outputs of autonomous AI, resulting in incorrect decisions, harmful actions, or data leaks. These attacks are novel and LLM-specific, leading to a larger attack surface that defenders need to secure.
- Agentic AI may be exploited by cybercriminals to launch automated cyberattacks, including spearphishing, which could increase the total number of attacks that the already strained cybersecurity workforce needs to deal with.
“These concerns are real, and businesses are paying attention and exercising caution. Companies are asking important questions: How do we stay in control of these AI agents? Can we explain their decisions?” says Kenan Abu Ltaif, Regional Lead for the Middle East and Turkey at Proofpoint, adding that as agentic AI becomes more common, strong oversight and clear safety rules will be essential.
New Shift Demands A New Mindset
Interestingly, this has led the cybersecurity industry to rethink how to secure AI, defending both with and against agentic AI. Experts say agentic AI has the potential to change the game for cybersecurity.
“Agentic AI is changing how we think about cybersecurity,” says Dr. Emad Fahmy, Director of Systems Engineering for the Middle East, Netscout. “Instead of waiting for attacks and reacting manually, organizations can now build systems that adapt and heal themselves in real time.”
This shift demands a new mindset – focused on automation, continuous intelligence, and orchestration. “We’re moving from reactive defence to proactive, resilient security architectures,” adds Fahmy.
Unlike traditional systems that wait for instructions, agentic AI can make decisions and take actions based on specific goals. This shift means cybersecurity can no longer rely on separate tools and manual responses.
Instead, Abu Ltaif says what is needed are integrated, flexible systems that think and act more like humans. "It allows security tools to detect threats and act across email, cloud, and messaging platforms in smarter, more connected ways. It clearly indicates that cybersecurity is moving into a new era."
New Opportunities Agentic AI Introduces In Cybersecurity
According to experts, agentic AI brings new possibilities to the cybersecurity landscape while necessitating a fundamental shift in the ecosystem. This is especially important as cybersecurity teams face challenges due to talent shortages and increasing volumes of alerts.
It can take over tasks that usually require human effort, such as sorting through suspicious emails, responding to security alerts, or digging into unusual activity in apps or browsers, says Abu Ltaif.
“Cybersecurity tools powered by agentic AI can help security teams move faster and cover more ground without needing to grow their headcount. The technology also works across many digital channels, so threats don’t slip through the cracks.”
He adds another big plus: “It can offer real-time coaching to employees who may be making risky decisions online.”
While agentic AI provides innovative approaches to enhance threat detection, improve response capabilities, and strengthen AI security, Fahmy says it also lets security teams anticipate threats rather than chase them.
"These systems can recognize patterns, predict where problems might emerge, and respond automatically, often without human intervention," he says. "Instead of relying on static defences, they adapt continuously, reducing manual workloads and improving protection across increasingly complex, distributed environments."
Whether agentic AI delivers on its promise of boundless automation remains to be seen, but Tushkanov notes that in cybersecurity, many defensive tasks are complex yet repetitive, which makes them good candidates for automation. That is why, he says, "many companies are exploring LLM-based agents, and products are being made agent-ready by introducing, for example, MCP-based integrations."
He adds: "LLM-based agents can draw data from many sources, from host telemetry to threat intelligence, and act on it, implementing complex workflows that today often require human intervention, from automated penetration testing to rapid incident response. Even more important, LLM-based agents can become powerful assistants, providing human analysts with contextual data to aid decision-making in complex modern environments."
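The workflow Tushkanov describes, fusing telemetry with threat intelligence and either acting autonomously or handing context to an analyst, can be sketched in code. The following is a minimal, hypothetical illustration, not any vendor's API: all field names, the threat-intel set, and the scoring thresholds are assumptions made for the example.

```python
# Hypothetical sketch of an agent-style triage step: fuse host telemetry
# with threat intelligence, then either auto-respond or escalate to a
# human analyst with context. All names and thresholds are illustrative.

KNOWN_BAD_IPS = {"203.0.113.7"}  # stand-in for a real threat-intelligence feed

def triage(event: dict) -> dict:
    """Decide how to handle one telemetry event.

    event: {"host": str, "remote_ip": str, "failed_logins": int}
    Returns a decision plus the context a human analyst would need.
    """
    score = 0
    reasons = []
    if event["remote_ip"] in KNOWN_BAD_IPS:
        score += 2
        reasons.append("remote IP matches threat intelligence")
    if event["failed_logins"] >= 5:
        score += 1
        reasons.append("burst of failed logins")

    if score >= 2:
        action = "auto_isolate_host"    # strong signal: contain autonomously
    elif score == 1:
        action = "escalate_to_analyst"  # ambiguous: give a human the context
    else:
        action = "log_only"
    return {"host": event["host"], "action": action, "context": reasons}

decision = triage({"host": "srv-01", "remote_ip": "203.0.113.7", "failed_logins": 1})
print(decision["action"])  # auto_isolate_host
```

The key design point is the middle branch: where the evidence is ambiguous, the agent does not act but packages its reasoning for a human, matching Tushkanov's "powerful assistant" framing.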
Securing Agentic Infrastructure
Agentic AI systems do more than just analyze information; they also take action based on that information. These agents can access tools, produce outputs that lead to subsequent effects, and interact with sensitive data in real time.
As AI agents proliferate and become increasingly autonomous and integrated into enterprise workflows, experts emphasize that securing the infrastructure they depend on plays a crucial role.
Experts say security teams should take steps to control who can access sensitive data, set limits on agents’ actions, and watch for signs of misuse.
According to Tushkanov, agentic AI infrastructure can be secured through visibility and control:
- Restrict agentic AI access to sensitive systems and data using strong access controls and data segmentation to minimize the risk of internal exposure and broadened attack surfaces.
- Integrate advanced cybersecurity solutions that detect and block AI-generated attacks, including adaptive phishing and automated intrusion attempts.
- Conduct continuous audits and real-time monitoring of agent activity, decision-making, and data access to quickly identify anomalies, manipulation, or misuse.
- Require human oversight and expert intervention at critical junctures to prevent unintended outcomes or exploitation, especially for high-impact decisions or system changes.
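The controls listed above can be combined into a single gate placed in front of an agent's tool calls. The sketch below is a hypothetical illustration under assumed names, not a real product's API: a least-privilege allowlist bounds what the agent may do, every request is written to an audit log, and high-impact actions are held until a human approves them.

```python
# Hypothetical policy gate for agent tool calls: least-privilege allowlist,
# audit logging of every attempt, and mandatory human sign-off for
# high-impact actions. Action names and the policy split are illustrative.

LOW_IMPACT = {"read_alert", "enrich_ip"}          # agent may run these itself
HIGH_IMPACT = {"delete_data", "change_firewall"}  # require a human in the loop

audit_log = []  # in production: append-only, tamper-evident storage

def gate(agent_id: str, action: str, human_approved: bool = False) -> str:
    """Return 'allow', 'pending_approval', or 'deny', and record the attempt."""
    if action in LOW_IMPACT:
        verdict = "allow"
    elif action in HIGH_IMPACT:
        verdict = "allow" if human_approved else "pending_approval"
    else:
        verdict = "deny"  # default-deny anything not explicitly permitted
    audit_log.append({"agent": agent_id, "action": action, "verdict": verdict})
    return verdict

print(gate("triage-bot", "enrich_ip"))                             # allow
print(gate("triage-bot", "change_firewall"))                       # pending_approval
print(gate("triage-bot", "change_firewall", human_approved=True))  # allow
print(gate("triage-bot", "exfiltrate"))                            # deny
```

Because even denied attempts are logged, the audit trail doubles as the real-time monitoring feed the experts call for: anomalous request patterns surface there first.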
Securing agentic systems starts with the basics. That means trusted automation, clean data, and clear oversight. “You need closed-loop monitoring that can spot problems and react quickly, plus reliable data feeds to maintain visibility. These systems must be able to adapt, but do so within boundaries. Without human oversight, new vulnerabilities could emerge as fast as the old ones are solved,” says Fahmy.
Abu Ltaif emphasizes the human element in keeping agentic AI secure, saying businesses need to build guardrails into the system from the start. “That means making sure every action an AI agent takes is trackable, explainable, and, if needed, can be stopped by a human.”
“With the right checks in place, agentic AI can be powerful and trustworthy,” he adds.
MIT Sloan Management Review Middle East will host a summit on Agentic AI, bringing together leading researchers, academic pioneers, and technology experts in Dubai on September 23. For more details, speaker announcements, and to request an invitation, visit here