Agentic AI: Nine Essential Questions

How does agentic AI work? What can it do for your organization? What security issues should be on your radar screen? Catch up on key information from MIT SMR experts.

Topics

  • In January, MIT SMR columnists Thomas H. Davenport and Randy Bean predicted that agentic AI would be “a sure bet for 2025’s ‘most trending AI trend.’ ” They called that one correctly.

    “Agentic AI seems to be on an inevitable rise: Everybody in the tech vendor and analyst worlds is excited about the prospect of having AI programs collaborate to do real work instead of just generating content, even though nobody is entirely sure how it will all work,” they noted.

    That’s still true, almost a year later. Agentic AI continues to capture the imaginations of leaders and the hopes of tech vendors. Yet much of the discussion around AI agents is hypothetical, and most corporate work remains in the early-experimentation stage. Even OpenAI cofounder Andrej Karpathy recently declared that it may take 10 years for AI agents to work well.

    While market watchers are beginning to raise concerns about the circular nature of the deals fueling the AI economy, many corporate leaders are nonetheless feeling significant pressure to figure out how to innovate using AI — especially agentic AI.

    With all the hype about agentic, however, it can be tough to sort through the facts. Do you have a clear picture of what agentic AI does? Of how software agents communicate? Of what the technology’s limitations are? Here, we briefly answer some key questions about agentic AI technology, using excerpts from two recent MIT SMR articles, “Agentic AI at Scale: Redefining Management for a Superhuman Workforce” and “Three Agentic AI Security Essentials.” Let our expert researchers and practitioners get you up to speed. Let’s delve into it.

    1. What are AI agents?

    “Although there is no agreed-upon definition, agentic AI generally refers to AI systems that are capable of pursuing goals autonomously by making decisions, taking actions, and adapting to dynamic environments without constant human oversight. According to MIT’s AI Agent Index, deployment of these systems is increasing across fields like software engineering and customer service despite limited transparency about their technical components, intended uses, and safety.”

    “AI agents — powered by large language models (LLMs) — are no longer futuristic concepts. Agentic AI tools are working alongside humans, automating workflows, making decisions, and helping teams achieve strategic outcomes across businesses.”

    2. How do AI agents differ from other AI tools?

    “Unlike older AI applications that operate within narrowly defined boundaries, like chatbots, search assistants, or recommendation engines, AI agents are designed for autonomy.”

    3. Do companies see tangible ROI from agentic AI investments?

    “Among companies achieving enterprise-level value from AI, those posting strong financial performance and operational efficiency are 4.5 times more likely to have invested in agentic architectures, according to Accenture’s quarterly Pulse of Change surveys fielded from October to December 2024. (This research included 3,450 C-suite leaders and 3,000 non-C-suite employees from organizations with revenues greater than $500 million, in 22 industries and 20 countries.) These businesses are no longer experimenting with AI agents; they are scaling the work.”

    4. How do AI agents communicate to get work done?

    “AI agents operate in dynamic, interconnected technology environments. They engage with application programming interfaces (APIs), access a company’s core data systems, and traverse cloud and legacy infrastructure and third-party platforms. An AI agent’s ability to act independently is an asset only if companies are confident that those actions will be secure, compliant, and aligned with business intent.”

    5. What kinds of security gaps can arise with agentic AI?

    “Agentic AI has the power to transform enterprise operations precisely because it operates across systems and not just within them. Unlike older AI assistants, which are confined to a single application, AI agents work among multiple systems and platforms, often using APIs to help execute entire business workflows. But this same interoperability causes trouble for many organizations as the web of cyber vulnerabilities grows. … Two critical vulnerabilities: data poisoning and prompt injections.”

    6. What is data poisoning? What are prompt injections? How do they relate to agentic AI?

    “Data poisoning is the deliberate manipulation of training data to degrade system integrity, trustworthiness, and performance, and is one of the most insidious threats to agentic AI systems. In a recent Accenture cybersecurity survey, 57% of organizations expressed concern about data poisoning in generative AI deployments. Such attacks introduce inaccuracies into training data or embed hidden back doors that activate under certain conditions. For instance, in March 2024, a vulnerability in the Ray AI framework led to the breach of thousands of servers, wherein attackers injected malicious data in order to corrupt AI models. …

    “The prompt-injection security threat affects AI systems that rely on language models to interpret inputs. In this scenario, malicious instructions are embedded in a seemingly benign content, such as text or even images. Once that content is processed by the AI, the hidden prompts can hijack system behavior.”

    7. What steps can companies take to improve agentic AI security?

    “They must map vulnerabilities across their organization’s tech ecosystem, simulate real-world attacks, and embed safeguards that protect data and detect misuse in real time.”

    8. How does mapping all the interactions between LLMs, tools such as OCR, internal systems, and users lessen risk?

    “Mapping every interaction lessens risk by:

    • Exposing hidden data connections or back doors.
    • Highlighting where controls such as encryption and access restrictions are critical.
    • Closing off unintended or unnecessary interactions that someone could turn into an unauthorized pathway.
    • Improving anomaly detection by establishing a clear baseline of expected behavior, to make unauthorized activities easier to spot.

    “Mapping does not eliminate risk by itself, but it exposes and constrains system behavior, making it harder for unauthorized AI use or data leaks to go unnoticed.”

    9. When agentic AI systems are making critical decisions, how can companies ensure accountability?

    “We offer the following recommendations for organizations seeking to improve accountability over agentic AI systems:

    1. Adopt life-cycle-based management approaches. Agentic AI is fast, complex, and dynamic. Implement a continuous, iterative management process that tracks agentic AI systems from initial design through deployment and ongoing use. Instead of one-time reviews, introduce recurring assessments, technical audits, and performance monitoring to detect and address issues in real time. Management approaches should make oversight an embedded part of daily operations, not a periodic or isolated compliance task.
    2. Integrate human accountability into AI governance structures. Design management frameworks to explicitly assign specific roles and responsibilities for both the human manager and agentic AI system over every stage of the AI life cycle. Establishing decision-making protocols, escalation paths, and evaluation checkpoints must be part of every agentic AI system deployment to ensure that people remain answerable to outcomes. These structures should reinforce that agentic AI is a tool within human-led processes.
    3. Enable AI-led decisions in defined circumstances. While human oversight is essential, the properties of agentic AI stretch its limits. New management approaches should identify areas where AI can and should prevail based on its superior speed, accuracy, or consistency. In such cases, governance can focus instead on defining boundaries, monitoring performance, and ensuring that human intervention is reserved for higher-risk scenarios. These responsibilities should be agreed upon among senior corporate leadership and clearly communicated to managers so that they fully understand their accountability in these situations.
    4. Prepare for agentic AI that creates other AI systems. Failure to account for AI systems developed or modified autonomously by other AI systems can result in a significant visibility gap in an organization. Recognizing and integrating these emergent systems will be critical to defining the scope of AI in the enterprise. Governance structures and management approaches that do not account for AI offspring will foster, not mitigate, AI-related risks.
    5. When it comes to agentic AI, make the implicit explicit. Since agentic AI systems require explicitly defined rules and threshold values, organizations should clarify the role and scope of agentic AI in their management structures. Just as human labor scales through hierarchical or structured management systems designed to ensure accountability, the integration of agentic AI in the workforce requires a clear understanding and articulation of its scope and a deliberate articulation of its role within these organizational frameworks, including its relation to the human components of an increasingly superhuman workforce.”

    Want to learn more? Read the full articles: “Agentic AI at Scale: Redefining Management for a Superhuman Workforce” and “Three Agentic AI Security Essentials.”

    Topics

    More Like This

    You must to post a comment.

    First time here? : Comment on articles and get access to many more articles.