The risks and challenges of agentic AI
Agentic AI is a revolutionary tool, but like all technology, it's not infallible. Let's return to your proactive travel assistant. It knows you're planning a holiday, so, with its usual enthusiasm, it jumps into action.
It books your flight, reserves a taxi to the airport, arranges fast-track security, handles automatic check-in and even speeds you through immigration (your agents and the immigration authority's agents having connected, worked together and approved your travel without you filling in a single multi-page visa form).
Everything seems seamless... until you find yourself in Sydney, Nova Scotia, not Sydney, Australia.
This kind of mistake is a stark reminder of the risks of agentic AI: not just the risk of getting things wrong, but also the implications for security and trust. Handing over personal or sensitive information to AI agents is a big deal and requires an element of trust. However capable they are, AI agents are still only as good as their programming, data, and training.
Why did this happen?
- Misinterpretation of context: The agent never asked you to clarify your intended destination and picked Sydney, Nova Scotia, from incomplete data. It may have leaned on recent, unrelated searches, or confused two places that share the same name.
- Poor validation processes: The system didn't cross-check the flight details against the rest of your travel history (a minimal cross-check of this kind is sketched after this list).
- Autonomy without oversight: The agent acted without confirming critical decisions with you.
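To make that validation gap concrete, here is a minimal sketch of a destination cross-check in Python. Everything in it is illustrative: the `CANDIDATES` gazetteer, the `UserContext` fields and the word-overlap scoring are assumptions made for this example, not part of any real booking system.

```python
from dataclasses import dataclass, field


@dataclass
class UserContext:
    # Signals a real agent might hold about the traveller (all assumed here).
    recent_searches: list[str] = field(default_factory=list)
    past_destinations: list[str] = field(default_factory=list)


class AmbiguousDestination(Exception):
    """Raised when the agent should ask the user instead of guessing."""
    def __init__(self, options: list[str]):
        super().__init__(f"Please confirm the destination: {options}")
        self.options = options


# Hypothetical gazetteer: one name, several real places.
CANDIDATES = {
    "sydney": ["Sydney, Australia", "Sydney, Nova Scotia, Canada"],
}


def evidence(candidate: str, ctx: UserContext) -> int:
    """Crude score: words the candidate shares with the user's history."""
    tokens = set(candidate.lower().replace(",", "").split())
    entries = ctx.recent_searches + ctx.past_destinations
    return sum(len(tokens & set(entry.lower().split())) for entry in entries)


def resolve_destination(query: str, ctx: UserContext) -> str:
    """Return an unambiguous destination, or raise so a human can decide."""
    candidates = CANDIDATES.get(query.lower(), [query])
    if len(candidates) == 1:
        return candidates[0]

    scores = {c: evidence(c, ctx) for c in candidates}
    ranked = sorted(candidates, key=scores.get, reverse=True)
    best, runner_up = ranked[0], ranked[1]

    # Cross-check: only proceed if the evidence clearly favours one option.
    if scores[best] == 0 or scores[best] == scores[runner_up]:
        raise AmbiguousDestination(candidates)
    return best


# With no disambiguating evidence, the agent must ask rather than assume:
ctx = UserContext(recent_searches=["cheap flights to sydney"])
try:
    resolve_destination("sydney", ctx)
except AmbiguousDestination as exc:
    print(exc)  # Please confirm the destination: ['Sydney, Australia', ...]
```

The design choice worth noting is the failure mode: when the evidence is absent or tied, the function refuses to guess and surfaces the ambiguity, which is exactly what the Sydney booking above never did.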
How can agentic AI risks be mitigated?
Businesses and individuals using agentic AI must plan for potential errors by implementing safeguards:
- Guardrails and clear validation rules: AI systems should require confirmation for critical decisions, like booking expensive flights or choosing destinations (a simple version of such a gate is sketched after this list).
- Explainability and transparency: Your AI should provide detailed reasoning for its actions, helping you understand why it made a particular choice.
- Human-in-the-loop systems: For complex or high-stakes decisions, an AI agent should always include a human reviewer to ensure accuracy.
- Common-sense reasoning in AI: AI systems should incorporate "common sense agents" that question their own reasoning and assumptions before acting. For example, the AI might self-assess: "Sydney, Canada seems unlikely; I should ask for clarification before proceeding".
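Guardrails and human review compose naturally: a validation rule decides which actions are critical, and a human-in-the-loop gate holds those actions until someone approves them. The sketch below is a hypothetical illustration; the `Action` shape, the cost threshold and the `ask_user` prompt are all assumptions, not a prescribed design.

```python
from dataclasses import dataclass

COST_THRESHOLD = 500.0  # Assumed policy: anything pricier needs human sign-off.


@dataclass
class Action:
    description: str
    cost: float
    reasoning: str  # The agent's explanation, kept for transparency.


def is_critical(action: Action) -> bool:
    """Validation rule: expensive (or otherwise irreversible) actions are critical."""
    return action.cost >= COST_THRESHOLD


def ask_user(action: Action) -> bool:
    """Human-in-the-loop gate: show the action and its reasoning, await approval."""
    print(f"Proposed: {action.description} (${action.cost:.2f})")
    print(f"Agent's reasoning: {action.reasoning}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def execute(action: Action) -> None:
    """Run an action, routing critical ones through the human gate first."""
    if is_critical(action) and not ask_user(action):
        print("Action cancelled by user.")
        return
    print(f"Executing: {action.description}")


execute(Action(
    description="Book flight LHR -> Sydney, Nova Scotia",
    cost=1240.00,
    reasoning="Matched 'Sydney' from your recent searches.",
))
```

Printing the agent's reasoning alongside the request also covers the explainability point: the user approves or rejects a decision they can actually inspect, and the same gate is a natural place for a common-sense self-assessment to escalate low-confidence assumptions.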
These protections don't just apply to travel; they're critical for businesses as well. Imagine an agentic AI managing supply chains that accidentally overrides inventory records or breaches compliance rules. Without oversight and validation processes, the costs could be catastrophic.