As artificial intelligence transitions from a passive chatbot to an active “agent” capable of making decisions and executing tasks, a new frontier of digital risk has emerged. The primary concern? AI agents running wild with your credit cards.
To address this looming security gap, the FIDO Alliance — the industry association behind authentication standards such as FIDO2 and passkeys — has announced the launch of two new working groups. Supported by major industry players like Google and Mastercard, these groups aim to build the foundational security protocols required to govern “agentic commerce.”
The Shift from Passive AI to Autonomous Agents
Traditional AI requires constant human prompting. However, “agentic AI” is designed to act on a user’s behalf. Imagine telling an AI, “Buy these sneakers if they drop below $100,” and having the agent monitor stock and execute the payment autonomously.
While this offers immense convenience, it introduces unprecedented vulnerabilities:
- Agent Hijacking: A bad actor could intercept an agent and give it rogue instructions.
- Lack of Intent Verification: Without clear protocols, merchants cannot distinguish between a legitimate user-authorized transaction and a glitch or a malicious command.
- Privacy Risks: Facilitating these transactions requires sharing sensitive data across a complex web of platforms, merchants, and banks.
Building a “Security Baseline” for AI
The FIDO Alliance is working to ensure that the industry does not repeat the mistakes made during the era of passwords. Just as the world eventually moved toward more secure authentication to replace easily stolen passwords, the industry must now establish guardrails for autonomous interactions.
The new initiative focuses on three core pillars:
1. Cryptographic Validation: Using digital signatures and related cryptographic techniques to prove that an agent is acting strictly within the parameters set by the human user.
2. Phishing Resistance: Creating authorization mechanisms that cannot be easily tricked by social engineering or identity theft.
3. Transparency and Recourse: Establishing frameworks so that if a transaction goes wrong, there is a clear, verifiable “paper trail” to resolve disputes between users and merchants.
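To make the first pillar concrete, here is a minimal sketch of what "cryptographic validation" of an agent's transaction could look like: the user signs a bounded mandate (item and price cap), and the merchant checks both the signature and the bounds before accepting. All names and fields here are hypothetical, and a real protocol would use asymmetric signatures (e.g. passkey-style key pairs) rather than the HMAC used below for brevity.

```python
import hashlib
import hmac
import json

# Stand-in for a private key held on the user's device; a real system
# would use asymmetric signing so merchants never hold the secret.
USER_KEY = b"user-device-secret"

def sign_mandate(mandate: dict) -> str:
    """User signs a mandate that bounds what the agent may do."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()

def verify_transaction(mandate: dict, signature: str, txn: dict) -> bool:
    # 1. The mandate must carry a valid user signature.
    payload = json.dumps(mandate, sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. The transaction must stay within the mandate's bounds.
    return txn["item"] == mandate["item"] and txn["price"] <= mandate["max_price"]

mandate = {"item": "sneakers-sku-42", "max_price": 100}
sig = sign_mandate(mandate)

print(verify_transaction(mandate, sig, {"item": "sneakers-sku-42", "price": 95}))   # within bounds
print(verify_transaction(mandate, sig, {"item": "sneakers-sku-42", "price": 150}))  # exceeds the cap
```

The key design point is that the merchant rejects not only forged signatures but also any transaction that drifts outside the user's stated limits — a hijacked agent cannot spend more than the mandate allows.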
Industry Contributions: AP2 and Verifiable Intent
To accelerate this process, Google and Mastercard are contributing open-source tools to the working groups, bypassing the years of development typically required for such standards.
- Google’s Agent Payments Protocol (AP2): Provides a method to cryptographically verify that a user actually intended for a specific transaction to occur.
- Mastercard’s Verifiable Intent Framework: A secure mechanism designed to give users granular control over what an agent is allowed to do.
“We want to provide cryptographic proof that a transaction was authorized by the user themself, but keep it private,” says Stavan Parikh, Google’s VP and GM of payments. This approach allows for “selective disclosure,” meaning a merchant sees only what they need to see to fulfill the order, protecting the user’s broader privacy.
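One simple way to sketch "selective disclosure" is with salted per-field hash commitments, loosely in the spirit of SD-JWT-style credentials: the user commits to every order field up front, then reveals only the fields the merchant needs, with the salts proving each revealed value matches its commitment. This is an illustrative simplification, not the actual AP2 or Verifiable Intent design, and every field name here is hypothetical.

```python
import hashlib
import json
import secrets

def commit(fields: dict) -> tuple[dict, dict]:
    """Return (commitments, disclosures): a salted hash per field, plus
    the salt/value pairs needed to selectively reveal fields later."""
    commitments, disclosures = {}, {}
    for name, value in fields.items():
        salt = secrets.token_hex(16)
        digest = hashlib.sha256(f"{salt}:{json.dumps(value)}".encode()).hexdigest()
        commitments[name] = digest
        disclosures[name] = (salt, value)
    return commitments, disclosures

def verify_disclosure(commitments: dict, name: str, salt: str, value) -> bool:
    """Merchant checks a revealed field against the user's commitment."""
    digest = hashlib.sha256(f"{salt}:{json.dumps(value)}".encode()).hexdigest()
    return commitments[name] == digest

order = {"item": "sneakers-sku-42", "shipping_city": "Austin",
         "payment_token": "tok_abc", "email": "user@example.com"}
commitments, disclosures = commit(order)

# The merchant receives all commitments but only the disclosures it needs:
for field in ("item", "shipping_city"):
    salt, value = disclosures[field]
    assert verify_disclosure(commitments, field, salt, value)
# The payment token and email stay undisclosed to the merchant.
```

The per-field salts are what keep the hidden fields private: without them, the merchant cannot brute-force low-entropy values (like a city name) from the commitments alone.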
The Race Against Rapid Adoption
The primary challenge is speed. AI technology is evolving much faster than the traditional cycle of industrial standardization. As Mastercard’s Chief Digital Officer, Pablo Fourez, notes, the rapid pace of AI development “compresses” timelines that used to take years into mere months.
For the ecosystem to succeed, these protocols must not only be technically sound but also practical enough for merchants and banks to adopt at scale. Without these guardrails, the high cost of fraud and consumer distrust could stifle the very innovation that makes agentic AI so promising.
Conclusion
As AI agents move from experimental tools to mainstream financial actors, the industry is racing to establish cryptographic standards that ensure autonomy does not come at the expense of security. The success of this initiative will determine whether the future of AI commerce is defined by seamless convenience or widespread financial exploitation.