Riskified Expands AI Agent Intelligence to Protect Merchants' Native AI Shopping Assistants
Akihiro Suzuki
Source: www.businesswire.com
Key Takeaways
- Riskified expands AI Agent Intelligence to protect merchants' native AI shopping assistants from fraud
- AI agent-driven traffic shows significantly higher fraud risk, with threat actors exploiting agentic protocols
- E-commerce businesses must embed risk intelligence layers into their AI assistants from the design stage
On March 3, 2026, Riskified (NYSE: RSKD), a fraud prevention platform for e-commerce, announced an expansion of its "AI Agent Intelligence" capabilities. This expansion now covers protection for AI shopping assistants that merchants build and operate themselves.
Previously, AI Agent Intelligence primarily focused on monitoring orders from external AI shopping agents like ChatGPT. The expansion now extends protection to "native AI assistants" that merchants deploy directly on their digital storefronts.
Co-founder and CTO Assaf Feldman said that merchants gain a "home-field advantage" by launching their own virtual shopping assistants and maintaining direct relationships with customers, and described Riskified as the risk intelligence layer powering that advantage.
Background and Industry Trends
The movement of AI agents into e-commerce purchasing processes has accelerated rapidly since the second half of 2025. According to McKinsey & Company research, 82% of retailers have already launched generative AI pilots aimed at transforming customer service.
Meanwhile, a global survey conducted by Riskified in September-October 2025 with over 5,000 consumers found that 73% already use AI in their shopping activities. Seventy percent reported being comfortable with AI agents making purchases on their behalf.
However, this rapid growth comes with serious risks. Riskified's analysis shows that LLM-driven traffic carries higher fraud risk compared to standard search traffic — 2.3x higher for ticket sellers and 1.8x higher for electronics retailers. Fraud rings are exploiting agentic protocols and chatbots to rapidly deplete inventory and resell through fake storefronts.
Details of the New AI Agent Intelligence Features
The core of this expansion consists of two new capabilities.
AI Agent Identity Signals enables merchants' AI shopping assistants to directly query Riskified's "Identity Graph" (an identity information database) and retrieve risk indicators in real time. Three connection methods are supported: MCP (Model Context Protocol) integration, Google's Agent-to-Agent (A2A) protocol, and standard RESTful APIs.
This allows native AI assistants to obtain real-time risk assessments during customer conversations. For example, when handling a return or exchange request, the assistant can decide instantly based on the consumer's risk level and eligibility.
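As a rough illustration of this flow, the sketch below shows how a native assistant might consult a risk intelligence API mid-conversation before approving a return. The endpoint URL, payload fields, and score threshold are illustrative assumptions, not Riskified's actual API.

```python
import json

# Hypothetical sketch: a native AI assistant consulting a risk
# intelligence layer before acting on a return request.
# Endpoint, field names, and threshold are assumptions for illustration.

RISK_ENDPOINT = "https://api.example.com/v1/identity-signals"  # placeholder

def build_risk_query(customer_id: str, order_id: str, action: str) -> dict:
    """Payload the assistant would POST to the risk layer."""
    return {
        "customer_id": customer_id,
        "order_id": order_id,
        "requested_action": action,  # e.g. "return", "exchange"
    }

def decide_return(risk_score: float, return_eligible: bool,
                  threshold: float = 0.7) -> str:
    """Map a risk score and eligibility flag to an assistant action."""
    if not return_eligible:
        return "deny"
    if risk_score >= threshold:
        return "escalate"  # hand off to a human reviewer
    return "approve"

# Example: a low-risk, eligible customer is approved in-conversation.
payload = build_risk_query("cust_123", "order_456", "return")
print(json.dumps(payload))
print(decide_return(risk_score=0.12, return_eligible=True))  # approve
```

In a real deployment the score would come from the provider's MCP, A2A, or REST interface rather than a local function; the point is that the decision happens inside the conversation flow, not after the fact.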
The enhanced AI Agent Policy Builder is a tool built on Riskified Decision Studio that allows businesses to build, simulate, deploy, and track business rules for AI agent-originated orders. Specifically, it detects and prevents patterns such as programmatic refund fraud, reseller arbitrage, and promotion abuse.
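To make the rule-building idea concrete, here is a minimal sketch of declarative screening for agent-originated orders. The rule names echo the abuse patterns named above, but the thresholds, data fields, and structure are illustrative assumptions, not Riskified's actual rule engine.

```python
from dataclasses import dataclass

# Hypothetical sketch of rule-based screening for AI-agent-originated
# orders. Thresholds and field names are illustrative assumptions.

@dataclass
class AgentOrder:
    agent_id: str
    refunds_last_30d: int   # refunds requested by this agent recently
    promo_codes_used: int   # promo codes applied on this order
    units_ordered: int      # quantity in this order

def flag_order(order: AgentOrder) -> list[str]:
    """Return the names of policy rules the order trips."""
    flags = []
    if order.refunds_last_30d > 5:
        flags.append("programmatic_refund_fraud")
    if order.promo_codes_used > 3:
        flags.append("promotion_abuse")
    if order.units_ordered > 50:
        flags.append("reseller_arbitrage")
    return flags

order = AgentOrder("agent-01", refunds_last_30d=8,
                   promo_codes_used=1, units_ordered=120)
print(flag_order(order))  # ['programmatic_refund_fraud', 'reseller_arbitrage']
```

A production policy builder would let operators simulate rules like these against historical traffic before deploying them, which is the workflow the announcement describes.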
Additionally, AI Agent Approve, introduced in August 2025 through a partnership with HUMAN Security, remains available. Published as an MCP server package on AWS Marketplace, it serves as a trust layer that lets demand-side actors (LLMs and AI agents) communicate securely with the Riskified platform on the supply (merchant) side.
Implications and Action Items for E-Commerce Businesses
This announcement provides important guidance for e-commerce businesses considering deploying their own AI assistants.
First, while AI assistants can be revenue drivers, deploying them without fraud protection carries significant risk. Survey results showing that 45% of consumers use AI for product discovery, 37% for review summaries, and 32% for price comparison underscore the strong demand for AI assistants.
There are three practical considerations for deployment. First, embed a risk intelligence layer at the AI assistant's design stage: rather than retrofitting, an architecture that integrates real-time risk assessment into conversation flows is essential.
Second, support open protocols such as MCP, A2A, and REST APIs. Riskified offers multiple connection methods, enabling flexible integration with the merchant's existing technology stack.
Third, leverage multi-merchant networks. Fraud patterns that are invisible in a single merchant's data can be identified by combining network information across many merchants. The value of "network-based" fraud detection like Riskified's will only increase in the age of AI agents.
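The third point is easy to see with a toy example: the same device fingerprint placing orders at several merchants in a short window looks normal to each merchant individually but stands out at the network level. All data below is synthetic and the fingerprint/merchant fields are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative sketch of cross-merchant ("network") signal aggregation.
# events: (device_fingerprint, merchant_id) pairs seen across the network.

def cross_merchant_hits(events: list[tuple[str, str]]) -> dict[str, int]:
    """Count how many distinct merchants saw each device fingerprint."""
    seen = defaultdict(set)
    for fingerprint, merchant in events:
        seen[fingerprint].add(merchant)
    return {fp: len(merchants) for fp, merchants in seen.items()}

events = [
    ("fp_a", "merchant_1"), ("fp_a", "merchant_2"), ("fp_a", "merchant_3"),
    ("fp_b", "merchant_1"),
]
hits = cross_merchant_hits(events)
# fp_a hit 3 merchants -- invisible to any single merchant's own data
print(hits)  # {'fp_a': 3, 'fp_b': 1}
```

Real network-based detection is far richer (shared identity graphs, behavioral signals, velocity checks), but the structural advantage is the same: visibility across merchants that no single merchant has.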
Riskified plans to showcase these capabilities in more detail at its global summit Ascend 2026, scheduled for May 4-6, 2026 in New York. A Japan edition is planned for October.
Conclusion
Riskified's announcement clearly signals that agentic commerce security is shifting from "defending against external agents" to "protecting your own agents." The trend of e-commerce businesses using AI assistants as their own customer touchpoints is irreversible, and building the infrastructure to ensure their safety is an urgent priority.
Also notable is the progress toward protocol standardization reflected in support for MCP, A2A, and REST. As the entire e-commerce infrastructure shifts toward agent compatibility — not just fraud detection, but payments, logistics, and CRM — the extent to which the security layer standardizes alongside it will be the next focal point.
Related Articles

Experian Warns Agentic Commerce Fraud Is the Biggest Threat of 2026
Experian releases its 2026 fraud predictions. For the first time in history, AI agent fraud surpasses human error as the leading threat. E-commerce businesses must urgently build multi-layered AI-driven fraud prevention and agent authentication systems.

Cambridge University Warns AI Agent Safety Disclosures Are 'Dangerously Behind' -- Security and Transparency Framework Efforts Accelerate
A joint MIT-Cambridge study reveals that 26 of 30 major commercial AI agents fail to disclose safety evaluations. NIST, Vouched, and SentinelOne race to build trust frameworks for agentic commerce.

"My AI Bought That, Not Me" — Chargebacks911 Warns of New Dispute Wave in Agentic Commerce Era
Chargebacks911 warns of new chargeback categories driven by AI agents. As Visa and Mastercard advance agentic payment pilots, dispute resolution frameworks remain undeveloped. E-commerce businesses must urgently build agent permission controls, transaction audit trails, and notification systems.