Search This Blog

Wednesday, 1 April 2026

Of Agentic Commerce, Prompt Injection and Authorised Push Payment Fraud

I've posted separately on prompt injection (among the risks associated with generative/agentic AI) and UK payment service providers' refund obligations for authorised push payment fraud (among other forms of potential redress). But I don't think I've explained the clear link between the two, and the implications for the full 'magic' of 'agentic commerce'... This post is for information purposes only. If you need legal advice (e.g. legal analysis in the context of modelling/testing the tech), please let me know.

Authorised push payment fraud (APP fraud) occurs where you are tricked into paying someone you did not intend to. As mentioned, there are limited APP fraud reimbursement requirements for payment service providers who facilitate payments using certain UK payment systems, for example, and there may be other grounds for recovering your money, initially at the expense of your payment service provider.

Agentic commerce describes a scenario where some kind of 'agentic AI' tool (or 'bot') is deployed for the purpose of enabling the automation of product search, selection, purchase and payment. Such tools may be 'native' to an open generative AI platform or made available by a retailer or retail marketplace within that platform, as the recent failure of Walmart's trial of OpenAI's Instant Checkout has demonstrated, leading OpenAI to phase out its own bot and leave the retailers to handle product selection, purchase and payment.

The challenge for guarding against APP fraud in such an automated scenario is that AI agents ultimately cannot distinguish between legitimate and adverse 'prompts', which might be inadvertently introduced either directly or indirectly via an authenticated agent by an authenticated user. 

There are innumerable ways that an adverse prompt could find its way to a user's device, and then to the AI agent, but here are some recent examples. And the prompt may cause the agent or another system or programme to behave adversely, with a fraudulent purchase and/or payment to an unintended payee being only two examples.

Of course, UK payment service providers do have obligations to guard against APP fraud, whether they are part of the reimbursement schemes or not - and even extended mandatory processing times to allow for suspect transactions to be investigated. And anyone making a payment from a UK bank or payment account will be familiar with a number of hoops to be gone through except in a limited number of own-account payments, even after multi-factor account log-in.

Where payment service providers are acting as merchant acquirers, it is likely they their acquiring agreements push the risk of fraudulent transactions, chargebacks and refund onto the merchant. But there is still the job of monitoring and managing the risk that the merchant won't be able to reimburse the payment service provider...

One challenge will be whether the adverse prompts will be able to beat the 'hoops' or trick the user into overriding them, or override them in an automated fashion. Equally, a payment service provider might argue that the consumer has lost any right of reimbursement due to 'gross negligence' in not adhering to a certain 'standard of caution' in its use of an AI agent, for example.

At any rate, the need to meet such challenges would seem to count against the full promise of a 'magically' automated agentic commerce experience... or make it potentially enormously expensive to support without necessarily knowing whether unforeseen claims might suddenly surge.