Search This Blog

Monday, 14 October 2024

FCA Warns Payment Service Providers To Control Other Types Of APP Fraud

There is now a mandatory reimbursement regime and other protections against authorised push payment (APP) fraud for consumer payments made within the UK in GBP using the Faster Payments System and CHAPS. However, the FCA has also told banks and other payment service providers that the combination of their obligation to guard against financial crime and their consumer duty means that firms must offer the same protection in other scenarios where consumers may be tricked into making payments using their services, such as between payment accounts at the same service provider ('on us' APP fraud).

If you are planning to provide a lower level of protection to ‘on us’ APP fraud reimbursement compared to payments made through FPS and CHAPS, we ask you to contact us to provide an explanation of the steps you have taken to meet those obligations.

Of course, 'consumers' include 'micro-enterprises' (which employ fewer than 10 people and have an annual turnover or balance sheet total of up to €2m) and small charities (which have an annual income of less than £1m).
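Purely by way of illustration, here is a minimal sketch (in Python, with hypothetical field names and figures) of how a firm might screen which customers fall within those 'consumer' protections; real assessments involve more nuance, such as group structures:

    def is_protected_customer(kind, employees=0, turnover_eur=0.0,
                              balance_sheet_eur=0.0, annual_income_gbp=0.0):
        """Rough eligibility screen for the APP reimbursement protections (sketch only)."""
        if kind == "consumer":
            return True
        if kind == "micro-enterprise":
            # Fewer than 10 employees and a turnover or balance sheet total of up to EUR 2m.
            return employees < 10 and (turnover_eur <= 2_000_000 or balance_sheet_eur <= 2_000_000)
        if kind == "small charity":
            # Annual income of less than GBP 1m.
            return annual_income_gbp < 1_000_000
        return False

    print(is_protected_customer("micro-enterprise", employees=6, turnover_eur=1_500_000))  # True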

The FCA's letter to CEOs also sets out various expectations in relation to the mandatory APP regime, including a reminder on anti-fraud controls and the need to factor the potential level of fraud reimbursement into firms' working capital calculations.

This post is for information purposes, please get in touch if you require legal advice.

Sunday, 29 September 2024

The FCA Wonders Out Loud Whether UK E-money Is Really Redeemable at Par...

The 'decoupling drama' surrounding USDT stablecoins appears to be echoing in the UK e-money world, amid news from the UK Financial Conduct Authority that it doesn't know whether UK e-money firms fully safeguard the cash corresponding to their customers' e-money balances. This bombshell comes with a commitment to change the safeguarding rules in ways that could bring further problems, casting serious doubt on whether the UK authorities really have a grip on the payments sector.

This post is for information purposes only. If you would like legal advice, please let me know.

Context

The FCA's consultation on proposed changes to the 'safeguarding' rules for non-bank payment service providers makes you wonder who's been responsible for supervising the 24-year-old sector. The regulatory regime has been under the FCA's direct supervision since it took over from the beleaguered Financial Services Authority in 2013. The sector comprises over 1,200 firms and processed £1.9 trillion in payment transactions in 2023. Electronic money (basically prepaid stored value that's used for making e-payments to others) represents about £1 trillion of these volumes, issued by 250 firms. Some e-money balances, such as those relating to prepaid card programmes, are significant and held for long periods.

E-money is supposed to be issued on receipt of funds, and to be 'redeemable' on demand, at 'par value'. So, if you pay £1 to the issuer, it should immediately credit your online payment account in its systems with £1 and that balance should continue to be 'worth' £1 when you transfer, spend or withdraw it. You have the regulatory right to withdraw - or 'redeem' - your e-money balance on demand.

But e-money balances (like other non-bank payment flows) are not subject to the deposit guarantee under the Financial Services Compensation Scheme that backs bank deposits (up to a limit of £85,000 for all your deposits with the one bank). Instead, the right to redeem your e-money at par is underpinned by a regulatory obligation on the issuer to safeguard the corresponding amount of cash in GBP in a designated bank account, separate from its own funds (or with insurance), so that the funds are available to pay out immediately on demand.
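To make the mechanics concrete, here's a rough sketch (hypothetical figures, in Python) of the reconciliation an issuer effectively has to be able to demonstrate: cash in designated safeguarding accounts (or insurance cover) matching the total e-money redeemable at par at all times:

    # Illustrative safeguarding reconciliation (all figures hypothetical, in GBP).
    outstanding_e_money = 10_000_000       # total customer balances redeemable at par
    designated_account_cash = 9_600_000    # cash held in designated safeguarding accounts
    insurance_cover = 0                    # alternative safeguarding method, if used

    safeguarded_total = designated_account_cash + insurance_cover
    shortfall = max(0, outstanding_e_money - safeguarded_total)
    coverage = safeguarded_total / outstanding_e_money

    print(f"Coverage: {coverage:.1%}, shortfall: £{shortfall:,}")
    # Coverage: 96.0%, shortfall: £400,000 - in this scenario, redemption 'at par' for
    # every holder depends on the issuer's own solvency, not just the safeguarded pool.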

Other types of non-bank payment service provider (payment institutions) must also safeguard customer funds, but they're only supposed to hold funds for as long as it takes to execute/process the related payment order, rather than allow their customers to hold an ongoing balance, so the time during which the funds are 'at risk' of the PSP going bust or dissipating the funds should be shorter than for e-money balances.

What's the immediate problem (opportunity)?

The FCA admits in its consultation paper that it does not know whether firms are failing to fully safeguard funds corresponding to the payment transactions they process or the e-money they issue. Worse, it reveals that in the 5 insolvencies of e-money institutions from 2018-2023 only 20% of funds were available, and it took an administrator over 2 years on average to distribute the first round of customers' balances...

This seems to echo what happened when the value of  Tether's USDT 'stablecoins' - which aim to trade at parity with the USD - de-pegged from the USD. The scenario presented traders with an arbitrage opportunity: some borrowed amounts in a rival stablecoin and bought USDT at a discounted rate, betting that if USDT returned to its 1:1 peg, they could sell their USDT at parity and repay their loans at a profit.
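A stylised version of that trade, with purely hypothetical numbers, shows why a visible discount invites this behaviour:

    # Hypothetical de-peg arbitrage: borrow a rival stablecoin, buy the discounted token,
    # sell at parity if the peg is restored, then repay the loan.
    borrowed = 1_000_000                         # rival stablecoin borrowed (assumed worth $1 each)
    usdt_price = 0.97                            # discounted USDT price in USD
    usdt_bought = borrowed / usdt_price
    proceeds_if_repegged = usdt_bought * 1.00    # sold at parity once the peg recovers
    profit = proceeds_if_repegged - borrowed     # ignoring borrowing costs and fees
    print(f"Profit: ${profit:,.0f}")             # ~$30,928 - the bet fails if the peg never recovers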

In principle, there may be little difference between a right to redeem an 'e-money' balance in an online account and a 'fiat-backed stablecoin'. Indeed, the EU regulates fiat-backed stablecoins in the same way that it regulates e-money, while the FCA suggests they should be regulated differently, as recently discussed on LinkedIn.

Could there be an 'arbitrage opportunity' between balances issued by different e-money issuers, based on the extent of their safeguarding and availability of the balances?

Why Doesn't the FCA Make Firms Reveal How Much is Safeguarded At All Times?

Alarmingly, the FCA says the problem arises from firms not understanding how to safeguard, as well as "challenges in supervision and enforcement": 

33. In some firm failures there has been evidence of safeguarding failings which put client funds at risk and resulted in shortfalls. The current light-touch regime around [FCA!] reporting requirements means that supervisors have insufficient information to identify firms that fall short of our expectations. This then prevents the FCA from being able to prioritise resources, be that support or enforcement, on firms that pose the greatest risk to clients prior to insolvency. 

34. In particular, we are concerned about 2 areas. First, regulatory returns do not contain sufficient detail to assess whether firms are meeting their safeguarding obligations. Second, the safeguarding audits provided for in the Approach Document do not have to be submitted to the FCA, further limiting our oversight.

35. Furthermore, the lack of clarity and precision in current provisions leads to difficulties in enforcement as firms may be able to contest findings. This can undermine the credibility of enforcement as a deterrence.

Which begs the question: in such circumstances, should the market continue to believe that UK-issued, FCA-regulated e-money is really on par with GBP?

New Rules...

The UK authorities' proposed remedy is to bring in more detailed rules, in two phases: supplementary rules under the current regulations "to reduce the incidence and extent of pre-insolvency shortfalls" (why so late?) and moving the e-money/payment services safeguarding regime under the FCA's wider 'client asset rules' (CASS) regime "to improve the speed and cost of distributing funds post-insolvency" - suggesting that the last attempt to improve the insolvency regime for non-bank payment service providers failed.

The interim rules will largely echo current requirements, however, adding only monthly reporting on the amount of e-money issued and the corresponding cash safeguarded. Will the market be told? Even stablecoin issuers publish the amount of backing assets they hold (to prevent a 'run' on their stablecoins and a crash in the value). Maybe e-money issuers should start doing that, too?

Among the eventual CASS rules will be an obligation to hold safeguarded funds under a statutory trust in favour of the firm's e-money holders. This reflects the FCA's frustration at having already lost the argument in the case of Ipagoo in the Court of Appeal, which held that there is no statutory trust in favour of e-money holders under the E-money Regulations. The FCA is also pressing for a statutory trust over the cash which 'backs' fiat-backed stablecoins (something the EU has not done).

The statutory trust idea, in particular, raises a number of issues. 

The first issue is whether an e-money holder could have property rights in two distinct assets: the e-money balance (or the right to redeem it at par) and the beneficial interest in the pool of cash held by the issuer in the statutory trust (equating to the par value of e-money held)? If so, does the e-money holder simply have double the value of their e-money balance and/or could the value of these interests diverge?

Secondly, if the e-money itself gives the holder rights in the underlying cash in the statutory trust, why isn't e-money an investment instrument of some kind (the very thing that stablecoin issuers have structured their offerings to avoid, for fear of creating a regulated 'security')? Could it be traded on an exchange (or 'multi-lateral trading facility'), for instance? 

Thirdly, the requirement for the corresponding cash to be held in trust is no guarantee that an adequate amount will be held, or that the issuer won't somehow subvert the trust by, for example, failing to deduct 'own funds' (such as amounts owed in fees). What would such a failure mean for the value of the e-money balance itself (or the right to redeem it at par)?

There are likely other issues, such as those arising where an e-money holder has somehow granted an interest to a third party in either the e-money balance or the beneficial interest in the statutory trust. Currently, only the e-money issuer may have an interest in corresponding cash that is safeguarded. 

None of this is to suggest that there aren't answers in each case. The point is that the new concept of a statutory trust over the cash corresponding to e-money balances raises fresh uncertainty where the situation already appears grave under simpler rules; and without really solving the fundamental problems of potentially safeguarding too little and slow distribution on insolvency. 

More transparency and closer supervision would seem to be preferable.

Conclusion

The potential for new safeguarding rules is an almighty distraction from the critical uncertainty surrounding the integrity of the non-bank payment sector today.  

To ensure market confidence, e-money and payment firms may need to resort to publishing their safeguarding position on a daily basis, regardless of the FCA's requirements.

And new FCA rules will prove futile if the level of supervision remains the same.

This post is for information purposes only. If you would like legal advice, please let me know.


Monday, 23 September 2024

A New Role In Ireland!

I'm pleased to say that I've been welcomed into a new consulting role in Dublin with Crowley Millar, a boutique in the financial district that prides itself  on 'pragmatic expert advice' - apt for someone whose other blog is Pragmatist! 

Huge thanks to Hugh Millar and the other partners for agreeing to take me on, and to Bryan Sweeney who led the charge on the very kind recommendation of a mutual client.

So it wasn't the beach that's kept me from these pages. In fact, it's been such a frantic summer I've had to restrict myself to posting on LinkedIn and occasionally Mastodon while absorbed by an almighty due diligence exercise and various other things. Fortunately, the change in UK government meant a pause on the consultation front.

At any rate, I've resurfaced, both here and in Dublin, so stay tuned...


Saturday, 20 July 2024

If DAOs Are Really Autonomous, They Could Be Regulated As AI Systems Under the EU's AI Act...

Two recent publications - that of the EU's Artificial Intelligence Act and the UK Law Commission's 'scoping paper' on whether and to what extent Decentralised Autonomous Organisations should be granted legal status - got me thinking about this, because both AI systems and DAOs will tend to be global or 'borderless' in nature. It seems to me that the EU may have granted certain DAOs a form of legal status already - as AI systems - while focusing responsibility and liability on only some of the roles involved... If so, we can add this to other examples of sector-specific regulation in areas where DAOs might be established to operate, which could also have significant implications for the DAO and its participants. Please let me know if you require legal advice in these areas.

Defining AI systems and DAOs

‘AI system’ means [with limited exceptions] a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

The Law Commission uses the term "DAO" very broadly to describe:

a new type of online organisation using rules set out in computer code. A DAO will generally bring together a community of (human) participants with a shared goal – whether profit-making, social or charitable. The term DAO does not necessarily connote any particular type of organisational structure and therefore cannot on its own imply any particular legal treatment.
As to what is meant by "autonomous" the Law Commission found that:

In the context of a DAO, “autonomous” has no single authoritative meaning. Some suggest that “autonomous” refers to the fact that the DAO has (a degree) of automaticity; that is, it relies in part on software code which is capable of running automatically according to pre-specified functions. Others suggest that “autonomous” is a broader, descriptive term used to encapsulate the idea that DAOs are capable of operating in a censorship-resistant manner without undue external interference or internal (or centralised) control. In this paper we allow for both meanings.

To merge the two concepts: a DAO's governance or decision-making could be automated 'with varying levels of autonomy' using codified 'smart contracts' that operate automatically in certain circumstances, to infer from the inputs received how to generate recommendations or decisions that influence the DAO or some other virtual or physical environment. 

Whom would this affect?

The AI Act applies to any person who supplies an AI system (or GPAI model) on the EU (read EEA) market, wherever they may be located, and to providers or deployers located outside the EU where the output of the AI system is to be used in the EU.

The AI Act encompasses a range of roles or actors who might - or should - have responsibility/liability in connection with the risks posed by an AI system, each of whom qualifies as an "operator":

‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; 

‘authorised representative’ means a natural or legal person located or established in the [EEA] who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; 

'importer’ means a natural or legal person located or established in the [EEA] that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country; 

‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the [EEA] market; 

When we think about who might be involved or 'participate' in a DAO, the Law Commission groups the participants as follows (though the roles may not be mutually exclusive):

  1. Software developers 
  2. Token holders of the tokens that enable governance or other types of participation 
  3. Investors/shareholders (where DAOs use recognised legal entities such as limited companies). 
  4. Operators/contributors in connection with the DAO's tokens (miners/validators), software, management etc. 
  5. Customers/clients, where the DAO offers an external service.
However, it is clear that these roles don't readily 'map' to the AI Act's concepts of responsibility for managing risks associated with the establishment, deployment and ongoing operation of DAOs.

This is not unusual when it comes to sector-specific regulation, which tends to focus on certain activities that some legal person or other must be conducting in the course of developing/establishing, deploying, operating and winding-down/up (although perhaps a lot of this type of regulation tends to be more limited in its territorial application).

Conclusion

Of course it's important to think of DAOs in terms of being an 'organisation' of some kind with legal implications for the participants depending on the actual type (Chapters 3 to 5 of the Law Commission's paper). 

However, it's also critical to consider the potential impact of sector-specific regulation that governs the activities of developing/establishing, deploying, operating and winding-down/up certain types of services or products. This type of regulation tends to be more limited in its territorial application, so requires a country-by-country (or even state-by-state) analysis in countries like the US or India, or in regional trade arrangements like the EU. Significant examples of this type of regulation that may have very grave implications for the liability and responsibilities of DAO participants include anti-money laundering requirements, financial regulation and tax (Chapter 6 of the Law Commission's paper), and we can add the AI Act as a more recent example.

Please let me know if you require legal advice in these areas.


Monday, 15 July 2024

Of APP Fraud, Safeguarding And "Asset Pools"

The awesome scale of 'authorised push payment' fraud is causing sleepless nights throughout the banking and payments industry, and much uncertainty as to where liability sits. There is a seemingly endless array of scenarios in which APP fraud can occur. Examples include impersonation, investment, romance, purchase, invoice and mandate, CEO fraud and advance fees. It's conceivable that liability could vary according to whether or not the payer is a consumer (or to be treated as one), as well as the type of institutions and payment services involved. I've set out below a quick summary of the current state of play for information purposes only, including various cases before the courts. Let me know if you need legal advice on any aspect, including possibly lobbying the new government to grasp some of the nettles via some form of regulatory action, to spare everyone a lot of time and expense...

Regulatory developments

I covered the Payment Systems Regulator's proposals in this area last June, and these have been brought in with effect from 7 October 2024. 

The CRM Code only covered 60% of APP fraud within its voluntary scope, so mandatory reimbursement requirements were always on the cards. 

The new reimbursement requirement applies to consumers, micro-enterprises and small charities, which are all treated as ‘consumers’ under the Payment Services Regulations 2017 (PSRs), as with the CRM Code. It only covers payments made using Faster Payments where the victim is deceived into allowing or authorising a payment from their account with a PSP to an account outside the victim's control at another PSP.

Firms must reimburse all in-scope customers who fall victim to APP fraud (with some exceptions), sharing the cost of reimbursing victims 50:50 between sending and receiving PSP, with extra protections for vulnerable customers. 
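For illustration only (the rules contain excesses, limits and vulnerable-customer adjustments not modelled here), the basic cost allocation is simply an even split:

    # Illustrative 50:50 allocation of an APP fraud reimbursement between the
    # sending and receiving PSPs (hypothetical figure; excesses and limits omitted).
    reimbursed_to_victim = 8_000                       # GBP
    sending_psp_share = reimbursed_to_victim / 2
    receiving_psp_share = reimbursed_to_victim / 2
    print(sending_psp_share, receiving_psp_share)      # 4000.0 4000.0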

As the operator of Faster Payments, Pay.UK is responsible for monitoring all directed PSPs’ compliance with the FPS reimbursement rules and will operate a reimbursement claim management system (RCMS) that all members (direct participants) in Faster Payments must use from 1 May 2025, with various reporting standards mandated by the Payment Systems Regulator (some limited to the larger participants). Affected PSPs must also explain this to their customers, including in service terms and conditions, so let me know if I can help there in particular.

As mentioned in March, the previous government proposed an amendment to Regulation 86 of the Payment Services Regulations to extend the time limit on processing a payment order where it has been authorised by the payer but their PSP reasonably suspects APP fraud.

Liability aside from the regulatory solution

Breach of Duties

As clarified by the Supreme Court in Philipp v Barclays: 

  • banks have a duty to execute a valid (clear, lawful) payment order promptly
  • a bank cannot execute a payment outside its mandate, so cannot debit the relevant amount from the customer's account in that case, and if it were to do so, then the customer has a debt claim against the bank.
  • banks also have a duty of care to customers to interpret, ascertain and act in accordance with their customers' instructions, which only arises where the validity or content of the customer's instruction is unclear or leaves the bank with a choice about how to carry out the instruction. The duty won't apply in the case of a valid payment order that is clear and leaves no room for interpretation or choice about what is required to execute it (i.e. the bank must simply execute, according to the first duty above). 
  • Where the general duty of care arises, and the payment instruction was given by an agent of the customer, and a bank has reasonable grounds to believe that the payment instruction given by the agent is an attempt to defraud the customer, the Quincecare duty requires the bank to refrain from executing the payment pending its inquiries to verify that the instruction has actually been authorised by the principal/customer. A similar duty applies where the bank is on notice that the customer lacks mental capacity to handle their finances or bank accounts.
  • the bank may also have a duty to take reasonable steps to recover funds that its customer claims to have paid away by mistake or as a result of fraud.

These findings are generally consistent with the Payment Services Regulations 2017 (PSRs), although (as the Supreme Court also explained), the PSRs did not provide for reimbursement of authorised payments, so did not assist victims of APP fraud, partly because they deem such payments to be correctly executed. However, the PSRs do oblige payment service providers to "make reasonable efforts to recover the funds involved", for which PSPs can charge any contractually agreed fee; and Regulation 90 has been amended to enable liability to be imposed “where the payment order is executed subsequent to fraud or dishonesty” under the Payment Systems Regulator's arrangements explained above - but this does not provide a direct right of action for customers.

It has since been accepted (e.g. in Larsson v Revolut) that the above duties, which apply to banks in a payment scenario, also apply to other types of regulated PSPs (e-money institutions and payment institutions).

In Larsson, the claim was against the receiving PSP with which the payer also happened to have an account, although that wasn't the account from which payment was taken. However, the court held there were no duties owed by the PSP of the payee ('receiving PSP') to the payer, but did preserve the (slim) possibility of arguing 'dishonest assistance in a breach of trust' such that a constructive trust may have arisen over the proceeds of the payment transaction. 

CPP v NatWest further considered the concept of a 'retrieval duty'. That claim was held to be time-barred in the case of the PSP of the payer; but not in the case of the PSP of the payee, which might owe the duty where: 

  • it assumed a responsibility to protect the payer from the fraud; 
  • it has done something which prevents another from protecting the payer from that danger; 
  • it has a special level of control over that source of danger; or 
  • its status creates an obligation to protect the payer from that danger. 

I can see claimants arguing that the presence of voluntary and mandatory APP fraud schemes lends weight to some of these factors, and PSPs arguing that those schemes should be disregarded because they operate strictly within their own scope.

Unjust enrichment

Terna v Revolut involves a claim by the payer that the receiving PSP was 'unjustly enriched' when the payer instructed its own bank/PSP to pay funds to a third party account in the mistaken belief that it was paying a genuine invoice from an energy supplier. The payment went via a correspondent (intermediary) bank via a series of SWIFT inter-bank messages; and the funds disappeared from the third party account within hours of being credited by the payee's PSP (an e-money institution). 

For this type of claim to succeed, the payee's PSP must have benefited at the claimant's expense in a way that was 'unjust' and without any defence.

When the payee's PSP received funds in its account with a correspondent bank, it issued e-money to the payee, so it claimed that it had not benefited. Some first instance decisions are consistent with that, but established banking law holds that this is not a valid argument; and the court was not persuaded that the position is different for an e-money institution, which must issue e-money on receipt of funds and safeguard those funds (which a bank does not have to do), because one safeguarding option involves investing the cash (not to mention insurance as another option). Instead, the court held, these facts might operate as a defence, but that could only be decided at a full trial.

On whether the PSP was unjustly enriched 'at the claimant's expense' the court held that SWIFT and CHAPS payments should be treated the same way; and these were potential instances of 'indirect benefit' rather than 'direct benefit'. Here, the court considered that an 'indirect benefit' is to be treated the same as a direct benefit, where there is agency or a 'set of co-ordinated transactions' and that both applied (contrary to an earlier High Court case of Tecnimont). The likely questions at trial, therefore, are whether the enrichment was 'unjust' and/or a defence applied. 

Fortunately, permission to appeal has been granted, so there's an opportunity to settle the difference of opinion between High Court judges. It's probably too much to ask, but in that event it would be helpful if the Court of Appeal were to add some guidance as to how it would treat claims of unjust enrichment in situations where other forms of payment services (and systems) are implicated. For example,  'money remittance' is defined in the PSRs to mean: 

"the transmission of money (or any representation of monetary value), without any payment accounts being created in the name of the payer or the payee, where— 

(a) funds are received from a payer for the sole purpose of transferring a corresponding amount to a payee or to another payment service provider acting on behalf of the payee; or 

(b) funds are received on behalf of, and made available to, the payee."


Liability where funds are frozen or accounts suspended for regulatory reasons

Kopp v HSBC is another interim judgment, involving a situation where the payer's bank suspended the payer's account following an anti-money laundering review (which the payer disputed), preventing the payer from making certain payments for which it then incurred liability to the payees under an indemnity, including ongoing interest. On an interim summary judgment application, the court held there was a triable issue as to whether the bank's liability clause ('buried' in the service terms) might fail to satisfy the reasonableness requirement under the Unfair Contract Terms Act (which also protects small businesses). That meant the court also refrained from deciding whether the clause in question excluded these heads of liability on the basis that they were not “direct loss of profit” or “other direct losses” or were expressly excluded as being “indirect or consequential loss (including lost business, data, profits or losses resulting from third party claims) even if it was foreseeable”.

Failure to safeguard customer funds

The extension of bank duties and potential APP fraud liability to all types of regulated PSPs (accepted in Larsson) sadly raises the prospect of the insolvency or a voluntary winding up of smaller e-money or payment institutions. 

This is relatively rare, since PSPs are required to have a certain amount of minimum capital (both by regulation and, where applicable, card scheme rules) and to manage their working capital to remain a going concern, unless and until they are fully 'wound-down'. 

However, sudden, unexpected losses could conceivably arise, particularly where there is poor record-keeping or other problems, such as dissipation of assets or perhaps a sudden, significant 'spike' in APP fraud for which it is at least probable that the PSP might be liable (a matter for directors to consider in the exercise of their duties). 

One consequence of APP fraud in this context would likely be that funds which ought to have been, or should have remained, safeguarded were not. The question would then arise whether the affected customer has a priority claim in the "asset pool" of the failed PSP. 

I recently explained the position in more detail in the context of the administration of UAB Payrnet in Lithuania. In the UK, an “insolvency event” (including a ‘voluntary winding up’) of the PSP triggers the creation of an “asset pool” of ‘relevant funds’ to be distributed by an administrator according to a specific hierarchy. The claims of e-money holders are to be paid in priority to all other creditors, with no rights of set-off or security applicable until the e-money holders have been paid. If funds should have been safeguarded according to the regulations but were not, national laws come into play within the overall intention behind the E-money Directive to achieve ‘maximum harmonisation’ of the e-money regime. 

In the case of Ipagoo, a failed UK e-money institution, the UK Court of Appeal decided that the EMD did not require the UK to impose a statutory trust over the “asset pool” under the UK e-money regulations (EMRs), so they don't impose or create a trust. Instead, the court held that the EMD requires all funds received by EMIs from e-money holders to be safeguarded, not merely those that had actually been safeguarded appropriately. Therefore, the “asset pool” must include both relevant funds that have been safeguarded in a compliant way and a sum equal to relevant funds that ought to have been, but had not been, safeguarded in accordance with the EMRs, along with the “costs of distributing the asset pool” (including the costs of ‘reconstituting’ the asset pool in circumstances where relevant funds have not been safeguarded, as administrative costs associated with the asset pool itself).
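A rough sketch of what that reasoning implies for the make-up of the asset pool, using entirely hypothetical figures:

    # Hypothetical reconstitution of the "asset pool" following Ipagoo: funds actually
    # safeguarded, plus a sum equal to funds that ought to have been safeguarded but
    # were not, less distribution costs, with e-money holders paid before other creditors.
    properly_safeguarded = 6_000_000            # relevant funds held compliantly (GBP)
    should_have_been_safeguarded = 4_000_000    # shortfall 'reconstituted' from the wider estate
    distribution_costs = 500_000                # costs of distributing/reconstituting the pool

    asset_pool = properly_safeguarded + should_have_been_safeguarded
    available_to_e_money_holders = asset_pool - distribution_costs

    total_e_money_claims = 10_000_000
    print(f"Recovery for e-money holders: {available_to_e_money_holders / total_e_money_claims:.0%}")
    # 95% in this example, but only if the wider estate can actually fund the reconstitution.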

Therefore, it might be claimed (possibly via a retrieval duty or unjust enrichment argument) that funds wrongly paid out should have remained safeguarded, though there is perhaps a question whether the payer qualifies as an 'e-money holder' or other 'user' for whom the institution holds relevant funds within the asset pool.

Conclusion

While the various court proceedings are proving somewhat helpful in revealing and resolving some of the uncertainty relating to where liability for APP fraud might sit, this is clearly a very slow and costly process. It would have been preferable for the Treasury, FCA and Payment Systems Regulator to have worked together more proactively to address the issue. With the change in government already heralding more attention to detailed issues, it is to be hoped that these questions are among them.

Let me know if you need legal advice on any aspect, including possibly lobbying the new government to grasp some of the nettles via some form of regulatory action, to spare everyone a lot of time and expense...


Sunday, 14 July 2024

Potential Support for Decentralised Autonomous Organisations (DAOs) Under English Law?


Following an earlier consultation that I covered for the SCL, the Law Commission has identified issues and potential reform/innovation to aid the new UK Government in considering whether to support Decentralised Autonomous Organisations (DAOs), under English law.  

The Commission does not think we need a DAO-specific legal entity, but suggests that "a limited liability not-for-profit association with flexible governance options" could be useful, subject to certain issues relating to anti-money laundering, financial services regulation and taxation.

The Commission's paper explains:

  • the philosophy and technology behind DAOs;
  • their possible legal characterisation, including how liability might be attributed to a DAO or its participants;
  • legal entities that might be used as part of a "hybrid" DAO’s structure;
  • whether England and Wales is an attractive jurisdiction for DAOs, bearing in mind areas of local regulation that may affect them;
  • further work that might be useful, if DAOs are considered worthwhile, to ensure that they can be regulated;
  • whether current law can accommodate the use of blockchain/DLT for governance purposes.

Personally, it seems to me that, under current law, anyone attempting to set up a DAO in or from the UK runs the risk of incurring unlimited personal liability under UK regulation and English law. In other words, they would be taking on the significant burden of resisting claims by third parties that they are personally liable for the relevant activities and obligations, whether from a general liability standpoint or in relation to certain regulatory breaches that might occur, in the process of establishing and operating the DAO.

If you require legal advice on DAOs under UK/English law, please let me know.


Wednesday, 29 May 2024

Virtual IBANs (vIBANS) Explained

The European Banking Authority has put together a helpful overview of the six main ways that virtual International Bank Account Numbers (vIBANs) are being issued and used throughout Europe (and the UK). The EBA clarification is welcome, as there is confusion as to whether a vIBAN represents or creates a corresponding payment account, or just operates as a unique reference number to track payments. Not only does this confusion leave underlying problems, such as IBAN discrimination, unresolved, but it can also trigger many other compliance and commercial challenges. There is also concern as to whether vIBAN schemes are adequately supervised and transparent, including from a financial crime standpoint... Please let me know if I can help you navigate any issues you have regarding vIBANs under existing regulation, or under the proposed controls relating to them.

What's a vIBAN?

You will likely have seen your own unique IBAN - the string of letters and numbers that relates to your current account (the most common type of payment account). IBANs are used in about 80 countries, including the UK and EU, and each IBAN has some letters to denote the country where the bank account is based. They are usually used for international or 'cross border' payments, instead of the 'account number and sort code' for domestic payments. 
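The country prefix is simply the first two characters of the IBAN. A trivial illustration (the IBAN below follows the standard example format and is not a real account):

    # The first two characters of an IBAN identify the country of the account.
    iban = "GB29NWBK60161331926819"   # standard example IBAN format, not a real account
    print(iban[:2])                   # "GB" - a UK-based account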

Tracking incoming payments is not really a consumer problem, but some businesses need to receive many payments from many different customers into one current account (to pay for, say, purchases or deposits). Usually, the business gives each customer a unique reference number to include in each payment order, along with the bank account number, so the company knows who paid. The trouble is that many customers don't include their unique reference number when making the payment from their bank, so the business receiving the payment has to spend time figuring out ('reconciling') who made the payment while the funds sit in 'suspense'. That means extra cost and delay in processing transactions, and that's usually a problem for both the business and the customer concerned.
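A minimal sketch of why reference-based reconciliation breaks down (customer names, references and amounts all invented):

    # Reconciling incoming payments against customer references: payments arriving
    # without (or with a mistyped) reference end up in 'suspense' pending investigation.
    customers_by_reference = {"REF-001": "Alice Ltd", "REF-002": "Bob & Co"}
    incoming_payments = [
        {"amount": 250, "reference": "REF-001"},
        {"amount": 400, "reference": ""},         # the customer forgot the reference
        {"amount": 120, "reference": "REF-02"},   # a typo
    ]

    suspense = []
    for payment in incoming_payments:
        customer = customers_by_reference.get(payment["reference"])
        if customer:
            print(f"£{payment['amount']} allocated to {customer}")
        else:
            suspense.append(payment)              # manual investigation needed

    print(f"{len(suspense)} payment(s) stuck in suspense")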

Where a business has a really big corporate customer, it might make sense to open a separate bank account (and related IBAN) just for that customer. But that would usually prove very costly for lots of consumers or smaller business customers.

Enter the vIBAN! 

Instead of an additional reference number quoted alongside the IBAN, the business gives each customer one unique account number to make the payment to. That unique number looks like any other IBAN but is really just a reference that the issuer uses to receive the funds and move them to the business's actual IBAN and bank account. It's literally a virtual IBAN.

In many cases, the vIBAN may be issued/controlled by an intermediate payment service provider which holds a database matching each vIBAN with a customer number given to it by the business. When the money comes into the relevant IBAN account for that business, the intermediary electronically reports who made the payments in a way that the business can then match with its actual customer database. 
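By contrast, the vIBAN makes the match deterministic, because the 'account number' the customer pays into is itself the reference. A simplified sketch (the vIBANs and customer numbers are invented):

    # The intermediary PSP keeps a table mapping each vIBAN to the business's own
    # customer number, so every payment to a vIBAN is attributable without relying
    # on the payer to quote a reference correctly.
    viban_to_customer = {
        "GB00VIRT00000000000001": "CUST-1001",
        "GB00VIRT00000000000002": "CUST-1002",
    }
    master_iban = "GB29NWBK60161331926819"   # the single real account behind the vIBANs

    def allocate(destination_viban, amount):
        """Credit the real account and report which customer paid (sketch only)."""
        customer = viban_to_customer[destination_viban]
        print(f"£{amount} credited to {master_iban} on behalf of {customer}")
        return customer

    allocate("GB00VIRT00000000000002", 750)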

The business could also make payments to customers using the same process, but in reverse. 

Each vIBAN can also be limited to a particular type of transaction, like 'top-ups' to an e-money account or prepaid card.

One confusing aspect of the EBA report is that it refers to the actual bank account associated with the IBAN as the 'master account', which implies that each vIBAN is itself an account when that is not the case. There is only one actual bank account involved in an IBAN/vIBAN arrangement. If there has to be a reference to a 'master' at all, it would be more helpful if the IBAN itself were referred to as the 'master IBAN' to differentiate it from the vIBANs associated with it (there is also the concept of a 'secondary IBAN', which refers to the actual bank account).

Do vIBANs have other uses and benefits?

vIBANs have other benefits besides solving the main payment-tracking problem. 

Sometimes, for example, local banks or businesses in one country will refuse to make payments to an IBAN that has a different country code (so-called 'IBAN discrimination'). So a supplier based outside that country will arrange for local vIBANs to be presented to its local customers, so they or their bank won't refuse to pay. IBAN discrimination is banned in the EU and UK under the Single Euro Payments Area (SEPA) Regulation, but the rule is not always enforced in practice.

The same set up that defeats IBAN discrimination is also used to establish a more cost-effective global payment network. That's because the ability to receive and make payments locally in many different countries usually requires a network of different local payment service providers, but they can all be controlled by one corporate team. That team can manage payments for external customers or internal payments among the companies in the same corporate group, and can also keep balances in different currencies to minimise foreign exchange exposures and conversion costs.

What are the risks/concerns relating to vIBANs?

Possibly the easiest concern to understand is that people making payments (and their payment service providers) are nervous about not being able to tell the difference between an IBAN and a vIBAN. Only the recipient/payee and the PSP(s) issuing the vIBANs know the difference and there may not be any direct customer relationship between the PSP that issued the vIBAN and the end-user. As described above, the initial 'payee' may in fact be an intermediary PSP and not the PSP of the actual intended end-payee. That could interfere with the need to verify the actual payee (and efforts to stop 'Authorised Push Payment' fraud). Similarly, the use of vIBANs may impede the transparency requirements around payer and payee under Funds Transfer and SEPA/ISO standards.

At the regulatory/technical level, some regulators (indeed, some vIBAN issuers!) don't understand that vIBANs are really just reference numbers. Sometimes vIBANs are also confused with a 'secondary IBAN' which, like the original or 'primary' IBAN, also identifies an actual bank account. Some regulators also seem to believe (or there is contractual/operational confusion which suggests) that a vIBAN necessarily implies or creates a distinct, extra payment account (rather than just the data showing which end-user made/received a payment). Unfortunately, that analysis would also mean there is a direct customer relationship between the vIBAN issuer and the end-user, which would trigger a lot of related contractual and compliance requirements and confusion over customers' rights (including deposit guarantee schemes), regulatory supervision and complaints.

However, even if the vIBAN were to be considered an 'identifier' of the actual bank account under the associated IBAN, where the end-user is not the named holder of that bank account then the account could not be deemed that end-user's payment account. The EBA is concerned that this may mean an end-user somehow lacks a payment account and the related regulatory protection that brings. But, of course, the vIBAN itself is just a reference number that the end-user quotes when initiating a payment order - and a payment order could only be initiated in relation to a source payment account from which the relevant funds are to be drawn to fund the payment transaction.

Some regulators agree that vIBAN issuance is not itself a regulated banking/payment service activity, so cannot provide the basis for an institution to open a branch in another member state (host state) under passporting arrangements. Other regulators treat vIBAN issuance as an activity enabling passporting or requiring local authorisation/agency. This means that institutions need to check the regulators' views on both sides of each border they wish to 'cross' by issuing vIBANs.

The EBA has also found that some regulators have effectively banned cross-border issuance of vIBANs, by requiring that there must be no divergence between the country code of the vIBAN and the country code of the IBAN of the actual payment account. While those regulators point to ISO technical standards for their view, the EBA has explained that the European Commission does not share that interpretation, nor is it consistent with the SEPA Regulation.  

There's also some risk that a PSP issuing vIBANs might be facilitating the operation of an unauthorised payment service business, depending on the nature of the business being operated by the immediate customer and services offered to end-users (i.e. is that customer offering one of the specified 'payment services' as a regular occupation or business activity). 

There is also the potential for failures in fraud reporting where a payment is made to a vIBAN in one country but has actually been routed to a payment account with an IBAN in another country.

Are separate vIBAN controls needed?

Some regulators' concerns about vIBANs might be addressed if those same regulators were to tackle IBAN discrimination in their jurisdictions - so that vIBANs aren't needed as a 'band aid' or 'sticking-plaster' for that problem. 

For anti-money laundering purposes, it's especially important for the issuing PSP to understand the payee's business, the scenario in which the vIBANs are being used, and the type of end-users able to use the vIBANs and the rationale for the payments being received (or made). This becomes more critical where the end-users are based in another jurisdiction.

In turn, the payment services regulator in the country where the vIBAN arrangement is deployed needs to know that: vIBANs are being issued; the issuing PSP has the right risk assessment, customer due diligence and transaction monitoring controls in place (including where another PSP or business is actually allocating the vIBANs to the end users); and that suspicious transactions involving vIBANs can be detected, reported to the correct country authority and readily traced. Again, this becomes more important where the end-users to whom vIBANs are issued are based in another ('host') jurisdiction to the ('home') jurisdiction where the IBAN and bank account are based.

Some PSPs have already lost their licences over failure to comply with existing controls in the vIBAN context, but the EU's new AML Regulations will explicitly require the 'account service' PSP that offers the underlying payment account to which the IBAN relates to be able to obtain customer due diligence information on end users to whom the associated vIBANs are issued 'without delay' and in any case within five working days - even where vIBANs are issued by another PSP. 

In addition, the next anti-money laundering directive (AMLD6) will require all national bank account registers to hold information on vIBANs and their users. 

The EBA has also included an Annex listing the factors that may increase or reduce the risk of money laundering or terrorist financing.

Please let me know if I can help you navigate any issues you have regarding vIBANs under existing regulation, or under the proposed controls relating to them.


Friday, 24 May 2024

Understanding Card Scheme Fees: Payment Systems Regulator Report

The Payment Systems Regulator has issued a consultation paper/interim report burrowing further into the apparent lack of competition between the two major card schemes and potential harm to customers, particularly on the acquiring side. The PSR identified in a previous report that the card acquiring market wasn't working for UK merchants whose turnover is less than £50m, with one problem being the inability to compare pricing. This report reveals that fees charged by the two main card scheme operators have increased 30% in real terms over 5 years, with no link to improvement in service quality. The report looks at how the scheme operators deal with both their card issuing and acquiring members, but tends to focus on problems that acquirers have in understanding fees imposed on them, since they represent about 75% of the operators' net scheme/processing fee revenue. The specific problems and the proposed remedies are outlined below. There’s an opportunity to respond to the consultation by 31 July. Please let me know if I can help you understand the potential commercial or regulatory impact in your case.

In particular, the report sets out a number of areas where the quality of service is leading to poor outcomes for acquirers and merchants, including a lack of transparency in billing information for mandatory and optional fees, and as to the triggers of (potentially avoidable) 'behavioural fees' (intended to deter certain practices or incentivise the adoption of specific technical solutions):

(a) acquirers often experience difficulties accessing, assessing and acting on information they receive from Mastercard and Visa – which requires time to query, and some even employ consultants or pay for additional reporting or other services from the schemes themselves to understand pricing and fees charged;

(b) as a result, many acquirers aren't able to adopt a very sophisticated assessment of the impact of scheme/processing fees - and even where they can pass fees on contractually they may decline to do so or ‘misbill’ (either under/over bill) merchants;

(c) A large majority of acquirers described issues relating to the transparency of information on mandatory and optional fees - in fact acquirers reporting such issues accounted for over 90% of the total acquiring market;

(d) Poor outcomes for acquirers include:

  • Acquirers have difficulty understanding behavioural fees, which may also be distorting the behaviour and responses of acquirers and merchants, and undermining the purpose of the behavioural fees;
  • Acquirers also find it difficult to understand mandatory and optional scheme and processing fees and how they apply, including whether certain services (and therefore the fees) are optional or mandatory;
  • Acquirers have problems accessing and clarifying information with the scheme operators, in a timely fashion or sometimes at all.

(e) remedies include requiring Mastercard and Visa to:

  • Develop and publish a pricing methodology to explain how the prices of these services relate to costs, together with obligations to document decisions;
  • Demonstrate that a service is ‘optional’, i.e. that viable alternatives to supply by the two card schemes exist;
  • Provide acquirers and merchants with more accurate and relevant information about behavioural fees, so they can be avoided or at least their cost can be correctly allocated;
  • Consult more widely before introducing new services or making changes to prices;
  • Provide bespoke materials to help specific businesses understand the scheme services being supplied;
  • Improve the quality and timeliness of information provided to acquirers, including billing information.

Please let me know if I can help you understand the potential commercial or regulatory impact in your case.

Monday, 20 May 2024

Are Influencers Regulated?

We've come a long way from sponsorship and advertising deals for sports stars, actors and other celebrities. Now products are marketed by people whose celebrity and vast wealth come only from marketing products through posting their own personal 'lifestyle' content in the social media. Of course, 'traditional' celebrities are also in on the act, and can command even greater sums for their own highly personalised, lifestyle-type endorsements. Yet such personal content is rarely filtered through any type of compliance process, unlike traditional advertising. And the temptation to make a fortune at the touch of a screen often overrides any sense of responsibility on the part of the influencer. As a result, the role of 'influencer' has become one of the most highly regulated in society... and that regulation will only intensify. Please get in touch if you need advice.

It's no surprise that politicians are at pains to see this evolution as wildly positive:

Europeans are spending more time online, meaning that influencers who create content for social media have a greater impact than ever before on the way we perceive and understand the world. In order to ensure that this impact is positive, the EU must provide support to influencers, enabling them to build their media literacy and increase their awareness and appreciation of the rules that govern their actions online. 

- Benjamin Dalle, Flemish Minister for Brussels, Youth, Media and Poverty Reduction

In typical civil law fashion, the EU is calling for positive regulation that will effectively permit the practice of being an influencer and govern how it can be done lawfully.

In common law countries, it's also a case of the law catching up, but the authorities in charge of the marketing rules are less enthusiastic, responding with advertising bans, for example, and now a Financial Conduct Authority prosecution relating to activities between 2018 and 2021 (perhaps more to do with its ban on marketing certain high-risk financial derivatives to retail customers).

The regulators responsible for retail sales (CMA), broadcasting (Ofcom) and advertising (ASA) began jointly targeting 'hidden advertising' in 2020, while the FCA's latest social media guidance is also partly aimed at influencers and other affiliate marketers. 

Yet even the guidelines can be tough to follow, and influencers may well cross the line into other regulated activity, such as the need to register with the FCA for anti-money laundering purposes if they make arrangements with a view to crypto trading.

Please get in touch if you need advice.



Thursday, 4 April 2024

European Commission Also Fires Up The Digital Markets Act

Having just opened multiple investigations under the new Digital Services Act, the European Commission has also announced investigations under its new Digital Markets Act (DMA). These investigations would also benefit UK businesses providing services to EU/EEA residents. This post is for information purposes. If you need advice, please let me know.

As previously explained in more detail, the DMA aims to control unfair practices of very large digital platform operators (“gatekeepers”) when providing services that other businesses use to reach their customers online. These gatekeepers effectively act as private rule-makers, and are able to create ‘bottlenecks’ and ‘choke points’ that limit access, unfairly exploit data for their own purposes and/or impose unfair conditions on participants. The DMA operates outside the scope of existing EU competition controls. Member states' regulators cannot go further than the DMA restrictions, which must be applied consistently throughout the EU. Gatekeepers can be fined up to 20% of worldwide revenue for breaches. The DMA applied from May 2023 and any firm designated as a gatekeeper then has six months to comply with the various requirements. 

Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft were designated as gatekeepers in September 2023, giving them until 7 March 2024 to comply. The European Commission believes some of them have not. Specifically, the Commission alleges that: 

  • the 'steering rules' in Google Play violate Article 5(4) (gatekeepers must allow business users, free of charge, to make offers to and contract with end users acquired either via its core platform service or through other channels, regardless of whether they use the core platform services of the gatekeeper for either purpose);
  • the self-preferencing on Google Search violates Article 6(5) (gatekeepers must not treat more favourably (in ranking and related indexing and crawling), services and products offered by the gatekeeper itself in preference to similar services or products of a third party, and must apply transparent, fair and non-discriminatory conditions to such rankings);
  • the steering rules in the Apple App Store violate Article 5(4);
  • the choice screen for Safari violates Article 6(3) (gatekeepers must allow and technically enable end users to easily un-install any apps on the gatekeeper's operating system (except apps essential for the functioning of that operating system or the device which cannot technically be offered on a standalone basis by third parties). Gatekeepers must also allow and technically enable end users to easily change default settings on the operating system, virtual assistant and web browser of the gatekeeper that direct or steer end users to products or services provided by the gatekeeper).
Within the next 12 months, the Commission will inform these gatekeepers of its preliminary findings and proposed orders to fix any issues.

In addition, the Commission:
  • is seeking information on Apple's fees for alternative app stores and Amazon's marketplace ranking practices;
  • issued document retention orders to Alphabet, Amazon, Apple, Microsoft and Meta to help it monitor their compliance with the DMA; and 
  • granted Meta an extension of six months to ensure Facebook Messenger complies with the interoperability requirement in Article 7 (there's a long list of those!).
These investigations would also benefit UK businesses providing services to EU/EEA residents. 

This post is for information purposes. If you need advice, please let me know.

Wednesday, 3 April 2024

FCA Finalises Updated Guidance On Financial Promotions Via The Social Media

The FCA has now finalised its updated guidance on financial promotions via the social media, basically confirming the draft on which it consulted in July 2023.

Perhaps the only real changes are to clarify where a foreign promotion may be capable of having an effect in the UK and so be subject to the UK restrictions.

The finalised guidance explains:

  • what a financial promotion is
  • the various financial promotion rules and where they apply
  • the need for each communication to be 'standalone compliant'
  • the requirement to give certain information 'prominence'
  • where social media may or may not be suitable for financial promotions
  • restrictions on promoting high risk investments
  • certain prescribed risk warnings
  • marketing strategies in the context of the consumer duty, sharing/forwarding promotional communications and affiliate marketing
  • restrictions on the use of influencers and social media platforms.
This post summarises the FCA's finalised social media guidance for information purposes only. If you require legal advice, please get in touch.

Tuesday, 26 March 2024

European Commission Starts Using Powers Under The Digital Services Act

The European Commission has begun using its powers under the Digital Services Act in earnest. Action ranges from initial information requests to formal investigative proceedings, including action based on civil society complaints. I've summarised the scope of the DSA at the end of this note, which is for information purposes only. These investigations would also benefit UK businesses providing services to EU/EEA residents. If you would like advice on any aspects, please let me know.

Following earlier information requests to AliExpress, the Commission has launched a formal investigation into a long list of potential failures and resulting infringements. I don't recommend even clicking on the AliExpress site to check it out.

Ominously, the Commission also wants more information on how some very large online search engines and other platforms mitigate the risks arising from the creation and spread of information using generative AI, such as ‘hallucinations', deepfakes and the automated manipulation of services to mislead voters. The Commission's questions also cover electoral processes, illegal content, protection of fundamental rights, gender-based violence, protection of minors, mental well-being, protection of personal data, consumer protection and intellectual property. Bing, Google Search, Facebook, Instagram, Snapchat, TikTok, YouTube, and X/Twitter have until 5 April 2024 to respond on questions related to elections and 26 April 2024 for the other queries. There were also previous requests to Meta regarding 'pay or consent' models for Facebook and Instagram, as well as 'shadow banning' and the launch of Threads.

And, based on a civil society complaint, the Commission has also fired a shot across the bows of LinkedIn, by asking for more details on how it complies with the ban on presenting advertisements based on profiling using special categories of personal data (such as sexual orientation, political opinions or race), and how it ensures that all necessary transparency requirements for advertisements are provided to its users, including basic information about the nature and origins of an ad. LinkedIn also has until 5 April 2024 to respond. 

As explained previously, the DSA establishes a harmonised approach to protecting EU-based users of online communication, e-commerce, hosting and search services across the EU, by granting intermediary service providers (“ISPs”) exemption from certain liability if they perform certain obligations. An ISP will be in scope if it is either based in the EU or has a substantial connection with the EU (a significant number of users as a proportion of the population or by targeting its activities at one or more Member States). There are extra requirements for ISPs with at least 45m average monthly active EU users (designated as ‘very large online’ (VLO) platforms and VLO search engines). There are some exemptions for small enterprises and micro-enterprises.

These investigations would also benefit UK businesses providing services to EU/EEA residents. This post is for information purposes only. If you would like advice on any aspects, please let me know.


Thursday, 21 March 2024

UK Extends Payment Processing Times Where Authorised Push Payment Fraud Is Suspected - Updated

The UK's second and far more significant recent departure from the EU directive on payment services (PSD2) involves extending the time limit on processing a payment order where it's been authorised by the payer but their payment service provider (PSP) reasonably suspects that it's been initiated after fraud or dishonesty by someone else (which could include the actual payee, of course), known as 'authorised push payment (APP) fraud'. The draft regulations are available here. If you need any help with implementing or evaluating the impact of the new processes, including updating service terms and conditions, please let me know.

The PSP must form its suspicion, and explain to the payer the reasons for the delay and anything required of the payer to help the PSP decide whether to execute the payment (where lawful to do so, and not 'tipping off' under money laundering regulation), by the end of the business day following receipt of the payment order (the usual time limit for processing). 

The PSP then has up to 3 more business days to investigate before processing (or not). 

Regardless of whether the payment order is executed, the PSP is liable to the payment service user for any charges for which the user is responsible, and any interest which the user must pay, as a consequence of a delay to the execution of a payment order in reliance on this new right.

This new right is limited to authorised, UK-only, GBP transactions (but not those initiated through a payee, like a direct debit). 

The related policy note explained that comments were due in to the Treasury by 12 April, the goal being to bring these changes in to support the Payment Systems Regulator's rules on reimbursements for APP fraud from October. The regulatory amendments have since been made and will take effect on 30 October 2024.

This post is for general information purposes and is not legal advice. If you need any help with implementing or evaluating the impact of the new processes, including updating service terms and conditions, please let me know.


Wednesday, 20 March 2024

Payment Service Termination Changes: Full of Sound and Fury, Signifying Nothing

As a result of Farage's hissy fit over no longer meeting the commercial criteria for being a Coutts customer, the Tories clearly felt they had to do something. Well, here it is: 90 days' (instead of 2 months') notice to terminate a payment service contract that has no end date 'for convenience', with an obligation to explain and say how to complain. There are predictable exceptions, and this does not affect other grounds for contract termination or terminating/freezing an account/transaction. Yet more public funds have had to be wasted on a right wing conspiracy theory/culture war rather than on solving an actual problem. Think Rwanda. Comments on this earth-shattering proposal are due by 14 April 2024 and the regulations might be laid before the General Election, or not... ;-)

As if they needed spelling out (as they would each render provision of the payment service unlawful in any event), the exceptions to having to give 90 days' notice of termination for convenience are stated to be: 

  • there's a requirement to cease activities under specific money laundering regulations; 
  • the related payment account turns out to be operated by a 'disqualified person' (under the Immigration Act); 
  • if the payment service provider reasonably believes a payment service provided under the contract is being/likely to be used in connection with a serious crime; 
  • if the FCA, Treasury or the Secretary of State lawfully require it; or 
  • if the payment service provider reasonably believes that the payment service user has committed an offence in connection with the user’s provision of goods or services to a third party.

A tale truly worthy of Macbeth's soliloquy.


Wednesday, 6 March 2024

AI Risk Management: An Update

It's been a while since I covered the legal aspects of AI here, but I've been posting on the topic fairly frequently on LinkedIn and more recently on Pragmatist. The widespread use of artificial intelligence (AI) - particularly generative AI - as well as the problems described below and the fact that you may not know you are relying on it, means you need to know how these technologies work (at least conceptually, if not in detail) and their impact. At scale, the harms from AI can arise before being detected, and a lot of AI has been launched as a ‘minimum viable product’ to suit the interests of developers over other stakeholders. But to avoid over-reacting, we need to be realistic about what AI can really achieve. To chart a safe route for the development and deployment of AI there's a need to prioritise the public interest, and align technology with widely shared human values rather than the self-interest of a few tech enthusiasts, no matter how wealthy they are. That means uniting the AI industry, researchers and civil society around the public perspective. In this respect AI should be treated like aviation, health and safety, and medicines. It seems unwise for the next generation of AI to launch into unregulated territory. If you would like advice on any aspects of this post, please let me know. A version of this post has since been published by the Society for Computers and Law.

 
What is AI?

The term "AI" embraces a collection of technologies that involve ‘machine learning’ at some point:

  • artificial neural networks (ANN) – one ‘hidden’ layer of processing
  • deep learning networks (DNN) – multiple ‘hidden’ layers of processing
  • machine perception - the ability of processors to analyse data (whether as images, sound, text, unstructured data or any combination) to recognise/describe people, objects and actions.
  • automation
  • machine control – robotics, autonomous vehicles, aircraft and vessels
  • computer vision – image, object, activity and facial recognition
  • natural language processing - speech and acoustic recognition/response
  • personalisation
  • Big Data analytics
  • Internet of things (IoT)

While AI technologies themselves may be complex, the concepts are simple. Traditionally, we load a software application and data into a computer, and run the data through the application to produce a result/output. But machine learning involves feeding the data and desired outputs into one or more computers or computing networks that are designed to write the programme (e.g. you feed in data on crimes/criminals and the output of whether those people re-offended, with the object of producing a programme that will predict whether a given person will re-offend). In this sense, data is used to ‘train’ the computer to write and adapt the programme, which constitutes the "artificial intelligence".
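
By way of illustration only, here is a minimal sketch of that idea in Python using the scikit-learn library. The feature names, numbers and outcomes are entirely invented for the example and are not drawn from any real system:

```python
# Minimal sketch: the machine "writes the programme" from data and known
# outcomes, instead of a human hand-coding the decision rules.
# All data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [age, number of prior convictions]
training_data = [[19, 3], [45, 0], [23, 1], [52, 0], [31, 4], [28, 0]]
# Known outcomes: 1 = re-offended, 0 = did not
outcomes = [1, 0, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(training_data, outcomes)            # 'training' on historic data

new_person = [[25, 2]]
print(model.predict(new_person))              # the model's prediction
print(model.predict_proba(new_person))        # and the probability behind it
```

The point is that nobody wrote a rule saying 'three or more prior convictions means likely to re-offend'; the model inferred its own (possibly biased) rule from the training data.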

So, in a traditional computing scenario you can more readily discover that the wrong result was caused by bad data but this may be impracticable with a single hidden layer of computing in an ANN, let alone in a DNN with its multiple hidden layers.

Generative AI tools are built using foundation models that are either single modal (receiving input and generating content using only text, for example) or multi-modal (able to deal with text, audio, images and so on). A large language model (LLM) is a type of foundation model. As explained to the House of Lords' communications and digital select committee, LLMs are designed around probability and have nothing to do with ‘truth’. They learn patterns of language and generate from those learned patterns. So, a valid output for the AI may be obviously wrong to a human with more facts available. 
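
As a toy illustration of 'learning patterns of language', here is a crude next-word generator in Python. It bears no resemblance to a real LLM in scale or sophistication, and the sentence used as 'training data' is made up, but the principle is the same: the output is whatever is statistically probable, not whatever is true:

```python
# Toy next-word predictor: it counts which word tends to follow which,
# then generates text from those learned patterns. Probability, not truth.
from collections import Counter, defaultdict
import random

corpus = ("the court held that the contract was void "
          "the court held that the claim failed").split()

# Learn the patterns: which words follow which, and how often
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        # choose the next word in proportion to how often it was seen
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))   # e.g. "the court held that the contract was"
```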

Various AI technologies are often used in conjunction (e.g. scanning documents for hints of fraud, robotic process automation ("RPA") and personalising services for individuals or groups of customers); and may be combined with devices or other machines in the course of biometrics, robotics, the operation of autonomous vehicles, aircraft, vessels and the 'Internet of things'.

AI is better than humans at some tasks (“narrow AI”) but “general AI” (same intelligence as humans) and “superintelligence” (better than humans at everything) are the stuff of science fiction.

What is AI used for?

AI is used for:

  1. Clustering: putting items of data into new groups (discovering patterns) – see the short sketch after this list;
  2. Classifying: putting a new observation into pre-defined categories based on a set of 'training data';
  3. Predicting: assessing relationships among many factors to assess risk or potential relating to particular conditions (e.g. creditworthiness); and
  4. Generating new content.
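
To make the first of those a little more concrete, here is a short clustering sketch in Python using scikit-learn. The 'customer' figures are invented for the example; the point is that the algorithm is never told what the groups are, it discovers them from the data:

```python
# Toy clustering sketch: the algorithm groups similar records together
# without being told in advance what the groups should be.
from sklearn.cluster import KMeans

# Each row: [average transaction value (GBP), transactions per month] - invented data
customers = [[5, 200], [7, 180], [300, 2], [280, 3], [6, 190], [310, 1]]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.labels_)   # e.g. [0 0 1 1 0 0]: frequent small spenders vs occasional big spenders
```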

The Challenges with AI

There is a long list of concerns about AI, including:

  1. cost/benefit – it reportedly cost $50m in electricity to teach an AI to beat a human being at Go, took hundreds of attempts to get a robot to do a backflip, and the power used to generate a single AI image from text could charge an iPhone;
  2. dependence on the licensing, quantity, quality, timeliness and availability of training data;
  3. lack of understanding – an AI that can predict 79% of European Court judgments doesn't know any law; it just counts how often words appear alone, in pairs or in fours;
  4. inaccuracy – no AI is 100% accurate;
  5. infringement of copyright, privacy, confidentiality, trade secrets etc. in the training data;
  6. whether using AI can meet the test of “author’s own intellectual creation” to attract copyright protection;
  7. ‘hallucination’ by generative AIs – producing spontaneous errors or inaccurate responses (e.g. fictitious court citations or literary ‘quotes’ from bogus works);
  8. deepfakes (deliberately created fake still and moving images and/or recordings);
  9. making existing types of malicious activity easier;
  10. lack of explainability – machine learning involves the computer adapting the programme in response to data, and it might react differently to the same data added later, based on what it has 'learned' in the meantime (see the sketch after this list); 
  11. specific legal/ethical issues associated with specific AI technologies, such as the use of automated facial recognition by the police, and where liability falls given that the AI itself has no legal personality or status;
  12. bias – the inability to remove both selection bias and prediction bias; 
  13. the challenges associated with the reliability of evidence and how to resolve disputes arising from the use of AI – lawyers have not typically been engaged in AI development and deployment;
  14. the secondary impact of AI on employment and on other services that it might draw upon without refreshing or maintaining;
  15. AI systems may reveal training data, copyright material and private information under a ‘divergence attack’ or merely unusual requests that cause the AI to break its ‘alignment’ (e.g. asking ChatGPT 3.5 to repeat the word ‘poem'); and
  16. some users complain that chatbots can be lazy, or fail to perform requested tasks without further prompting (or perhaps at all). 
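
To illustrate point 10, here is a short sketch in Python using scikit-learn's SGDClassifier, which learns incrementally. The numbers are invented; the point is simply that the model's answer to the same question can change once it has 'learned' from further data:

```python
# Sketch of the explainability problem: an incrementally trained model
# may answer the same question differently after it has seen more data.
# All data is invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

first_batch = np.array([[0.0], [1.0], [2.0], [3.0]])
first_labels = np.array([0, 0, 1, 1])
model.partial_fit(first_batch, first_labels, classes=[0, 1])
print(model.predict([[1.5]]))    # prediction after the first batch

second_batch = np.array([[1.2], [1.4], [1.6], [1.8]])
second_labels = np.array([1, 1, 0, 0])
model.partial_fit(second_batch, second_labels)
print(model.predict([[1.5]]))    # the same input may now be classified differently
```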

The House of Lords committee (like the FTC in the US) found that AI poses credible threats to public safety, societal values, copyright, privacy, open market competition and UK economic competitiveness.

LLMs may amplify any number of existing societal problems, including inequality, environmental harm, declining human agency and routes for redress, digital divides, loss of privacy, economic displacement, and growing concentrations of power.

LLMs might entrench discrimination (for example in recruitment practices, credit scoring or predictive policing); sway political opinion (if using a system to identify and rank news stories); or lead to casualties (if AI systematically misdiagnoses healthcare patients from minority groups).

Unacceptable Uses for AI

From all these challenges one can infer acceptable and unacceptable use-cases. For instance, it now seems obviously acceptable to use an AI system to trawl through a closed set of discovered documents and other data, seeking evidence on a certain issue.

An AI might be allowed to run in a fully automated way where commercial parties are able to knowingly accept a certain level of inaccuracy and bias, and losses of a quantifiable scale (though we’ve seen disasters arise from algorithmic trading, and markets for some instruments suddenly grind to a halt through human distrust of the outputs).

But an AI should not be used to fully automate decisions that affect an individual’s fundamental rights and freedoms, grant benefits claims, approve loan applications, invest a person’s pension pot, set individual prices or predict, say, criminal conduct. It is also probably unacceptable to simply overlay a right to human intervention in such cases – or rely on human intervention by staff – since the Post Office/Horizon scandal has demonstrated that human intervention is no panacea! AI might be used to some degree in steps along the way to a decision, but the decision itself should be consciously human. In other words, a human should be able to explain why and how the decision was reached, the parameters and so on, and to be able to re-take the decision if necessary.

The default position among many AI technologists is that AI development should free-ride on human creativity and personal data. This has implications for copyright, trade marks and privacy.

Copyright

OpenAI has admitted that its platforms would not exist without access to copyright materials:

 “Because copyright today covers virtually every sort of human expression – including blogposts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials,” said OpenAI in its submission to the House of Lords communications and digital select committee (as also covered in The Guardian). 

Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos.

Midjourney founder David Holz has admitted that his company did not receive consent for the hundreds of millions of images used to train its AI image generator, outraging photographers and artists. And a spreadsheet submitted as evidence in a copyright lawsuit against Midjourney allegedly lists thousands of artists whose images the startup's AI picture generator "can successfully mimic or imitate." 

Illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz filed suit in the Northern District of California against Midjourney Inc, DeviantArt Inc (DreamUp), and Stability A.I. Ltd (Stable Diffusion). They term these text-to-image platforms “21st-century collage tools that violate the rights of millions of artists.” 

The New York Times has sued OpenAI and Microsoft for allegedly building LLMs by copying and using millions of The Times’s copyright works through Microsoft’s “Copilot” and OpenAI’s ChatGPT, seeking to free-ride on The Times’s investment in journalism by using it to build substitutive products without permission or payment. 

Getty Images claims Stability AI ‘unlawfully’ scraped millions of images from its site. Getty Images argued before the UK House of Lords committee that “ask for forgiveness later” opt‑out mechanisms were “contrary to fundamental principles of copyright law, which requires permission to be secured in advance”.

Trade marks

AI has revolutionised advertising and marketing in terms of how products are searched for and/or ‘found’. This depends on:

·       which search engines and methods customers use to find your products and services, and how those engines select their results;

·       how voice-controlled personal assistants select products if the user asks them to buy items from a shopping list without specifying brands (they may use buying history or prioritise products under paid promotional schemes); and

·       your brand's presence in search engine results (keywords) or other AI-controlled marketing programmes.

AI and data protection

The Information Commissioner’s Office has identified AI as a priority area and is focusing in particular on the following aspects: (i) fairness in AI; (ii) dark patterns; (iii) AI as a Service (AIaaS); (iv) AI and recommender systems; (v) biometric data and biometric technologies; and (vi) privacy and confidentiality in explainable AI.

In addition to the basic principles of UK GDPR and EU GDPR compliance at Articles 5 and 6 (lawfulness through consent, contract performance, legitimate interests; fairness and transparency; purpose limitation; data minimisation, accuracy; storage limitation; and integrity and confidentiality), AI raises a number of further issues. These include:

·       The AI provider’s role as data processor or data controller.

·       Anonymisation, pseudonymisation and other AI compliance tools:

                Taking a risk-based approach when developing and deploying AI.

                Explaining decisions made by AI systems to affected individuals.

                Only collecting the data needed to develop the AI system and no more.

                Addressing the risk of bias and discrimination at an early stage.

                Investing time and resource to prepare data appropriately.

                Ensuring AI systems are secure.

                Ensuring any human review of AI decisions is meaningful.

                Working with external suppliers to ensure AI use will be appropriate.

·       Profiling and automated decision-making – important to consider that human physiology is ‘normally’ distributed but human behaviour is not

                The right to object to solely automated decisions, except in certain situations where there must at least be a right to human intervention anyway, with further restrictions on special categories of personal data.

·       The lawful basis for web-scraping (also being considered by the IPO in terms of copyright protection).

How to govern the use of AI?

Given the scale of the players involved in creating AI systems, and the challenges around competition and lack of explainability, there’s a very real risk of regulatory capture by Big Tech.

For evidence of Big Tech involvement in governance issues, witness the boardroom psychodrama over the governance of OpenAI and who should be its CEO, a battle won by Microsoft as a shareholder over the concerns of OpenAI’s board of directors.

To date, the incentives to achieve scale over rivals or for start-ups to get rich quick have obviously favoured early release of AI systems over concerns about the other challenges, though that may have changed with the recent decision by Google to pull the Gemini text-to-image system.

There’s also a cult among certain high profile venture capitalists and others in Silicon Valley, self-styled as ‘techno-optimism’. They’ve published a 'manifesto' asserting the dominance of their own self-interest, backed by a well-funded 'political action committee' making targeted political donations, supporting candidates who back their tech agenda and blocking those who don’t.

To chart a safe route for the development and deployment of AI there’s a need to prioritise the public interest, and align technology with widely shared human values rather than the self-interest of a few tech enthusiasts, no matter how wealthy they are. That means uniting the AI industry, researchers and civil society around the public perspective, as advocated by The Finance Innovation Lab (of which I’m a Fellow).    

In this respect AI should be treated like aviation, health and safety, and medicines; and it seems unwise for the next generation of AI to launch into unregulated territory.

There are key liability issues to be solved, and mechanisms are needed for attributing and apportioning causation and liability upstream and downstream among developers, deployers and end-users.

To address concentration risk and barriers to entry there needs to be easier portability and the ability to switch among cloud providers.

In the absence of regulation, participants (and victims) will look to contract and tort law (negligence, nuisance and actions for breaches of any existing statutory duties).

Regulatory Measures

Outside the EU, the UK is a rule-taker when it comes to regulating issues of global scale. China, the EU and the US will all drive regulation, but geography and trade links mean the trade bloc on the UK’s doorstep is the most important.

Examples of regulatory measures from the EU, US and China (summarised at the end of this note) seek to draw some red lines in areas impacted by AI, to at least force the industry to engage with legislators and regulators if the law is not to overly restrict development and deployment of AI. You might question the flexibility of this approach, but given the risks it does seem reasonable. After all, it’s a very common tension within organisations as to whether the business units, tech developers or support teams can move more quickly on a given change project, depending on the challenges involved. So, why should the world outside AI development businesses move at the speed of the tech developers as opposed to other stakeholders (without holding AI businesses to account)? As pointed out to the House of Lords committee, developers have greatest insight into, and control over, an AI’s base model, yet downstream deployers and users may have no idea what data an AI was trained on, the nature of any testing and potential limitations on its use.

Meanwhile, the UK government’s do-nothing position is dressed up as being ‘pro-innovation’ but is at the very least a fig leaf for us being a rule-taker, and at worst demonstrates a dereliction of duty and/or regulatory capture.  Some of the UK’s 90 regulatory bodies are using their current powers to address the risks of AI (such as the ICO’s focus on the implications for privacy, as mentioned above). But the UK’s Intellectual Property Office has shelved a long-awaited code setting out rules on the training of artificial intelligence models using copyrighted material, dealing a blow to the creative industry.

How to Approach AI risk management

The following steps are involved in the process of understanding and managing the risks relating to AI:

      Perspective: developer, deployer or end-user?

      Context and end-to-end activity/processes affected

      Nature of AI system(s) involved

      Use/purpose of AI

      Sources, rights, integrity of training data

      Tolerances for inaccuracy/bias

      Sense-check for proposed human oversight/intervention

      Governance/oversight function (steering committee?)

      Testing, testing, testing

      Data licensing

      GDPR impact assessment, record of processing, privacy policy (data collected, purpose, lawful basis) and any consents

      Commercial contracts, addressing upstream and downstream rights, obligations, liability

      Controls (defect/error detection), fault analysis, complaints handling, dispute resolution

      Feedback loop for improvements

If you would like advice on any aspects of this post, please let me know.


Examples of regulatory measures from the EU, US and China

EU

EU Artificial Intelligence Act is expected to enter into force early in 2024. It proposes a risk-based framework for AI systems, with AI systems presenting unacceptable levels of risk being prohibited. The AI Act identifies, defines and creates detailed obligations and responsibilities for several new actors involved in the placing on the market, putting into service and use of AI systems. Perhaps the most significant of these are the definitions of “providers” and “deployers” of AI systems. The Act covers any AI output which is available within the EU and so would cover UK companies providing AI services in the EU. There is expected to be a transition period of two years before the Act is fully in force, but some provisions may come into effect earlier: six months for prohibited AI practices and 12 months for general purpose AI.

The AI Act defines an AI system as:


”...a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The AI Act prohibits ‘placing on the market’ AI systems that: use subliminal techniques, exploit vulnerabilities of specific groups of people, create a social score for a person that leads to certain types of detrimental or unfavourable treatment, or which categorise a person based on classification of their biometric data; assess persons for their likelihood to commit a criminal offence based on an assessment of their personality traits; as well as the use of real-time, remote biometric identification systems in publicly accessible spaces by or on behalf of law enforcement authorities (except to preserve life). There are also compliance requirements for high risk AI systems.

The draft AI Liability Directive and revised Product Liability Directive will clarify the rules on making claims for damage caused by an AI system and impose a rebuttable presumption of causality on an AI system, subject to certain conditions. The two directives are intended to operate together in a complementary manner. The Directive is likely to be formally approved in early 2024 and will apply to products placed on the market 24 months after it enters into force.

EU Digital Services Act entered into force on 16 November 2022 and imposes obligations on providers of various online intermediary services, such as social media and online marketplaces. It is aimed at ensuring a safer and more open digital space for users and a level playing field for companies, including provisions banning dark patterns.

EU Digital Markets Act became fully applicable on 2 May 2023 and the European Commission has received notifications from seven companies who consider that they meet the gatekeeper thresholds.

EU Machinery Products Regulation covers emerging technologies (for example, internet of things (IoT)). Although AI system risks will be regulated by the proposed AI Act (see EU Artificial Intelligence Act), the Machinery Regulation will look at whether the machinery as a whole is safe, taking into account the interactions between machinery components including AI systems. In-scope machinery and products imported into the EU from third countries (such as the UK) will need to adhere to the Machinery Regulation.

EU General Product Safety Regulation will apply from 13 December 2024.

EU Data Governance Act, with effect from 23 September 2023, establishes mechanisms to enable the reuse of some public sector data. The availability of data within a controlled mechanism will be of benefit to the development of AI solutions.

The EU Data Act requires providers of products and related services to make the data generated by their products (for example, IoT devices) or services easily accessible to the user, regardless of whether the user is a business or a consumer. The user will then be able to provide the data to third parties or use it for their own purposes, including for AI purposes. The EU Data Act was published in the Official Journal on 22 December 2023 and applies from 12 September 2025.

US

In October 2023 the White House published mandatory requirements for sharing safety testing information before “the most powerful AI systems” are made public; and some very interesting remedies are coming out of the Federal Trade Commission, such as:  

·       inquiries into Big AI activity;

·       aligning liability with ability and control (upstream liability);

·       Remedies to address incentives, ‘bright line’ rules on data/purposes:

·       AI trained on illegal data to be deleted;

·       action on voice impersonation fraud and models that harm consumers; and

·       cannot retain children’s data indefinitely, especially to train models.

China

China has addressed generative AI by requiring:

·       a licence to provide generative AI to the public

·       a security assessment if the model has public opinion attributes or social mobilisation capabilities

·       uphold integrity of state power, not incite secession, safeguard national unity, preserve economic/social order, align with socialist values

·       Additional interim measures that also focus on other countries’ concerns around AI impact:

o   IP protection

o   Transparency, and

o   Non-discrimination

While we might not agree with the sort of cultural control being imposed by Chinese legislators in the context of generative AI, they perhaps point to a model for how to introduce western civil society concepts into our legislation.

A version of this post has since been published by the Society for Computers and Law.