Friday, 12 July 2019

Explainability Remains The Biggest Challenge To Artificial Intelligence

You might think that understanding and explaining artificial intelligence is becoming a job in itself, but it has actually become part of everyone's job. This struck me particularly hard while reading the recent report from UK Finance (and Microsoft) on the role of artificial intelligence in financial services. It shows that organisations are treating AI as a project or programme in itself, and struggling with where to pin responsibility for it, when actually their use of AI (and existing exposure to it through ad networks etc.) means it's already loose in the world. That makes "explainability" - of AI itself and of its outcomes - absolutely critical.

What is AI?

The first challenge is understanding what is meant by "AI" in any given context. In this report, the authors generally mean "a set of technologies that enable computers to perceive, learn, reason and assist in decision making to solve problems in ways that mimic human thinking."

We seem to have moved on from the debate about whether AI will ever move far beyond "narrow AI" (better than humans at some tasks like chess, Go or parsing vast quantities of data) to "general AI" (as good as a human mind) to superintelligence (better than humans, to the point where the machines do away with us altogether).

It seems widely accepted that we are (still) developing narrow AI and applying it to more and more data and situations, with the vague expectation (and concern) that one day it might become "general". 

The next major challenge is explaining each technology in the "set of technologies" that encompass AI. Not all are spelt out in the report, but I understand these technologies to include machine learning, neural networks, deep learning networks, natural language processing, speech and acoustic recognition, and image and facial recognition. The report notes they are often used in conjunction (e.g. scanning documents for hints of fraud, robotic process automation ("RPA") and personalising services for individuals or groups of customers). And it's important to understand that one or more technologies will be combined with devices or other machines in the course of biometrics, robotics and the operation and co-ordination of autonomous vehicles, aircraft, vessels and the 'Internet of things' - not ordinarily thought of in terms of financial services, but the data and decision-making in the context of these uses will be relevant for many financial institutions.

Each new report seems to bring a nugget or two of new jargon to understand, and this one alerted me to the use of "Random forests". 
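For those equally new to the term: a random forest is an ensemble of decision trees, each trained on a random slice of the data and features, with their votes averaged - typically more accurate than a single tree, and far easier to interrogate than a deep network. A minimal sketch using scikit-learn, where the "transaction" features and fraud label are invented purely for illustration:

    # Minimal random forest sketch (scikit-learn). The features and the
    # "fraud" label are invented purely for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))             # e.g. amount, hour, merchant risk, velocity
    y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # toy "fraud" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    print("accuracy:", model.score(X_test, y_test))
    print("importances:", model.feature_importances_)  # a crude but usable form of explainability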

What is a good use-case for AI?

The good news for the human race is that the authors recommend combining artificial and human intelligence rather than allowing the machines to work alone toward our extinction. AI can build on human intelligence by recognising patterns and anomalies in large amounts of data (think fraud detection) and can scale and automate repetitive tasks in a more predictable way to analyse and try to predict risks. The report suggests that AI Nirvana for UK financial institutions is fully automated customer on-boarding, personalised customer experience, retail advice and proactive financial management.

You might have spotted that the last two aspirations will be particularly exciting for fans of financial 'scandals'... and it's worth noting that the report on the health and motor insurance sectors added pricing, underwriting, claims handling, sales and distribution...

UK Finance rightly points out that organisations need to consider the implications of AI beyond the technical (or technological), particularly when it is used at the core of their businesses. Specifically, there are implications for culture, behaviour and governance from the business, social and economic perspectives. Privacy, safety, reliability and fairness (lack of bias and discrimination) are critical to safeguard, as is adapting the workforce, communities and society for the impact on employment and skills. Again, AI can't be treated as separate or managed in a silo; and it's a challenge for all stakeholders, including regulators and governments.

Yet, while AI might be pervasive in its impact and effects, that does not mean it is ripe to be deployed in every situation (as is the case with applying process improvement methodologies like Six Sigma). The report provides some insight into identifying where AI is the right solution, as well as high-value use cases, levels of AI maturity and capabilities; and how to scale and measure returns on investment and business impact.

The Thorny Issue of Explainability...

While the UK Finance report is intended as an overview, a major criticism I have is that it only sounds a note of caution on the worrying issue of "explainability" without pointing out that explainability is not possible with technologies that have "hidden" layers of computing, such as artificial neural networks and deep learning. The report merely cautions that: 
"Where firms identify a trade-off between the level of explainability and accuracy, firms will need to consider customer outcomes carefully. Explainabilty of AI/ML is vital for customer reassurance and increasingly it is required by regulators." 
This is the point where the fans of financial scandals start stockpiling popcorn.

The relevant shortcomings and concerns associated with explainability are covered in more detail in my post on the report into the health and motor insurance sectors, including the South Square chambers report. But in summary, these mean that neural and deep learning networks, for example, are currently only really appropriate for automating decision-making where "the level of accuracy only needs to be "tolerable" for commercial parties interested only in the financial consequences... than for... issues touching on fundamental rights." 

Yet the UK Finance warning not only assumes that the use of AI and its outcomes is known by or can be explained to people within the organisation (when that may not be the case), but also assumes that organisations understand what the trade-off between explainability and accuracy means; the implications of that; and therefore whether a given use-case is actually appropriate for the application of AI technologies. A critical issue in that analysis is how to resolve any resulting disputes, whether in the courts or at the Financial Ombudsman, including identifying who is responsible where AI computing has been outsourced and/or there are multiple external sources of data.

None of this is to say, "Stop!" (even if that were possible), but it's important to proceed with caution and for those deploying and relying on AI to be realistic in their expectations of what it can achieve and the risks it presents...

Monday, 24 June 2019

EBA Gives Some Leeway On SCA

There has been increasing concern that the e-commerce world won't be ready for the introduction of "strong customer authentication" (or two-factor authentication) for electronic and remote payments on 14 September 2019. The checks apply to electronic and remote payments, which include payments online, as well as via mobile devices, kiosks or other machines. It is feared many aren't aware of the new checks or the potential that checks will lead to failed or abandoned transactions, causing a hit to retailers' and payment service providers' revenues. The European Banking Authority now says local financial regulators may provide limited additional time to payment service providers to introduce compliant processes “on an exceptional basis and in order to avoid unintended negative consequences for some payment service users" on that date.

Specifically, the PSPs must have agreed a migration plan with their regulator and execute it "in an expedited manner." The regulator should monitor the execution of the plans "to ensure swift compliance..." 

The opinion also contains tables listing the types of features that will (or, in marginal cases, will not) constitute compliant elements for the purpose of SCA (at least two of "inherence", "possession" and "knowledge" - i.e. what the customer is, what the customer possesses, or what the customer knows).

There is also guidance on how to satisfy the additional requirements for "dynamic linking" (to ensure the SCA elements link the transaction to an amount and the specified payee when initiating the transaction) and that the SCA elements be independent of each other.
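To make "dynamic linking" concrete: the authentication code must be specific to the transaction amount and payee, so that tampering with either invalidates it. A toy sketch of that property (my illustration only - real schemes are defined by issuers and card schemes, not by this HMAC):

    # Toy sketch of "dynamic linking": the authentication code is bound to the
    # amount and payee, so changing either invalidates it. Illustrative only;
    # the key and scheme below are hypothetical.
    import hmac, hashlib

    SECRET = b"per-session key held by the issuer"  # hypothetical

    def auth_code(amount: str, payee: str) -> str:
        msg = f"{amount}|{payee}".encode()
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]

    code = auth_code("49.99", "ACME Ltd")
    assert hmac.compare_digest(code, auth_code("49.99", "ACME Ltd"))  # verifies
    assert code != auth_code("499.90", "ACME Ltd")                    # amount changed: code fails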

The EBA issued an earlier opinion and a Q&A on how all this applies, but it remains to be seen how many retailers are aware of the new requirements at all, let alone the potential impact on customer experience and 'conversion' (customers dropping out at the payment step when asked to complete one or more additional authentication steps).

Whether payments are affected depends on whether PSD2 applies - some may be out of scope based on currency or location, while others may be within the scope of PSD2 but excluded. There is then a question whether the transaction is one caught by the SCA requirement: is it remote or electronic and initiated by the payer (rather than being a 'merchant initiated transaction')? Even transactions that are in scope may not be caught if the issuer (not the merchant or acquirer) of the payment instrument/account applies any of the potential exemptions (a rough decision sketch follows below):
    Low-value transactions: up to €30 per transaction (limit of five separate transactions or €100);
    Recurring transactions: e.g. subscriptions for the same amount and payee (SCA applied to the first transaction);
    Whitelisted: payers can add payees to a whitelist of trusted beneficiaries with the issuer, but payees can't request this;
    Corporate payment processes: dedicated process for non-consumers, approved by the regulator (member states may exclude micro-enterprises as consumers);
    Contactless: up to €50 (limit of five separate transactions or €150 without an SCA check);
    Unattended terminals: only for paying transport fares or parking fees;
    Low-risk of fraud: as determined by the issuer, depending on its average fraud levels for the relevant acquirer (not by merchant/channel), with different limits for cards and credit transfers.
The FCA will apply the SCA standards in the UK even if Brexit occurs.
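Here is the decision sketch promised above - a deliberately simplified view of how an issuer might test the whitelist, recurring, contactless and low-value exemptions. The thresholds come from the list; the function signature and counters are invented, and real issuer logic (including transaction risk analysis) is far more involved:

    # Deliberately simplified sketch of an issuer's SCA exemption check.
    # Thresholds follow the list above; the counters (which reset each time
    # SCA is applied) and all field names are invented for illustration.
    def sca_exempt(amount_eur: float, contactless: bool,
                   count_since_sca: int, sum_since_sca: float,
                   payee_whitelisted: bool, recurring_after_first: bool) -> bool:
        if payee_whitelisted or recurring_after_first:
            return True
        if contactless and amount_eur <= 50 and count_since_sca < 5 \
                and sum_since_sca + amount_eur <= 150:
            return True
        if not contactless and amount_eur <= 30 and count_since_sca < 5 \
                and sum_since_sca + amount_eur <= 100:
            return True
        return False  # otherwise apply SCA (or a separate low-risk exemption)

    print(sca_exempt(25.0, False, 2, 40.0, False, False))  # True: low-value remote payment
    print(sca_exempt(80.0, False, 0, 0.0, False, False))   # False: SCA required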

Thursday, 20 June 2019

Buy Now Pay... Earlier

Retailers have always been fond of "buy now pay later" offers, but perceived abuses (or lack of clarity) discovered by the FCA must end by November. 

Typically, a BNPL credit offering involves a 'promotional period' of 3 to 12 months in which no repayments are required and no interest is payable at all if the consumer repays in full during the promotional period. After the end of that period, repayment obligations begin; and interest is charged for the promotional period. Uncertainty arises where the consumer only makes partial repayments during the promotional period. Some creditors ignore partial repayments and still charge interest on the entire amount of credit for the promotional period, at least until the date of the partial repayment(s). This means that consumers who are uncertain whether they'll be able to repay in full during the promotional period also don't know whether to make part repayments. Even only narrowly missing full repayment in the promotional period could mean paying interest on the full amount of credit anyway - or nearly the full amount.
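A worked example (all numbers invented) shows the cliff edge. Take £600 of credit at 20% a year with a six-month promotional period, where the consumer repays £500 before the period ends but misses the last £100; for simplicity, the sketch uses non-compounding interest and ignores the timing of the part-payment within the period:

    # Invented numbers to illustrate the BNPL 'cliff edge' described above.
    # Simple (non-compounding) interest is assumed purely to keep the sums visible.
    credit, annual_rate, promo_months = 600.0, 0.20, 6
    repaid_in_promo = 500.0

    # Interest backdated on the whole £600, as some creditors charge it:
    promo_interest_full = credit * annual_rate * promo_months / 12
    # Interest only on the unpaid £100, crediting the part-payments:
    promo_interest_fair = (credit - repaid_in_promo) * annual_rate * promo_months / 12

    print(f"Backdated on full balance:  £{promo_interest_full:.2f}")   # £60.00
    print(f"Credited for part-payments: £{promo_interest_fair:.2f}")   # £10.00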

These types of offers could be either on a 'fixed sum' basis for a one-off purchase, or on a running account basis for each product purchased during any given month on a store card, for example, making it tough to understand which partial payments are credited to which purchase.

BNPL offers are not the same as genuine "payment holiday" features, where interest is still being charged during the 'holiday' period when no repayments are due. Nor are they necessarily the same as a 0% APR offer that you see on cars, for example, where no interest at all is payable for a certain period (so long as there is no discrimination against partial repayments before the interest-free period expires). Credit cards effectively have a much shorter interest-free period of up to two months on each purchase (so you need to clear the whole running balance in that time).

The FCA wants BNPL consumers to be free to repay as much as possible during the promotional period, so they incur less interest. Changes to the FCA's consumer credit rules (CONC) will require clear information to be given about BNPL credit offers, so consumers know the consequences of not repaying the full balance by the end of the offer period, with a reminder that the offer period is about to end.

The FCA will also prevent creditors claiming more interest on any amounts repaid during the promotional period than would be payable if they repaid the full amount, so consumers get the benefit of making partial repayments even if they don't clear the full amount of credit in that time. 

Creditors must comply with the disclosure rules by 12 September 2019, and must stop claiming interest on partially repaid amounts by 12 November 2019 for purchases after that date.


Wednesday, 19 June 2019

Extension of FCA Principles And Marketing Rules To Payment Service Providers

From 1 August, the Financial Conduct Authority will begin to enforce its Principles of Business and certain rules on marketing and communications against the payment service providers that it regulates.

The FCA explained its approach in a policy statement earlier this year, but it was likely put off as a summer project, and Brexit will have been a distraction for many. At any rate, chapters 2, 3 and the rules in Annexes A-C are the key parts to read.

Some Key Points

Because many PSPs also provide unregulated services that are allied to their regulated activity (e.g. gateway services and other "technical services" as well as unregulated foreign exchange and e-commerce services), it's important to note that the FCA's high level Principles will also apply to unregulated activities that are "connected" to regulated e-money or payment services. The FCA is refusing to clarify exactly what that means, since the list is long, and this may lead to 'regulatory creep' to the extent PSPs err on the side of caution. 

Equally, a PSP's compliance with the Principles (and even the marketing rules) can be affected by the activities of other group companies - e.g. faulty centralised fraud or risk management systems or other outsourced support services; or misleading ads for an unregulated service that is deemed to be "connected" with the PSP's regulated service.

The FCA is particularly anxious about the misleading promotion of currency transfer services (and 'connected' foreign exchange services, even if unregulated).

The FCA does not care that there is overlap with other advertising and communications requirements - as there is for banks (the 'new' rules on marketing and communications are created by applying the FCA's existing Banking Conduct of Business (BCOB) rules to PSPs). But the FCA does confirm that these rules cannot cut across EU-derived regulations (whither Brexit?).

Next Steps

The extension of the Principles and the marketing rules to PSPs means they will likely need to update various internal policies and procedures, e.g. those dealing with:
  • Governance (reporting lines and responsibilities to control operational risks);
  • Marketing and communications (the policy and procedures for sign off on your ads and communications to ensure they are clear, fair and not misleading) particularly for payment services involving currency transfer services - and any "connected" unregulated activities; and
  • Treating Customers Fairly (with appropriate cross references to other policies). 
That summer project starts now!

Sunday, 16 June 2019

Of Caution And Realistic Expectations: AI, ANN, BDA, ML, DL, UBI, PAYD, PHYD, PAYL...

A recent report into the use of data and data analysis by the insurance industry provides some excellent insights into the pros and cons of using artificial intelligence (AI) and machine learning (ML) - or Big Data Analytics (BDA). The overall message is to proceed with caution and realistic expectations...

The report starts by contrasting in detail the old and new types of data being used by the motor and health segments in the European insurance industry: 
  • Existing data sources include medical files, demographics, population data, information about the item/person insured ('exposure data'), loss data, behavioural data, the frequency of hazards occurring, and so on;
  • New data sources include data from vehicles and other machines or devices like phones, clothing and other 'wearables' (Internet of things); social media services; call centres; location co-ordinates; genetics; and payment data.
Then the report explains the analytical tools being used, since "AI" is a term used to refer to many things (including some not mentioned in the report, like automation, robotics and autonomous vehicles). Here, we're talking algorithms, ML, artificial neural networks (ANN) and deep learning networks (DLN) - the last two being the main focus of the report.

The difference between your garden-variety ANN and a DLN is the number of "hidden" layers of processing that the inputs undergo before the results pop out the other end. In a traditional computing scenario you can more readily discover that a wrong result was caused by bad data ("shit in, shit out", as the saying goes), but this may be impracticable with a single hidden layer of computing in an ANN, let alone in a DLN with its multiple hidden layers and greater "challenges in terms of accuracy, transparency, explainability and auditability of the models... which are often correlational and not causative...".
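To make "hidden layers" concrete, here is a toy forward pass in numpy. The intermediate activations are perfectly inspectable as numbers, but individually meaningless to a human - which is the nub of the explainability problem:

    # Toy forward pass through a network with two hidden layers (numpy only).
    # The activations h1 and h2 are the "hidden layers": inspectable as numbers,
    # but with no human-readable meaning - hence the explainability problem.
    import numpy as np

    rng = np.random.default_rng(42)
    W1, W2, W3 = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))

    def forward(x):
        h1 = np.tanh(x @ W1)                   # hidden layer 1
        h2 = np.tanh(h1 @ W2)                  # hidden layer 2
        return 1 / (1 + np.exp(-(h2 @ W3)))    # e.g. a claim-fraud "score"

    x = np.array([0.2, -1.3, 0.7, 0.0])        # four invented input features
    print(forward(x))                          # a score; *why* it is this value is the hard question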

Of course, this criticism could be levelled at the human decision-making process in any major financial institution, but let's not go there...

In addition, "fair use" of algorithms relies on data that has no inherent bias. Everyone knows the story about the Amazon recruitment tool that had to be shut down because they couldn't figure out how to kill its bias against women. The challenge (I'm told) is to reintroduce randomness to data sets. Also:
As data scientists find themselves working with larger and larger data sets and working harder and harder to find results that are just slightly better than random, they will also have to spend significantly more time and effort in accurately determining what exactly constitutes true randomness in the first place.
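One standard way of testing whether a result "just slightly better than random" is real is a permutation test: shuffle the labels to destroy any true relationship, refit, and see how often chance alone matches your score. A crude sketch (invented data; scoring on the training set purely to keep it short):

    # Crude permutation test: is the model's accuracy genuinely better than
    # chance, or the sort of score shuffled (meaningless) labels produce anyway?
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] > 0).astype(int)                 # a real (if simple) signal

    real_score = LogisticRegression().fit(X, y).score(X, y)
    null_scores = []
    for _ in range(200):
        y_shuffled = rng.permutation(y)           # destroy any real relationship
        null_scores.append(LogisticRegression().fit(X, y_shuffled).score(X, y_shuffled))

    p_value = np.mean([s >= real_score for s in null_scores])
    print(f"score={real_score:.3f}, p={p_value:.3f}")  # small p => better than random
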
Alarmingly, the insurers are mainly using BDA tools for pricing and underwriting, claims handling, sales and distribution - so you'd think it pretty important that their processes are accurate, transparent, explainable and auditable; and that they understand what results are merely correlated as opposed to causative...

There's also a desire to use data science throughout the insurance value chain, particularly on product development using much more granular data about each potential customer (see data sources above). The Holy Grail is usage-based insurance (UBI), which could soon represent about 10% of gross premiums: 
  • pay-as-you-drive (PAYD): premium based on kms driven;
  • pay-how-you-drive (PHYD): premium based on driving behaviour; and
  • pay-as-you-live (PAYL): premium based on lifestyle tracking.
This can enable "micro-segmentation" - many small risk pools with more accurate risk assessments and relevant 'rating factors' for each pool - so pricing is more risk-based with less cross-subsidy from consumers who are less likely to make claims. A majority of motor insurers think the number of risk pools will increase by up to 25%, while few health insurers see that happening. 
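As a toy illustration of what PAYD/PHYD pricing might look like, here is a premium computed from kilometres driven and a telematics behaviour score - every rate and multiplier invented for the purpose:

    # Invented illustration of usage-based pricing (PAYD + PHYD combined):
    # a per-km base rate adjusted by a telematics behaviour score.
    def ubi_premium(km_per_year: float, behaviour_score: float,
                    base_rate_per_km: float = 0.03) -> float:
        """behaviour_score in [0, 1]: 1.0 = exemplary driving, 0.0 = worst."""
        multiplier = 1.5 - behaviour_score   # 0.5x for the best drivers, 1.5x for the worst
        return km_per_year * base_rate_per_km * multiplier

    print(ubi_premium(8000, 0.9))    # careful low-mileage driver:   144.0
    print(ubi_premium(20000, 0.3))   # heavy-mileage riskier driver: 720.0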

Of course, micro-segmentation could also identify customers to whom insurers decide not to offer insurance (though many countries - like Spain, the Netherlands, Luxembourg, Belgium, Romania and Austria - have rules requiring inclusion, or public schemes for motorists who can't otherwise get insurance). Some insurers say it's just a matter of price - e.g. using telematics to allow young high-risk drivers to literally 'drive down' their premiums by showing they are sensible behind the wheel.

Increases in the number of 'rating factors' are likely to be more prevalent in the motor insurance segment, where 80% (vs 67%) are said to have a direct causal link to premium (currently driver/vehicle details, or age in health insurance), rather than an indirect one (such as location or affluence).

Tailoring prices ('price optimisation') has also been banned or restricted on the basis that it can be unfair - indeed the FCA has explained the factors it considers when deciding whether or not price discrimination is unfair.

Apparently 2% of firms apply BDA to the sales process, resulting in "robo-advice" (advice to customers with little or no human intervention). BDA is also used for "chatbots" that help customers through initial inquiries; to forecast volumes and design loyalty programmes to retain customers; to prevent fraud; to assist with post-sales assistance and complaints handling; and even to try to "introduce some demand analytics models to predict consumer behaviour into the claims settlement offer."

Key issues include how to determine when a chatbot becomes a robo-adviser; and the fact that some data is normally distributed (data about human physiology) while other data is not (human behaviour).

All of which raises the question: how do you govern the use of BDA?

Naturally, firms who responded to the report claim they have no data accuracy issues and have robust governance processes in place. They don't use discriminatory variables and outputs are unbiased. Some firms say third party data is less reliable and only use it for marketing, while others outsource BDA altogether. Yet none of this was verified for the report, let alone whether the outputs of ANN or DLN were 'correct' or 'accurate'.

Some firms claim they 'smoothed' the output of ML with human intervention or caps to prevent unethical outcomes.

Others were concerned that it may not be possible to meet the privacy law (GDPR) requirements to explain the means of processing or the output where ANN or DLN is used.

All of these concerns lead some expert legal commentators to suggest that ANN and DLN are more likely to be used to automate decision-making where "the level of accuracy only needs to be "tolerable" for commercial parties [who are] interested only in the financial consequences... than for individuals concerned with issues touching on fundamental rights." And there remain vast challenges in how to resolve disputes arising from the use of BDA, whether in the courts or at the Financial Ombudsman.

None of this is to say, "Stop!" But it is important to proceed with caution and for its users to be realistic in their expectations of what BDA can achieve...


Tuesday, 11 June 2019

New Rules For P2P Lending And Crowd-Investment

A year after consulting on its proposals, the FCA has issued new rules for P2P lending and crowd-investment platform operators from 9 December 2019 (and certain mortgage rules immediately). I'm trawling through the detail, but have summarised the changes below. Let me know if I can help.

Originally, the FCA proposed to:
  • set out the minimum information that P2P platforms need to provide to investors; 
  • clarify what systems and controls platforms need to have in place to support the outcomes platforms advertise - particularly on credit risk assessment, risk management and fair valuation practices; 
  • ensure arrangements are in place that take account of the practical challenges that platforms could face in a wind-down scenario; 
  • extend marketing restrictions that already apply to investment-based crowdfunding to P2P platforms; 
  • apply Mortgage and Home Finance: Conduct of Business sourcebook (MCOB) and other Handbook requirements to P2P platforms that offer home finance products, where at least one of the investors is not an authorised home finance provider - to address a potential gap in protections for home finance customers who undertake transactions through a P2P platform.
Sure enough, the new rules:
  • Clarify what governance arrangements, systems and controls must be in place to support advertised performance (especially credit risk assessment, risk management and fair valuation practices);
  • Strengthen plans for the wind-down of P2P platforms;
  • Apply marketing restrictions to protect less experienced investors in loans;
  • Introduce an appropriateness test for an investor’s knowledge and experience of P2P investments where no advice has been given to the investor, and specify what the assessment should include; and
  • Specify minimum information that P2P platforms need to provide to investors. 
In addition, P2P platforms that offer home finance products (where none of the investors is an FCA-authorised home finance provider) must comply with the FCA's Mortgage and Home Finance: Conduct of Business sourcebook (MCOB) and other Handbook rules with immediate effect.

Monday, 27 May 2019

Let's Not Confuse E-money Agents and Distributors

The European Banking Authority has issued an opinion that goes some way to clarifying when e-money institutions create an "establishment" when dealing through "agents" and "distributors", though it does not go far enough to be terribly useful (to be covered in another post...). In reaching that opinion, however, it has managed to create confusion over the distinction between agents and distributors. This is unfortunate, given the very significant difference in legal responsibility for the EMI and the time it takes to set up such arrangements - sometimes on a large scale, where chains of small retail outlets or multiple independent online retailers offer prepaid cards, top-up vouchers etc for the issuer.

The EBA accepts that e-money institutions (EMIs) can operate through either:
  • 'agents' who provide regulated payment services on the EMI's behalf and must be registered by the EMI with the regulator; or
  • 'distributors' who do not provide regulated payment services on the EMI's behalf, so the EMI merely has to notify the regulator that the distributor is being used rather than register it.
But the EBA then states that: 
"...if a distributor receives funds from an end-customer in exchange for e-money, the funds are considered to have been received by the issuer itself, considering that the distributor is acting on behalf of the issuer. The safeguarding obligation of the issuer starts as soon as the distributor receives the funds from the customers, and remains with the issuer/EMI (not with the distributor), so that the customer does not bear any consequence of the funds not being transferred from the distributor to the issuer, including in the event of the distributor's insolvency."
I notice this has also been picked up by the FCA in its guidance on safeguarding in the Approach document, for example:
"10.28 An institution may receive and hold funds through an agent or (in the case of EMIs and small EMIs) a distributor. The institution must safeguard the funds as soon as funds are received by the agent or distributor and continue to safeguard until those funds are paid out to the payee, the payee’s PSP or another PSP in the payment chain that is not acting on behalf of the institution. The obligation to safeguard in such circumstances remains with the institution (not with the agent or distributor). Institutions are responsible, to the same extent as if they had expressly permitted it, for anything done or not done by their agents or distributors (as per regulation 36 in the EMRs and regulation 36 in the PSRs 2017)...
10.34 Where relevant funds are held on an institution’s behalf by agents or distributors, the institution remains responsible for ensuring that the agent or distributor segregates the funds. "
Elsewhere, the FCA states that
5.6...In our view, a person who simply loads or redeems e-money on behalf of an EMI would, in principle, be considered to be a distributor.

However, the FCA states:
8.338 It is important to recognise that if an agent of an e-money issuer receives funds, the funds are considered to have been received by the issuer itself. It is not, therefore, acceptable for an e-money issuer to delay in enabling the customer to begin spending the e-money because the issuer is waiting to receive funds from its agent or distributor.
These passages might be read as supporting the notion that a distributor is entitled to hold funds on behalf of an EMI, albeit in a segregated bank account, and the EMI is entitled to rely on the distributor to transfer those funds to the EMI's account. 

But in my view, if a distributor were to act in that way it would be operating a payment service (e.g. money remittance) and would therefore need to be either authorised in its own right or registered as an agent of the EMI. In other words, there would be no distinction between an agent and a distributor.

In fact, the role of distributor was created to avoid the need for agency registration in particular scenarios (e.g. small retailers whom the EMI would find it difficult to be responsible for registering and supervising), and the need for the distributor to concern itself with regulatory risk and responsibilities.

The EMI's obligation to register an agent (and, more importantly, liability for the agent's activities on the EMI's behalf) is avoided by requiring the distributor to keep a 'float' of a minimum amount of funds in an account which the distributor agrees the EMI will draw upon whenever the distributor's system reports to the EMI's system that a customer in one of the distributor's outlets has bought a prepaid card or otherwise loaded funds onto a card or wallet issued by the EMI. 

In that scenario, neither the customer nor the EMI is taking any risk at all that the distributor might fail to transfer funds paid by the customer. The EMI has instant access to the float of funds previously paid by the distributor, and safeguards those funds if the e-money issued to the customer is not spent within the next business day. Meanwhile, the distributor retains any money paid by the customer, effectively as reimbursement for the amount that the EMI has deducted from the distributor's float.
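For the avoidance of doubt, here is that float mechanism as a few lines of code - a model of my understanding of the commercial structure, not a regulatory prescription:

    # Sketch of the distributor 'float' mechanism described above: the EMI draws
    # on a pre-funded float as sales are reported, so it never waits on the
    # distributor to remit the customer's cash. Illustrative only.
    class Float:
        def __init__(self, opening_balance: float):
            self.balance = opening_balance     # pre-funded by the distributor

        def report_sale(self, amount: float) -> None:
            """Distributor's till reports a prepaid card load; EMI draws on the float."""
            if amount > self.balance:
                raise RuntimeError("float exhausted - distributor must top up before selling")
            self.balance -= amount             # EMI now holds (and safeguards) this amount
            # ...the distributor keeps the customer's cash as reimbursement.

    f = Float(opening_balance=1000.0)
    f.report_sale(50.0)     # customer buys a £50 prepaid card in-store
    print(f.balance)        # 950.0 remains available for further sales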