
Showing posts with label behavioural targeting. Show all posts

Tuesday, 5 March 2024

Pay-or-Consent Ignores the Elephant-in-the-Room

European consumer bodies have united to file 8 local data protection complaints against Meta, claiming that "to ask consumers using Facebook and Instagram to give their consent to the processing of their personal data for advertising purposes or alternatively to pay a fee of up to €311 per year" does not cure various problems under the General Data Protection Regulation in the way Meta processes its customers' personal data. This also likely affects the status of the training data that Meta has drawn from Facebook and Instagram to power its artificial intelligence systems. Meta obviously disputes the claims.

The consumer bodies say that Meta collects far more personal data about its users than is necessary for its claimed purposes, such as performing its contracts with users, which also breaches the GDPR requirement to minimise the personal data collected.

In addition, there is too little transparency and explanation of the use or purpose for collecting each type of personal data, and the legal basis relied upon. That would mean it is unclear which types of data must be processed for contractual purposes and which are covered by user consent, for example. It would also mean that any consent relied upon was not fully informed and therefore was not validly given (similarly, it would be unclear what type of data collection and processing you are paying a fee to avoid - and whether you had really avoided what you did not wish to consent to).

While this calls into question whether Facebook and Instagram can use their customers' personal data to power behavioural advertising and the related revenues, it would also taint the use of such personal data as training data for Meta's AI tools and systems.

The claims in more detail (which Meta obviously would deny strenuously) are:

  • Meta’s personal data processing for advertising purposes lacks a valid legal basis because it relies on consent which has not been validly collected for the purposes of the GDPR; 
  • Some of Meta’s processing for advertising purposes appears to rely invalidly on contract; 
  • Meta cannot account for the lawfulness of its processing for content personalisation since it is not clear – and there is no way to verify – that all of Meta’s profiling for that purpose is (a) necessary for the relevant contract and (b) consistent with the principle of data minimisation; 
  • It is not clear – and there is no way to verify – that all of Meta’s profiling for advertising purposes is necessary for that purpose and therefore consistent with the principle of data minimisation; 
  • Meta’s processing in general is not consistent with the principles of transparency and purpose limitation; and 
  • Meta’s lack of transparency, unexpected processing, use of its dominant position to force consent, and switching of legal bases in ways which frustrate the exercise of data subject rights, are not consistent with the principle of fairness.

Previous complaints have resulted in changes to privacy policies, to try to clarify the purpose and legal basis of processing, but the consumer bodies say this has not interrupted the underlying processing that they say is illegal. Meta would obviously dispute this. 

While it's tempting to think users can simply vote with their feet, the amount of time consumers have invested in their accounts - and Meta's market dominance - means that is not a realistic option.

If the complaints are successful, it would suggest both free and paid-for functionality will be much more limited in future, but perhaps subscription revenue might make up for any lost ad revenue...

Watch this space.


Saturday, 7 March 2015

Artificial Intelligence, Computer Misuse and Human Welfare

The big question of 2015 is how humans can reap the benefit of artificial intelligence without being wiped out. Believers in 'The Singularity' reckon machines will develop their own superintelligence and eventually out-compete humans to the point of extinction. Needless to say, we humans aren't taking this lying down, and the Society for Computers and Law is doing its bit by hosting a conference in June on the challenges and opportunities that artificial intelligence presents. However, it's also timely that the Serious Crime Act 2015 has just introduced an offence under the UK's Computer Misuse Act for unauthorised acts causing or creating the risk of serious damage to "human welfare", not to mention the environment and the economy. Specifically, section 3ZA now provides that: 
(1) A person is guilty of an offence if—
(a) the person does any unauthorised act in relation to a computer;
(b) at the time of doing the act the person knows that it is unauthorised;
(c) the act causes, or creates a significant risk of, serious damage of a material kind; and
(d) the person intends by doing the act to cause serious damage of a material kind or is reckless as to whether such damage is caused.

(2) Damage is of a “material kind” for the purposes of this section if it is—
(a) damage to human welfare in any country;
(b) damage to the environment in any country;
(c) damage to the economy of any country; or
(d) damage to the national security of any country.

(3) For the purposes of subsection (2)(a) an act causes damage to human welfare only if it causes—
(a) loss to human life;
(b) human illness or injury;
(c) disruption of a supply of money, food, water, energy or fuel;
(d) disruption of a system of communication;
(e) disruption of facilities for transport; or
(f) disruption of services relating to health.
I wonder how this has gone down in Silicon Valley...


Saturday, 16 June 2012

Rethinking Personal Data

On Thursday I joined a World Economic Forum 'tiger team' focused on rethinking personal data, a process that aims to build on earlier reports revealing personal data as a new asset class and to meet the challenges this evolution brings. My thanks to Liz Brandt at Ctrl Shift for inviting me along. Apparently, as one non-legal delegate put it, "there are not enough lawyers at these sorts of events."

In essence, we are moving from a world where data about each of us is compiled into large national databases by corporations and governments (since they are the only ones with the vast resources required to do it); to a world where personal data is highly distributed and grows with every interaction with or about each of us, so that no one can keep up with it, let alone store it in a single place. 

It's therefore important to understand that a "personal data store" is not envisaged as your own personal database of all personal information about you. "Store" is not used here in the sense of 'storage' but in the retail sense of controlling what is offered or sold (which is also not exactly appropriate but does the job for now). So a 'personal data store' is really just a set of rules that determine whether and how data about you can be used - wherever that data sits. It's another type of 'personal information management service'.

The WEF process involves first 'unpacking' the big notions of 'identity', 'privacy' and the imagined benefits to be gained from sharing personal data. These concepts are too static, theoretical - and too emotive - to use as the basis for establishing detailed rules for the responsible use of personal data. The significance and value of personal data can't be captured in a single dollar amount or 'yes'/'no' answer to whether it can be used. Instead, the value and utility of personal data is a hugely complex dynamic that varies by: 
  • the context or the activity we are engaged in, 
  • which persona we are using at that moment, 
  • the actual data being used or provided, 
  • the permissions given, 
  • the rights that flow from those permissions, and 
  • the various parties involved.
So in order to ensure that our transactions and other day-to-day activities are as frictionless and seamless as possible, we need a global set of rules that are flexible enough to address all these variables, with the protection of a person's rights at the centre. And those rules must be readable at various levels by humans, lawyers (legislature, courts, regulators, governance panels) and machines (computers, microchips).  

A previous tiger team session identified business, legal and technology as the three primary stakeholders or perspectives in agreeing such a set of rules. The business rules must first be established clearly at the outset, then vetted from a legal and governance standpoint, then coded in such a way that everyone can be confident machines will handle the data in accordance with the rules.

The current ambition is to agree a 'simple' set of common licences or sets of permissions which any individual can nominate to govern the use of their data in a given context (like the creative commons copyright system). The technological solution is a 'personal data mark-up language' that will enable anyone holding the consumer's data to 'mark-up' items of data in their existing databases to correspond to the permissions they've been given.
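To make that concrete, here is a minimal, purely hypothetical sketch of the idea. The licence names, purposes and data schema below are all invented for illustration - the WEF has not published any such specification - but they show how a data holder might tag items in an existing database against a small set of common permission licences, creative-commons style, and check the tag before each use:

```python
# Hypothetical sketch: each data item carries a licence tag chosen by the
# individual, and any holder of the data checks that tag before using the
# item for a given purpose. All names here are invented for illustration.

# A small set of common licences, each mapping to the purposes it permits.
COMMON_LICENCES = {
    "PD-OPEN": {"service_delivery", "analytics", "advertising"},  # broad reuse
    "PD-SVC": {"service_delivery"},                               # service use only
    "PD-NONE": set(),                                             # no reuse at all
}

def may_use(item: dict, purpose: str) -> bool:
    """Return True if the item's licence tag permits the stated purpose."""
    return purpose in COMMON_LICENCES.get(item["licence"], set())

# Items 'marked up' in an existing database with the individual's chosen tags.
record = [
    {"field": "email", "value": "a@example.com", "licence": "PD-SVC"},
    {"field": "browsing_history", "value": "omitted", "licence": "PD-NONE"},
]

# Which fields could lawfully feed an advertising system? Here, none.
usable_for_ads = [item["field"] for item in record if may_use(item, "advertising")]
```

The point of the creative-commons analogy is that the tags travel with the data, so any holder, wherever the data sits, can apply the same mechanical check against the same small vocabulary of licences.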

The legal aspect of this breaks down into a set of rights and duties from which liability and accountability can flow in a way that doesn't represent a deal-breaker for anyone in the overall process. Those rights and duties will obviously vary according to whether you are the individual data subject, the provider of a personal data store/service, a business customer relying on data about the individual or acting in a governance role. They must be compatible with public law, yet fill in many gaps where rights and duties are missing or unclear.

An earlier tiger team had proposed a useful set of rights and duties from the standpoint of the data subject. So we focused on the rights and duties of the service provider operating the personal data store on that data subject's behalf. We also made a start on the rights and duties for the governance role. The full write-up is due in the next few weeks, but some of the key issues we covered were: 
  • the need for transparency as to whether the provider of a personal data store is acting as a full agent in the fiduciary sense or as a lesser form of agent or broker; 
  • the need to ensure co-operation in the timeliness, accuracy, integrity and authenticity of the personal data accessible via the service; and
  • security protocols for data access and sharing. 
From a governance standpoint, it seemed critical to have both the public and private sector represented on the governance panel - just as they were both represented in the tiger team process itself - to ensure not only that the public laws are obeyed at a minimum, but that official guidance can support the additional contractual standards that are agreed to 'fill in the gaps'.

The most immediate next steps would be to flesh out the governance aspects and to address the rights and duties of businesses relying on the data. Allocating all the necessary rights and duties among the participants should make the final step of determining each participant's liability and accountability a far less combative process than I've seen in other forums ;-)

Overall, I'm very optimistic that a cohesive global framework for the responsible use of personal data is achievable. Specifically, it was very encouraging to witness how much easier it is to address the overall personal data challenge when you commit to 'unpacking' the big notions of identity, privacy and public benefit, as described above. It was also a huge relief to hear that it is considered feasible by those who've introduced data standards previously to implement a personal data mark-up language to link the flow of personal data to a set of permissions and rules. I'm also hoping this can help achieve dynamic, momentary user identification that minimises the need for large, vulnerable repositories of personal identity material.

Of course, political and commercial acceptance and 'take-up' are where all this rubber hits the road. But the fact the discussions are taking place globally via the WEF is clearly very helpful. 

Tuesday, 13 March 2012

Privacy Must Be A Core Business Competence

The European Commission's proposed General Data Protection Regulation is just that: general regulation. No longer can businesses afford to treat data protection compliance as a 'bolt-on' to their marketing department, or even the compliance department. CEOs need to understand how the demands of personal data privacy are going to re-shape their business.

Just ask yourself whether you think the following rights go to the heart of any business that deals with individuals: the "right to be forgotten", "data portability", "data protection by design and by default", the logging/reporting of personal data security breaches, personal data processing impact assessments, prior consultation and regulatory consent for potentially risky processing. Not to mention requirements for enhanced internal controls, numerous enforcement and compliance burdens, and the obligation to appoint a data protection officer.

The trouble is, none of these concepts is straightforward, nor are the rules easily digested.

But digest them you must. Even if they don't make it onto the statute books, the genie is out of the bottle. Many of these 'rights' reflect the current concerns of at least some consumers (albeit most of them probably also happen to work for the European Commission and various consumer groups). Existing services will be judged against them as 'best practice'. Some businesses and new entrants without legacy systems will factor them into new services. And if they do make it onto the UK's statute books, you can bet they'll be gold-plated.

The Society for Computers and Law has done a great job of stimulating debate on the EC's proposals, and helping identify the implications for businesses generally. But there's a long way to go before the practical implications for businesses and business models are understood and fed back to the authorities in time for the new regulation to be finalised in 2014. In fact, bitter experience suggests this won't happen at all.


At a recent seminar, Mark Watts, Chair of SCL's Privacy and Data Protection Group, polled about 100 delegates on the questions asked in the 4 week Ministry of Justice consultation on the EC's plans. The results can be downloaded via the Society for Computers and Law web site. One response made a telling point:
'Writing wide-ranging, broadly applicable laws that affect almost everything a business does but which can only be interpreted and implemented with the assistance of specialist data protection lawyers is surely not the best way to go. Laws that potentially affect so much of what ordinary business does on a day to day basis should be capable of being understood by "ordinary businessmen". The Regulation is a long way from this and will keep data protection lawyers in business for years.'
Further, as Dr Kieron O'Hara explains in relation to the technological challenges presented by the 'right to be forgotten' in his excellent article in this month's Computers & Law magazine, the EC's ambitious plan for personal privacy requires "a socio-legal construct, not a technical fix."