Post GDPR: Responsible AI Legislation

Legislation follows innovation like hangovers follow red wine, and the impact can be extensive. In January this year, mere rumblings of new cryptocurrency legislation in South Korea were enough to knock more than 15% off the value of Bitcoin, a drop from which it has not recovered. The 2018 General Data Protection Regulation (GDPR) comes as a long-overdue update to the 1998 Data Protection Act and the Privacy and Electronic Communications Regulations (PECR). The updates include provisions such as the right to be removed from a given database and the right to be notified when data is sold to third parties.

Now we are in the midst of the Cambridge Analytica data scandal, and the amount of personal data being collected has become a central matter of public debate. Thousands of individuals are requesting to download the information that Facebook holds on them, and finding that the records extend far beyond what they expected (going so far as to include call and SMS activity), though not beyond the permissions they granted to the app (whether through active opt-in or default).

Given that artificial intelligence (particularly big data analytics and machine learning) has become an essential tool in data collection and interpretation, it is reasonable to expect AI-specific legislation in the near future, designed to sit alongside the GDPR.

Within the GDPR framework, AI is addressed in Article 22(1), which covers automated processing and states that any person (the data subject) has the right:

“not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”

The terminology is the fascinating aspect here, as neither ‘legal’ nor ‘significant’ effects are defined by the legislation. The Article 29 Data Protection Working Party (WP 29), working together with the EU data protection authorities, has adopted the Guidelines on Automated Decision-Making, which offer the following broad definitions of the two terms:

Legal: “a legal effect suggests a processing activity that has an impact on someone’s legal rights, such as the freedom to associate with others, vote in an election or take legal action”
Significant: “for data processing to significantly affect someone the effects of the processing must be more than trivial and must be sufficiently great or important to be worthy of attention”

Legal cases determining which circumstances constitute a ‘significant’ effect will provide interesting insight into what degree of impact upon the circumstances, behaviours and choices of an individual is considered acceptable within the framework.

So, what’s the impact for marketers?

The purpose of marketing is solely to influence the behaviours and choices of any given individual. The more information a brand has about you, your preferences and your lifestyle, the closer they can get to delivering the message that will convert you.

Internet users collectively generate around 2.5 quintillion bytes of data each day, across a huge range of sites and applications. The challenge for brands is determining which data is valuable to them in accurately profiling you.

Under GDPR, companies can no longer collect data simply on the basis that it might be useful. Every piece of information they hold must be necessary for the specified, explicit and legitimate purposes for which it was collected, and data can only be held for as long as it remains relevant and necessary for the consented purpose.
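
As a rough illustration of what that discipline might look like in practice (a minimal sketch; the schema, field names and retention periods are assumptions for the example, not anything mandated by the GDPR), a purpose-and-retention audit could be as simple as:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical record of a single piece of personal data and the purpose for
# which it was collected. Field names and retention periods are illustrative
# assumptions, not a GDPR-mandated schema.
@dataclass
class DataItem:
    field_name: str         # e.g. "email_address"
    purpose: str            # the specified, explicit purpose that was consented to
    collected_at: datetime
    retention: timedelta    # how long the stated purpose justifies holding the data

def items_to_review(items: List[DataItem], now: datetime) -> List[DataItem]:
    """Return items whose retention period for the consented purpose has lapsed."""
    return [item for item in items if now - item.collected_at > item.retention]

# Example: an email address collected for order confirmations, reviewed after a year.
held = [DataItem("email_address", "order_confirmation",
                 datetime(2017, 3, 1), timedelta(days=365))]
for item in items_to_review(held, datetime(2018, 5, 25)):
    print(f"Review or delete: {item.field_name} held for '{item.purpose}'")
```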

A second limitation is upon the sale of data to third parties, which can now only be done with a data subject’s express permission. Gone are the days of buying data trawled from the backwaters of the internet, and marketing managers everywhere are starting to get a bit hot under the collar.

How, then, are brands to get close to their consumers? The aloe vera balm for marketers comes in the form of Article 22(2), which, as with much of the GDPR, states that automated processing and profiling are acceptable provided the data subject has consented to them (and retains the right to contest any decision made about them). To gain this consent, the GDPR requires data controllers to inform the subject of the activity, provide a meaningful explanation of the logic involved, and explain the significance and envisaged consequences of the processing. Anyone who thinks this sounds simple is almost certainly not the one doing the job.
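
In practice, that means a consent record has to carry more than a checkbox: it needs the purpose, a plain-language summary of the processing logic, and the envisaged consequences. Here is a minimal sketch (the structure and field names are assumptions for illustration, not a prescribed format):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative consent record for automated profiling under Article 22(2).
# The structure and field names are assumptions made for this example.
@dataclass
class ProfilingConsent:
    subject_id: str
    purpose: str                 # what the profiling will be used for
    logic_summary: str           # meaningful, plain-language explanation of the logic
    envisaged_consequences: str  # the significance and likely effects for the subject
    consented_at: datetime
    withdrawn_at: Optional[datetime] = None  # the subject can withdraw at any time

    def is_active(self) -> bool:
        """Consent only supports processing while it has not been withdrawn."""
        return self.withdrawn_at is None

consent = ProfilingConsent(
    subject_id="subject-123",
    purpose="Tailoring offers to likely interests",
    logic_summary="We score recent purchases and browsing to rank product categories.",
    envisaged_consequences="You may see different offers and promotional messages.",
    consented_at=datetime(2018, 5, 25),
)
print(consent.is_active())  # True until withdrawn_at is set
```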

The difficulty is that, as AI systems become more sophisticated, they also tend to become more opaque. It is not easy to distil an increasingly complex algorithm into a simple explanation, particularly given proprietary data sets, a lack of data logs, and the need to account for learning components that change the algorithm’s behaviour over time.
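
One pragmatic response is to favour interpretable models, or to pair an opaque model with a simpler surrogate whose reasoning can be stated plainly. The sketch below is one hedged illustration of the idea rather than a recommended explainability method: the features, data and model are invented, and a linear model’s coefficient-times-value contributions stand in for a fuller explanation pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy profiling model: scores how likely a customer is to respond to an offer.
# Features and training data are invented purely to show how an explanation
# could be generated from an interpretable model.
features = ["visits_last_30_days", "avg_basket_value", "email_opens"]
X = np.array([[2, 10.0, 0], [8, 45.0, 5], [1, 5.0, 1], [12, 80.0, 9]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> str:
    """Plain-language summary of which inputs pushed the score up or down."""
    contributions = model.coef_[0] * sample            # per-feature contribution
    ranked = sorted(zip(features, contributions), key=lambda fc: -abs(fc[1]))
    return "; ".join(
        f"{name} {'raised' if value > 0 else 'lowered'} the score"
        for name, value in ranked
    )

print(explain(np.array([10, 60.0, 7])))
```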

Consequently, all B2C companies should be examining their data collection strategy and ensuring they have a framework for seeking only the specific pieces of information that are relevant to the purpose and most valuable in profiling a consumer in relation to their brand offer. That way, when they ask individuals for permission, every aspect is entirely transparent, and the regulations are factored in at the AI product development level.

You’ve cleansed your data. You’ve identified what’s relevant. And you’ve found a simple way of explaining it. Now what?

We return now to the question of ‘significant impact’. One driving force of AI development is consumer lethargy; people are fundamentally lazy. The desire for convenience has so far trumped (in the majority of cases) concern for personal privacy. However, as automated decision making brings brands closer, and enables them to exert greater influence over individuals via a vast array of channels, what will constitute moral responsibility in the arena of AI application?

By 2020, every one of China’s 1.3 billion residents will be required to participate in a Social Credit System designed to evaluate trustworthiness using every type of personal data, including information on demographics, preferences and behaviours. Structures like this, such as credit scores, already exist across the globe, but none has so far been so all-encompassing. The fundamental question is eternal: quis custodiet ipsos custodes? Who owns the algorithm, what elements does it contain, and which information does it privilege? The argument against mandatory disclosure of these factors is that individuals might game the system, but the consequences of non-disclosure are much more severe.

Under the proposals, individuals deemed trustworthy will receive rewards including easier access to loans, car rentals without deposits, faster check-ins, travel without supporting documents and fast-tracked visa applications. Higher scores become a status symbol, supporting Rachel Botsman’s conclusion that the system is a form of gamified obedience based on distributed trust. The dystopian aspect lies in being deemed ‘untrustworthy’ by an undisclosed standard. By February 2017, 6.15 million citizens had been banned from taking flights for ‘social misdeeds’, and a further 1.65 million blacklisted people could not take trains.

This is an extreme example of the application of automated processing, but these outcomes clearly constitute ‘significant impacts’ upon an individual. Is it enough, then, that data subjects opted in while the SCS remains voluntary? This question must be asked at every level and reassessed for every application. In the case of gambling advertisements, does it constitute a significant effect to use hyper-targeting to serve an individual a message, to which they consented, that prompts them to step into a betting shop even though they are down to their last dollar? The question is not new, but it is absolutely critical that these factors are considered at the design and legislation level for applications of developing technologies under the umbrella of artificial intelligence.

If you’d like to know more, come speak to us at Adeptiv UK. We are a data-driven dialogue agency in London which specialises in helping clients get, keep and grow customers at scale.
