Does GDPR Mean The End Of Machine Learning In Advertising?

Automated data processing for profiling could be a thing of the past

25 Oct

Machine learning has had a profound impact on the advertising industry, enabling unprecedented analysis of data for consumer profiling and the distribution of personalized campaign materials. However, this could all be under threat from new European laws around data privacy - laws that are set to force an almost total re-evaluation of how many digital enterprises have been doing business.

On 25 May 2018, the EU General Data Protection Regulation (GDPR) comes into full effect after years in the making. The GDPR is the EU's latest rewrite of its data privacy laws. Its impact will be felt by organizations across the globe, since it applies to anyone processing the personal data of people in the EU, regardless of whether that data is captured and analyzed inside or outside the EU. This has had tech giants up in arms at what they perceive to be a war on their power, with most of them making billions from targeting ads using machine learning techniques. They will have their work cut out for them if they are to continue doing so under the new rules.

GDPR essentially regulates the use of all personal data in advertising. This includes both personally identifiable information collected through cookies and advertising IDs and what the EU terms 'sensitive information', such as data pertaining to racial or ethnic origin, health, finances, political opinions, and sexual preferences. The GDPR requires that companies erase personal data on request unless there is a legitimate reason to retain it, that those affected by data breaches are alerted, and that data protection is designed into their products and services. In essence, it gives people control over who can use their data and ensures transparency about what organizations are doing with it and why.

This presents a real issue when it comes to machine learning, which is often done in a black box away from prying eyes. The transparency that GDPR is trying to achieve around how consumer data is used to reach a particular decision is arguably impossible to deliver because of the complexity of high-quality machine learning models. Organizations also tend to keep their algorithms closely guarded secrets because they are their competitive edge, so even to the extent that they could be transparent, it is rarely in their interests to be so.

The two most important sections of the GDPR for machine learning are Articles 13 and 22, which specifically spell out a data subject's rights in relation to automated processing. Article 22 states that 'the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.' GDPR essentially only allows profiling and automated decisions with the express consent of the subject, when expressly authorized by EU or member state law (including fraud and tax evasion detection), when required to ensure the security and reliability of a service provided by the controller, or when necessary for a contract between controller and subject. Additionally, Article 13 gives individuals the right to 'meaningful information about the logic involved.' So even where profiling is permitted, automated decision-making must ensure fair and transparent processing, use appropriate mathematical and statistical procedures, and establish measures to ensure the accuracy of the subject data used in those decisions.
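What 'meaningful information about the logic involved' looks like in practice is still open to debate, but one pragmatic reading is that simpler, interpretable models make it far easier to provide. The sketch below is purely illustrative and assumes a hypothetical targeting model with invented features; it shows how a linear model's decision can be decomposed into per-feature contributions that could actually be explained to a data subject.

```python
# Illustrative only: a hypothetical ad-targeting model with invented features,
# showing how an interpretable (linear) model lets you report which factors
# drove a decision - one plausible way to offer 'meaningful information
# about the logic involved'.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["pages_viewed", "days_since_last_visit", "past_purchases"]

# Hypothetical training data: one row per user, columns match feature_names.
X = np.array([[12, 1, 3], [2, 30, 0], [8, 5, 1], [1, 60, 0], [15, 2, 4], [3, 45, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = user was targeted with the campaign

model = LogisticRegression().fit(X, y)

def explain(user_row):
    """Break the decision score (log-odds) into per-feature contributions."""
    contributions = model.coef_[0] * user_row
    return dict(zip(feature_names, contributions.round(3)))

new_user = np.array([10, 3, 2])
print("Contribution of each feature:", explain(new_user))
print("Probability of targeting:", round(model.predict_proba([new_user])[0, 1], 3))
```

In a real system the model and features would of course be the organization's own, and a more complex model would need dedicated explanation tooling rather than a simple coefficient read-out.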

The hazy language set out here makes it even harder for organizations to abide by the rules, and there is a significant amount of confusion. Firstly, 'significantly affects' is an extremely broad term and ill-defined in this context. Secondly, 'data subject' is also open to interpretation: it could be read to mean that solely automated decision-making is not prohibited if the data being processed is anonymized. Furthermore, 'data subject' is a singular term, so there is a question as to whether group profiling is allowed.

There is also an issue in what GDPR neglects to mention, particularly how individual rights to privacy should be balanced against the societal benefits that arise from AI innovation. In areas such as healthcare, the impact of such technology could be game-changing for billions, which would surely outweigh the individual right to privacy. Article 89 of the GDPR passes the responsibility for this balancing act to individual member states, but any major discoveries based on AI will likely require the use of personal data, so it is hard not to see clashes arising.

Regardless of the issues around GDPR's wording and the practicalities of applying it, companies need to be prepared. Those who fail to comply with the new regulations face hefty fines of up to 4% of global turnover. One way around the ruling is for human decision makers to intervene in or override automated decisions, which they already do in many cases. It could also mean simplifying machine learning projects so it is easier to explain how analytics results are generated, and putting a compliance process in place to ensure fair and proper use according to the regulations.

Ultimately, though, machine learning is still the future of marketing; GDPR simply means the free-for-all is over. Public trust in how organizations use data is incredibly low, with nearly 80% of respondents telling a recent survey by Quartz that they don't trust Facebook with their personal data. This has to change if we are going to realize the technology's potential for good, and regulation is vital for gaining that trust. It may be easy to complain, but ultimately companies will be better off if they can find a way to benefit from the new rules.
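To make the human-intervention point above concrete, here is a minimal, purely hypothetical sketch: decisions that would significantly affect someone, or that a model is unsure about, are routed to a human reviewer rather than being taken solely by the algorithm. The score_user stub and the thresholds are invented for illustration, not taken from the regulation or any real system.

```python
# Hypothetical sketch of a human-in-the-loop gate: significant or low-confidence
# decisions are referred to a human reviewer so they are not based solely on
# automated processing. The scoring stub and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Decision:
    user_id: str
    score: float       # model's confidence that the automated action is appropriate
    significant: bool  # does the action have legal or similarly significant effects?
    outcome: str

def score_user(user_id: str) -> float:
    # Stand-in for a real model; returns a probability between 0 and 1.
    return 0.62

def decide(user_id: str, significant_effect: bool, threshold: float = 0.8) -> Decision:
    score = score_user(user_id)
    if significant_effect or score < threshold:
        outcome = "refer_to_human_review"  # a person makes the final call
    else:
        outcome = "automated_approval"
    return Decision(user_id, score, significant_effect, outcome)

print(decide("user-123", significant_effect=True))
print(decide("user-456", significant_effect=False, threshold=0.5))
```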
