AI is currently one of the most used acronyms in the EU tech bubble.
Just last month the European Commission published its strategy on artificial intelligence, while several European countries are working on their own AI approaches: Finland revealed its plan on 18 December 2017, France published the Villani Report on 29 March 2018, and the UK government announced an Artificial Intelligence Sector Deal on 27 April 2018.
This all comes just a few weeks before the entry into force of the General Data Protection Regulation (GDPR) – the EU’s sweeping new rules on data protection that will affect not only companies based in the EU but also those established in other jurisdictions that process the data of people in the EU.
While in its AI strategy the Commission stressed the need to have a data-rich environment, the GDPR sets clear boundaries and conditions on access to personal data. Does one impede the effective implementation of the other? That was one of the questions I tried to answer during a recent panel discussion organised by the Center for Data Innovation.
During the debate I raised the following points:
- Artificial intelligence is a technology with the ability to take autonomous decisions or actions aimed at maximising the chances of success, based on experience. The fuel of this technology is data: the more data available for processing, the more accurate an AI application will be. China is today leading the race for data access. With 1.4 billion mobile phone subscriptions and 800 million internet users, more than the US and EU combined, China generates vast amounts of personal data that can be used with relative ease in the development of Chinese AI products and services. However, this does not necessarily mean that China will win the AI race. It is not just about access to large quantities of data; a sustainable framework built on ‘quality’ data will be needed to win in the long term.
- Citizens around the globe are increasingly interested in, and concerned about, their privacy and how their data is used. Through the GDPR, European policy-makers have decided that industry must be clearer and more precise in explaining to users how it collects and uses personal data. Companies must spell out why data is being collected and how it will be used. Consumers will have the right to access the data companies hold about them, the right to correct inaccurate information, as well as the right not to be subject to decisions based solely on automated processing. The upshot is that the GDPR will strengthen trust between consumers and industry. In the long run, this will contribute to business prosperity and innovation.
- Many industries have not yet started to digitise. When they do, and begin to develop AI systems, the GDPR will provide the framework. European data protection law leaves some room for innovation: businesses can process personal data without consent on a limited set of grounds, including legitimate interest (which covers direct marketing). The GDPR creates legal certainty as well as a basis for quality data. Companies starting to digitise now will have a clear framework of rules from the outset, while others must adapt in the middle of their transition.
Europe has the talent, the industrial base, and the willingness to lead in the field of AI. With its recent AI strategies, the continent is showing it now also has the appetite to spend heavily to meet its ambitions. The European Commission should increase investment to at least €20 billion by the end of 2020 and then aim for more than €20 billion per year over the following decade.
The challenge for businesses that want to deploy artificial intelligence is not the GDPR, but ensuring their consumers understand what they are consenting to. The GDPR may create additional hurdles, but it also offers a sustainable framework in which to operate and to access quality data. Of course, it remains to be seen how the GDPR will be implemented and how it will interact with AI in practice. Everybody agrees that striking the right balance between values, such as privacy, and competitiveness is needed for sustainable innovation. Without such a balance, the risk of public backlash is too high and could negatively affect the future adoption of AI products and services.
Victoria de Posson, Senior Consultant, Member of FTI’s AI Taskforce as well as a member of the Technology-Media-Telecommunication team at FTI Consulting Brussels.