What legal risks are you likely to encounter if you use artificial intelligence?
As soon as you start working with data and programming automated reasoning, you will come up against a number of legal issues. The following is a non-exhaustive list of examples.
Firstly, artificial intelligence assists decision-making on the basis of collected and processed information. As with any decision, the use of AI can thus render an economic player liable towards third parties. Resorting to artificial intelligence therefore raises a number of questions about the applicable liability regime, and anticipating this issue is paramount.
Using AI also exposes potential weaknesses in an organisation that is not sufficiently prepared or structured. For instance, some believe that voice recognition based on a sufficiently advanced AI system could already be easily diverted from its initial design and used for industrial espionage and for listening in on private conversations, both online and offline. Economic players must therefore not only understand the legal risks of AI with regard to their liability towards third parties, but also be aware of the risks that AI introduces into their own organisation.
Thirdly, AI requires the collection, use and mining of data that may be personal. In this respect, the new European legislation on data protection (Regulation no. 2016/679, known as the GDPR), which became applicable on 25 May 2018, imposes new rules on companies. Failure to respect these rules can result in substantial penalties.
The EU definition of “personal data” in this regulation is very broad: it covers classic information pertaining to a person’s identity, data transmitted over the Internet (IP addresses, cookies, etc.), as well as particularly sensitive data pertaining to a person’s life and lifestyle (biometric data, political opinions, etc.).
All companies (whether data controllers or processors) that are established in the EU and that process personal data are subject to the GDPR (1). In addition, companies not established in the EU are also subject to the GDPR when they process personal data of data subjects in the EU in connection with offering them goods or services, or with monitoring their behaviour within the EU (2).
Since they must ultimately provide an “acceptable” level of personal data protection, companies falling within the scope of the GDPR must question the origin of the personal data they use and the way they “process” it (within the meaning of the GDPR). They must, for instance, secure the data, in particular by preventing unauthorised third parties from accessing it.
Recent scandals have demonstrated the importance of precisely monitoring the data that economic players are likely to collect. Thus, beyond the restrictions it imposes on them, this regulation could represent an opportunity, by driving them to improve the visibility and monitoring of the personal data they use. Such data is quite often stored on various systems located in several countries, via data centres and cloud servers, which challenges certain concepts of territoriality.
It is therefore crucial to have a good understanding of the regulation before creating or using an artificial intelligence tool, as such a tool may rely on personal data falling within the scope of this fundamental text. An initial step should be to map, worldwide, the personal data subject to the GDPR, in order ultimately to guarantee the compliance of a whole set of data spread across several servers.
Lastly, given its modus operandi, AI is likely to give rise to issues of bias (in particular “exclusion”) introduced at the design stage. Such issues illustrate the ethical dilemmas accompanying the rise of this technology, whether weak (merely processing data) or strong (integrating machine learning), which must now be factored into developing business models.
(1) Article 3.1 of the GDPR (“establishment” criterion).
(2) Article 3.2 of the GDPR (“monitoring” criterion).