Trusting the use of AI systems in a law enforcement context: a question of good data?
Author: Dr Laura Drechsler, Research Fellow, Centre for IT & IP Law, KU Leuven
The use of Artificial Intelligence (AI) systems is on the rise in all aspects of public administration within the European Union (EU), from tax administration and social benefits to law enforcement. While all such uses are and should be under legal and ethical scrutiny, this is especially the case in the law enforcement context. Law enforcement authorities have been granted considerable power in our society to ensure public security by preventing, investigating and prosecuting crimes - an essential public interest. Accordingly, when AI systems are deployed in a law enforcement context, they should be of the highest quality, not least to maintain societal trust in the authorities and their powers. This requires such AI systems to be based on especially thorough and detailed research and testing. Yet, obtaining access to good (personal) data for law enforcement AI research is made complex by the difficult interaction of research and data protection law in the EU.
Law enforcement and the Draft AI Act
The Draft AI Act published by the European Commission in 2021 considers many AI systems used by law enforcement authorities as ‘high risk’ (Annex III). It also prohibits, with certain exceptions, real-time AI-based facial recognition in public spaces for law enforcement purposes (Art. 5(1)(d) Draft AI Act). The use of AI systems by law enforcement is therefore inherently classified as more sensitive, since law enforcement actions are ‘characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter’ (recital 38 AI Act proposal).
One factor mitigating the risks of law enforcement's deployment of AI, recognised at least in the recitals of the Draft AI Act, is to ensure that such systems are of the best 'quality'. As recital 38 of the proposal states: 'if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner'.
Access to 'good' data therefore appears to be a crucial prerequisite for managing the risks of law enforcement's reliance on AI tools and for ensuring that these tools are of high quality. Moreover, such access needs to occur at the research stage of AI tools, so that it can help prevent AI-related harms to individuals.
Law enforcement research on AI: which data protection rules apply?
Within the EU, the involvement of 'data' quickly leads to checking the scope of application of any act of EU data protection law. EU data protection law applies whenever 'personal data' undergo 'processing', with both terms defined broadly to ensure effective protection of individuals (see for example Article 4(1) and (2) General Data Protection Regulation – GDPR). Any action performed by an AI system would be considered processing, and the large datasets AI systems operate on are likely to contain personal data (albeit not exclusively), unless specific measures were taken to prevent this.
Since 2018, law enforcement authorities in the EU have their own set of data protection rules laid down in the Law Enforcement Directive (LED), which was adopted together with the much more famous GDPR as a package. As a Directive, it depends on Member States’ transposition into national law. Unlike the GDPR, which has a very broad material and territorial scope, the LED is limited to situations where law enforcement authorities (or other entities officially entrusted with law enforcement tasks) process personal data for law enforcement purposes (Article 3(7) LED). Law enforcement purposes are thereby defined as ‘the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’ (Article 1(1) LED).
Research is notably not listed among the purposes for which law enforcement authorities process personal data under the LED. This could mean that law enforcement authorities fall under the GDPR when they process personal data for research purposes, since the GDPR only excludes from its material scope processing activities for ‘law enforcement’ purposes (Article 2(2)(d) GDPR). Such an assessment appears confirmed by the LED itself, which specifies in Article 9 that as soon as law enforcement authorities are processing personal data outside the law enforcement purposes, the GDPR becomes applicable, including when such processing happens for ‘scientific or historical research purposes’ (Article 9(1) and (2) LED).
Yet, there are other provisions in the LED suggesting otherwise. Like the GDPR, the LED contains the core principles of 'purpose specification' and 'purpose limitation' (Article 4(1)(b) LED). Unlike under the GDPR, the exception to these principles does not require a 'compatibility test' – under which the new purpose must be compatible with the original purpose – but rather a necessity and proportionality test combined with another law enforcement purpose (Article 4(2) LED). For the latter, the LED provides that 'archiving in the public interest, scientific, statistical or historical use' would fulfil that condition. In the GDPR, 'archiving in the public interest, scientific, statistical or historical use' is understood as research and linked to various research exemptions (see for example Article 5(1)(b) GDPR).
The LED is therefore unclear about what regime applies when law enforcement authorities undertake research activities, and there is for the moment no guidance by case law of the Court of Justice of the EU or the European Data Protection Board clarifying this aspect. A systematic interpretation suggests that research conducted in-house by law enforcement authorities (including on AI tools) would fall in the scope of the LED whenever it is closely linked to their ongoing law enforcement tasks, whereas broader research activities, especially activities involving external partners such as research institutions, would instead be in the scope of the GDPR.
The border between these different research activities is, however, anything but clearly delineated. The inability to verify which legal regime applies to which activity in the context of law enforcement AI research complicates the task of ensuring that personal data can be used to improve the quality of AI tools where this can be justified under the law. It thus adds a layer of complexity to the already difficult interaction of research and EU data protection law.
The Draft AI Act as it was proposed by the European Commission clearly recognises that the use of AI systems by law enforcement authorities requires special care. Being classified as high-risk, such AI systems need to meet certain quality standards. The quality of AI systems is dependent on sufficient research and testing, which in turn also requires sufficient access to data. Within the EU, personal data, which are likely to be involved in any AI law enforcement research project, are protected by a set of data protection rules to ensure the safeguarding of individuals’ fundamental rights whenever their data are processed. The interaction of these rules with the research context is generally complex, but the fact that the LED appears to include contradictory provisions renders it even more unclear which standards apply.
If AI systems are to be used in a trustworthy manner for law enforcement purposes, it is crucial as a first step to gain more clarity on which data protection regime applies to which aspect of a law enforcement AI research project. Only then can it be ensured that those research activities happen with the utmost respect for the legal framework, which in turn should enable not only more access to personal data, but also better tested and researched AI tools.
Dr Laura Drechsler is a research fellow at the Centre for IT & IP Law of the KU Leuven, focusing on the protection of fundamental rights when data are used (Twitter, LinkedIn). The research was funded by the European Union. Grant Agreement No. 101073951 (LAGO project). The views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Note: This blogpost was first published on the Law, Ethics & Policy of AI Blog on 18 April 2023.