2023-01-24 15:30 - 17:00 TBD - non-iHub members, please register beforehand at: email@example.com
In this talk, Maša will explore whether AI can, as claimed, really be used to protect privileged information in criminal investigations involving large data sets.
Technological advances in the past few decades have significantly increased the array of digital tools and powers for the prevention and investigation of crimes, providing law enforcement with almost endless possibilities for electronic surveillance. Yet adequate regulation of these intrusive powers, particularly in relation to the right to privacy, remains difficult. One particularly thorny issue concerns the protection of 'privileged information' during the search of seized electronic devices and other gathered data (e.g., data obtained through hacking or other secret surveillance measures). Privileged information is a special type of private information: confidential communications between an individual and a holder of a legal privilege (such as a lawyer, doctor or priest), which may not be disclosed or used in a judicial proceeding. These communications are granted such high legal protection in order to preserve the general societal interest that any person wanting to consult a lawyer or a doctor can do so under conditions that favour full and uninhibited discussion.
Until recently, protecting this privilege in criminal investigations was a relatively simple affair. For instance, during the search of a lawyer's office (with paper and computer files), it is generally the lawyer herself who determines what counts as privileged information and where it is to be found, so that it can be excluded from the search. The matter becomes much more complex in the case of large (or huge) data sets gathered through secret surveillance measures, such as seizing whole servers (e.g., Ennetcom) or hacking (e.g., EncroChat, SkyECC). Here it is unknown whether the gathered data include privileged information, and there is no one to indicate when and where this would be the case. After all, secret surveillance measures are meant to take place without the suspect's knowledge. This raises the question: who is to determine, and in what way, what counts as privileged information in such cases? Can privileged information be filtered from the data set in such a way that the police do not become acquainted with its content?
In this talk, I will focus on the future legislative plans concerning the protection of privileged information in the context of large data sets in the Netherlands. As can be discerned from the Explanatory Memorandum of the draft Dutch Code of Criminal Procedure, AI is seen as the solution for protecting privileged information, which is to be (more or less) automatically filtered out of data sets (see Stevens and Galič 2021). Yet Dutch lawmakers seem to have a rather poor understanding of AI, resulting in blind trust in technology to solve this important legal and practical issue. My talk will end with an invitation to computer scientists and other tech folk to help me and other criminal law scholars explore the actual possibilities and limitations of AI when it comes to filtering privileged information out of large data sets in criminal investigations.
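To make the filtering problem concrete, here is a minimal, purely illustrative sketch of the kind of naive keyword-based filter one might imagine. It is not drawn from the draft Code, the Explanatory Memorandum, or any real system; all patterns, names and messages are invented. It also hints at why the problem is hard: a filter like this both over-flags and under-flags, and a message matching no pattern still reaches investigators.

```python
# Hypothetical sketch of naive privilege filtering (illustrative only).
# Real lawyer-client communications would rarely be this easy to detect.
import re

# Invented indicators of potentially privileged communication.
PRIVILEGE_PATTERNS = [
    re.compile(r"\battorney[- ]client\b", re.IGNORECASE),
    re.compile(r"\blegal advice\b", re.IGNORECASE),
    re.compile(r"@lawfirm\.example\b", re.IGNORECASE),  # hypothetical law-firm domain
]

def partition_messages(messages):
    """Split messages into (flagged, searchable).

    Flagged messages would be withheld from investigators pending
    independent review; everything else remains searchable. A message
    is flagged if any pattern matches anywhere in its text.
    """
    flagged, searchable = [], []
    for msg in messages:
        if any(p.search(msg) for p in PRIVILEGE_PATTERNS):
            flagged.append(msg)
        else:
            searchable.append(msg)
    return flagged, searchable

# Example: only the first message matches a pattern.
flagged, searchable = partition_messages([
    "Requesting legal advice on the pending charges.",
    "Meet at the usual place at 9.",
])
```

The obvious weakness is the point of the talk: whether a message is privileged depends on who the parties are and the legal context, not merely on its wording, so any purely automatic filter will miss privileged material and flag non-privileged material.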
This will be a hybrid event. Please send an email to firstname.lastname@example.org if you would like to join.