Stanley Greenstein, a researcher at Stockholm University, recently defended his PhD thesis Our Humanity Exposed – Predictive Modelling in a Legal Context. Greenstein describes how companies use a mix of big data, machine learning and other technologies not only to describe but to predict consumer behaviour. The information gathered can, Greenstein argues, be used to manipulate us and undermine the individual's autonomy.
First off, why did you pick this topic?
- It took me quite some time to land on this topic. It materialized slowly over the five years of my PhD studies, with contributions from my deputy supervisor, who has a technical background, says Greenstein.
The technologies discussed in the thesis are also employed by courts, police departments and other authorities to make predictions about, among other things, where crimes will be committed and by whom - Professor Andrew Guthrie Ferguson discusses this in his upcoming book The Rise of Big Data Policing. Greenstein has, however, narrowed the scope and focuses for the most part on decision-making in commercial contexts:
- Authorities in the US are making increased use of algorithms, for example to determine whether parole should be granted or whether a suspect should be released pending a criminal trial. There are initiatives that have started to investigate the racial bias embedded in these systems. I did not explore this too deeply, considering that my focus was on private companies, but I do refer to such examples, says Greenstein.
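To give a sense of what such an investigation can look for, here is a minimal sketch in Python of one common fairness check: comparing the false positive rate of a risk-scoring tool - non-reoffenders flagged as high risk - across two groups. The records, scores and threshold are all invented for illustration; this is a sketch of the idea, not the method of any actual system or initiative.

```python
# Hypothetical audit of a risk-scoring tool: compare false positive rates
# (people flagged high-risk who did not reoffend) across two groups.
# All data below is invented for illustration.

records = [
    # (group, risk_score, reoffended)
    ("A", 0.81, False), ("A", 0.90, True), ("A", 0.35, False), ("A", 0.72, False),
    ("B", 0.40, False), ("B", 0.85, True), ("B", 0.30, False), ("B", 0.25, False),
]

THRESHOLD = 0.7  # scores above this are treated as "high risk"

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` who were flagged high-risk."""
    flagged = [(score > THRESHOLD) for g, score, reoffended in records
               if g == group and not reoffended]
    return sum(flagged) / len(flagged)

for group in ("A", "B"):
    print(f"group {group}: false positive rate = {false_positive_rate(group):.2f}")
# A large gap between the two rates is one signal of the kind of bias
# that the initiatives Greenstein mentions set out to investigate.
```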
What are some of the risks involved with predictive analytics in this context?
- I would say that the biggest risk is what is referred to as the ’filter bubble’: individuals are fed content based on their preferences. A newspaper, for example, could potentially tailor its digital edition to each and every individual. Firstly, people are not aware of this. Secondly, the filter bubble is very comfortable, since it reinforces the individual’s existing ideas and there is no incentive to leave it. Yet there is a point to individuals being confronted with content they do not agree with: it shapes one’s identity and, ultimately, one’s view of the world. In the end it boils down to the value of democracy, which this potentially threatens (as was seen in the recent US elections).
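To make the mechanism concrete, the following is a minimal sketch, with invented articles and reader history, of how preference-based ranking feeds back on itself: the more a reader clicks one topic, the more of that topic the feed serves, which in turn generates more such clicks.

```python
# Minimal sketch of a preference-reinforcing news feed.
# Topics, articles and reader history are invented for illustration.

from collections import Counter

articles = [
    ("Election fraud claims debunked", "politics"),
    ("Local team wins derby", "sports"),
    ("New climate report released", "climate"),
    ("Parliament passes budget", "politics"),
    ("Transfer window roundup", "sports"),
]

def personalised_feed(click_history, top_n=3):
    """Rank articles by how often the reader has clicked that topic before."""
    preference = Counter(click_history)
    ranked = sorted(articles, key=lambda a: preference[a[1]], reverse=True)
    return ranked[:top_n]

# A reader who mostly clicks sports sees mostly sports...
history = ["sports", "sports", "politics"]
for title, topic in personalised_feed(history):
    print(f"[{topic}] {title}")
    # ...and every article shown feeds back into the history, so the
    # next feed narrows even further: the filter bubble closes itself.
    history.append(topic)
```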
What about the "black box problem" - is it a common problem that not even developers know how their algorithms work?
- I would say that the developers have a pretty good idea of how the technology works and of its hidden biases and risks. However, those using the technology may not. For example, a bank relying on the technology would not be able to say how a decision to grant or refuse a loan was arrived at. In addition, the bank would be nowhere close to the developers and would consequently only be able to adhere to the advice of the black box.
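A toy illustration of the situation Greenstein describes, using an off-the-shelf classifier (scikit-learn here, with invented applicant data): the bank’s side of the interface yields a verdict, but nothing in it explains which factors drove the decision.

```python
# Toy illustration of the "black box" loan decision Greenstein describes.
# Training data is invented; the point is the opaque interface, not the model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features: [income (kSEK/month), debt (kSEK), years at current employer]
X_train = np.array([
    [45, 100, 8], [22, 300, 1], [60, 50, 12], [18, 250, 0],
    [38, 120, 5], [25, 400, 2], [52, 80, 10], [20, 350, 1],
])
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = loan granted historically

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

applicant = np.array([[30, 200, 3]])
decision = model.predict(applicant)[0]

# This is all the bank clerk sees: a verdict with no stated reasons.
print("granted" if decision == 1 else "rejected")
```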
What is your stance on the GDPR and the rules regarding profiling and automated decision-making?
- The GDPR is vague and complex in general. The same goes for Article 22 on profiling and automated decision-making. It is based on principles that are dated in relation to the technology, and it is unrealistic in its assumption that individuals alone can control their own circumstances in relation to this technology.
You think the regulation is dated, that it doesn't take modern tech into account?
- The principles of data protection trace back to the thinking of Alan Westin and the HEW Commission in the US. The main idea was that technology was disruptive as far as privacy was concerned, but that the law could restore the balance by providing the individual with greater control. These principles slowly made their way into EU data protection regimes. They arise from an era where the technology was not nearly as complex and could still be understood by humans. This is no longer the case. I would go so far as to say that in the big data era, one cannot distinguish between different types of data - all data is potentially sensitive.
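One way to see why that distinction collapses: seemingly innocuous data can be used to infer a sensitive attribute. The sketch below is hypothetical - the purchase data, the attribute and the model are invented for illustration - but it shows why, in a big data setting, "non-sensitive" data may not stay that way.

```python
# Sketch: a sensitive attribute inferred from innocuous-looking purchase
# data. Entirely invented, for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: weekly purchases of [unscented lotion, vitamin supplements, wine]
X = np.array([
    [3, 2, 0], [2, 3, 0], [0, 0, 2], [1, 0, 3],
    [4, 2, 0], [0, 1, 2], [3, 3, 0], [0, 0, 3],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = pregnant (sensitive attribute)

clf = LogisticRegression().fit(X, y)

new_customer = np.array([[2, 2, 0]])
prob = clf.predict_proba(new_customer)[0, 1]
# "Harmless" shopping data turns out to carry a sensitive signal.
print(f"estimated probability of sensitive attribute: {prob:.2f}")
```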
One of your goals is to raise awareness about these issues. What is the situation right now - how aware are Swedish lawyers and legislators when it comes to emerging technologies, privacy and data protection?
- I would say that only a limited number of lawyers are aware of this technology. In addition, many of them have not seen the commercial implications that this technology could potentially have, which makes the issue seem somewhat irrelevant to practicing lawyers.
Your opponent said that subjects such as predictive modelling and the law demand in-depth knowledge of legal matters as well as a broader understanding of technology. Some say law students will need to learn tech skills in order to be competitive in the future. What are your thoughts on this?
- Absolutely. More technology is required at all levels. In the autumn I will be giving the students studying ‘rättsinformatik’ (legal informatics) a lecture purely about technology. Generally I feel that students study law because they are afraid of technology, mathematics and statistics. However, in this day and age, at least a basic knowledge of technology is required, considering that technology is seeping into all areas of law.
There are AI-based IT security tools on the market today. Can the same technologies that are used to profile us be used to protect us?
- Absolutely! Technology is a major part of the solution. First, however, what is required is an acknowledgement that there is a problem in need of solving. Also, only very few people have the technical capability to use technology to their advantage, hence the need for assistance.
Fredrik Svärd
[email protected]