After fully completing the Detain/Release task, I was left with the sense that I had been manipulated by the AI-informed recommendations in each case, yet at the same time I felt I wasn't being properly or comprehensively informed. The AI recommendations were based on existing data that was not always predictive of what the defendant might do if released (allowing, of course, that much human behavior is unpredictable). Here is an example:

Figure 1
The data presented a strong rationale for a "release" decision: the prosecution recommended release, the defendant claimed to be motivated to return to her job, and all three risk assessments (fail to appear, commit a crime, and violence) were rated "low". Yet after release the defendant was charged with a different crime, drug possession, which was not predicted by the AI data (Figure 2).

Figure 2
Reflecting on my own decision-making process across these 24 cases, I found that with each case I became less and less concerned with the defendant's statement, eventually disregarding it entirely, and more concerned with the three risk assessments, even though those assessments were graded only as low, medium, or high and provided no data about how they were determined. Although I noticed some suggestions of racial differences in the distorted illustrations of each defendant, I paid almost no attention to them.
One difficulty I had with the decision-and-consequences process in this game was connecting my decisions to release defendants (decisions based on very limited information) to the media reports that appeared afterward about a defendant's recidivism. At first I did not capture screenshots of the defendants' names and the recommendations on which I had based my decisions; when I saw a media story about some of them a few cases later, I realized I had forgotten who they were and why I had decided to release them. Partway through, I began capturing screenshots to remind myself of each defendant's details. If there had been an easily viewable record of the defendant's profile alongside each news story, it would have been easier to reflect on my past decisions and perhaps learn whether I had been misled or informed by the AI data.
It seems that a judge with the authority to detain or release a defendant should probably not base decisions on algorithms that merely have the appearance of quantitative rigor, and should instead take time to review deeper, qualitative data about each defendant. This means the judge must make decisions within a state of complexity, but at least they can be fully accountable for each decision rather than subsuming it within an AI algorithm.