
As artificial intelligence plays a growing role across our institutions, how can we make sure courts and justice systems use it responsibly?
Artificial intelligence is showing up in our lives more and more by the day, from our smartphones to search engines to social media. AI is estimated to be a $750 billion industry, with millions of users in homes, businesses, and our institutions—including the criminal justice system.
Enthusiasm for what these new technologies can accomplish is running high, but so are concerns about what they mean for people’s lives. As use of AI continues to spread, how can we make sure courts and justice systems use it responsibly? Our new policy brief makes the case for drawing “a line in the sand” when it comes to using AI in decisions that could impact people’s liberty or cause serious harm.
The decisions made in courts every day have life-changing consequences. Being sent to jail while awaiting trial can cost a person their job, housing, and access to much-needed services. The effects of a criminal conviction and jail time can follow people for years, potentially leading to cycles of trauma and limiting opportunities down the road.
These decisions cannot be taken lightly. While some see the potential for AI to increase fairness by minimizing human error and saving time per case, it can also do the opposite, leaving people’s lives in the hands of technologies we know little about.
Our new brief builds on our previous report, Minding the Machines, which called for prioritizing human values—what do we want to use AI for?—before rushing to implement new technologies. It shares insights from a working group we hosted with leaders in both criminal justice and tech. The discussion covered a range of questions, but one core point stood out: given how little we know about artificial intelligence, and the challenges of managing it, it’s crucial to avoid using AI to make decisions that could drastically change the course of a person’s life.
“The criminal legal system deprives people of liberty. It shouldn’t be using AI to do this,” said Sara Friedman of the Council of State Governments Justice Center. “There is a line when you are responsible for people’s lives; these are things you shouldn’t do.”
If the dangers of using AI to make these crucial decisions outweigh any potential gains, where can it be used safely? One promising example can be found in a recent study we supported with IBM in Jefferson County, Alabama, which used AI to uncover disparities in how fines and fees are used in the local legal system. The report found that low-income and Black communities in Jefferson County are disproportionately impacted by fines and fees, with middle-aged Black men in particular facing higher penalties than their white counterparts for many charges.
Courts, justice systems, and researchers can similarly use AI to analyze data and track how existing policies and programs are affecting the communities they aim to serve. That, in turn, puts us in a position to strengthen our approaches and better meet people’s needs.
Our policy brief also points to the potential for AI to reduce administrative burdens and free up time for more direct, person-to-person support. And it might be able to help case managers, social workers, and other support staff more effectively connect people to community-based services that can put them on a better path.
Still, even these relatively low-risk decisions need human oversight and strong guidelines rooted in shared values and goals to ensure AI is used safely, responsibly, and with dignity for everyone in the justice system. “The data may not be able to solve the problem here,” said one tech executive who participated in the working group. “We’re talking about human nature.” Whatever the future holds for AI in the justice system, it should be guided by the needs and values of the human beings on both sides of the bench.