The late John Goldkamp served as a Professor in the Department of Criminal Justice at Temple University for over 25 years.
What do you think about the idea of shedding light on failure in criminal justice?
I think it’s very important. Quite often, when we develop new ideas and test those ideas, the larger audience of policy officials, funders, and government just wants to know about the approaches that “work.” That’s only half the picture. I actually just finished a report for the governor in which I recommend a study of people who fail on parole—to learn about why they failed. We’re all very quantitatively oriented now. I’m guilty of that too. But qualitative data about the people who don’t do well could teach us a lot about failure. We label people as “failures” and attribute their failure to the various interventions they did or didn’t receive, when it may have been something else altogether, something unmeasured. We need a more complete picture of the kinds of obstacles those individuals have faced that may have nothing to do with the interventions we are examining.
Have you noticed any trends in criminal justice that make it easier or harder now to address failure?
Not really. I’d say the general trend over the past half century is to ignore failure. The real trend I’ve seen has been watching one promising movement after another fall off course, which is maybe its own type of failure—how a reform initiative gets distracted from its original aims. Back in the 1960s, my “first” movement was bail reform. Then came drug courts and community courts. I was a research bystander when the drug court started in Miami, and then I was there and played a role when the drug court was created in my own town of Philadelphia. Now, new drug court judges with no memory of the court’s history offer to explain the court to me—and it sounds so different from how it was understood when it started. When I see how commercialized the movement has become, it makes me wonder whether the original reason for having it in the first place is long forgotten, and whether the court is just becoming another routine and comfortable institution. Don’t get me wrong, I’m still a huge fan of drug courts and recognize the needs they are trying to fill, but I’m somewhat disappointed that my earlier high hopes for them haven’t been realized. I suppose it’s characteristic of change movements that challenging and promising ideas will take significant detours, get absorbed or diluted, and lose some of their excitement and meaning.
Do you think it’s possible to institutionalize success while avoiding the detours that lead to failure?
I think the key is having the leadership to really follow through after the initial success. New York experienced some success in the old days because of the caliber of the leadership associated with the early innovations—and it seems to have maintained that tradition. A lot of the early developers of drug court, from Miami to Portland, Las Vegas, and elsewhere, drew on major system leaders who were pathbreakers. Unfortunately, successes are too often attributed to the charisma of specific leaders rather than to leadership and education generally. You need reinforcing leadership and continuing education to keep the original message of the reform alive.
As a researcher, I’m sure you’ve had to deliver your share of bad results to various projects. Is there a typical response?
Absolutely. First, people have given a lot of effort personally to developing an innovation; they are likely to be defensive about research results that might show it’s not perfect. People are committed to succeeding; they are not going to be happy if they do not get the good news they expected. As a researcher, you sometimes get an implicit message: if you want to be well regarded and invited back as a trusted researcher, you need to emphasize good news. I have had a number of unpleasant experiences when I’ve had to report “bad” findings. The people involved have a tendency to be defensive and, almost instinctively, want to find something wrong with what you have done. “This couldn’t be right—you have made a mistake!” But that’s part of the research process, and that’s when trust starts to build, if it is going to. I’ve learned to warn my colleagues to expect the “explosion” and not to take it personally. If you have done the best job possible under the circumstances, stand your ground, but be willing to include the questions and limitations that site officials may legitimately wish you to include. That said, the really “good” places that are in it to bring about change will receive the bad news and say, “Thanks, let’s think about how to fix it.” They seem to understand that it’s important to know the bad and the good. It’s like the prescription drug ads on television: there are drawbacks and side effects, but the good effects still occur. We could have that same outlook in criminal justice.
Why do you think people are so resistant to admitting failure?
I think people don’t see failure because they’re too busy with the day-to-day needs of their jobs, dealing with the emergencies of each day. High-level leaders in particular are isolated because of their responsibilities; they do not want to dwell on unsupportive feedback. Also, the normal criminal process is fragmented, so most players in criminal justice don’t know how a given case turns out, and the larger picture is not easily seen. I think what is missing is the capacity for reflection; that is something the research community can offer.
Do you think part of the problem is that it’s hard to define what success or failure will look like?
No, I don’t think so. Even when people are clear about what they hope to see, there’s an unreasonable expectation that their enthusiasm will translate into positive results. They think that a well-intentioned project is guaranteed to work. When it doesn’t, it’s the fault of the stupid, misguided, “un-tuned-in” researcher who doesn’t know anything (doesn’t “get it”). So next time, they pick a different researcher. They check the Yellow Pages for researchers who have come out with the “right” findings.
How would you describe the reactions to high-profile failures, like parolee recidivism?
High-profile failures represent emergencies and demand some kind of immediate response. The tendency is to overreact to rare disasters and shut every door. That response is understandable, as long as it doesn’t last very long. Some communities call for the abolition of parole or the creation of longer maximum sentences, either with no supervision or with long, mandatory supervision, as a knee-jerk reaction to a horrible event. Those reactions—understandable as they are—are a huge waste of resources and do not resolve the underlying issues.
Do you think communities can plan for high-profile tragedies?
They certainly should. The need to manage society’s overreaction to an emergency is predictable to anyone in a position of responsibility. I think there are some lessons to be learned from the emergencies that have already happened. It would be informative to do so-called backward autopsies of those events to try to see how the emergency response could have been better managed.
So how do you think failure should best be framed by the research community going forward?
I’ve been thinking recently about a different way to look at failure. Instead of focusing on the specific examples of failure, we should be looking at the examples where a “high-risk” individual didn’t fail. We classify people as failures at various stages, but lots of them turn out to be quite fine. This is a failure of a different kind—a failure of conventional wisdom. In fact, one commercial classification system for high-risk offenders only gets it right 22 percent of the time. Now, there’s a failure. We need to understand what happens with the other 78 percent and why we misclassify. It’s not enough to accept that we misclassify people and say “too bad.” With our current emphasis on predictions, we could learn from our predictions not only when they are wrong in the usual sense (when someone we did not flag goes on to fail) but in both senses, including when we assume an individual will fail and that assumption turns out to be wrong. It won’t be easy, but I think it’s possible to design such a study and consider these implications more broadly.
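[Editor’s note: a minimal sketch of the arithmetic behind that answer. The counts below are hypothetical, chosen only so the hit rate matches the 22 percent figure Goldkamp cites; nothing about the actual classification system is assumed.]

```python
# Two senses of being wrong about a risk prediction, with made-up counts
# picked so that only 22% of people flagged "high risk" actually fail.

flagged_high_risk = 1000                            # predicted to fail
failed_as_predicted = 220                           # flagged and did fail
false_alarms = flagged_high_risk - failed_as_predicted  # flagged, turned out fine

hit_rate = failed_as_predicted / flagged_high_risk       # 0.22
false_alarm_share = false_alarms / flagged_high_risk     # 0.78

print(f"Flagged high risk:              {flagged_high_risk}")
print(f"Failed as predicted:            {failed_as_predicted} ({hit_rate:.0%})")
print(f"Assumed failures who did fine:  {false_alarms} ({false_alarm_share:.0%})")

# The more familiar error is the miss: someone who was not flagged but
# goes on to fail. Studying both errors, the misses and the false
# alarms, is what the answer above argues for.
```

In these terms, the 780 false alarms are the people we were willing to assume would fail but who turned out fine, exactly the group the answer says we should be studying.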
Are there any risks you see to a continued investment in research-based solutions?
I’ve noticed what’s almost a commercial interest in selling people on research-based models. If you can get someone to “buy into” the model (in the sense of actually purchasing it), then anything that’s not working can be explained away. You can blame failure on poor implementation, and the core model never gets challenged further. That’s a big and fundamental problem, because it means our knowledge base cannot grow.