The late Carol Weiss was a Professor of Educational Policy at Harvard Graduate School of Education where she taught courses on evaluation, organizational decision making, and research methods. Her ongoing research dealt with educational policymaking, the uses of research in policymaking, and the influence of ideology, interests, information, and institutional rules and structures on policy.
Does failure play a role in evaluation research?
Not as much as it should. I have written four books and about 60 articles on the subject of the effects of research and evaluation on decision making. In the early 1970s, I proposed establishing a journal called Null, which would publish articles about programs that had no effect. Nobody would support my idea! There’s all this knowledge about what doesn’t work that is not getting out there. The problem is that people tend to repeat what sounds appealing without knowing if it works in the first place. For example, in the last 20 years in education, things have changed so fast that the old things come back and get another shot!
What impact do negative findings or findings of no effect have on practice?
It depends on what people expect. If practitioners are in favor of some action and they find an evaluation doesn’t show positive effects, they tend to disregard it or make up excuses. They say the study isn’t very good, or the program hasn’t been running long enough, or the people operating it weren’t very skillful. On the other hand, if they’re against the program or the policy, and the study shows it wasn’t effective, they are apt to champion the findings.
You’ve spent some time looking at the drug education program DARE. Why pick DARE?
I picked DARE as part of a series of studies on why evaluation wasn’t having more effect on policy. At the time, DARE had been evaluated a number of times, and studies showed that it wasn’t effective in stopping kids from taking drugs in the long term. The program was still very popular, and I initially thought it was because practitioners were not paying attention to evaluation. When we got into the field, however, we found out they were paying attention. A number of school districts dropped the program. Others kept it because they valued the relationships they developed with police officers.
What do we know about drug abuse prevention programs like DARE?
It’s a discouraging case study. Curriculum evaluations over the past 40 years teach us that while one may be marginally better than another, we don’t really get any blockbuster successes. The main reason people keep doing what they’re doing is that they don’t know what else to do. We simply don’t have a lot of solutions on the shelf.
What does that say about what kinds of results the public should expect?
Realistic expectations are important. With criminal justice programs, it’s hard, slow work. It’s a little odd that people expect so much from them. When you run an advertising campaign for Toyota, changing sales by a percentage point or two is considered a huge success. The same is true in running a big election campaign. Why is that different in criminal justice?
What role can researchers play in spreading the gospel about the value of small effects?
The most important thing is that if you’ve found something that’s really promising, you have to be able to stick with it and reach a broad audience – not just the small number of people who are making next week’s decision. In order for a message to percolate, there has to be constant, steady work. The problem is that it’s usually not anyone’s job to do that.