By Ken Fullerton and Jade Maloney
Sandra Mathison, Professor of Education at the University of British Columbia, kick-started some soul-searching among Australasian evaluators when she opened the Australasian Evaluation Society (AES) International Evaluation Conference in Canberra in September 2017. She told the audience that evaluation is not delivering on its promise to support the public good because it is constrained by dominant ideologies, serves those with resources, and works in closed systems that tend to maintain the status quo. Soon after, the Presidential Strand of the American Evaluation Association Conference focused on how evaluation can support the public good through principles-based approaches.
If evaluation is built in upfront and asks the right questions (not only how is this program working, but how does it compare to alternatives?), it has the potential to support the public good. It can be used to identify improvements to a program’s structure and implementation that support better outcomes, and to inform decisions about whether a program should be expanded to benefit new communities or discontinued, so that resources can be reallocated to other public programs that are achieving a greater impact.
For example, an evaluation of an air pollution reduction initiative might inform adjustments in program delivery that result in better outcomes from which everyone stands to benefit. According to the World Health Organization (WHO), “Outdoor air pollution is a major environmental health problem affecting everyone in developed and developing countries alike” and reductions in air pollution “[c]an reduce the burden of disease from stroke, heart disease, lung cancer, and both chronic and acute respiratory diseases, including asthma.” This could, in turn, have other positive flow-on effects, such as reallocation of expenditure savings to other beneficial programs.
However, evaluation can only support the public good if it is useful and used. A recent study by Jade Maloney, a Director of ARTD Consultants in Australia, entitled Evaluation: what’s the use? (Evaluation Journal of Australia, in press), indicates that AES members perceive the non-use of evaluations as a significant problem in the region. This finding is consistent with the broader literature from North America and Europe, which suggests that many evaluation reports are sitting on shelves gathering dust instead of being used for (public) good.
Then there’s the question of whether evaluation can be considered a public good in and of itself. (The AES Conference debate on whether we should think of evaluation in terms of capital didn’t settle this, as amusing as the comparisons between evaluations and washing machines were).
A public good is one that is both non-excludable and non-rivalrous. This means that no individual can be excluded from using that good and use by one individual does not reduce the availability of the good to others. Fresh clean air, as illustrated above, and street lighting are two examples of public goods.
If an evaluation report identifies broad learnings about supporting a particular target group or addressing a certain policy problem, it can be used by multiple organisations. And one organisation using an evaluation report does not prevent another organisation from also using the insights to inform their work.
The hitch comes on the ‘non-excludable’ criterion. Commissioning organisations often don’t publicly release evaluation reports, which limits the capacity of other organisations to benefit from the insights gained into what works and how and, thus, the potential of evaluation to be a ‘public good’. Evaluators interviewed by Maloney identified the lack of sharing of evaluation findings as a barrier to the broader use of evaluation.
In recent years, across Australia, there has been a trend among government agencies to release more evaluation reports to the public. This increased transparency may enable evaluation to be a public good, as it means researchers can access a fuller range of evidence about program models in action and, in the case of realist evaluation, learn more about what works for whom, in what circumstances and how.
On the flip side, as some evaluators in Maloney’s research identified, there’s a need to ensure that the push to publish doesn’t dampen willingness to have open discussions about things that have not worked as intended, because this would limit the capacity of evaluation to support improvements for good. And when reports are not published, government agencies and evaluators could consider what learnings can be shared through conferences and online discussions. In this way, we can all help evaluation live up to its potential and be both a public good and for the public good.
Jade Maloney is a Director at ARTD Consultants in Sydney, Australia. She has a Master of Policy Studies from the University of Sydney and a Master of Arts degree in Creative Writing from the University of Technology Sydney.