Impact evaluation studies are often called upon to put a price on something that is impossible to value. Evaluators ought to take this limitation seriously when designing their research methods. This post discusses the problem and suggests some responses.

The burden of proof is said to lie with the party making a claim. In order to convince someone of some truth, you are obliged to support your claim with evidence. The audience hearing your claim will make a judgement by scrutinising the evidence you present in support.

This obligation should not be treated lightly. In some cases, it isn’t possible to marshal evidence to support our intuitions. Indeed, we can’t always know what we don’t know, and so we shouldn’t act as though we can. Recognition of this limitation is known as epistemological modesty. This perspective doesn’t mean that we have to admit defeat. Instead, it is a basis for action: action taken with an awareness of our necessarily limited understanding.

This approach has important lessons for evaluation. It suggests that we should resist the temptation to reduce the nuances of impact evaluation to a single result (e.g. an absolute figure in pounds sterling for overall impact, or a summary cost-benefit ratio). Although it might seem appealing to boast about a large absolute impact figure or to point to a high return on investment, the evidence required to support such definite figures may not be credible, and we should not invite the audience to question and undermine the claims being made. Instead we ought to make modest claims that allow for the greatest degree of flexibility over the evidence: it is better to be confident in a modest conclusion than unconfident in a bold one. Our efforts should be focussed on validating our assumptions and understanding the situations under which our conclusions might be undermined. The burden of proof is thereby lightened and the claims made more credible.

Our ability to do this will, of course, depend upon the circumstances, but examples of how this might work in practice include:

  • establishing the minimum level of evidence required to demonstrate that benefits are at least equal to costs (rather than straining evidence to calculate the maximum plausible ratio of benefits to costs),
  • comparing projects on the basis of the credibility of the evidence required to demonstrate that break-even position,
  • deriving relative rather than absolute measures (so that we can be certain that the comparisons are fair and reasonable),
  • using confidence intervals rather than point estimates to make inferences, and
  • using evaluation to guide research (to identify otherwise implicit assumptions that need to be validated).
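To make the break-even and confidence-interval points concrete, here is a minimal sketch in Python. The benefit figures, the cost per participant, and the use of a percentile bootstrap are all illustrative assumptions, not part of any real evaluation: the idea is simply that we claim break-even only when the whole interval clears the cost, rather than quoting a single headline ratio.

```python
import random

random.seed(0)

# Hypothetical per-participant benefit estimates (in pounds) from an
# evaluation sample. These figures are invented for illustration.
benefits = [120, 80, 150, 60, 200, 90, 110, 70, 130, 95]
cost_per_participant = 100  # assumed programme cost per participant

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of `data`."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(benefits)
print(f"95% interval for mean benefit: £{lo:.0f} to £{hi:.0f}")

# The modest claim: benefits at least cover costs only if the entire
# interval sits at or above the cost per participant.
if lo >= cost_per_participant:
    print("Evidence supports at least break-even.")
else:
    print("Cannot confidently claim break-even on this evidence.")
```

Note that the output is an interval and a yes/no break-even judgement, not a headline benefit-cost ratio: the claim stays inside what the evidence can bear.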