
3 Myths You Were Brainwashed to Believe About Evaluations


Well, it may be a bit extreme to say that you were 'brainwashed', but you may very well have been misled to believe the following myths about evaluations.


Myth #1: Evaluations only take place in the middle or at the end of a project


The truth is that an evaluation can take place at any point: before, during or after a project.


Formative evaluations take place in the lead-up to a project, as well as during implementation, in order to improve the project design as it is being rolled out (continual improvement).


Summative evaluations, by contrast, occur at the end of a programme cycle and provide an overall description of programme effectiveness.


As a matter of fact, not only can evaluations be done at any point in the project cycle, but in recent years real-time evaluation has also emerged from the humanitarian sector.


'Real-time evaluations run fast'


Just as the name suggests, real-time evaluation (RTE) is an approach whose 'primary objective is to provide feedback in a participatory way in real time (i.e. during the evaluation fieldwork) to those executing and managing the programme' (Cosgrave, Ramalingam & Beck, 2009).


RTE can affect programming as it happens, which makes it similar to monitoring. This type of evaluation allows for quick feedback on operational performance and helps identify systemic issues. An RTE rarely takes more than two weeks to complete – usually 8–10 days (a week in the field and a couple of days in the country office).


[Table comparing RTE with traditional evaluations]




Myth #2: The types of evaluation are limited


The truth is that there are many different types of evaluation to choose from, depending on the context, the purpose you hope to achieve, and the information and resources available.



'Cornucopia of evaluation types'



Formative evaluation includes several evaluation types:

  • needs assessment determines who needs the program, how great the need is, and what might work to meet the need

  • evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness

  • structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes

  • implementation evaluation monitors the precision/accuracy of the program or technology delivery

  • process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures


Summative evaluation includes the following evaluation types:

  • outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes

  • impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole

  • cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values (see the worked example after this list)

  • secondary analysis re-examines existing data to address new questions or use methods not previously employed

  • meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
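To make the cost-benefit idea concrete, here is a simple worked example using purely hypothetical figures. Suppose a programme costs $100,000 to deliver and its outcomes, once monetized, are valued at $250,000. The benefit-cost ratio is $250,000 ÷ $100,000 = 2.5, meaning every dollar invested returns an estimated $2.50 in benefits; a ratio above 1 suggests the benefits outweigh the costs. A cost-effectiveness analysis, by comparison, would express results as cost per unit of outcome (e.g. cost per child vaccinated), which is useful when outcomes are difficult to put a dollar value on.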


Myth #3: Only the success stories from an evaluation have value


Oftentimes, the motivation behind an evaluation is a desire to show that a project was a success, thereby proving that public funds or donor monies were well spent and that communities were well served. However, most evaluations show mixed results, and the 'brilliant failures' of a project are just as useful as the successes.


"What may look like a failure on the surface, may potentially yield more"


One example comes from a health, education and water programme in Mali. The evaluation findings revealed that a failure to establish terms of engagement led to divisions and miscommunication that affected the programme's implementation.


As a result of this finding, the different stakeholders developed a Memorandum of Understanding (source: https://www.admittingfailure.org/failure/nicole-mclellan-greg-madeley/).


The moral of the story is that a 'failure' unearthed by an evaluation can (and should) be used for learning and programme improvement.


Do you know of other myths about evaluations that are held to be true? Please share in the "Comments" section below.

