It is No Secret: 10 Steps to Design a Monitoring and Evaluation (M&E) System

In the last article I introduced the Monitoring and Evaluation (M&E) system. In this article, the 10 steps to designing an M&E system are discussed.


However, before launching into the steps, please note that the development of an M&E system is a participatory exercise. Staff at different levels of the organisation who will be expected to maintain or use the new M&E system should always be consulted. This might include staff at head offices or secretariats, staff in regional or country offices, and staff at programme or project level.


Step 1: Define the scope and purpose


This step involves identifying the evaluation audience and the purpose of the M&E system. M&E purposes include supporting management and decision-making, learning, accountability and stakeholder engagement.


Will the M&E be done mostly for learning purposes, with less emphasis on accountability? If so, the M&E system would be designed to promote ongoing reflection for continuous programme improvement.


If the emphasis is more on accountability, then the M&E system could collect and analyse data with more rigour, timed to coincide with the reporting calendar of a donor.

It is important that the M&E scope and purpose be defined beforehand, so that the appropriate M&E system is designed. It is of no use to have an M&E system that collects mostly qualitative data on an annual basis while your 'evaluation audience' (read: 'donor') is keen to see the quantitative results of Randomised Controlled Trials (RCTs) twice a year.


'Be on the same page as the "evaluation audience"'


Step 2: Define the evaluation questions

Evaluation questions should be developed up front and in collaboration with the primary audience(s) and other stakeholders to whom you intend to report. Evaluation questions go beyond measurement to ask higher-order questions, such as whether the intervention was worth it or whether the same result could have been achieved in another way (see the example under Step 3 below).



Step 3: Identify the monitoring questions

Monitoring questions break each evaluation question down into the specifics that will be tracked on an ongoing basis. For example, for an evaluation question pertaining to 'Learnings', such as "What worked and what did not?", you may have several monitoring questions, such as "Did the workshops lead to increased knowledge on energy efficiency in the home?" or "Did the participants have any issues with the training materials?".



The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important not to start collecting data before thinking through the evaluation and monitoring questions; doing so can lead to collecting data just for the sake of collecting data, data that provides no relevant information to the programme.
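To make that traceability concrete, here is a minimal sketch in Python. The first evaluation question and its monitoring questions are the examples used above; the second is a hypothetical addition for illustration:

```python
# Hypothetical mapping: each evaluation question is broken down into
# the monitoring questions that will supply the evidence for it.
monitoring_plan = {
    "What worked and what did not?": [
        "Did the workshops lead to increased knowledge on energy "
        "efficiency in the home?",
        "Did the participants have any issues with the training materials?",
    ],
    # Illustrative second evaluation question (not from the article):
    "Could the results have been achieved in another way?": [
        "What did each workshop cost per participant?",
    ],
}

# Every planned piece of data collection should appear under a question;
# if it does not, it is data collected for its own sake.
for evaluation_q, monitoring_qs in monitoring_plan.items():
    print(evaluation_q)
    for mq in monitoring_qs:
        print(f"  - {mq}")
```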



Step 4: Identify the indicators and data sources

In this step you identify what information is needed to answer your monitoring questions and where this information will come from (the data sources). It is important to consider data collection in terms of the type of data and the research design. Data may come from primary sources, such as the participants themselves, or from secondary sources, such as existing literature. You can then decide on the most appropriate method to collect the data from each data source.
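As an illustration, one row of an indicator matrix can be captured in a simple structure. This is only a sketch: the field names and example values below are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a hypothetical indicator matrix (illustrative fields)."""
    monitoring_question: str
    indicator: str
    data_type: str      # "quantitative" or "qualitative"
    data_source: str    # primary (e.g. participants) or secondary (e.g. literature)
    method: str         # the collection method chosen for that source

indicators = [
    Indicator(
        monitoring_question="Did the workshops lead to increased knowledge?",
        indicator="% of participants scoring >= 80% on a post-workshop quiz",
        data_type="quantitative",
        data_source="primary: workshop participants",
        method="pre/post knowledge test",
    ),
    Indicator(
        monitoring_question="Did participants have issues with the materials?",
        indicator="Recurring themes in participant feedback",
        data_type="qualitative",
        data_source="primary: workshop participants",
        method="focus group discussion",
    ),
]
```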

“Data, data and more data”


Step 5: Identify who is responsible for data collection, data storage, reporting, budget and timelines

It is advisable to assign responsibility for the data collection and reporting so that everyone is clear about their roles and responsibilities.



Collection of monitoring data may occur regularly over short intervals, or less frequently, such as half-yearly or annually. Likewise, the timing of evaluations (internal and external) should be noted.

You may also want to note the resources needed to collect the data (staff, budget, etc.). It is advisable to have some idea of the cost associated with monitoring, as you may have great ideas for collecting a lot of information, only to find out that you cannot afford it all. A rough tally, as sketched below, can flag this early.
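A back-of-the-envelope costing takes only a few lines; the activities and figures here are purely illustrative:

```python
# Hypothetical costing check: rough unit costs per data-collection
# activity, summed and compared against the available M&E budget.
planned_activities = {
    "pre/post knowledge tests": 1_500,
    "focus group discussions": 4_000,
    "household survey": 12_000,
    "external mid-term review": 20_000,
}
me_budget = 25_000  # illustrative figure

total = sum(planned_activities.values())
print(f"Planned M&E cost: {total} vs budget: {me_budget}")
if total > me_budget:
    print("Over budget: drop or scale back the least critical activities.")
```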


Additionally, it is good to determine how the collected data will be stored. A centralised electronic M&E database should be available for all project staff to use. The options range from a simple Excel file to comprehensive M&E software such as LogAlto.
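At the simple end of that range, even a small structured store kept in one agreed place does the job. Here is a minimal sketch using SQLite; the table layout, file name and values are hypothetical (and this is not LogAlto's API):

```python
import sqlite3

# A hypothetical centralised store for indicator values.
conn = sqlite3.connect("me_database.sqlite")
conn.execute("""
    CREATE TABLE IF NOT EXISTS indicator_values (
        indicator    TEXT NOT NULL,
        period       TEXT NOT NULL,   -- e.g. '2024-Q1'
        value        REAL,
        baseline     REAL,
        target       REAL,
        collected_by TEXT
    )
""")
conn.execute(
    "INSERT INTO indicator_values VALUES (?, ?, ?, ?, ?, ?)",
    ("% participants passing quiz", "2024-Q1", 62.0, 40.0, 80.0, "Field officer A"),
)
conn.commit()

# Any staff member can read progress against targets from one place.
for row in conn.execute(
    "SELECT indicator, period, value, target FROM indicator_values"
):
    print(row)
conn.close()
```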


LogAlto is a user-friendly, cloud-based M&E software package that stores all information related to the programme, such as the entire logframe (inputs, activities, outputs and outcomes) as well as the quantitative and qualitative indicators with baseline, target and milestone values. LogAlto also allows for the generation of tables, scorecards, charts and maps, and quarterly progress reports can be produced from it.


Step 6: Identify who will evaluate the data and how it will be reported

In most programmes there will be an internal evaluation and an independent evaluation (the latter conducted by an external consultant).


For an evaluation to be used (and therefore useful), it is important to present the findings in a format that is appropriate to the audience. A 'Marketing and Dissemination Strategy' for the reporting of evaluation results should be designed as part of the M&E system. See my article, '4 Reasons Why No One Reads Your Evaluation Report', for more information.


‘Have a strategy to prevent persons from falling asleep during the presentation of evaluation findings’


Step 7: Decide on standard forms and procedures


Once the M&E system is designed, there will be a need for planning templates; designing or adapting information collection and analysis tools; developing organisational indicators; developing protocols or methodologies for service-user participation; designing report templates; developing protocols for when and how evaluations and impact assessments are carried out; developing learning mechanisms; designing databases; and the list goes on (Simister, 2009).


However, there is no need to reinvent the wheel. There may already be examples of best practice within an organisation that could be exported to different locations or replicated more widely. This leads into Step 9 below.



Step 8: Use the information derived from Steps 1-7 above to fill in the 'M&E System' template

You can choose from any of the templates presented in this article to capture the information. Remember, they are templates, not cast in stone. Feel free to add extra columns or categories as you see fit.
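For instance, a template consolidating Steps 1-7 could be a spreadsheet with one row per indicator. The column names and the example row below are a hypothetical starting point, meant to be extended:

```python
import csv

# Hypothetical column set consolidating Steps 1-7; add or rename
# columns to suit your own template.
columns = [
    "Evaluation question", "Monitoring question", "Indicator",
    "Data source", "Collection method", "Frequency",
    "Responsible", "Budget", "Reporting / audience",
]

rows = [{
    "Evaluation question": "What worked and what did not?",
    "Monitoring question": "Did the workshops increase knowledge?",
    "Indicator": "% of participants scoring >= 80% on a quiz",
    "Data source": "Workshop participants",
    "Collection method": "Pre/post test",
    "Frequency": "After each workshop",
    "Responsible": "Field officers",
    "Budget": "1,500",
    "Reporting / audience": "Quarterly report to donor",
}]

# Write the template to a CSV file that opens directly in Excel.
with open("me_system_template.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
```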


Step 9: Integrate the M&E system horizontally and vertically


Where possible, integrate the M&E system horizontally (with other organisational systems and processes) and vertically (with the needs and requirements of other agencies) (Simister, 2009).

Try as much as possible to align the M&E system with existing planning systems, reporting systems, financial or administrative monitoring systems, management information systems, human resources systems or any other systems that might influence (or be influenced by) the M&E system.


Step 10: Pilot and then roll-out the system


Once everything is in place, the M&E system may first be rolled out on a small scale, perhaps just at the Country Office level. This will give the opportunity for feedback and for the 'kinks to be ironed out' before a full-scale launch.


Staff at every level should be aware of the overall purpose(s), the general structure and the key focus areas of the M&E system.


It is also good to inform people about the areas in which they are free to develop their own solutions and those in which they are not. People will need detailed information and guidance in the areas of the system where everyone is expected to do the same thing, or to carry out M&E work consistently.


Such guidance could include guides, training manuals, mentoring approaches, staff exchanges, interactive media, training days or workshops.


Final Thoughts


In conclusion, my view is that a good M&E system should be robust enough to answer the evaluation questions, promote learning and satisfy accountability needs, without being so rigid that it stifles the emergence of unexpected (and surprising!) results.


Kruno Karlovcec, a fellow blogger, made a valid observation: the 10 steps should be envisioned as a loop, with the last step feeding back into Step 1, rather than as a strictly sequential process. A feedback loop facilitates continuous development and improvement. I quite agree! Thanks, Kruno.


References:

Simister, N. (2009). Developing M&E Systems for Complex Organisations. Oxford: INTRAC.


Ann-Murray Brown
Monitoring, Evaluation and Facilitation