Design a monitoring and evaluation plan for your projects, and learn what evidence to collect and which methods will help you track progress.
Tracking your impact is about how you gather evidence to reveal the difference that you and your collaborators have made. Developing a systematic way to track this will help you report on outcomes and impacts that occurred because of your research. It will also help improve future impact pathways by revealing what works, and what gets in the way.
Tracking your impact is the part of the research impact cycle that is most like research. It is about understanding what data you need to evidence things, and then setting up a monitoring and evaluation (M&E) system to make capturing or gathering it easier. This module continues the 'joined up' approach to identifying and then reporting on research impact, building on the tools used in earlier modules to develop a fully fleshed out M&E plan.
Takeaway points
- Your programme logic is one of the best starting places when designing your framework. For each 'box', consider 'how would we know' if that has happened or been achieved? This will help you start thinking about the data / evidence you need (see the sketch after this list for one way to lay this out).
- Mixed methods and triangulation are fundamental design elements for M&E. That means gathering quantitative and qualitative data to build a credible and compelling evidence base of your impact and looking for connections between the two.
- Inputs, activities, and outputs are generally easier to evidence because they are more tangible, and are therefore more likely to be things you can 'count'.
- Start by identifying what you need, not what you have. It is tempting to build a system around the data you know you have or can easily access, but this doesn't mean it's the right data. Instead, think about what data or evidence you need, and then consider where and how you can locate it.
- Context and need differ from project to project. So the data and evidence you need to collect may change. There are some 'standard' data sources or types you can routinely use or access, but expect (and plan) that you will also need some 'bespoke' data collection tools and methods.
- If you don't have a baseline (a picture of the situation before you undertook the project), it's not the end of the world. There are many ways you can credibly estimate baselines, especially since impacts are usually judged by your stakeholders, not you.
- Data collection doesn't happen by magic. Part of planning how you will track your impact also includes being clear about who is responsible for key activities.
- Many M&E activities are closely linked to project management or project delivery - look for opportunities to build additional data collection into existing activities, e.g. every time you have a project team meeting, ask for evidence and activities that shed light on your impact journey.
- Some data collection activities may need to be conducted by experts or others outside of your core project team, particularly for things like validating with stakeholders if outcomes and impacts have been achieved.
- It's better to start smaller and make your efforts achievable than to design a Rolls Royce that is too complex or expensive to implement.
- Get expert advice and guidance! M&E is a profession, and requires expertise and practice to do well. While CRIs are well placed to understand these kinds of systems, designing them well always benefits from some expert guidance or 'peer review'. This doesn't need to be an external consultant; you may have in-house expertise who can help. Getting in touch with your iPEN rep is a good starting place if you're not sure.
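To make the 'how would we know' step concrete, here is a minimal sketch (written in Python purely for illustration; a spreadsheet or planning template does the same job) of how an M&E plan can link each programme-logic 'box' to an indicator, a data source, and a named owner. Every project, indicator, and role named below is hypothetical.

```python
# A minimal, illustrative M&E plan: each row links a programme-logic 'box'
# to 'how would we know', an indicator, a data source, an owner, and timing.
# All entries are hypothetical examples, not a prescribed template.
me_plan = [
    {
        "logic_box": "Workshops delivered to regional councils",  # an output from the programme logic
        "level": "output",
        "how_would_we_know": "Attendance records and workshop materials exist",
        "indicator": "Number of workshops delivered, and attendees per region",
        "data_source": "Sign-in sheets; project team meeting notes",
        "who_collects": "Project coordinator",
        "when": "After each workshop",
    },
    {
        "logic_box": "Councils use the model in flood planning",  # an outcome
        "level": "outcome",
        "how_would_we_know": "Council planning documents cite or incorporate the model",
        "indicator": "Number of council plans referencing the model",
        "data_source": "Stakeholder interviews; document review",
        "who_collects": "External evaluator",  # outcome validation often sits outside the core team
        "when": "Annually",
    },
]

# Data collection doesn't happen by magic: flag any row with no owner or no source.
for row in me_plan:
    missing = [key for key in ("data_source", "who_collects") if not row.get(key)]
    if missing:
        print(f"'{row['logic_box']}' is missing: {', '.join(missing)}")
```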
What's the difference between monitoring and evaluation?
Monitoring and evaluation are often combined when discussing the approach and process for tracking the difference a project, programme, initiative, policy, or (in our case) research makes. However, they mean slightly different things.
Understanding the differences between the two is helpful as you start to explore the resources below, and on our impact resources page. iPEN supports the inclusion of BOTH of these complementary activities.
Monitoring refers to data collection activities that 'track' the progress toward and achievement of outcomes and impacts.
At the end of a research project, all data collected in this way provides an evidence base for use in reports on what you delivered, and the extent to which you achieved your intended outcomes and impacts.
The evidence is generally gathered by developing a set of indicators and measures, typically linked to your programme logic (and ideally your evaluation questions).
Better Evaluation is one of the most trusted sources of information on M&E. It has some really useful pages to help you understand what indicators and measures are, as well as some really clear guidelines on ways to collect data.
Evaluation focuses more on asking important questions about the project, programme, or research.
Traditionally, evaluation has focused on asking questions about the 'merit' or 'worth' of something. More recently, evaluation questions have focused on reflection, supporting ongoing learning and improvement. These questions are then answered by the data and evidence that are available, which will always start with monitoring data.
While there are no 'rules' about what evaluation questions should focus on, the OECD-DAC has developed a widely used set of evaluation criteria around which key evaluation questions can be framed: relevance, coherence, effectiveness, efficiency, impact, and sustainability.
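As one illustration, the sketch below pairs each OECD-DAC criterion with an example evaluation question. The six criteria are real; the question wordings are our own illustrative examples, not official OECD-DAC text.

```python
# The six OECD-DAC evaluation criteria, each paired with a hypothetical
# example question (illustrative wordings, not official OECD-DAC text).
oecd_dac_questions = {
    "relevance":      "Is the research addressing the need or problem it was designed for?",
    "coherence":      "How well does the work fit with other initiatives and policies in this space?",
    "effectiveness":  "Is the project achieving, or likely to achieve, its intended outcomes?",
    "efficiency":     "Are resources being used well to deliver the planned outputs?",
    "impact":         "What difference has the research made, intended or otherwise?",
    "sustainability": "Are the benefits likely to last beyond the life of the project?",
}

for criterion, question in oecd_dac_questions.items():
    print(f"{criterion}: {question}")
```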
It is also important to note that what is useful and meaningful to ask of a project or piece of research will change depending on what stage it is at.
For this reason evaluation is traditionally broken into formative, process, and summative/impact evaluation, as you will focus on different things at different stages. For example, in the early stages of a project it makes little sense to focus on outcomes. Instead, checking whether the design and early implementation seem likely to meet the identified need or problem is more important and useful.
Evaluation is usually included for:
- Accountability purposes - to provide a more robust (and often more independent) way to verify that what was planned, commissioned, and funded happened
- To support and strengthen delivery - a 'utilisation focus' where data / evidence is gathered around questions that support reflection, learning, and improvement.
How do you combine them?
Combining M&E is relatively straightforward as they are complementary activities, but it is advisable to get some expert help at the start.
Tools like your programme logic will help identify key areas you will need to be able to 'tick off' and report on when it comes to progress. These, combined with your evaluation questions, will help you assess your progress and achievements more broadly.
Where evaluation can really support your monitoring is by asking reflective questions that help you check your assumptions and ensure your approach is still appropriate. Remember, your programme logic is a 'living document', so use it to keep the team and your stakeholders united around a common set of outcomes as the project progresses.
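One simple way to picture this 'joining up' is sketched below: each evaluation question is mapped to the monitoring indicators that help answer it, and any question with no evidence behind it is flagged for a bespoke collection method. All question and indicator names here are hypothetical.

```python
# Illustrative sketch of joining up M&E: map each evaluation question to the
# monitoring indicators that help answer it. All names are hypothetical.
monitoring_indicators = {"workshops_delivered", "attendees_per_region", "plans_referencing_model"}

evaluation_questions = {
    "Are the workshops reaching the intended councils?": ["workshops_delivered", "attendees_per_region"],
    "Are councils actually using the model?": ["plans_referencing_model"],
    "Is our engagement approach still appropriate?": [],  # a reflective question with no indicator yet
}

# Flag any evaluation question that no monitoring data currently supports.
for question, indicators in evaluation_questions.items():
    if not any(i in monitoring_indicators for i in indicators):
        print(f"No monitoring data yet for: {question}")
```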
Key resources
M&E is a huge area, and you can easily find yourself going down rabbit holes. Here are some websites we think are particularly helpful and that are relatively straightforward to understand.
Remember this is a specialist area, and one that encompasses approaches and data collection methods from a diverse range of disciplines and professions, so if you're starting to feel overwhelmed, try to bring it back to the basics, and focus on what you need to have first.
- A layperson's guide to evaluation, by the Social Policy Evaluation and Research Unit (Superu). Superu is now defunct; this and other related resources are now hosted by the Social Wellbeing Agency. It was developed to help the New Zealand public sector better understand what evaluation is, and how to make the best use of it to improve policy, services, and initiatives.
- A companion handbook to the one above, this one focuses specifically on understanding evidence. While the audience is policy makers, it has been written to be relevant and accessible to anyone with an interest in what counts as evidence when evaluating interventions and programmes funded with public money for our collective benefit.
- The Beyond Results website has some brief background on monitoring and evaluation, as well as some ideas and explanations of potential methods you might consider. The same page also has an Evaluation Planning Template, which is especially useful in helping you consider and develop evaluation questions.
- Wageningen University's Centre of Development Innovation has long advocated for and supported good quality monitoring and evaluation practices to support and manage for sustainable development impact (M4SDI). It has developed an M4SDI portal with a variety of resources and guidance material, including a comprehensive textbook that is free to download.
- As noted earlier, the Better Evaluation website is one of the most widely used online resources in the evaluation community. It is an Australian-based global not-for-profit organisation, managed by Patricia Rogers, a well-respected member of the international evaluation community. Start with their Rainbow Framework for ideas, advice, and guidance.
Other resources
There are literally thousands of books and websites you can trawl through to get ideas on how to monitor and evaluate projects and programmes, including research.
We continue to refer to some of the most widely used references and guidance material developed by multilateral agencies with decades of experience in this area, including the World Bank and the OECD; most of these are free to download.
The Research Impact Handbook is another useful guide, as is the M4SDI book (above). The Developing Monitoring and Evaluation Frameworks textbook is also a worthwhile reference.
In all cases, much of the guidance comes down to first establishing what you are trying to do, and then considering what evidence is most important for knowing whether you are on track or achieving what you planned.
Interested in more?
Check out our impact resources and impact news pages, sign up to the networks or professional organisations we've listed here, or subscribe to updates from iPEN.
Don't forget to check our impact glossary if you need to demystify some of the terminology used!