Keys to Successful Policy Advocacy M&E

After moving through the policy advocacy planning exercises, you can conceptualize M&E as a continuous cycle to which each stakeholder contributes. This page introduces the key components of a successful policy advocacy M&E program. The checklist below should be used as a guide, both during policy advocacy planning and continuously throughout the M&E program cycle.

Keys to Successful Policy Advocacy Monitoring and Evaluation:

Your policy advocacy M&E program must be operational at the outset of your policy advocacy programming. This means research, development, and implementation of an M&E program should happen simultaneously with policy advocacy planning. Use the policy advocacy planning exercises you have completed to conduct a SWOT Analysis and create a spreadsheet of SMART Objectives. Advocacy programs require the flexibility to operate in a fast-changing environment, yet they also demand close attention to, and a record of, the direction of programmatic changes, which makes project planning and M&E all the more essential.

Once you are ready to begin designing your M&E program, the following checklist, “The Principles for Policy Advocacy Monitoring and Evaluation,” is a useful guide:

In building a monitoring and evaluation program, your organization should:

  • Develop clear programs that can be easily adapted for various locales and levels of advocacy.
  • Focus on testing the links in the chain of policy change, rather than merely assessing the elements in isolation.
  • Develop programs that fully contextualize contribution, including understanding the intervention of other actors and an overall sense of the complex dynamics at play.
  • Design monitoring and evaluation systems to fit around existing advocacy programs, establishing a firm link to planning and budgeting processes.
  • Secure active involvement of senior managers in review and analysis processes.
  • Prioritize the facilitative role of monitoring and evaluation professionals in building evaluative capacity organization-wide, including through design of ways of working that make it easy for people to engage meaningfully in monitoring and evaluation processes.
  • Take active steps to re-balance accountability where necessary, countering the tendency to prioritize upward accountability, particularly to funders.
  • Pay attention to building capacity for strategic, as well as tactical, learning and adaptation.
  • Develop an overarching approach to monitoring and evaluation that is intentionally designed to challenge and test strategy and the assumptions underlying it, as well as to improve implementation of existing strategy.
  • Gather evidence of monitoring and evaluation costs and benefits.
  • Constantly look for ways to simplify your policy advocacy M&E program.

Common Issues and Solutions for Monitoring and Evaluation:


  • M&E activities compete for time and attention with day-to-day program work. One way to mitigate this difficulty is to set aside time from program work and dedicate it to M&E activities.
  • For your organization to benefit from the effort put into monitoring and evaluation activities, leadership must actively participate.
  • It can be difficult and time-consuming to design surveys and databases, train staff, and create tables to consolidate and analyze data. Plan ahead, leave ample time, and monitor and evaluate your M&E program itself so you can eliminate time-inefficient M&E activities.
  • A grave mistake is taking the time to design surveys but not leaving time for data entry and analysis. It may be wise to choose two pieces of information to gather and analyze meaningfully; this is ultimately better than implementing a large monitoring plan that your organization will never have the time to evaluate.

Monitoring and evaluation tends to focus on serving “upward” accountability, with space for strategic learning somewhat constrained:

  • The tendency is for M&E to primarily serve the needs of funders, donors, grantmakers and others to whom practitioners are “upwardly” accountable.
  • M&E processes appear to be less oriented towards uncovering strategic limitations than tactical ones. This could be because identifying operational learning carries less risk, whereas exposing strategic flaws or weaknesses could come at a cost in terms of reputation or future funding.
  • One consequence of this is that monitoring and evaluation tends to be more directed towards considering how existing strategies can be delivered more effectively, rather than calling those strategies into question altogether.
  • To actively avoid these issues and improve your monitoring and evaluation program to yield strategic benefits, continuously re-evaluate how clearly defined your objectives, strategies, and tactics are within an overall theory of change.

Quantity vs. Quality of Analysis:

  • In M&E programs there is often tension between the desire for numerous indicators and the need for more meaningful analysis of progress and achievements.
    • M&E staff often perceive a growing demand from funders and senior managers for advocacy results to be represented in quantified form.
    • Implications of this trend toward quantification can be that:
      • Reporting focuses on what is inherently quantifiable (generally, inputs and outputs), with the result that information about advocacy can be presented in somewhat underwhelming ways, falling short of making the strategic case for investment.
      • Attempts are made to ‘quantify’ qualitative information (e.g. assigning rating scores to levels of support among targets); such approaches rely, necessarily, on subjective assessment and are not robust.
  • Seeking to contextualize data by pairing quantitative and qualitative information is one approach to mitigating these potential disadvantages (e.g. interviews or surveys can show trends in implementation failures).
  • In addition, ensure that the quality of data and the consistency of reporting are recognized as crucial to M&E effectiveness.

Distinguishing your organization’s advocacy efforts:

  • There is no universal method with which to monitor and evaluate policy advocacy success in a way that will be meaningful to philanthropists, foundations and grantmakers.
  • Sophisticated, formally designed evaluation mechanisms are often unsuccessful at showing progress toward achieving a policy project’s goals.
  • Relationships between advocacy work and indicators of progress are complex and often lack a clear causal link.
  • Successful advocacy evaluation requires a deep knowledge of local political issues, strong networks and trustworthy allies, an ability to assess organizational quality, and accurate management of time and funds.


Keeping Your M&E Program Simple:

  • The main goal of your M&E program should be to make your organization’s policy advocacy programs more efficient. A tedious, time-consuming monitoring and evaluation program will produce the opposite result.
  • The unpredictable nature of outcomes for policy advocacy programs means that focusing on why changes need to be made is more important than allocating too many resources to collecting data on pre-determined indicators.
  • Do not get bogged down in attempting to quantify and track every move you make. For your M&E program to remain effective, it must combine simple, easy-to-use processes with a thorough, systematic approach to collecting and tracking policy advocacy progress.