Section One: Introduction to Public Diplomacy Monitoring
Introduction to public diplomacy monitoring: what monitoring is, how it differs from evaluation, why it matters, and where to start, in alignment with 18 FAM 300 guidance.
R/PPR developed this Monitoring Toolkit to help Public Diplomacy (PD) staff and implementing-partner organizations conducting PD-funded activities plan and monitor initiatives and activities. It incorporates and contextualizes guidance from 18 FAM 300 (Managing for Results), A/OPE grant requirements, and other R Family research units into monitoring recommendations for PD sections.
Monitoring should be useful and practical and should avoid overburdening staff, while still maintaining high standards for accountability, transparency, and good stewardship of U.S. taxpayer dollars. Recognizing the enormous contextual, cultural, logistical, and programmatic diversity within PD, as well as the high level of autonomy and discretion PD staff have at post, this toolkit promotes a flexible approach to monitoring.
Monitoring skills accrue over time as individual practitioners and PD sections build capacity. We recommend that new monitoring practitioners focus first on the following foundational topics, which will set the team as a whole up for success:
a) Understand the PD monitoring process overview, and
b) Use the monitoring selection criteria to determine which section activities to monitor.
Once a section has mastered these two topics, those assigned to monitor the PD section’s work should delve further into the “PD Monitoring Toolkit,” which contains instructions on how to complete a PD logic model, a monitoring plan, and an after-action review (AAR) for the section activities the PD section has chosen to monitor. Together, these three components are known as the “PD monitoring process.”
Monitoring is only one small part of the work PD sections are asked to do, so within a section, specific practitioners may be asked to specialize in monitoring and to develop the skills and capacity to complete the monitoring process. This guide includes in-depth information and resources about each part of the PD monitoring process that will help sections and individuals develop the capacity to meet their monitoring needs.
What is monitoring and evaluation (M&E)?
Monitoring and evaluation (M&E) are terms often used together, but they are conceptually and operationally distinct. The two complement each other and share practical linkages; most notably, evaluation is nearly impossible without sound monitoring. In the simplest terms, monitoring tells us what happened, while evaluation helps us understand why it happened. The main focus of this document is monitoring; however, it is important to understand how the two concepts work together.
Monitoring is the continuous process of collecting and reviewing data to measure what is happening during a section activity or initiative. It focuses on whether the activity or initiative is meeting its planned outputs and outcomes. Ideally, PD sections should build some level of monitoring into most section activities, using the monitoring selection criteria to guide prioritization and decision-making.
Monitoring describes what happened during section activity implementation and often focuses on answering factual questions: who, what, when, and where. That is, monitoring provides basic descriptive data. You should use monitoring to inform decisions that improve efficiency and effectiveness. Monitoring data can indicate when (or whether) you need an evaluation to understand how or why you observe certain results, and monitoring activities provide vital inputs for planning and conducting an evaluation. Monitoring can also help define environmental and audience conditions, which in turn can inform future section activity design and implementation decisions.
Evaluation is a one-time assessment to determine whether, how, or why a section activity met its stated goals and objectives. It is intended to understand larger processes or outcomes, asking and answering “how” and “why” questions. Conducting an evaluation can help you answer questions about the effectiveness of your planning and implementation or why your section activity design did or did not work as intended. Like monitoring data, evaluations can strengthen both current and future initiative designs.
Monitoring is an essential partner to evaluation because monitoring data can tell you when an evaluation is needed, can help you understand why you observe certain results, and can help you plan and conduct an evaluation. Evaluating PD interventions requires significantly more resources and expertise than monitoring them.
Importantly, not all PD initiatives, section activities, programs, and campaigns need an evaluation. Indeed, very few section activities are good candidates for a formal evaluation. Formal evaluations can be time-consuming, costly, and technically complex, and they are typically conducted by independent experts from outside the PD section. For these reasons, the Monitoring Toolkit focuses only on monitoring. We recommend that you consult with R/PPR evaluation specialists before conducting or commissioning a formal evaluation.
Why is monitoring important?
We strategize and plan to align our goals, resources, and methods so that we use them most effectively. We monitor our work so that we can improve our strategy and planning. Each part of the monitoring process contributes to increased effectiveness over time. Logic models outline how section activities lead to expected changes in priority audience groups and thus to the overall goal. Monitoring plans detail how you will track progress, identifying indicators and data sources to measure actual versus expected results. Integrating monitoring data into an AAR allows PD teams to assess implementation, identify strengths and weaknesses, and recalibrate to improve future PD activities.
By incorporating monitoring from the early planning stage, PD practitioners can:
- Plan deliberately for section activities, with clearly defined goals and a road map;
- Identify unexpected performance gaps quickly to make corrections and improvements;
- Determine whether an activity met its goals;
- Apply lessons learned to design future initiatives and section activities, thereby increasing effectiveness; and
- Share lessons learned and best practices with other posts.