Impact, a complex concept. The result of monitoring and evaluation activity is clearly and directly appreciable when it concerns operational aspects: activities and outputs realized, beneficiaries reached, and immediate and visible consequences of a project that has just been completed. These data, which are mainly obtained through monitoring activities, are also the most important for project implementers, who are held directly accountable.
It is, on the other hand, more complex to answer the broader and more strategic questions typical of evaluation work: what was (and what will be) the impact of the project? Has the project achieved its ultimate goal, that is, its overall objective? Did it produce the change it set out to achieve when it was designed?
Answering these questions with objective evidence requires collecting data in a period after the project has ended and using resources that may go beyond what is made available under a single European project.
Moreover, impact is conceptually and statistically complex to measure because many factors contribute to it: it is not easy to “isolate” the project’s contribution from a plurality of other concomitant factors. For example: how appreciable are the effects of a poverty reduction project on a community, and how do we isolate them from a plurality of other factors (positive or negative) such as the effects of the economic situation, industrial policies, other parallel projects, and the initiative of community members?
However, measuring impact remains a legitimate concern: impact is an integral part of the project’s rationale and its monitoring and evaluation framework; it is the starting and ending point for anyone implementing or funding a project; it is what defines in the broadest terms the actual success of the project.
Again, the following discussion makes no claim to scientific rigor or comprehensiveness, but aims to translate the concept of “impact” into some insights that may be “within the reach” of European project implementers.
Impact as counterfactual analysis. Counterfactual analysis defines impact as the difference between data collected at the end of an intervention (“factual” data) and data collected in a situation characterized by no intervention (“counterfactual” data). This is the most “scientific” approach to impact evaluation: in fact, it is used in medical research, which compares “treatment subject” groups with “control” groups.
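In arithmetic terms, the counterfactual definition of impact is simply a subtraction. The following sketch illustrates it with purely hypothetical figures (the variable names and numbers are illustrative, not drawn from any real project):

```python
# Minimal sketch: impact as the difference between "factual" and
# "counterfactual" measurements. All figures are hypothetical.

factual = 62.0          # e.g., % of the treated group employed after the intervention
counterfactual = 55.0   # e.g., % employed in a comparable "control" group

impact = factual - counterfactual
print(f"Estimated impact: {impact:.1f} percentage points")
```

The entire difficulty, as discussed below, lies not in the subtraction itself but in obtaining a credible counterfactual figure.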
This approach is difficult to use in the social field, as it assumes:
- The existence of indicators that are uniquely verifiable with analytical tools and have an equally verifiable and unambiguous link to the dimension they are intended to measure.
- The possibility of identifying a “control group” with characteristics and dynamics fully comparable with those of the project’s target group.
These are not easy conditions for many projects involving “human” and social aspects, in which:
- The correlation between data and measured phenomenon may be stronger or weaker, but it is hardly unique and depends on the intervention of multiple factors.
- The situations of groups and communities are very varied, complex and (upon close analysis) difficult to compare.
Despite its limitations, counterfactual analysis remains a useful “ideal benchmark” for measuring impact.
Impact as a change in a trend. Counterfactual analysis can be used in an attenuated form by defining impact in simpler, more general terms as “the ability to produce a change in trajectory” in a trend or phenomenon.
While not totally quantitative and scientific, the analysis of the project’s data against some benchmark trends provides a measure of its impact, that is, how successful the project was in “changing” an existing trend. This type of analysis can be traced in more formal terms to the “difference-in-differences” method, which analyzes the dual variation of a variable: over time (before, after, ex post) and between subjects (recipients and non-recipients).
This method can be applied with a greater or lesser degree of complexity and rigor, depending on one’s ambitions and the resources available. It may be applied by:
- Circumscribing the scope of the phenomenon being measured to that to which the project contributed most strongly and directly (to increase the level of correlation between indicator and measured objective);
- Comparing the evolution recorded by project data against reference points as “close” as possible to the project’s target population (a “quasi-counterfactual” situation);
- Combining different and complementary comparison references (i.e., “triangulating” different data and viewpoints to increase the reliability of the results), if possible;
- Including in the analysis, if possible, multiple moments of measurement (to establish a trend), including “follow-up” measurements (e.g., one, two or three years after the conclusion of the project);
- Accompanying the analysis with an assessment of the factors (positive or negative) that may have influenced the data and “trends” of the project and the references used.
For example, in a project dedicated to job placement for young people in the 15-24 age group, residing in an urban area prone to social problems, the changes recorded in employment data for that age group can be compared:
- By the project among its beneficiaries (baseline vs. final data: the “factual” data).
- In the project intervention area (or in another urban area subject to social problems), during the same period (a “quasi-counterfactual” figure).
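The comparison above can be sketched as a simple difference-in-differences calculation. All figures below are hypothetical illustrations, not real project or area data:

```python
# Difference-in-differences sketch for the youth-employment example.
# All numbers are hypothetical.

# Employment rate (%) among project beneficiaries (15-24 age group)
beneficiaries_before = 40.0   # baseline ("factual" starting point)
beneficiaries_after = 52.0    # final measurement

# Employment rate (%) in a comparable urban area over the same period
# (the "quasi-counterfactual" reference)
area_before = 42.0
area_after = 45.0

# First differences: change over time within each group
change_beneficiaries = beneficiaries_after - beneficiaries_before
change_area = area_after - area_before

# Second difference: the project's estimated contribution to the change
did_estimate = change_beneficiaries - change_area
print(f"Difference-in-differences estimate: {did_estimate:+.1f} points")
```

In this sketch the beneficiaries improved by 12 points while the reference area improved by 3, so the estimate attributes 9 points of change to the project; as the text notes, this figure should still be weighed against other factors affecting the two populations.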
A more detailed and specific example is provided at the end of the Monitoring and Evaluation Framework example.
The choice of comparative metric (or the simultaneous use of multiple comparative references) may vary depending on the availability of data. Differences between “factual data” and “quasi-counterfactual data” can be analyzed (and possibly weighted, or corrected) in light of other factors and variables that may have affected the two reference populations:
- Positive factors: for example, positive results obtained from parallel initiatives in the area (e.g., professionalizing courses, support for internships, tools for “matching” labor supply and demand…).
- Negative factors: for example, economic difficulties of businesses in the area or worsening enabling conditions (e.g., a decrease in resources allocated by government to education or social welfare).
The Theory of Change can assist in this weighting activity, as it provides a “mapping” of all the conditions necessary to bring about a desired change.
Impact as “stories” of change. What has been illustrated so far follows a logical and structured pattern, more or less quantitative, based on the concept of “measuring” the change achieved against what the project aims for.
In some projects this scheme may be complex or insufficient to correctly and fully illustrate qualitative changes, unexpected phenomena and effects not defined in the initial metrics. For this reason, there are broader qualitative or untethered methods of measuring impact against initial “goals” (e.g. “goal-free” evaluation).
Again, a comprehensive, exhaustive and rigorous treatment of the topic is beyond the ambitions of this Guide. However, it is important to draw attention to the importance of qualitative and less structured aspects in measuring the impact of a project.
In operational terms, this means asking the following questions: how have the lives of beneficiaries (or beneficiary organizations) changed as a result of the project? What role did the project play in their evolution, their “history,” and their individual experience? In the perception of the beneficiaries (or beneficiary organizations), what would their lives and history have been like without the project intervention? Can these small individual “stories” in turn produce new small and striking “stories of change”? Through individual “stories” and points of view, is it possible to draw a line that identifies the project’s parameters of success and its weaknesses?
“Stories” can be collected and evaluated through various methods of qualitative analysis, already mentioned in the previous sections: interviews and focus groups; case study writing and narrative surveys; and more specific methods, such as “most significant change” and systems for analyzing and graphically representing trends and qualitative changes. This type of analysis adopts an empirical approach based on “induction,” that is, on formulating general conclusions from particular cases. It should not be considered a “plan B” compared to other methodologies, as it may be able to capture different, deeper or at least complementary elements than more structured systems of analysis.
An analysis through “stories” of various kinds also makes it possible to develop communication and dissemination material that is interesting and usable by a broad audience of specialists (by virtue of its depth of analysis), by partners and stakeholders (who can in turn make it their own and disseminate it), and by the broader audience of laymen.
These aspects are relevant and appreciated in the context of European projects. Reporting and communication are interrelated aspects that respond to a common goal of accountability and transparency towards institutions, citizens and the project’s target community.
Deepening concepts and approaches on impact. For those who wish to approach impact measurement and management methodologies from an alternative and complementary point of view, we recommend an extensive review of guides and tools produced by specialized organizations in the impact investing field, to which we have devoted a separate in-depth article.
Impact investing is characterized by the methodical and conscious mobilization of resources to achieve measurable impact in areas where it is lacking (the principles of intentionality, measurability and additionality). While the proposed guides and tools do not have a specific focus on the scope of our Guide, they have points in common with what is described in this chapter and can provide additional insights for measuring and managing impact in European projects.
We highlight two more guides devoted to project evaluation.
They are not recent and come from particular fields, but they may provide interesting insights for those working with European projects.
1. A guide developed as part of CIVITAS, a European Union initiative dedicated to urban mobility.
Although its examples are drawn from that specific sector, it provides a very clear, comprehensive and general treatment of project and program evaluation.
2. A “user-friendly” project evaluation manual developed in the U.S. by the National Science Foundation (a government agency), which takes a systematic, comprehensive and scientific approach to project evaluation.