Measuring What Matters: Designing, Experimenting, and Learning Our Way to an Impact System

📌 Quick Overview

  • Approach: Proactive planning + continuous experimentation + structured learning
  • Key experiments: Treble survey notifications, FUPA form for Agile Products, embedding measurement in modules
  • Tech backbone: Alchemer, Salesforce, Tableau, R, Google Sheets, Data Warehouse
  • Innovations: Unified indicators, standardized instruments, IRIS+ inspired catalogue
  • Dimensions: Entrepreneur capacity, Business performance, Sustainability, Community, Access to markets & finance
  • My role: Architect of the system; applied systems thinking, built the team, designed the Theory of Change

Introduction
Impact systems are often presented as if they were born fully formed: neat logframes, perfect indicators, and smooth dashboards. The reality is far more dynamic. At Agora Partnerships, change was constant—program structures, delivery models, and donor priorities evolved all the time. In the Impact team, we responded by combining proactive planning with ongoing experimentation and structured learning.

For us, design was not about freezing a blueprint. It was about anticipating what could emerge and ensuring the system could adapt. Experimentation was a deliberate choice: testing new ideas, refining processes, and integrating insights quickly. And learning was never passive—we actively built mechanisms to capture and use knowledge, so every cycle of work became the foundation for the next.


From Planning to Experimentation
When I stepped into the role of Director of Impact, Agora lacked a unified way to track evidence across programs. Each project had its own indicators and formats, which made comparison—and decision-making—difficult. Instead of seeing that fragmentation as a problem to fix, we treated it as an opportunity to design something coherent and forward-looking.

Some of our early experiments included:

  • Embedding measurement modules directly into products as they launched, even without long exploratory studies, to capture immediate feedback.
  • Creating FUPA, a short form to track Agile Products (bootcamps, workshops, masterclasses). It allowed us to compare products of very different lengths while measuring pertinence and user acceptance (a minimal scoring sketch follows below).
  • Testing Treble notifications to tie the data collection pipeline together and improve survey response rates from entrepreneurs.

Each experiment was grounded in design, but we never waited for perfect information. We planned enough to act, tested quickly, and refined continuously.
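To make the FUPA experiment concrete, here is a minimal sketch in R (already part of our stack). The product names, column names, and 1-5 scale are illustrative assumptions rather than the actual FUPA instrument; the point is that one short, standardized form lets products of very different lengths be read on the same pertinence and acceptance scale.

```r
# Minimal sketch, not the real FUPA schema: one row per participant
# response, scored 1-5 on pertinence and user acceptance.
library(dplyr)

fupa_responses <- tibble::tribble(
  ~product,           ~format,       ~pertinence, ~acceptance,
  "Export Bootcamp",  "bootcamp",    5,           4,
  "Export Bootcamp",  "bootcamp",    4,           4,
  "Pricing Workshop", "workshop",    4,           5,
  "B2B Masterclass",  "masterclass", 4,           4
)

# Because every Agile Product uses the same two scores, products of very
# different lengths can be summarised and compared side by side.
fupa_responses %>%
  group_by(product, format) %>%
  summarise(
    avg_pertinence = mean(pertinence),
    avg_acceptance = mean(acceptance),
    n_responses    = n(),
    .groups        = "drop"
  )
```

Keeping the form short was the design choice that made this comparability possible: a handful of questions, identical across formats.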


Building the Technical Backbone
The backbone of the Agora Measurement System (AMS) was technological, but its strength came from strategic decisions rather than just tools. We integrated Alchemer, Salesforce, Tableau, R, Google Sheets, and eventually a Data Warehouse. What made the difference was how we used them:

  • We established a unified set of indicators across the organization, ensuring consistency and eliminating the need to reinvent metrics for every donor request.
  • We standardized instruments and questions so that data from different programs could be connected in a coherent database.
  • We developed a catalogue of indicators, inspired by IRIS+, that helped us map what we had, what we were missing, and how to grow systematically.

These decisions turned disparate data points into a system capable of telling a consistent story about impact.
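As an illustration of how those decisions fit together, the sketch below (in R, with hypothetical indicator codes, programs, and values, not Agora's or IRIS+'s actual ones) stacks records from two programs and enriches them from a shared catalogue. This is, in miniature, what a unified indicator set makes possible: one coherent table instead of incompatible spreadsheets.

```r
# Hypothetical sketch of the unified-indicator idea; codes, dimensions,
# and values are invented for illustration.
library(dplyr)

indicator_catalogue <- tibble::tribble(
  ~indicator_code,  ~dimension,                      ~label,
  "BIZ_REV_ANNUAL", "Business economic performance", "Annual revenue (USD)",
  "JOB_FT_COUNT",   "Business economic performance", "Full-time jobs",
  "CAP_SKILL_IDX",  "Entrepreneur capacity",         "Capacity self-assessment index"
)

program_a <- tibble::tibble(
  entrepreneur_id = c("E-001", "E-002"),
  indicator_code  = c("BIZ_REV_ANNUAL", "JOB_FT_COUNT"),
  value           = c(120000, 4),
  program         = "Accelerator"
)

program_b <- tibble::tibble(
  entrepreneur_id = c("E-103", "E-104"),
  indicator_code  = c("BIZ_REV_ANNUAL", "CAP_SKILL_IDX"),
  value           = c(45000, 3.8),
  program         = "Agile Products"
)

# Because both programs report against the same codes, their records can
# be stacked and joined to the catalogue in one coherent database.
bind_rows(program_a, program_b) %>%
  left_join(indicator_catalogue, by = "indicator_code")
```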


Anchoring in Five Dimensions
We organized measurement around five key dimensions:

  1. Entrepreneur capacity development.
  2. Business economic performance.
  3. Environmental and social sustainability.
  4. Community and networks.
  5. Access to markets and finance.

This framing forced us—and our donors—to go beyond generic indicators. For example, instead of reporting a single “number of jobs created,” we distinguished between full-time and part-time employment. This specificity mattered: it made our reporting more credible and allowed entrepreneurs to see their own progress more clearly.
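A small, hypothetical example of that disaggregation in R (field names and figures are assumptions for illustration): employment is captured by contract type and reported as separate full-time and part-time counts rather than a single jobs figure.

```r
# Illustrative only: entrepreneur IDs, contract types, and headcounts are
# invented to show the disaggregation, not real Agora data.
library(dplyr)
library(tidyr)

employment <- tibble::tibble(
  entrepreneur_id = c("E-001", "E-001", "E-002"),
  contract_type   = c("full_time", "part_time", "full_time"),
  headcount       = c(3, 2, 5)
)

# One row per entrepreneur, with full-time and part-time jobs as separate
# columns instead of a single aggregate "jobs created" number.
employment %>%
  group_by(entrepreneur_id, contract_type) %>%
  summarise(jobs = sum(headcount), .groups = "drop") %>%
  pivot_wider(names_from = contract_type, values_from = jobs, values_fill = 0)
```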

Donor negotiations often revealed a tension between simplicity and rigor. Our role was to bridge both, ensuring data remained credible while meeting external expectations.


Organizational Realities
Was the process participatory? To some extent—but in truth, many decisions reflected top leadership’s priorities. My role often required internal lobbying: translating between the CEO’s vision, donor expectations, and what the data team could actually deliver. We did validate indicators with entrepreneurs, but the heart of the work was aligning internal actors around a shared architecture.

This is why planning mattered so much. Without a proactive design framework, we would have been stuck reacting to shifting demands. Instead, we could experiment deliberately, adapt quickly, and still hold onto a coherent long-term vision.


My Role: The Architect
Looking back, my contribution was not only to coordinate but to design the architecture of the system. I applied systems thinking so that measurement wasn’t an afterthought—it was embedded from the first application form and baseline survey through to monitoring, outcomes, and impact evaluations.

I built and trained a junior team, expanded it with senior specialists as the system matured, and orchestrated the integration of different actors across Agora. The Theory of Change I co-developed became the spine of the measurement system, ensuring that everything—from survey design to dashboards—was connected to the organization’s larger mission.


Reflections
Accountability was a starting point, not the finish line. At Agora, we built a system that not only met compliance needs but also fostered adaptability, continuous learning, and a shared vision of impact that could grow with the organization.
