Yesterday I was grateful to attend a panel presentation by Beth Kanter (Packard Foundation Fellow), Paul Shoemaker (Social Venture Partners), Jane Meseck (Microsoft Giving) and Eric Stowe (Splash.org), moderated by Erica Mills (Claxon). First of all, as a confessed short-attention-spanner, the time went FAST. Eric tossed great questions for the first half, then the audience added theirs in the second. As usual, Beth got a Storify of the Tweets and a blog post up before we could blink. (Uncurated Tweets here.)
There was much good basic insight on monitoring for non profits and NGOs. Some of my favorite soundbites included:
- What is your impact model? (Paul Shoemaker I think. I need to learn more about impact models)
- Are you measuring to prove, or to improve? (Beth Kanter)
- Evaluation as a comparative practice (I think that was Beth)
- Benchmark across your organization (I think Eric)
- Transparency = Failing Out Loud (Eric)
- “Joyful Funeral” to learn from and stop doing things that didn’t work out (from Mom’s Rising via Beth)
- Mission statement does not equal IMPACT NOW. What outcomes are really happening RIGHT NOW? (Eric)
- Ditch the “just in case” data (Beth)
- We need to redefine capacity (audience)
- How do we create access to and use all the data (big data) being produced out of all the M&E happening in the sector? (Nathaniel James at Philanthrogeek)
But I want to pick out a few themes that were emerging for me as I listened. These were not the themes of the terrific panelists — but I sure wonder what they would have to say about them.
A Portfolio Mindset on Monitoring and Evaluation
There were a number of threads about the impact of funders and their monitoring and evaluation (M&E) expectations. Beyond the challenge of what a funder does or doesn’t understand about M&E, funders clearly need to think beyond evaluation at the individual grant or project level. This suggests making sense of data across multiple grantees –> something I have not seen a lot of from funders. I am reminded of the significant difference between managing a project and managing a portfolio of projects (learned from my clients at the Project Management Institute. Yeah, you Doc!) IF I understand correctly, portfolio project management is about the business case –> the impacts (in NGO language), not the operational management issues. Here is the Wikipedia definition:
Project Portfolio Management (PPM) is the centralized management of processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage a group of current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals ― while honouring constraints imposed by customers, strategic objectives, or external real-world factors.
There is a little bell ringing in my head that there is an important distinction between how we do project M&E — which is often process heavy and too short term to look at impact in a complex environment — and being able to look strategically at our M&E across our projects. This is where we can use the “fail forward” opportunities, iterate towards improvements, AND invest in a longer view of how we measure the change we hope to see in the world. I can’t quite articulate it. Maybe one of you has your finger on this pulse and can pull out more clarity. But the bell is ringing and I didn’t want to ignore it.
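To make that project-vs-portfolio distinction a bit more concrete for myself, here is a minimal sketch (in Python, with entirely hypothetical project names, indicators and numbers) of what rolling individual project reports up into one portfolio-level view might look like. It is not how any of the panelists’ organizations actually do this; it is just an illustration of looking across projects rather than at one grant at a time.

```python
# A minimal sketch of "portfolio-level" sense-making across projects.
# All project names, indicators, and numbers are hypothetical, invented
# purely to illustrate the roll-up idea.
from collections import defaultdict

# Each grantee/project reports against the same small set of outcome indicators.
project_reports = [
    {"project": "Water Point A", "indicator": "people_with_clean_water", "value": 1200},
    {"project": "Water Point B", "indicator": "people_with_clean_water", "value": 800},
    {"project": "Water Point A", "indicator": "wells_under_service_contract", "value": 30},
    {"project": "Water Point B", "indicator": "wells_under_service_contract", "value": 22},
]

def portfolio_rollup(reports):
    """Aggregate per-project values into one portfolio-wide view per indicator."""
    totals = defaultdict(int)
    contributing_projects = defaultdict(set)
    for report in reports:
        totals[report["indicator"]] += report["value"]
        contributing_projects[report["indicator"]].add(report["project"])
    return {
        indicator: {
            "total": totals[indicator],
            "projects": sorted(contributing_projects[indicator]),
        }
        for indicator in totals
    }

if __name__ == "__main__":
    for indicator, summary in portfolio_rollup(project_reports).items():
        print(f"{indicator}: {summary['total']} across {len(summary['projects'])} projects")
```

Even a roll-up this crude shifts the question from “did this grant do its activities?” to “what change is the whole portfolio producing?”, which is, I think, the strategic view the bell in my head is ringing about.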
This idea also rubs up against something Eric said which I both internally applauded and recoiled from. It was something along the lines of “if you can’t prove you are creating impact, no one should fund you.” I love the accountability. I worry about how to actually do this meaningfully in a) very complex non profit and international development contexts, and b) for the next reason…
Who Owns Measurement and Data?
There is a very challenging paradigm in non profits and NGOs — the “helping syndrome.” The idea that we who “have” know what the “have nots” need or want. This model has failed over and over again, and yet we still do it. I worry that this applies to M&E as well. So first of all, any effort towards transparency (including owning and learning from failures) is stellar. I love what I see, for example, on Splash.org, particularly their Proving.it technology. (In the run up to the event, Paul Shoemaker pointed to this article on the disconnect in information needs between funders and grantees.) Mostly I hear about the disconnect between funders’ information needs and those of the NPOs. But what about the stakeholders’ information needs and interests?
Some of the projects I’m learning from in agriculture (mostly in Africa and SE/S Asia) are looking to find the right mix of grant funding, public (government and international) investment and local ownership (vs. an extractive model). Some of the more common examples are marketing networks for farmers to get the best prices for their crops, lending clubs, and using local entrepreneurs to fill new business niches associated with basics such as water, food, housing, etc. The key is ownership at the level of the stakeholders/people being served/impacted/etc. (I’m trying to avoid the word users as it has so many unintended other meanings for me!)
So if we are including these folks as drivers of the work, are they also the drivers of M&E and, in the end, the “owners” of the data produced? This is important not only because for years we have measured stakeholders and rarely been accountable to share that data, or actually USE it productively, but also because change is often motivated by being able to measure change and see improvement. 10 more kids got clean water in our neighborhood this week. 52 wells are now being regularly serviced, and local business people are increasing their livelihoods by fulfilling those service contracts. The data is part of the on-the-ground workings of a project, not a retrospective to be shoveled into YARTNR (yet another report that no one reads).
In working with communities of practice, M&E is a form of community learning. In working with scouts, badges are incentives, learning measures and just plain fun. The ownership is not just at the sponsor level. It is embedded with those most intimately involved in the work.
So stepping back to Eric’s staunch support of accountability, I say yes, AND the full ownership of that accountability should sit with all involved, not just the NGO/NPO/funder.
The Unintended Consequences of How We Measure
Thinking about ownership of M&E and the resulting data brings me back to the complexity lens. I’m a fan of the Cynefin Framework to help me suss out where I am working – simple, complicated, complex or chaotic domains. Using the framework may be a good diagnostic for M&E efforts because when we are working in a complex domain, predicting cause and effect may not be possible (now, or into the future). If we expect M&E to determine if we are having impact, this implies we can predict cause and effect and focus our efforts there. But things such as local context may mean that everything won’t play out the same way everywhere. What we are measuring may end up having unintended negative consequences (this HAS happened!). Learning from failures is one useful intervention, but I sense we have a lot more to learn here. Some of the threads about big data yesterday related to this — again, a portfolio mentality looking across projects and data sets (calling Nathaniel James!). We need to do more of the iterative monitoring until we know what we SHOULD be measuring. I’m getting out of my depth again here (Help! Patricia Rogers! Dave Snowden!). The point is, there is a risk of being simplistic in our M&E and a risk of missing unintended consequences. I think that is one reason I enjoyed the panel so much yesterday, as you could see the wheels turning in people’s heads as they listened to each other! 🙂
Arghhh, so much to think about and consider. Delicious possibilities…
Wednesday Edit: See this interesting article on causal chains… so much to learn about M&E! I think it reflects something Eric said (which is not captured above) about measuring what really happens NOW, not just this presumption of “we touched one person therefore it transformed their life!!”
Second edit: Here is a link with some questions about who owns the data… it may be related: http://www.downes.ca/cgi-bin/page.cgi?post=59975
Third edit: An interesting article on participation with some comments on data and evaluation http://philanthropy.blogspot.com/2013/02/the-people-affected-by-problem-have-to.html
Fourth Edit (I keep finding cool stuff)
- “Who Counts? The Power of Participatory Statistics,” edited by Jeremy Holland with an afterword by Robert Chambers.
- NYTimes: “Can Big Data From Epic Indian Pilgrimage Help Save Lives?”
The public health project is part of a larger pilgrimage by Harvard scholars to study the Kumbh Mela. You can follow their progress on Twitter, using the hashtag #HarvardKumbh.