Eugene Eric Kim on What I Call “Community Indicators”

Community indicators are everywhere, including embedded in how we design and participate in our online spaces. Read the whole blog post, ok?

Creating delightful, inviting spaces is simple, but not easy. Unfortunately, we often make it unnecessarily complicated. I don’t expect most workspaces to have wide open, reconfigurable spaces with natural light on two sides and moveable white walls and furniture. But why can’t all workspaces have signs like this? How many actually do?

via Eugene Eric Kim » Three Simple Hacks for Making Delightful Virtual Spaces.

I love the word “delightful!” Thanks, Eugene! (And Katie, the community person at the Stanford d.school!)

Community Indicators: Thank Yous

Via a Facebook post, the students in Eric Tsui’s class at Hong Kong Polytechnic sent me this amazing thank-you note. (A paper version, I’m told, is on the way!) Now THIS is a great community indicator. I get up early and stay up late to deliver online webinar guest presentations. Rarely do you get this kind of feedback. I love it. Click the images to see more detail! Thanks, Eric, and to all in your class. I’m smiling in Seattle.

[Images: thank-you notes from the Hong Kong PolyU class]


Data, Transparency & Impact Panel –> a portfolio mindset?

Yesterday I was grateful to attend a panel presentation by Beth Kanter (Packard Foundation Fellow), Paul Shoemaker (Social Venture Partners), Jane Meseck (Microsoft Giving) and Eric Stowe (Splash.org), moderated by Erica Mills (Claxon). First of all, from a confessed short attention spanner, the hour went FAST. Erica tossed great questions for the first half, then the audience added theirs in the second. As usual, Beth got a Storify of the Tweets and a blog post up before we could blink. (Uncurated Tweets here.)

There was much good basic insight on monitoring for nonprofits and NGOs. Some of my favorite soundbites include:

  • What is your impact model? (Paul Shoemaker I think. I need to learn more about impact models)
  • Are you measuring to prove, or to improve (Beth Kanter)
  • Evaluation as a comparative practice (I think that was Beth)
  • Benchmark across your organization (I think Eric)
  • Transparency = Failing Out Loud (Eric)
  • “Joyful Funeral” to learn from and stop doing things that didn’t work out (from Mom’s Rising via Beth)
  • Mission statement does not equal IMPACT NOW. What outcomes are really happening RIGHT NOW (Eric)
  • Ditch the “just in case” data (Beth)
  • We need to redefine capacity (audience)
  • How do we create access to and use all the data (big data) being produced out of all the M&E happening in the sector? (Nathaniel James at Philanthrogeek)

But I want to pick out a few themes that were emerging for me as I listened. These were not necessarily the themes of the terrific panelists, but I’d sure love to hear what they have to say about them.

A Portfolio Mindset on Monitoring and Evaluation

There were a number of threads about the impact of funders and their monitoring and evaluation (M&E) expectations. Beyond the challenge of what a funder does or doesn’t understand about M&E, funders clearly need to think beyond evaluation at the individual grant or project level. This suggests making sense of data across multiple grantees –> something I have not seen a lot of from funders. I am reminded of the significant difference between managing a project and managing a portfolio of projects (learned from my clients at the Project Management Institute. Yeah, you Doc!) If I understand correctly, portfolio project management is about the business case –> the impacts (in NGO language), not the operational management issues. Here is the Wikipedia definition:

Project Portfolio Management (PPM) is the centralized management of processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage a group of current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals ― while honouring constraints imposed by customers, strategic objectives, or external real-world factors.

There is a little bell ringing in my head that there is an important distinction between how we do project M&E — which is often process heavy and too short term to look at impact in a complex environment — and being able to look strategically at our M&E across our projects. This is where we use the “fail forward” opportunities, iterating towards improvements AND investing in a longer view of how we measure the change we hope to see in the world. I can’t quite articulate it. Maybe one of you has your finger on this pulse and can pull out more clarity. But the bell is ringing and I didn’t want to ignore it.

This idea also rubs up against something Eric said which I both internally applauded and recoiled from. It was something along the lines of “if you can’t prove you are creating impact, no one should fund you.” I love the accountability. I worry about how to actually do this meaningfully in very complex nonprofit and international development contexts, and for the next reason…

Who Owns Measurement and Data?

Chart from Effective Philanthropy 2/2013

There is a very challenging paradigm in nonprofits and NGOs — the “helping syndrome”: the idea that we who “have” know what the “have nots” need or want. This model has failed over and over again, and yet we still do it. I worry that this applies to M&E as well. So first of all, any effort towards transparency (including owning and learning from failures) is stellar. I love what I see, for example, on Splash.org, particularly their Proving.it technology. (In the run-up to the event, Paul Shoemaker pointed to this article on the disconnect in information needs between funders and grantees.) Mostly I hear about the disconnect between funders’ information needs and those of the NPOs. But what about the stakeholders’ information needs and interests?

Some of the projects I’m learning from in agriculture (mostly in Africa and SE/S Asia) are looking towards finding the right mix of grant funding, public (government and international) investment and local ownership (vs. an extractive model). Some of the more common examples are marketing networks for farmers to get the best prices for their crops, lending clubs and using local entrepreneurs to fill new business niches associated with basics such as water, food, housing, etc. The key is the ownership at the level of stakeholders/people being served/impacted/etc. (I’m trying to avoid the word users as it has so many unintended other meanings for me!)

So if we are including these folks as drivers of the work, are they also the drivers of M&E and, in the end, the “owners” of the data produced? This is important not only because for years we have measured stakeholders and rarely been accountable enough to share that data, or actually USE it productively, but also because change is often motivated by being able to measure change and see improvement. 10 more kids got clean water in our neighborhood this week. 52 wells are now being regularly serviced, and local business people are increasing their livelihoods by fulfilling those service contracts. The data is part of the on-the-ground workings of a project, not a retrospective to be shoveled into YARTNR (yet another report that no one reads).

In working with communities of practice, M&E is a form of community learning. In working with scouts, badges are incentives, learning measures and just plain fun. The ownership is not just at the sponsor level. It is embedded with those most intimately involved in the work.

So stepping back to Eric’s staunch support of accountability, I say yes AND the full ownership of that accountability with all involved, not just the NGO/NPO/Funder.

The Unintended Consequences of How We Measure

Related to ownership of M&E and the resulting data, I come back to the complexity lens. I’m a fan of the Cynefin Framework to help me suss out where I am working: simple, complicated, complex or chaotic domains. The framework may be a good diagnostic for M&E efforts, because when we are working in a complex domain, predicting cause and effect may not be possible (now, or into the future). If we expect M&E to determine whether we are having impact, this implies we can predict cause and effect and focus our efforts there. But things such as local context may mean that everything won’t play out the same way everywhere. What we are measuring may end up having unintended negative consequences (this HAS happened!). Learning from failures is one useful intervention, but I sense we have a lot more to learn here. Some of the threads about big data yesterday related to this — again, a portfolio mentality looking across projects and data sets (calling Nathaniel James!). We need to do more iterative monitoring until we know what we SHOULD be measuring. I’m getting out of my depth again here (Help! Patricia Rogers! Dave Snowden!). The point is, there is a risk of being simplistic in our M&E and a risk of missing unintended consequences. I think that is one reason I enjoyed the panel so much yesterday — you could see the wheels turning in people’s heads as they listened to each other! 🙂

Arghhh, so much to think about and consider. Delicious possibilities…

Wednesday Edit: See this interesting article on causal chains… so much to learn about M&E! I think it reflects something Eric said (which is not captured above) about measuring what really happens NOW, not just the presumption of “we touched one person, therefore it transformed their life!”

Second edit: Here is a link with some questions about who owns the data… may be related http://www.downes.ca/cgi-bin/page.cgi?post=59975

Third edit: An interesting article on participation with some comments on data and evaluation http://philanthropy.blogspot.com/2013/02/the-people-affected-by-problem-have-to.html

Fourth Edit (I keep finding cool stuff)

The public health project is part of a larger pilgrimage by Harvard scholars to study the Kumbh Mela. You can follow their progress on Twitter, using the hashtag #HarvardKumbh.


Building a Virtual Tour of Online Communities

This is the second post about touring existing online communities as a learning journey for those building or sustaining their own communities. (Part 1 is here.) This one is about the nuts and bolts of running a live tour of online communities. The first post laid out purpose, identification of potential communities to tour, and criteria for review and evaluation. So now let’s talk about HOW to run the tour!

Planning

  1. Pick your web touring technology. For this sort of event, I like to have a tool with fairly easy screen sharing and a shared chat room for note taking. I use a white board or slides to share the initial overview and questions.
  2. Set the date. Let your “tourists” know the date, time and any technical requirements. This may mean needing to be online and having a headset/mic or an appropriate telephone dial-in option. Confirm your communities. Get permissions as appropriate if you plan to use your personal login to tour any private communities!
  3. Set up a URL list that can work both within your web technology and on a separate web page as back up. Plan a SHORT intro narrative to each community. Decide what pages you will visit and why. See the first post!  I like to throw the URLs and short descriptions on to a Google doc and share it with the tourists in advance.
  4. Test your URLs within the web meeting tool. Should they be links? Preloaded? Do you need username/password to log on to any private sites?
  5. As backup, grab a basic set of screen shots of each community in case your web touring technology fails. Yes, it happens! Always have a plan B.
  6. If you have a co-facilitator, define each of your roles.
    • It is often useful to have one person help folks if they have any technical needs, while the other runs the tour.
  7. Consider how you want to capture questions as you go — sometimes you will need to research and come back later with answers.  Encourage the tourists to take notes if that fits your culture!
  8. Send an email with the login information and any preparation you would like the tourists to do. I often send a short piece on community PURPOSE and some of the questions I mentioned in the  first post.
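For the URL list in step 3, the backup web page doesn’t have to be fancy. As one possible sketch (the community names, URLs and descriptions here are placeholders, not real tour stops), a few lines of Python can turn your tour list into a stand-alone HTML page you can open from any browser if the meeting tool fails:

```python
from html import escape

# Placeholder tour stops: (name, url, short intro narrative)
tour_stops = [
    ("Example Community A", "http://example.org/community-a", "Long-lived discussion community"),
    ("Example Community B", "http://example.org/community-b", "New, Drupal-based site"),
]

def backup_page(stops):
    """Build a minimal stand-alone HTML backup of the tour's URL list."""
    items = "\n".join(
        f'<li><a href="{escape(url)}">{escape(name)}</a>: {escape(note)}</li>'
        for name, url, note in stops
    )
    return f"<html><body><h1>Tour backup list</h1><ul>\n{items}\n</ul></body></html>"

# Write the plan-B page to share alongside the meeting invite.
with open("tour_backup.html", "w") as f:
    f.write(backup_page(tour_stops))
```

The same list can be pasted into a shared Google doc, so the tourists always have a clickable itinerary outside the web meeting tool.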

Running the event

  1. Log in early and make sure everything is working. Have an email prepped to resend in case anyone contacts you saying “I lost the URL/login/etc.”
  2. If you decided to preload URLs on separate whiteboards, etc, get that all set up. Set up any polls or questions on other white board pages or have them handy to cut/paste in.
  3. If you are recording the tour, don’t forget to hit the old “record” button once you start.
  4. When you start with your participants, give an overview of the tour process. It might go something like this:
    • We are going to look at X different communities today. I’m going to use the screen sharing tool (or whatever you plan) so I’ll be “driving” the tour, but please, if you see something you’d like me to click on, let me know. There is a slight lag with the screen sharing so speak up as soon as you can!
    • I want to review a couple of questions we should keep in mind as we tour (then I review the questions.)
    • Encourage shared note taking (I often use the chat room in the webinar tool).
    • Do you have any questions? (Answer them.)
    • Start…
    • Pause often for questions, observations.
  5. Between communities, do a quick recap asking for observations and answers to questions. Sometimes it is worth going deeper and seeing fewer communities…
  6. Leave at least 25% of the time at the end for reflection and next steps.

Follow Up

  1. If you are recording the event, capture the recording and share the URL.
  2. Clean up and share any collective notes taken during the event.

Do you have any other suggestions or ideas? Resource pointers? Please, chime in!

Virtual Tours of Online Communities as Learning Journeys

Having been involved in online facilitation since 1997, I’m often asked for examples of “successful online communities.” People want to see them, tour them, and understand what they can learn from them as they embark upon or support their own communities. Sometimes they are interested in technology. Sometimes they want to know about how things are structured and organized, both content and activities. But mostly they want to see examples where people really DO interact. This is always a challenge for three main reasons:

  • How do we qualify “success?”
  • How do we extrapolate lessons across diverse needs and contexts?
  • How do we account for “success” as underlying technologies reshape the very nature of communities into less bounded, often larger networks?

I’m preparing for another of these tours so I wanted to do some renewed reflective homework before I started building the tour. (I’ll say more about the actual tour process in a subsequent post.) Plus, by sharing this post today, maybe you, dear readers, will have some insights, comments or pointers I can include. And as always, you are welcome to use anything here if you are giving someone else a tour!

Here are four areas I’m reflecting on to help me conceptualize,  frame and plan the tour.

Community Indicators of All Sorts

What do we mean when we say “successful” for an online community? What are the parameters? Are we talking about the success of a community’s online interactions, or the whole life of the community, which is often a blend of online and offline? What are the boundaries? For some time I have been collecting examples of what I call “community indicators” that give us some clue about the life of a community. (You can read more musings about community indicators here and some bookmarked examples here.)

What are the indicators of community activity? In other words, as we observe a community, and (ideally) interview some of its members, what signs of life are we specifically looking for? There are the process indicators, both quantitative and qualitative, that are most easily seen.

  • Evidence of mechanisms and opportunities for community member participation (availability/opportunity). These are often predicated on the underlying technology and intentions of those stewarding the site. Sometimes community members bring in additional opportunities, something that is becoming more common in open networks and ad-hoc configurations.
    • Types of interaction options: discussions, blogs, commenting, rating, personal/instant messaging, other synchronous and asynchronous interaction mechanisms, linkages to F2F or offline events, etc. What is useful? Appropriate?
    • Evidence of appropriate choices about what is public/open and what is private as it relates to community purpose.
    • Clarity on how members find out and learn how to use these mechanisms. (Communications and technology stewardship)
  • Evidence of participation
    • Quantified activity – number of posts, page views, ratings (thumbs up/down, likes), comments, and contributed content.
    • Quality – what interaction patterns demonstrate that people are interacting with each other (vs. simply publishing or broadcasting)? This could mean looking for conversational threads, evidence of reading/responding to what others post instead of simply posting one’s views, and how conflict is used, either generatively or as a deterrent to further interaction.
    • Recency (i.e., when was the last substantial set of interactions?) So often we see the telltale signs of a dead community…
    • Number of members – this gets a bit subjective as some communities are intended as small, others larger. Sometimes it is hard to find this data and the number of registered members rarely corresponds with number of active members.
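To make the quantitative side of this concrete, here is a rough sketch of how the indicators above (volume, active members, interaction vs. broadcasting, recency) could be tallied from an export of community posts. The field names (`author`, `timestamp`, `in_reply_to`) are hypothetical; a real platform’s export will differ:

```python
from datetime import datetime, timezone
from collections import Counter

def activity_indicators(posts, now=None):
    """Tally rough participation indicators from a list of post records.
    Each post is a dict with hypothetical 'author', 'timestamp' (ISO 8601)
    and 'in_reply_to' fields; adjust to whatever your platform exports."""
    now = now or datetime.now(timezone.utc)
    authors = Counter(p["author"] for p in posts)
    timestamps = [datetime.fromisoformat(p["timestamp"]) for p in posts]
    # Replies signal people interacting with each other, not just publishing.
    replies = sum(1 for p in posts if p.get("in_reply_to"))
    return {
        "total_posts": len(posts),
        "active_members": len(authors),
        "reply_ratio": replies / len(posts),
        # Recency: is this a live community or a telltale dead one?
        "days_since_last_post": (now - max(timestamps)).days,
    }

posts = [
    {"author": "ana", "timestamp": "2013-02-01T10:00:00+00:00", "in_reply_to": None},
    {"author": "ben", "timestamp": "2013-02-02T11:00:00+00:00", "in_reply_to": 1},
    {"author": "ana", "timestamp": "2013-02-03T12:00:00+00:00", "in_reply_to": 2},
]
print(activity_indicators(posts, now=datetime(2013, 2, 10, tzinfo=timezone.utc)))
```

Numbers like these only become meaningful against the qualitative questions above — a high post count with a near-zero reply ratio is broadcasting, not community.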

That said, most organizations want to implement an online community for a reason. The purpose should be the driver. So how do we relate those success indicators to the mission or goal of the community? In other words, how do we look beyond process to impact?

  • What connection can we see between the activity indicators and community goals/purpose?
  • How do we discern this connection in contexts of open-ended or very diffuse purpose? What happens when purpose shifts (as it often does)?
  • What sorts of monitoring and evaluation strategies are in place (visible, or more often, invisible and we need to ask the community leaders!)?
  • Taking a communities of practice perspective, what is the interplay between the DOMAIN of the community (what it is interested in), its COMMUNITY (who is involved and engaged, how relationships play out, etc.) and its PRACTICE (what they do together and how they use what they do together back in their own work/lives, etc.)?

Finally, we are living in the era of networked social media. Rarely is “a site” the only vector for interaction. Many communities live and work on multiple platforms, or at the least publicize community activity via other networks such as Twitter and Facebook. So we look for these connections as well, and try to understand whether they support the community purpose — or even dilute it. Again, it depends on the purpose. If a community is very inward looking, outward links would dilute. If it is really interested in sharing what it does/learns with the world and bringing in people and ideas from the world, then these linkages are critical.

Tapping My Network for Examples

We each may have an example or two of “successful communities,” but the fact is, we need a broader scan than what is available in our personal realm, so my first step was to tap my network and see if I could surface any new examples. Some of my known examples are great, but old. Really old. Tweeting requests on December 23rd, however, is not so smart. But here is what I received on first query about vibrant online communities (with a special interest in Drupal based sites for this instance):


The first concrete suggestion was the Buckminster Fuller Institute (http://bfi.org/). And that was the ONLY concrete suggestion. Cameron Campbell’s (@ronindotca) comment about following a Drupal developer’s trail of tears may give you a sense of the challenge at hand! Looking at the BFI site, there is little evidence of online community interaction (see http://bfi.org/news-events/community-content). I don’t think Cameron’s observation is far off base!

So back to my own set of examples, I compiled the following options.

  • Share Your Story (http://www.shareyourstory.org) – a long-time, well-established community. (Technology: WebCrossing. Disclaimer: I was deeply involved w/ this site early on!) This is a great example of an online community that fills a needed function not easily found elsewhere. And of loving community management!
  • CPSquare (http://www.cpsquare.org) – (Technology: WebCrossing. Disclaimer: I’m a member, which is the only way to peek inside!) This is a private community, so no easy browsing, but a good example of some deep learning events.
  • BetterEvaluation (http://www.betterevaluation.org) – an example of a new, emerging community based on Drupal (Disclaimer: I’m involved w/ this site!) It is useful to see a site before it really launches its interactive features. (Beta)
  • Knowledge Management for Development (http://www.km4dev.org) as both a long lived and multi-platformed global community which uses DGroups, an email centric tool, NING and mediawiki.  (I had been on the Core group from its beginning until late last year.)
  • The KSToolkit Wiki (http://www.kstoolkit.org) which is about the artifact more than the community.
  • A couple of Facebook communities
    • RosViz – a community of interest on Facebook (I’m one of the community moderators) – open hearted resource sharing. A good example of focused domain in a very open, outward facing context.
    • Network Weaving (just a member!) – Vibrant due to some passionate leadership and blending of synch and asynchronous interaction.
    • SCoPE is another good one. This is their FB home https://www.facebook.com/SCoPEcommunity while their main home is a Moodle site.
  • I asked for some other Drupal examples and here are a couple:

Extrapolating Lessons

It is tempting to see a successful community and think that what they did will automatically create conditions for success for a completely different community. We know this is rarely true. So we need some sort of mechanism to extrapolate the lessons. Perhaps a heuristic that says: if X is your goal, patterns 1, 7 and 12 might be useful. This is much harder than it looks, due to the lovely complexity of human behavior. Here is what I’m thinking so far, but I’d love your suggestions:

  • What visual elements drew you into a site? What “turned you off?” Why?
  • In terms of figuring out how to get involved, what was easy? What was challenging? What are the technical and communications aspects of getting people involved?
  • What community activities could inspire your community? Which would you avoid?
  • What community leadership/management functions did you note as important? Do you have time and skills (or someone else does) to fulfill these roles?
  • What surprised you? How can you use that insight in your community?

Reflecting on the Learning Journey

The final bit is thinking about how we apply what we learn on a field trip to our own work. The questions above are one trigger, but the final part of the tour will ask each person to consider the following “next steps.”

  • What will be the first/next thing you will do to steward your community based on today’s tour? Why?
  • Review your community plan draft and see if there is anything you want to change based on what you learned today.
  • Pick one community (from the tour or one of your choosing) and explore it on your own. What else can you learn by digging in a bit deeper? Consider contacting and interviewing the community facilitator/leader/manager. What would you ask them?

Resources for Virtual Online Community Field Trips