e-MOPs

Ethics & Engagement across the Wellcome Trust Major Overseas Programmes

Project report/blog from the March 2017 Evaluating Engagement Workshop

By Noni Mumba

(This blog post originally appeared on the Mesh website here: https://mesh.tghn.org/articles/evaluating-community-engagement-reflections/

Thank you, Noni and Mesh, for allowing us to reproduce this blog post for the benefit of the e-MOPs community.)

Evaluating Community Engagement: A coming together of great minds!

A participant reflection on the Mesh Evaluating Community Engagement Workshop 2017 which aimed to move forward the debates surrounding evaluation by drawing on practical experiences. 

One would think that a 2-day workshop on evaluation would be boring, mentally taxing and just tiresome. Indeed, I, being one of the delegates in that meeting, was uncertain about how it would turn out; after all, some people get squeamish when evaluation discussions start. However, the serene setting that was Kenya’s Sawela Lodges, the scenic gardens, good food and excellent ambiance helped to start us off on a positive note.

Jim Lavery set the tone of the workshop on a high note, starting off by using architecture to describe the evaluation process. Just like putting up a well-designed building, developing engagement approaches and evaluating them is challenging; and I agree. Perhaps before moving forward, I should state here that there is a continuing debate on whether community and public engagement are different. For the purposes of this blog, I will use ‘community’ to mean both. So, back to the challenges of evaluating community engagement; Jim (all protocol observed) gave us two premises: (i) that evaluating community engagement is complex: it has many goals, which are sometimes conflicting; there are many ethical issues that go with engagement for research; and engagement must be responsive to the context within which it is being conducted; and (ii) that perhaps it is high time we applied to engagement the same rigour that is applied in science. These two premises grounded our discussions during the workshop.

We then moved on to listening to, and critiquing, the four case studies presented over the two days. We got an opportunity to showcase our (KWTRP) wonderful evaluation plan, which earned an interesting description: a forest. What was meant was that there is very good work going on, and a great mix of wonderful approaches being used, but we need to be able to tell our story better. Food for thought for our whole team.

We also got to hear about Malawi’s exhibitions, Thailand’s puppet shows, and Vietnam’s schools project, all with great evaluation methodologies, and all of which received an equal dose of good critiquing. Those who had specific evaluation challenges shared them with a ‘peer group’ for support and input in small group discussions, conducted out in the very lush gardens. What was clear in all these presentations was that we have a rich array of engagement initiatives, and we are obviously evaluating them and using the findings to improve the activities.

As all these discussions and presentations were going on, the matter of which theoretical framework is best suited for our evaluations came up. There were some simple and some really complex diagrams describing the famous Theory of Change, and there were those amongst us who were brave enough to use Realist Evaluation theory. At a personal level, I think Realist Evaluation is something I am going to have to read about (in some depth) in order to grasp concepts such as CMOs (rhymes with GMOs? I will let you go and find out what this is!)

By the end of the two days, we all appreciated the fact that evaluating community engagement is no walk in the park! “Two or three people can look at one approach, and come to totally different conclusions,” said Jim, which is not a bad thing at all. One other thing that struck me in this workshop was the fact that we must feel comfortable to also share negative findings of our evaluation. Now that’s a tough one to do! But possible.

Generally, the workshop was, in my opinion, excellent and fun, and all the delegates were just superb. Our great facilitator Robin Vincent ensured that we all participated actively. His vast experience in the evaluation of engagement came in handy during this workshop, and we all appreciated his efforts in developing a ‘Pathways to Impact’ diagram, baptised ‘The Snail’. The ‘snail’ and Jim’s ‘architecture’ diagram are perhaps frameworks we may want to explore using, even as we reflect on moving our evaluation work in Kilifi forward.

Looking forward to developing these resources and having more interactions on MESH!

This resource resulted from the March 2017 Mesh Evaluation workshop. For more information and links to other resources that emerged from the workshop (which will be built upon over time) visit the workshop page.

For a comprehensive summary of Mesh's evaluation resources, and to learn how to navigate them, visit the Mesh evaluation page.
