We were excited to collaborate with friends & co-conspirators on this post, which originally appeared on the American Evaluation Association’s AEA365 blog.
Hi, we’re Liz Gordillo (Liz G Strategy), Rory Neuner (The Health Foundation of Central Massachusetts, formerly Michigan Health Endowment Fund), Leah Josephson (Emergence Collective), and Lauren Beriont (Emergence Collective). Since 2019, we’ve been working to build Michigan’s evaluation ecosystem.
First, let’s back up to why we thought this work was important. One worldview holds that evaluation consultants compete with one another for work, sometimes in service of the entities that fund evaluation, such as foundations. But social change work is driven by more than a profit motive, and competition is not the whole story: we rely on one another. Social change work is interdependent; we operate within an ecosystem that includes foundations, grantees, community members, policymakers, and evaluation consultants, among others.
We’re a foundation evaluator and evaluation consultants working together to advance evaluation that takes on the complex issues facing our communities, with their dozens of intertwined programs, peoples, and variables. Human ecosystems aren’t inherently effective; they require intentional, strategic attention to the health of individual actors and of the system as a whole. Our goal is to promote co-created and anti-racist approaches to evaluation rather than perpetuate traditional forms that, intentionally or unintentionally, re-create inequitable standards. This anti-racist goal grew out of our conversations as social workers about critical race theory and person-in-environment frameworks, and it continues to be inspired by the Equitable Evaluation Initiative.
As systems-thinking nerds, we look to where Donella Meadows would tell us to start: leverage points. As long-term practitioners, we’ve come to recognize several high-impact leverage points in our work in Michigan – places where we can focus our efforts to build a collaborative evaluation ecosystem. For example, we recognized the need to build consistent, transparent information flows among consultants. We also tapped into growing conversations about power in philanthropy and wanted to strengthen our collective skills in equity-based evaluation practices. We continue to build our understanding of which leverage points we can and cannot change as a small coalition.
Through discussions like these, this past year we’ve:
Considered ways to reframe philanthropic impact to center grantee success
Convened evaluation consultants to begin building a collaborative rather than competitive environment
Identified evaluation strategies, tools, and approaches that best support and help us understand intentional collaboration (e.g., ripple mapping, outcome harvesting, participatory evaluation, culturally responsive evaluation)
We continue to wrestle with questions like: To what extent were we successful in shifting away from the mindset that evaluation is for reporting to funders? How are we re-focusing evaluation as a way to help organizations make informed decisions? What role do we see evaluation consultants playing in helping nonprofits and foundations pivot?
Rad Resources: Donella Meadows’ writing on leverage points and the Equitable Evaluation Initiative, both referenced above.
Lesson Learned: Just because we talk a big game about working together doesn’t mean it’s easy. There has been a lot of learning and unlearning about work styles, budgeting, and even collaborative writing that we’ve had to hash out to be successful in our partnership.