As I have grown professionally and academically, I find that two particular roles I have held feel at times like they are at odds: my role as an academic researcher and my role as an evaluation practitioner.
To provide some background, I am currently a doctoral candidate at the University of Michigan where I conduct research in the areas of Social Psychology and Social Work. Prior to my doctoral career I held several positions in academia/research, including teaching at the baccalaureate level, working as a research fellow for Rutgers School of Social Work, and more recently as a programmer and research assistant at Mathematica Policy Research.
I have also served in many consulting/practice capacities, including working as a respite counselor for youth with mental and physical disabilities, a consultant for a return-on-investment firm, a program planner at NBC Universal, and currently a consultant at Emergence Collective.
As I began my academic career, the distance from practitioners – and authentic connection to communities – became more striking.
One experience at an academic research conference exemplified this distance. Like most academic conferences, this one had a special keynote speaker. The keynote speaker that year was an author, practitioner, social activist, and community organizer. She offered stories and perspectives from the communities she served and was a part of. She also challenged the academic audience; she insisted that impact is ill-defined, and even that academics are out of touch. Following the session, I approached several academic researchers to get their impressions of the keynote. In turn, they challenged the speaker, noting that she didn't have any concrete evidence or data to back up her claims.
While we (the academic audience) and the keynote speaker had the same underlying objectives, to support communities and effect change, the divide in approach felt impassable. While I felt inspired by her words and the narratives of community members, I also felt the need to more deeply connect with community and practice. Soon after, I began my work as an evaluation consultant toward this end. With my hands in both worlds, I continue to perceive some distance.
This distance often rests on the idea that academics and practitioners have opposing approaches and goals. I'd like for us to challenge this idea and consider how academic and practitioner approaches are complementary rather than opposite. To better establish partnerships and support solid evaluative work, I challenge us to view this distance as a chance to more fully interrogate our differences, and therefore the ways we can strengthen each other's efforts.
Understanding and leveraging our differences
The tension underlying academic and practitioner partnerships is not unfounded. Many identifiable differences contribute to misalignment in partnerships. Here I will identify some (not all) of these differences, but also how they might be leveraged to the benefit of both parties.
Ideological approaches. Often, academics and practitioners ask and answer questions differently. This is based on differences in audience and frame of reference. In academic research, the frame of reference is often the evidence base, with a literature search further defining the research question itself. In practice, evaluation questions are shaped by personal experiences and community structures. The ultimate audience of interest is also different, with academics often writing to and for other academic researchers, and practitioners focusing on constituents, clients, employees, policymakers, services, or the like.
Time frames and scopes. Academic project timelines are often considerably longer than those of practitioners in the field. These longer time frames can conflict with practical needs for more immediate interventions or with shorter funding time frames for program improvement or evaluation.
Rigor and relevance. Differences in rigor and relevance stem both from differences in audience and approach, as well as from differences in the way we define what is rigorous and what is relevant. For academic researchers, scholarly activity that contributes to knowledge, even without direct practical implications, might be deemed highly relevant. For the practitioner, however, such work might be neither accessible nor relevant to the questions and issues practitioners want or need to address. There might also be differing perceptions of what is considered rigorous in the way of research methodology. Qualitative methods of inquiry are at times considered less rigorous in certain academic disciplines. Conversely, qualitative methods are often a primary approach in evaluative practice.
Motivations. Academics and practitioners are incentivized differently when it comes to the goals of their respective work. Not only is publishing in the peer-reviewed evidence base incentivized for academics, but it is also beneficial to career advancement. For the practitioner, publication might not function in the same way, and their careers might be driven by different markers of success, such as conference presentations, referrals from partners, or recognition as a thought leader. Paywalls and dense, academic-style writing norms make the academic evidence base literally and figuratively out of reach for many practitioners and much of the general public. Further, academic journals rate articles by how often they are cited by other academics, without strong regard for an article's relevance to practice, favoring perceptions of rigor as defined by academics with little input from the field.
Thinking back to my academic conference experience, differences in perceptions of rigor and relevance certainly contributed to the tension that the audience, and undoubtedly the speaker, felt. The speaker and the audience had differing definitions of evidence and data, which contributed to the perceived mismatch. Recognizing these differences allows us to understand and honor the different roles of academics and practitioners, and to better strategize for coordinating our work.
Opportunities for successful partnerships
These differences notwithstanding, there is much opportunity for successful and mutually beneficial collaboration and partnership. Below I note some ideas for how we might approach partnerships:
Be aware of the systems we operate within and how they might bias our approaches and methods. Bias in and of itself is not negative; we all have biases that inform our preferences, keep us safe, etc. Bias can be problematic, however, when it prevents us from accepting different perspectives or being open to new ways of engaging. In addressing our professional (and personal) biases we can consider redefining or broadening our understanding of key concepts like rigor, relevance, data and even research.
Don’t start with methods! Often, tension around methodological approach is premature and highlights our biases. We should always start with the problem or question at hand. At Emergence Collective, in building evaluation and research plans we are thoughtful of what is needed, of team capacity and bandwidth, of skills and knowledge, and of our ultimate audience in ways that honor both the academic and the practitioner.
It is also important to be thoughtful of outside forces and incentives that encourage use of methods and approaches that are not appropriate for our work. One way that we work to address bias and methodology differences at Emergence Collective is to host two kickoff meetings at the beginning of each project. One internal kickoff meeting focuses on team capacity, skills and strengths of individual team members, bias, and perceptions of audience goals. The second external meeting is a discussion with our partners about how we can leverage our experience, backgrounds, and partner expertise to best address evaluation needs.
Use the changing research and funding landscape to your advantage. Funding priorities have increasingly focused on evidence-based interventions, programs, and approaches. Academic access to and knowledge of the evidence base can support the development and refinement of programs. Also, there has been an increasing emphasis on impact, and the nuances of programmatic features that support impact are often understood well by practitioners.
Prioritize relationships. Building authentic relationships is key to healthy partnerships between academic researchers and practitioners. There are several areas where this can show up in your evaluation work. First, define what kinds of relationships you would like to build and how you hope to grow in doing so. Next, put your money there by including it in your budgeting. Ask yourself questions like: Are you or your staff allotted time to focus on building relationships? Are there training and professional development opportunities that allow staff to meet and learn from diverse professionals? Are relationships prioritized internally? When hiring new team members, is relationship-building skill a part of the review process? In all, consider what it might look like for your organization to invest in genuine relationships so there is shared understanding and a common vision for partnership. At Emergence Collective we value relationships and invest in them. We create time for internal staff to connect socially. Staff are also allotted professional development time and are encouraged to share with their colleagues as they learn.
Other ways to leverage academic/practitioner relationships:
Consider employing mixed methodologies that allow for more robust understanding of phenomena and incorporate multiple skills and perspectives.
Consider broader and more creative dissemination approaches that honor the versatility of evaluative scholarship. While the peer-reviewed publication is the career currency for the academic, barriers to access prevail. Often, practitioners, policymakers, and the general public can be reached more effectively via news media, social media, issue or policy briefs, workshops and seminars, and even talk media such as podcasts and vlogs.
Theory development and application in practice-based settings can be supported and advanced through academic partnership. Academic researchers often spend a lot of time understanding and even developing theoretical approaches that are applicable to the practitioner. Here is one place where relationships are vitally important.
Practitioners and community members might understand trends in data in ways that academics, lacking that exposure, do not. Meaning making by practitioners has been incredibly beneficial to advancing academic scholarship. Practitioners can also help identify implications within and across complex systems. For academic researchers, this can look like inviting community practitioners to support result interpretation and the consideration of implications for both evaluative and basic research.