How to understand and evaluate social innovations

Barbara Szijarto, Kate Svensson and Peter Milley

Photo credit: CRECS

Social innovation has been attracting a lot of attention lately as a new way to address social problems and needs. But does it work? How could we find out?

These evaluative questions were the driving force behind recent research conducted by education professors and CRECS senior researchers Peter Milley and Brad Cousins, along with two doctoral candidates, Barbara Szijarto and Kate Svensson. Their findings were presented at two international conferences last fall and at a CRECS Colloquium on January 20, 2017.

“Social innovations emerge when people from different sectors and walks of life are brought together to co-create solutions,” says Milley. “Governments, philanthropists, social service providers and business people have been investing time, energy and resources to stimulate social innovation, believing that its adaptive, collaborative processes are well-suited to tackling complex social issues.”

According to Milley and his colleagues, evaluators have been actively experimenting with study designs and methods to help funders and innovators understand how and whether innovations are working. As a result, practice in this new area is well ahead of research on evaluation.

“We wanted to make a contribution by taking stock of the lessons learned,” says Milley. “So we did a systematic review of empirical studies on evaluations conducted in social innovation contexts to identify what evaluation practices are used, what influences these practices, and how these practices affect innovation.”

Their study made the following observations.

Similar evaluation practices used in different social innovation (SI) contexts:

Despite significant variation in the SIs, there was little diversity in evaluation approaches used (e.g., no programme theoretic approaches, only one experimental design). The majority were Developmental Evaluations and collaborative in nature. This suggests such approaches speak to the needs of stakeholders, but it also raises questions about the absence of a peer-reviewed, empirical knowledge base on other approaches known to be used in SI contexts.

Complexity calls for collaboration:

The sample offered many rich descriptions of multi-sectoral collaboration, which was seen to generate tensions and conflicts. This may explain why evaluators turned to approaches based in complexity thinking that allowed for deliberation, negotiation and learning. Such approaches are consistent with guidance about working on intractable issues in contexts featuring social complexity.

Collaboration can lead to conflict:

Although the cross-sectoral collaboration and inherent uncertainty of SI processes may be contributing factors to conflict, the use of flexible and emergent evaluation approaches like Developmental Evaluation may also create stress for some actors if not skillfully implemented.

Balancing learning and accountability:

Data in the sample also revealed that a surprising number of funders were amenable to prioritizing learning over accountability. This provides a counterweight to critiques offered elsewhere that funders tend to focus on results and impacts in ways that can impede innovation, e.g., by expecting results too early in the process.

Mediating influences: Time, timing, capacity and relationships:

Effective adaptations to evaluation methods sometimes occurred after the evaluation teams learned from mistakes and actors in the SIs learned to work with evaluators. Solid relationships, timely and relevant inputs, and long-term engagement appeared to allow for evaluations to positively influence SI processes.

Bridging different ‘conversations’:

The study tapped into a ‘conversation’ in a community of scholars and practitioners focused on evaluation practices related to public- or donor-funded SIs. Elsewhere, there are ‘conversations’ taking place about privately funded initiatives (i.e., social enterprises), through other publication venues, and in languages other than English. Work is needed to bridge these discussions so that greater conceptual nuance can be brought to this growing area of theory and practice.

Continuing to seek conceptual clarity:

Various actors have made progress in defining what constitutes SI and how SI processes unfold. Milley and colleagues encourage evaluators working in SI contexts to remain up-to-date on these conceptual developments. Misconceptions about SI have the potential to lead to the misapplication of evaluation approaches.

Building capacity:

Evaluators working in SI contexts need to be skilled at using non-conventional and conventional evaluation approaches and at recombining methods, tools and techniques. The ability to foster good relationships with and between actors in SI contexts is also important.

Conducting further research:

More peer-reviewed empirical research is needed on evaluation practices in SI contexts based on a broader range of study designs. More research is needed on how evaluation practices influence SI processes over time, space and scale.

For more on the findings, see the team’s paper presented at the European Evaluation Society Conference.
