The Creative Civic Change evaluation approach

Creative Civic Change is powered by stories – and so was the programme evaluation.

Wherever Creative Civic Change folk convene, the gathering quickly assumes the feel of sharing tales around a campfire. Big and small, epic and everyday, joyful and poignant… this is the way that we as evaluators learned what the programme truly meant to those who were involved.

The idea of storytelling as evaluation has underpinned our approach to assessing the impact of Creative Civic Change on communities, artists, arts organisations, and funders. We wanted to challenge the misconception that stories are a lesser, ‘anecdotal’ form of data, as opposed to ‘real’ (quantitative) data.

Valuing and making space for community stories has helped us to more quickly and concretely understand the key impacts of Creative Civic Change – and to support these with meaningful quantitative data.

Research principles
Our three big questions
About the evaluation team

From the Creative Civic Change project Nudge Community Builders in Plymouth, Devon.


Telling the Bigger Story: final evaluation

Communi-Tree voices: final evaluation for Creative Civic Change
Communi-Tree: a creative survey tool
Final evaluation data

From the Creative Civic Change project, Filwood Fantastic in Bristol.


10 top tips for evaluating a community-led programme

Creative Civic Change was an experimental programme that put communities in the lead, meaning that evaluation outcomes were not imposed in advance by funders.

These top tips condense our learning from the evaluation of the programme over the last four years.

  1. Let go of rigid ideas. Evaluating a programme that is defined by its flexibility and experimental approach is creative, messy and challenging. We have learned to let go of rigid ideas of what ‘good’ or ‘rigorous’ evaluation should look like – especially as COVID-19 prevented projects from ‘measuring’ change in a conventional way using baseline data.
  2. Evaluators are needed from the get-go. Creative Civic Change projects conducted six months of creative development and consultation in their areas, but the evaluation process did not start until the programme launched. Involving evaluators at the earliest possible stage maximises the chances of capturing valuable data from development phases.
  3. Put creativity at the heart of the process. Over the course of the programme, we increasingly threaded creativity through every aspect of our work. Collaging projects’ desired outcomes, using local artists to bring our reports to life, the Communi-Tree – these are just a few examples.
  4. Build an evaluative culture. A dedicated space for collective reflection is essential to building an evaluative culture. Peer learning events were crucial for us in embedding reflection, as well as in identifying emerging impacts and challenges.
  5. Build a collective approach to evaluation. Where outcomes are decided by communities rather than by funders, a tension can develop between projects’ own evaluation practices and the programme evaluation. There is a need to actively build a collective approach to, and investment in, the programme-level evaluation. Introducing programme-wide quantitative data collection at the end of Creative Civic Change was a challenge, as it didn’t fit with the more flexible and open evaluation approach that projects were used to.
  6. Projects need evaluation resource, not just support. Project managers are overstretched and volunteer time is limited. There needs to be a dedicated budget for evaluation activities. This is especially the case for creative evaluation, which is people-intensive.
  7. Be responsive, but communicate what’s coming. It was important for our evaluation process to be emergent and responsive to changing conditions – COVID-19 being the obvious example. However, this responsiveness needed to be complemented by transparency about the amount of time and resource that programme-level evaluation activities would require.
  8. Evaluation materials need to be accessible and inclusive. While projects were able to adapt activities to their communities, surveying accessibility requirements and engaging specialist support would have helped projects with self-directed evaluation, ensuring that materials were accessible and inclusive for all.
  9. Make training as accessible as possible. Evaluation support needs to reach beyond the project manager to the artists, community members and delivery partners who run sessions. Although we recorded live online sessions and produced toolkits, neither had the impact or reach we had hoped for.
  10. Be aware of other funders’ evaluation requirements. Where activities are co-funded, projects face the challenging task of meeting multiple evaluation requirements. Our approach of asking only a few collective questions (five) and providing one-to-one support allowed us to help projects integrate these requirements.