Methodical Snark critical reflections on how we measure and assess civic tech

Measurement always goes bad


What Flowers found was that even the best performance measurement systems tend to ossify. In 2010, 11 state and local public interest associations joined together to form the National Performance Management Advisory Commission. In its report, A Performance Management Framework for State and Local Government, the commission singled out Nashville, Tenn.’s Results Matter system as an example of a robust success in managing for results. But when Kristine LaLonde became Nashville’s co-chief innovation officer two years after Results Matter was cited as a national model, she found a city government suffering from what she calls “Results Matter PTSD.”

That’s John Buntin (emphasis mine), writing last week about the history of performance measurement systems in US government in 25 Years Later, What Happened to ‘Reinventing Government’?.

It’s a good medium-length read about some of the follies of deliberate policy innovation, and a cautionary tale about measurement and reward systems that’s just as relevant for small NGOs as for lumbering public institutions.

Because like government, today’s NGOs can’t avoid conversations about measurement, and most will find themselves adopting some sort of system at the gentle insistence of their donors. This often gets couched in the rhetoric of learning and improving work, but the drive towards empirical, objective indicators and comparable data is hard to separate from the larger context of austerity and measurement-for-results in the aid and development sectors. And for tech-driven work, there’s often the subtle implication that, well, there ought to be data on all of this.

The problem is that measurement systems imposed on program staff can breed resentment and a box-checking culture over time, especially if data collection and reporting doesn’t actually add value to the work they’re doing. Small organizations with good communication will sense this sooner rather than later and scrap or amend such systems, which is a good thing, but also challenging if measurement wasn’t premised on iteration in the first place.

Grounding measurement conversations with donors and staff in the idea of bottom-up measurement systems and adaptive learning is a good place to start. But the devil’s in the details.

Buntin’s piece gives a good example of why that matters, and of how good intentions can get messy and skewed when systems are designed to meet internal rather than external incentives.

