Methodical Snark critical reflections on how we measure and assess civic tech

Government response & citizen participation: newish research promises a lot


The title of a new PAR article from @fsjoberg, @jon_mellon and @participatory implies that government responsiveness boosts citizen participation, but that claim deserves some caveats.


The research, originally published as a World Bank report last year, looks at citizen reporting of potholes on the Fix My Street platform in the UK. Specifically, the authors find that individuals are 57% more likely to submit a second report to the platform if the first report resulted in the problem they reported being fixed. That’s a striking number, and while pothole reporting certainly falls within the nebulous label of “citizen participation,” it’s quite different from most of the ways the term gets used.

Those differences are important. Presumably, prior success motivates people’s decisions to engage politically in all kinds of ways (voting, starting a petition, community gardening), but it likely does so in dramatically different ways for different activities. Much of this will come down to how people feel about previous successes and whether or not they experience their activity as successful (a lack of perceived efficacy is often associated with civic disengagement).

This introduces another key difference between this piece, which deals with objective political efficacy (whether or not political activity is actually effective), and mainstream research, which tends to look at perceived political efficacy (whether people think that their actions are effective). The latter is much easier to measure, which is why it’s been the standard for political science over the last half century. Surveys have their problems, but we know how to use them.

Measuring objective efficacy is harder because politics is messy. It will almost never be easy (or non-contentious) to causally attribute government policies or actions to traditional political participation like voting or mobilizing campaigns. But that’s also perhaps what’s most exciting about this research. It IS easy with citizen reporting platforms (or at least easier), which raises the question of how else objective efficacy measures might be applied.

This might be one of the few areas where big data and digital methods provide unexpected opportunities. I’d love to see more work trying to measure objective political efficacy, perhaps by comparing the wording of campaigns and legislation/political platforms, or by analyzing changes in network composition in timelines related to policy changes. This would be a fantastic topic for one of the @digitalmethods summer or winter schools. With refined methods and a handful of studies, who knows, we might even be able to start comparing the efficacy of different types of political participation (dare we think, transparency and accountability initiatives). It would at least be a first step towards that holy grail.
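To make the wording-comparison idea a little more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the campaign demands, the legislation excerpts, and the assumption that TF-IDF cosine similarity is a sensible first proxy for whether a demand was taken up. It’s a thought experiment about what an “objective efficacy” measure might start to look like, not a validated method.

```python
# Illustrative sketch only: a crude proxy for "objective efficacy" that
# compares the wording of a campaign's demands to the text of legislation
# enacted afterwards. All texts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

campaign_demands = [
    "Publish all municipal contracts and spending data as open data.",
    "Introduce participatory budgeting for at least five percent of the city budget.",
]
enacted_legislation = [
    "The council shall publish contracts and expenditure records in open formats.",
    "A pilot participatory budgeting scheme covering one percent of the budget is established.",
]

# Build TF-IDF vectors over both sets of texts, then compute pairwise
# cosine similarity: higher scores suggest campaign language was echoed
# in the legislation that followed.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(campaign_demands + enacted_legislation)
scores = cosine_similarity(
    matrix[: len(campaign_demands)], matrix[len(campaign_demands):]
)

for demand, row in zip(campaign_demands, scores):
    print(f"{demand[:50]:<50} best match score: {row.max():.2f}")
```

A real study would obviously have to deal with paraphrase, timing and causal attribution rather than surface wording, which is exactly where the messiness of politics comes back in, but it gives a sense of the kind of measure digital methods could make tractable.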

There’s a long way to go towards such work, but that seems to me to be the most profound contribution of this article. And while that dramatic 57% figure is certainly important to people running Fix My Street-style reporting platforms, its broader implications in this paper deserve a significant pinch of salt.
