Why No One Cares About Your Stupid Research

Clarification: this title might read as if I’m calling individual research outputs stupid. I’m not, and I feel kind of bad for giving that impression. It’s the collective failure to consider relevance that I’m taking issue with here.

The purpose of this article is to […] suggest that we as a community debate whether we want to do something about our irrelevance and the internal norms and institutions that contribute to it.

So begins a wonderfully concise recent essay on the irrelevance of political communication research. This is the field in which my PhD work is lodged, and his arguments resonated strongly with me. It’s a field with a wealth of useful insights and findings that are nonetheless hard to access and communicate, in part because the polcom field is exceptionally introverted. As Kleis Nielsen puts it:

Both informally and formally, we privilege a certain way of producing peer-reviewed work for a narrow academic audience to a degree that risks relegating everything else—interdisciplinary collaboration, teaching, service, let alone various forms of public engagement—to the margins (2).

There’s something similar going on in civic tech research, though there the incentives towards introversion are more weirdly skewed by donor preferences. What the two fields have in common is an apparent failure to think critically about how to get their research used.

I recently heard about a major purveyor of civic tech research complaining that “practitioners aren’t engaging with the evidence.” This drives me nuts. Of course they aren’t, and for at least three good reasons:

  1. Because the evidence on offer is dense and hard to read. This is just as true of popular grey literature and practitioner research as it is of peer-reviewed articles. In fact, it’s often worse. By trying to please too many audiences, civic tech researchers make their outputs difficult for all audiences to engage with.
  2. Because people are busy working. The people you want to benefit from your research tend to have too much to do and too few resources. You’re lucky to get them through a blogpost, much less a treatise.
  3. Because it doesn’t tell them anything new. People doing on-the-ground civic tech work are often the smartest people in the room, and action strategists have a lot of tested hunches about what works and what doesn’t. If research only confirms those assumptions, then it shouldn’t be targeting practitioners in the first place, but rather the funders and decision-makers that support them. The most ridiculous aspect of all of this is that we still don’t know what kind of evidence and research people doing program design and implementation actually want or would use.

These are surmountable hurdles, but together they amount to a very basic design flaw. It’s not unlike the truism that apps should be designed around user needs and preferences rather than around the flashiest tech: civic tech research, too, should be designed around user needs and preferences. Not asking people what they want, and then complaining when they don’t use what you produce, just reproduces the worst of this field, and with ginormous budgets to boot.

This is something the civic tech research community needs to tackle head on. It’s about six years too late, but upcoming events like #TICTeC provide useful fora to get it started. The important thing now, in civic tech as in political communication research, is simply to have a conversation about whether we care that no one is using the research. Or whether we should just. Keep. Funding it.
