Methodical Snark

critical reflections on how we measure and assess civic tech
This is a blog about how the civic tech and accountability field measures its work and its impact. It’s based on a critical perspective, but tries to be more productive than conference snark. It’s an effort to highlight how much work is being done in silos across the academic-NGO divide, to de-mystify important methods and findings, and to call out research that is sloppy or unhelpful. Scroll for the blog roll, or check out the core content types:
I READ THIS FOR YOU: summaries for those with more interest than time. 
MINI LIT REVIEWS: when I wonder about something, I check what the research says, and write it up.
RESEARCH ROUNDUP: review of recent research on topical issues.
RANDOM SNARK: reflections and questions about using evidence in civic tech design.

Latest stories

Click bait for accountability pundits: this month’s most misleading blog title

This blogpost describes an MAVC learning event, which in turn identified “7 streams of tech-enabled change that have proven to be effective in pursuing accountable governance.” Those seven streams are listed below, and while they represent a useful typology of tech for accountability programming, they do not represent activities that connect governments with their citizens.

research links w 40, 17

Briefly: European governments are making decisions behind closed doors, according to research by Access Info. A survey on citizen uptake of a reporting platform (Linz, Austria, n=773) finds mixed results on motivations for participation, but community disconnectedness and previous reporting experience seem strong predictors. A natural experiment with @openstreetmap‏ data suggests that data...

Evidence on social accountability programs

…social accountability processes almost always lead to better services, with services becoming more accessible and staff attendance improving. They work best in contexts where the state-citizen relationship is strong, but they can also work in contexts where this is not the case. In the latter, we found that social accountability initiatives are most effective when citizens are supported to...

research links w 38-39, 17

New Media and Society has a special issue coming up on digital activism. It looks like a collection of cases, with little synthetic analysis or commentary. See the intro article in post print here. There’s also a special issue of the Qualitative Research journal focused on how qualitative methods should respond to the onslaught of new social data, including ethnographic methods for...

Designing useful civic tech research at scale: why methods matter

The Hewlett Foundation has asked for help in crowdsourcing research design for citizen reporting on public services. This is great: it’s a fantastic way to design useful research, and it shows that Hewlett is maintaining the strong commitment to evidence and rigorous question-asking that is so important to this field. The post has already generated some useful discussion, and I’m sure that they are...

research links w 37, 17

I’m going to start prioritizing brevity, leaving out some of the absurdity and academic opps; let me know if you miss anything. Findings: How to improve the quality of crowdsourced citizen science data? Technical measures help, but only when accompanied by instructions, according to an empirical study of four cases. Meanwhile, open data on public safety and transportation are the most...

Cosgrove goes to Washington

I’ve just finished my first week at Georgetown University, where I’ll be through the end of 2017 (locals can find me at cw986). I’m here to do field work for a case study on the institutional context of open government, and it’s an exciting theme to be digging into right now, as the US revamps work on its much-speculated OGP participation.

The (other) problem with scholarship on digital politics

Update: My review of Analytical Activism is up at Information, Communication &amp; Society (gated). Here’s a free e-print and the preprint. One of the great dangers of the digital moment we are currently living through is that the discipline as a whole will succumb to a particularly virulent form of availability bias. It is easy to gather Twitter data. It is harder to navigate the Facebook...

A belated summer dump (w 28-36)

So I’ve been away for a whopping 8 weeks, bouncing between holidays, summer schools, consultancies and moving the fam to DC. Somehow the internet refused to stop while I was gone. So as I get back into the swing of things, here is an abbreviated summary of the summer’s findings in civic tech research, plus a couple of choice weeds and reflections.

