Methodical Snark

critical reflections on how we measure and assess civic tech
This is a blog about how the civic tech and accountability field measures its work and its impact. It's based on a critical perspective, but tries to be more productive than conference snark. It's an effort to highlight how much work is being done in silos across the academic-NGO divide, to demystify important methods and findings, and to call out research that is sloppy or unhelpful. Scroll for the blog roll, or check out the core content types:
Weekly Research Roundup: findings, happenings, and absurdity in civic tech research.
I Read This For You: summaries for those with more interest than time.
Mini Lit Reviews: when I wonder about something, I check what the research says, and write it up.

Latest stories

Why No One Cares About Your Stupid Research

Recent reflections on the irrelevance of academic political communication research should prompt the civic tech community to think critically about why no one is using all the research that gets produced these days. It's time for a frank conversation that's long overdue.

Roundup: evidence on the power of knowing who’s watching, nothing disruptive about open data research, and wet string.

Highlights from civic tech research last week included calls for intermediaries to build safe spaces for government data, an unsurprising stocktaking on open data research, and a productive research takedown by someone who's not me. Plus, there are piles of almost-useful learnings, useful help for contribution analysis and data analysis with visualization, and tips for making research useful. Also...

Short Summary of the Bank case study on participatory rule-making

The title of this report promised a lot, so I was disappointed to see how little the document had to offer. It's essentially a read of the Bank's GIRG data relevant to participatory rule-making, but it fails to offer much insight. This is disappointing given how much dynamic work is being done in the field, like GovLab's crowdlaw project.

Roundup: why people participate in politics and tweet storms, problems with generalizing research, throwing statistics out with the bathwater

Last week had interesting findings on political mobilization, now with brain scans. Lots of discussion about appropriate methods for measuring government performance, improving statistics, and facilitating adaptive programming. Useful resources from the Engine Room and Beautiful Rising. Oh, and disco!

Yet another comparative metric on freedom of expression (kind of)

The Expression Agenda (XpA) was just released by Article19. It's prettier than most, and it's nice to see free expression data in something other than a map or a list; but really, do we need this? The metric doesn't contribute any new data, and the visualization is hard to parse. The report buried in the background is far more important, but likely only for advocacy on global policy.

Roundup: strategies for institutionalization in govt, social media activism is stressful, and nobody reads research.

Findings: Social media activism is stressful – at least in Pakistan, according to a recent survey (N=237, convenience sample) which found significant correlations between stress levels and political activism on social media. Users of Greece's national transparency and anti-corruption website say they trust government more since the website was established (web survey, n=130, availability...


Get in touch

Suggest research for review, or topics for mini lit reviews. Ask questions, or tell me why I'm wrong.