Methodical Snark

critical reflections on how we measure and assess civic tech
This is a blog about how the civic tech and accountability field measures its work and its impact. It takes a critical perspective, but tries to be more productive than conference snark. It's an effort to highlight how much work is being done in silos across the academic-NGO divide, to demystify important methods and findings, and to call out research that is sloppy or unhelpful. Scroll for the latest posts, or check out the core content types:
Weekly Research Roundup: findings, happenings, and absurdity in civic tech research.
I Read This for You: summaries for those with more interest than time.
Mini Lit Reviews: when I wonder about something, I check what the research says and write it up.

Latest stories

Roundup: why people participate in politics and tweet storms, problems with generalizing research, throwing statistics out with the bathwater

Last week brought interesting findings on political mobilization, now with brain scans. Lots of discussion about appropriate methods for measuring government performance, improving statistics, and facilitating adaptive programming. Useful resources from the Engine Room and Beautiful Rising. Oh, and disco!

Yet another comparative metric on freedom of expression (kind of)

The Expression Agenda (XpA) was just released by ARTICLE 19. It's prettier than most, and it's nice to see free expression data in something other than a map or a list; but really, do we need this? The metric doesn't contribute any new data, and the visualization is hard to parse. The report buried in the background is far more important, but likely only for advocacy on global policy.

Roundup: strategies for institutionalization in govt, social media activism is stressful, and nobody reads research.

Findings: Social media activism is stressful, at least in Pakistan, according to a recent survey (N=237, convenience sample) which found significant correlations between stress levels and political activism on social media. Users of Greece's national transparency and anti-corruption website say they trust government more since the website was established (web survey n=130, availability...

Roundup: degrees of responsiveness, evidence on smart participation design, how digital mobilization works, civic engagement with the dead

Lots of findings in civic tech research last week. Evidence on how to build open procurement and citizen participation initiatives, field experiments on degrees of responsiveness, and accountability workshops gone wrong. New resources on crowdsourced legislative processes and evaluating police accountability, plus insights on citizen policy preferences and lots of case studies. All of this...

When do global do-gooders influence government behavior? (a mini lit review)

Here's a wide-ranging exploration of the literature on international relations, policy diffusion, public administration, global policy assessments, and multi-stakeholder initiatives, in which I try to draw some conclusions about what we know and what we don't. I wrap it up by proposing six research questions that could directly inform the design of global do-goodery. There's a bulleted summary up top.

Mechanism Mapping: a tool for determining when programs can be scaled or adapted

A recent Oxford white paper proposes mechanism mapping as a method for determining when the results of policy evaluations should be scaled or adapted to other contexts. This is a compelling contribution to ongoing debates about the external validity of RCTs. More importantly, it's a simple and useful tool for thinking about when and how civic tech programs work across different contexts.

Get in touch

Suggest research to review or topics for mini lit reviews. Ask questions or tell me why I'm wrong.