Methodical Snark: critical reflections on how we measure and assess civic tech

TAI’s guide to contextualizing learning and evidence


What is it?

It’s a 16-page PDF that distills a lot of background work.

Most immediately, this is a method for contextualizing evidence for the design of transparency and accountability programming. The method is simple and based on two steps:

  1. Ask specific questions when reviewing evidence (including specification of the accountability actor and whom they seek to hold accountable, as well as specified hypotheses and causal pathways)
  2. Organize evidence according to contextual characteristics (this method uses 10 indicators drawn from comparative international data, to distinguish between things like “regime type”, “professionalization of bureaucracy” and “strength of horizontal accountability institutions”).

Specifying causal dynamics and context allows for a more precise identification of which evidence is actually relevant when reviewing it. The idea is that this makes evidence more useful and, by extension, strengthens the design of projects.
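To make the two steps concrete, here’s a minimal sketch of what organizing and filtering evidence by contextual indicators might look like in code. This is my own illustration, not the authors’ tool: the indicator names, field names, and example entries are all hypothetical stand-ins for the guide’s actual indicators.

```python
from dataclasses import dataclass

# Hypothetical contextual indicators, loosely modeled on the guide's idea of
# ten comparative indicators (these three names are illustrative only).
@dataclass(frozen=True)
class Context:
    regime_type: str                       # e.g. "electoral democracy"
    bureaucratic_professionalization: str  # e.g. "high", "low"
    horizontal_accountability: str         # e.g. "strong", "weak"

# Step 1 in miniature: each piece of evidence must declare its accountability
# actor, whom that actor seeks to hold accountable, and its causal pathway.
@dataclass(frozen=True)
class Evidence:
    title: str
    accountability_actor: str  # who seeks accountability
    target: str                # whom they seek to hold accountable
    hypothesis: str            # specified causal pathway
    context: Context           # step 2: tagged contextual characteristics

def relevant_evidence(corpus: list[Evidence], program: Context) -> list[Evidence]:
    """Keep only studies whose context matches the planned program's,
    so designers compare like with like."""
    return [
        e for e in corpus
        if e.context.regime_type == program.regime_type
        and e.context.horizontal_accountability == program.horizontal_accountability
    ]

# Hypothetical usage: filter a tiny corpus for a program planned in an
# electoral democracy with weak horizontal accountability institutions.
corpus = [
    Evidence("Tax transparency study", "citizens' groups", "revenue authority",
             "disclosure -> public pressure -> compliance",
             Context("electoral democracy", "low", "weak")),
    Evidence("Audit institution study", "supreme audit body", "line ministries",
             "audit findings -> legislative sanction -> reform",
             Context("closed autocracy", "high", "strong")),
]
program = Context("electoral democracy", "low", "weak")
for e in relevant_evidence(corpus, program):
    print(e.title, "|", e.hypothesis)
```

The point of the sketch is the discipline it forces: every piece of evidence has to declare its actors, its causal pathway, and its context before it can be compared with anything else.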

The authors have tested this approach in three topical evidence reviews (taxation and accountability, international standards on accountability, and information and accountability [forthcoming]).

Most importantly, they’ve also started building a tool for querying the available evidence, and there’s a video explaining how to use it.

Should you care?

Yes. If you are at all interested in using evidence to design better programs: unequivocally yes.

Misunderstandings about how and when to learn across contexts might be the biggest obstacle to meaningful evidence use and learning in the civic tech and accountability community.

I’ve written before about the tyranny of the unhelpful case study in civic tech research, and what the field needs to do to make sense of case studies. I’ve also flagged mechanism mapping as the most accessible and useful method for people doing actual program design to learn from other contexts.

This approach builds on mechanism mapping by creating typologies for the most relevant contextual factors, and it makes everything incredibly accessible by building a tool!!!

Where it’s coming from

This was produced for the TA/I donor collaborative by Lily Tsai and colleagues at the MIT GovLab. I don’t see any obvious biases or interests at play; it just looks like a smart approach.

Notably, the authors are a nice mix of academics and research-seasoned practitioners, which helps explain how something this accessible and useful resonates so strongly with rigorous qualitative methods from the social sciences (I’m thinking particularly of Collier et al’s work on conceptual typologies and Bennett’s work on typological theorizing and contrast typologies, to identify the scope conditions for specific causal mechanisms [George and Bennett 2005, 233-262]).

Well done. I hope this gets tested widely, and that the authors get some useful feedback to refine it.

 
