Case by case: what development economics can teach the civic tech and accountability field about building an evidence base

Warning: long post, deep weeds.

Last week saw some really interesting thinking in the development economics blogosphere, focused on design questions for external validity (the applicability of case-specific findings to other cases). This is a central question for research on civic tech and accountability programming, which talks a lot about wanting an evidence base, but remains dominated by case studies, enthusiasm and a handful of amateur researchers. We see this clearly a couple of times a year (at TicTech, the Open Data Research Forum), where the community gathers to talk about evidence, share novel case studies, acknowledge that we can’t generalize from case studies, and then talk some more as if we can.

If we want to develop the kinds of general heuristics and rules of thumb that would be useful for the people who actually design and prioritize programming modalities, the kind that would make it possible to learn across country contexts, then we have to be smarter about how we design our research and conceptualize our evidence base. There’s a lot to learn from development economics in that regard. Development studies is like pubescent civic tech and accountability’s older uncle, who used to be cool, but still knows how to get shit done. In particular, there was a lot to learn from last week’s discussions about generalization and validity.

Continue reading “Case by case: what development economics can teach the civic tech and accountability field about building an evidence base”

Democracy in the eye of the beholder

I love it when messy methods get topical, and this might be one of the very few silver linings to come out of Trumpland. December saw the publication of an IPSR special issue on measuring democracy, and then shit got real this week, when Andrew Gelman began a stream of posts criticizing the application of EIP methodology to the recent presidential elections in US states, and especially the claim/meme that North Carolina is no longer a democracy.

Continue reading “Democracy in the eye of the beholder”

The long haul towards evidence: information in elections edition

Civil society groups emphasize the need for high-quality public information on the performance of politicians. But does information really make a difference in institutionally weak environments? Does it lead to the rewarding of good performance at the polls, or are voting decisions going to be dominated by ethnic ties and clientelistic relations?

Enter the Metaketa project’s first phase, running 7 experimental evaluations in 6 countries to answer that question: does more info change voter behavior? The results and synthetic analysis are all coming out early next year, which is exciting, but a long way off. I was also happy to see that they have a pre-analysis plan for that synthesis work (basically a self-control mechanism to ensure that data doesn’t get fudged during analysis to support preferred outcomes; unfortunately, they don’t really get used). Continue reading “The long haul towards evidence: information in elections edition”

Gaps in Human Rights Research, Advocacy and Compliance

How human rights scholars conceal social wrongs.

That’s the title of an Open Democracy article published yesterday, which takes issue with the way that international comparative indices (such as the CIRI Human Rights Data Project and Freedom in the World) hide injustice in rich western democracies. Specifically, the authors are angered by the US government’s consistently high ranking, despite systematic disenfranchisement of the African-American electorate. Continue reading “Gaps in Human Rights Research, Advocacy and Compliance”

When Indicators Get in the Way, Go Report Minimal?

Now, there’s a lot we could debate here about data collection processes, or tools, or when and how data clerks should be employed – but that’s not the point. Instead, we suggest that a growing body of qualitative evidence indicates that the costs of collecting and reporting on the data that inform high-level performance indicators (for various agencies) can be quite high – perhaps higher than the M&E community typically realizes. These opportunity costs were echoed across countries and sectors; discussions with agricultural staff in Ghana, for example, suggest that many extension workers spend up to a quarter (or more) of their time collecting and reporting data.

That’s from a recent ICTworks blogpost. It’s focused on a specific initiative (an HIV clinic in TZ), but the spirit will ring true to anyone who’s done donor reporting from the field (or at all, really). The idea that reporting gets in the way of work is familiar, but it’s often even worse in a tech context, when there’s a presumption of data abundance, and finding metrics to strengthen work and anticipate roadblocks can be hard enough. Continue reading “When Indicators Get in the Way, Go Report Minimal?”

All the books on researchers and the interwebs

Or at least the three I had in my bookmarks. But I feel like there’s been a lot in recent weeks. Are there others to add to this list?

Being a Scholar in the Digital Era: Transforming Scholarly Practice for the Public Good (Jessie Daniels and Polly Thistlewaite, Eds).
Strong normative bent in this one, for open research as well as social impact. Explicit focus on collaborating with activists. I look forward to reading it. Their blurb: Continue reading “All the books on researchers and the interwebs”