Warning: long post, deep weeds.
Last week saw some really interesting thinking in the development economics blogosphere, focused on design questions for external validity (the applicability of case-specific findings to other cases). This is a central question for research on civic tech and accountability programming, a field that talks a lot about wanting an evidence base but remains dominated by case studies, enthusiasm and a handful of amateur researchers. We see this clearly a couple of times a year (at TICTeC, the Open Data Research Forum), where the community gathers to talk about evidence, share novel case studies, acknowledge that we can’t generalize from case studies, and then talk some more as if we can.
If we want to develop the kinds of general heuristics and rules of thumb that would be useful for the people who actually design and prioritize programming modalities, the kind that would make it possible to learn across country contexts, then we have to be smarter about how we design our research and conceptualize our evidence base. There’s a lot to learn from development economics in that regard. Development studies is like pubescent civic tech and accountability’s older uncle, who used to be cool, but still knows how to get shit done. In particular, last week’s discussions about generalization and validity offered plenty of lessons.