Methodical Snark: critical reflections on how we measure and assess civic tech
Category: Random Snark

Updates, commentary, live blogs, cries for help. There’s no telling what will show up in this category.

Clickbait for accountability pundits: this month’s most misleading blog title

This blog post describes an MAVC learning event, which in turn identified “7 streams of tech-enabled change that have proven to be effective in pursuing accountable governance.” Those seven streams are listed below, and while they represent a useful typology of tech-for-accountability programming, they do not represent activities that connect governments with their citizens.

Designing useful civic tech research at scale: why methods matter

The Hewlett Foundation has asked for help in crowdsourcing research design for citizen reporting on public services. This is great; crowdsourcing is a fantastic way to design useful research, and it shows that Hewlett is maintaining the strong commitment to evidence and rigorous question-asking that is so important to this field. The post has already generated some useful discussion, and I’m sure that they are...

Cosgrove goes to Washington

I’ve just finished my first week at Georgetown University, where I’ll be through the end of 2017 (locals can find me at cw986). I’m here to do field work for a case study on the institutional context of open government, and it’s an exciting theme to be digging into right now, as the US revamps work on its much-speculated OGP participation.

The (other) problem with scholarship on digital politics

Update: My review of Analytical Activism is up at Information, Communication & Society (gated). Here’s a free e-print and the preprint. One of the great dangers of the digital moment we are currently living through is that the discipline as a whole will succumb to a particularly virulent form of availability bias. It is easy to gather Twitter data. It is harder to navigate the Facebook...

New evidence on the domestic policy influence of global performance assessments

Using a multilevel linear model to account for the hierarchical structure of our survey data, we find evidence that performance assessments yield greater policy influence when they make an explicit comparison of government performance across countries and allow assessed governments to participate in the assessment process. This finding is robust to a variety of tests, including country-fixed and...
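For readers curious what that setup looks like in practice, here is a minimal sketch of a multilevel (random-intercept) linear model with survey responses nested within countries, run on synthetic data; the variable names (influence, cross_country, participatory) are hypothetical stand-ins, not the authors’ actual variables:

```python
# Hypothetical sketch of a multilevel linear model for hierarchical
# survey data: responses nested within countries, random intercept
# per country. All variable names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_resp = 30, 20
country = np.repeat(np.arange(n_countries), n_resp)

country_effect = rng.normal(0, 0.5, n_countries)[country]  # country-level noise
cross_country = rng.integers(0, 2, n_countries)[country]   # assessment compares countries
participatory = rng.integers(0, 2, n_countries)[country]   # government participates

influence = (0.4 * cross_country + 0.3 * participatory
             + country_effect + rng.normal(0, 1, len(country)))

df = pd.DataFrame({"influence": influence,
                   "cross_country": cross_country,
                   "participatory": participatory,
                   "country": country})

# The random intercept per country is what accounts for the
# hierarchical structure; the fixed-effect coefficients on
# cross_country and participatory are the quantities of interest.
model = smf.mixedlm("influence ~ cross_country + participatory",
                    data=df, groups=df["country"])
print(model.fit().summary())
```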

Measuring women’s empowerment: pushing composite indicator frameworks on projects?

While the framework remains unchanged, the characteristics and indicators that make up the index change from context to context, aiming to capture the characteristics of an ‘empowered woman’ in the socio-economic context under analysis. The index provides a concise but comprehensive measure of women’s empowerment, while also allowing a breakdown of the analysis by level of change or the individual...
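As a rough illustration of how such a framework can hold the structure fixed (characteristics → indicators → index) while swapping the indicator set per context, here is a hedged sketch; the characteristics, indicators, and weights below are all hypothetical:

```python
# Illustrative composite-indicator sketch: the framework is fixed,
# but the indicator lists and weights are context-specific.
# All names and weights here are hypothetical.
indicators = {
    "economic": {"owns_assets": 0.5, "earns_income": 0.5},
    "social":   {"mobility": 0.5, "group_membership": 0.5},
}

def empowerment_index(respondent: dict, spec: dict) -> dict:
    """Score each characteristic, then average into one index.

    Returning per-characteristic scores alongside the overall
    index is what permits breakdown of the analysis by level.
    """
    scores = {
        charac: sum(w * respondent[ind] for ind, w in inds.items())
        for charac, inds in spec.items()
    }
    scores["index"] = sum(scores.values()) / len(spec)
    return scores

respondent = {"owns_assets": 1, "earns_income": 0,
              "mobility": 1, "group_membership": 1}
print(empowerment_index(respondent, indicators))
```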

Case by case: what development economics can teach the civic tech and accountability field about building an evidence base

Warning: long post, deep weeds. Last week saw some really interesting thinking in the development economics blogosphere, focused on design questions for external validity (the applicability of case-specific findings to other cases). This is a central question for research on civic tech and accountability programming, which talks a lot about wanting an evidence base, but remains dominated by case...

Democracy in the eye of the beholder

I love it when messy methods get topical, and this might be one of the very few silver linings to come out of Trumpland. December saw the publication of an IPSR special issue on measuring democracy, and then shit got real this week, when Andrew Gelman began a stream of posts criticizing the application of EIP methodology to the recent presidential elections in US states, and especially the...

