Methodical Snark critical reflections on how we measure and assess civic tech

Roundup: participation is up in Latin America, nobody’s paying for the data revolution, and somebody finally asked the activists what research they actually want


Bouncing back from World Press Freedom Day and on the cusp of Open Gov Week, it’s good to be back on the weekly roundup. I’ve also been collecting links for the four months I was offline, and might get around to writing those up at some point too (there are more than 600, ugh).

Anyway, here’s what happened in civic tech research last week:


Across Latin America, “there has been an increase in people’s willingness to participate in political protests and to sign petitions – in some [countries], those levels have been at their highest compared to the past ten years.” This from the inaugural America’s Civic Empowerment Index, from Humanitas360 and The Economist Intelligence Unit. Because we need *more* democracy rankings.

“Information campaigns are a cost-effective way to increase voter leverage over politicians, but incumbent politicians respond by increasing vote buying, leading to no effect on incumbent vote share.” This is one of many nuggets in David Rogger’s summary post on the latest research on the quality of governance (drawn mostly from RCTs, including some interesting treatments of political accountability as a positive externality for tech diffusion).

“Despite the excitement surrounding the data revolution, […] financing for statistics has not materialized to support the revolution.” That’s the main, unsurprising takeaway from @OpenDataWatch’s new report on financing for development data, which also includes a useful breakdown of funding modalities and their comparative strengths and weaknesses.

World Vision and Save the Children have released lessons from piloting a mobile health app to manage acute malnutrition in five countries. The lessons emphasize the importance of government buy-in, appropriate tech, and ongoing tech support. Most interesting, I think, is a discussion of how the app surfaced resistance to the demands of treatment protocols, which frontline care providers could skip when using paper forms.

A survey of 311 users in San Francisco (n ≈ 2–3k per year over three years) suggests that the use of tech for crowdsourcing is bridging rather than exacerbating participatory divides. Specifically, the authors find that underrepresented groups (defined by age, gender, race, income, education, and rent) are well represented. Interestingly, they find that “web/mobile use of 311 is generally more representative of the citizenry than phone-based use.” At bottom though, as the authors mention in an 88-word aside at the end of the article, San Fran is a pretty unique place, and it’s not entirely clear what can be generalized here.

Context matters, after all, and “[government] Crowdsourcing efforts are more likely to be successful if organizations adopt a coherent strategy to implement an overarching engagement framework, and provide significant resources and managerial processes.” Leadership is important too. These unsurprising findings come from a study of >2,000 crowdsourcing projects run by 18 Australian local governments. They are accompanied by a useful framework for thinking about how institutional factors and design choices influence crowdsourcing outcomes.

Lastly, Lant Pritchett suggests 6 important and politically incorrect findings in development research.

How Change Happens

@jonathanfox707 has a new article on accountability keywords, looking at how language and concepts empower effective advocacy. He notes that the language we use to pursue accountability matters, and suggests how to find the terms and concepts that will resonate the most.

On the institutional tip, there’s also a really great write-up of DFID’s digital ninjas, with lots of reflections on how to navigate large and complex bureaucratic landscapes. It’s something we should be thinking about more often, and it’s nice to hear a story of it being done well.

The Role of Research

@AnnenbergPenn‘s @jreme100 et al. put out a report on the research needs of digital rights advocacy. Based on a survey (n=79) of global advocacy organizations, the report notes significant limitations on organizations’ capacity to conduct useful research (“…many organizations do not have the time, funding, or expertise[…]. This is especially true for digital rights-related activism, where methods […] are often highly technical” (4)). It also captures where organizations see the gaps in accessible external research (corporate data, comparative legal data, qualitative data on user experiences for “human angle stories”) and how they experience the potential for collaboration with researchers. It’s essential reading for anyone who cares about making research useful to advocacy, and a good model for the civic tech field.

In the world of less empirical relevance, Silvio Waisbord is calling for a “sociological approach to link the study of media activism to a broad conception of social change.” STS enthusiasts, attack!

Resources and Tools

MySociety has made all their data available on a portal, the Harvard Ash Center has published a searchable database of maps and visualizations using government data, and researchers from George Mason University have developed an AI model to help social media campaigners “to infer in real-time the different user types participating in a cause-driven hashtag campaign.”

For all you nerds who care about useful project data, The Goldilocks Challenge is a new book promoting the CART principles for collecting the right kind of M&E data (Credible, Actionable, Responsible, and Transportable). There’s a website with a toolkit, resources, and lots more info.

In the Methodological Weeds

The Real Geek team shares some lessons on using Qualitative Comparative Analysis and argues that it’s particularly well suited to “evaluations of policy influence and citizen voice interventions.” Their discussion is a fantastic look at the demands a method makes on teams behind the scenes, and a must-read for anyone considering mixed-methods approaches to evaluating civic tech.

