Methodical Snark critical reflections on how we measure and assess civic tech

Last Week in Civic Tech Research: the perfect storm for government as platform, the cost of infant lives and open government, and proof that size matters (for protests)

Findings:

A review of 133 cross-sectional studies finds that the most significant political effects of social media use across contexts have to do with the expression of political views on social networking sites, while an experiment with Belgian legislators confirms the WUNC thesis of protest influence on elite opinion (i.e., size matters, and so does coherence). Meanwhile, an experiment on public sector absenteeism in Punjab piggy-backed on an SMS monitoring project to find that “smartphone monitoring nearly doubled inspections at public clinics across Punjab” and demonstrated significant effects of providing evidence to policy-makers.

Looking large, but not really delivering, this Economist report on Open Government Data use (survey-based, n=1,000 across 10 countries) finds that lack of awareness is the biggest barrier to use (50% of respondents). It also cites numerous perceived benefits of OGD, but the number of respondents who believe OGD is good for government transparency is low, and varies significantly by country.

Useful Findings:

An SMS notification program for maternal health care in Guinea integrated an operations research protocol into the SMS program design and found that the program costs the government “$650 per neonatal death averted.” Want to do something similar? Thanks to @results4dev, you can. They’ve developed an Excel-based tool and methodology for costing open government programs, and validated it in Ukraine and Sierra Leone.

Resources and commentary:

@robertnpalmer gives an overview of international measurements of countries’ open data release. (He calls them measurement tools, which is a misnomer, because they’re finished assessments, not tools that you or I can apply to countries.) He provides basic information on the big four (ODB, GODI, the OECD’s OURdata Index and ODW’s ODIN), and lists a handful of others.

We’re starting to see the first synthetic conclusions from @AllVoicesCount’s massive research program of recent years. The first volley comes from Vanessa Herringshaw, whose main takeaway is that voice, accountability and responsiveness are messy stuff. She argues for systemic, strategic thinking, and chides those who aren’t embracing that approach. It’s not clear how that relates to the research, but it can’t hurt.

Cases:

Forecasting food hazards in Chicago. This case study describes a “perfect storm” of resources and support and quality open government data that “allows Government to act as a platform.”

This pre-print article compares two data strategies for anti-corruption advocacy (in Spain and Italy), highlighting how context and resources shape communication strategies and campaign styles.

A literature review and set of Kenyan case studies on adaptive programming in tech-for-transparency work suggests that existing theories be simplified, that projects get more support, and that stakeholders “keep on experimenting, networking and advocating.”

Elsewhere:

Turns out that research impact means wildly different things to different researchers, and the battle for the data revolution rages on, with another 300-page PDF report (this time with six relatively clear actions for governments to take in order to get their acts together).
