Methodical Snark: critical reflections on how we measure and assess civic tech

Roundup: evidence on the power of knowing who’s watching, nothing disruptive about open data research, and wet string.


Holy moly, there was a lot of civic tech research chatter last week. December push FTW. I’ve tried to order it according to utility, but don’t forget to skim the methodological and community stuff.

Findings

Social pressure. When deciding how to respond to a FOIA request, Korean officials are influenced more by request-related factors (favorability to the agency, difficulty of processing, purpose of the request) than by institutional or environmental factors (based on records analysis of 2,087 requests, interviews and decision tree analysis). Yet the audience effect seems alive and well in other contexts: a field experiment in Peru demonstrates that announcing government-backed civil society monitoring reduces the cost, but not the time, required to produce public works, suggesting decreases in corruption.

Tech and Democracy. Panel data from 1960-2010 suggest that increased respect for civil and political rights boosts GDP, and this conference paper suggests that access to ICTs correlates with electoral integrity in East Africa (but the argumentation is pretty specious). Meanwhile, the World Bank put out a case study on the state of participatory rule-making, demonstrating that it’s on the uptick. I did not find it helpful.

Researching the researchers. The new book, Social Dynamics of Open Data, has an intro chapter on the state of open data research. Surprise: open data research straddles a wide variety of fields, but it is being conducted primarily by men in the global north who are affiliated with universities, and when collaboration happens, it tends to stay within institutions.

Dark Social. It’s not as sexy as the name implies; it’s about sharing news and info via email or private messaging apps instead of Facebook. But it is interesting, and a CIMA analysis of marketing research suggests that perceived surveillance has people avoiding social sharing in repressive countries.

Lessons learned

Jed Miller sums up MAVC research on adaptation and learning in civic tech programs. In sum, it’s hard and happens in a lot of different ways.

Open data training with African government insiders demonstrates that local case studies are important, that trainings need to accommodate different knowledge levels, and that peer learning should be balanced with formal training. A related learning needs survey (no methods info) finds that African government focal points for open data want to learn about data viz, metrics and innovation, and want to do so in face-to-face training.

Elsewhere, Luminary Labs reflects on running 17 large-scale prize competitions targeting “society’s most complex problems” (including at least one surprising lesson: “hug the lawyers”), and Atanu Garai reflects on 3 years of mobile money payment schemes in India (processes are hard to scale, mobile network operators are inflexible, and their practices vary greatly).

Peripheral insights

Behavioral economists suggest cognitive mechanisms that underpin social exclusion (for example, “adaptive preferences,” in which an oppressed group views its oppression as natural or even preferred). Meanwhile, the evidence pendulum for police body cams swings again, and a study of comments on medical research finds that tentatively expressed findings correlate with disagreement about what the research means. Be clear, people.

Community and Resources

Snark well played. The Engine Room’s report on Responsible Data in Open Contracting got a harsh critique from @timdavies, who suggested it “risks damaging the field by failing to provide clarity, and asking scattergun questions without a clear way of mapping those to the particular decisions.” The Room responded in comments by promising to issue a revised version and raising a couple of points worth further debate. That’s a great response and exactly how the field should be holding itself to higher standards. Well played to all parties.

Calls to action: A new report from the Beeck Center (26 pgs) calls on intermediaries to “design safe environments to facilitate data sharing in the low-trust and politically sensitive context of companies and governments”; Deloitte’s new report (21 pgs) wants to re-imagine measurement for social impact work, but doesn’t tell us how to do it; and Rachel Botsman wants to start conceptualizing “distributed trust.”

Happenings: Oxfam is starting a meta-analysis of over 100 program evaluations, with results on accountability and governance coming soon; the UN launched a new portal for migration data; and Development Gateway blogs on how hard it is to cull useful info from IATI data.

#SAD: Oh, and fake news and closing civic space weren’t depressing enough? CPJ’s data is out on journalists killed and imprisoned in 2017.

In the Methodological Weeds

The Center for Evaluation Innovation has a fantastic report/guide on contribution analysis (i.e., how to determine and demonstrate whether advocacy contributed to big political wins, or whether they would have happened anyway or differently).

Enrico Bertini on using data viz to analyze data. Good weeds, though it’s important to always combine viz with tables (recall the Datasaurus Dozen).
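To make that point concrete, here’s a minimal sketch of the table-plus-plot habit in Python (my illustration, not Bertini’s), using Anscombe’s quartet, the older cousin of the Datasaurus Dozen: four datasets whose summary statistics are nearly identical but whose scatter plots look nothing alike. It assumes seaborn and matplotlib are installed; seaborn’s bundled “anscombe” example dataset (columns dataset, x, y) is fetched over the network on first use.

```python
# Summary tables and plots of the same data can tell very different stories.
# Anscombe's quartet: four x/y datasets with near-identical means, spreads
# and correlations, but wildly different shapes when plotted.
import seaborn as sns
import matplotlib.pyplot as plt

# seaborn ships Anscombe's quartet as an example dataset
# (downloaded from the seaborn-data repository on first call).
df = sns.load_dataset("anscombe")

# The "table" view: grouped summary statistics look almost interchangeable.
summary = df.groupby("dataset").agg(
    x_mean=("x", "mean"),
    x_std=("x", "std"),
    y_mean=("y", "mean"),
    y_std=("y", "std"),
    xy_corr=("y", lambda y: df.loc[y.index, "x"].corr(y)),
)
print(summary.round(2))

# The "viz" view: one scatter panel per dataset shows they are nothing alike.
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, fit_reg=False, height=2.5)
plt.show()
```

The table alone says “four copies of the same data”; the plots say otherwise, which is exactly why a stats-only or viz-only readout can mislead.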

Plus there’s a good video on the problem of keeping data anonymous, and a paper with strategies for dealing with measurement validity and multi-level longitudinal analysis when working with big data.

Case studies

 

Big Teases

Factors Influencing Decisions about Crowdsourcing in the Public Sector: A Literature Review is actually an assessment of crowdsourcing in Poland, which finds that the type of task to be accomplished is important.

The Engine Room’s blogpost, Measuring Social Impact Can Be Hard, But Here’s Why We’re Doing It, never actually describes how they’re doing it.

The World Bank’s case study on participatory rule-making is a difficult read that doesn’t offer much information. I summed it up here.

For the titles

 
