Methodical Snark: critical reflections on how we measure and assess civic tech

July Roundup: why civic tools fail and advocacy works, and much, much more


Look out, three weeks’ worth here. Because summer.

Findings

Contradicting much recent research, global panel data from 176 countries (2003-2014) finds no evidence that “E-government has a positive impact on corruption reduction,” though “a country’s government effectiveness, political stability and economic status” do seem to help. Worse yet, there is no scholarly consensus on the instrumental value of democracy, according to this book. Back to the drawing board.

Online approaches to open government vary wildly in France, Italy and the UK, especially in terms of participatory platforms, according to a study of 979 platforms published in Quality and Quantity. Meanwhile, political partisanship appears to have been a good thing for government accountability at the turn of the century, according to survey data from Europe (2002–2012, n=290,000), and survey data (n=317) suggests that Ghanaians aren’t accessing online government services because of the strong premium they place on personal interaction.

Civil society groups around the world are vulnerable to digital attacks, under-resourced and outgunned by their digital adversaries, according to a new report that describes “extensive open-source review of more than 100 organizations supporting politically vulnerable organizations, and […] more than 30 interviews with activists, threat researchers, and cybersecurity professionals.”

Most Americans have mixed feelings about social media activism, believing that it is effective at giving voice to the voiceless (64%), but also that it distracts from important political debate (77%) and instills a false sense of efficacy (71%). This from a @pewresearch analysis of #BlackLivesMatter tweets.

Limited stakeholder uptake, limited tech skills and a lack of partners are some of the reasons that civic tech tools fail, according to this report from @AlinaOstling, based on desk research and six interviews.

Tools and Useful Research

American researchers are developing “an open-source transcription engine that transforms video into searchable archives of government legislative meetings.”

@3ieNews “reviewed 56 impact evaluations and used positive deviance analysis to identify factors associated with successful advocacy programmes.” The subsequent report finds that “Key factors influencing successful advocacy programmes include who advocates, whether incentives are offered, whether the target group is offered comparison information about other groups, who delivers messages and which channels are used to disseminate information.”

@DIAL_community just released FlowKit, “an open source, state of the art and easy-to-use toolkit that will strengthen and facilitate large-scale analyses of mobile operator data for development and humanitarian purposes.”

Concepts and Case Studies

Frameworks:

This article makes conceptual distinctions between social accountability and open government models for collaboration, while this article offers a conceptual framework for understanding why individuals in government resist ideas and input from outside of government, and how this can obstruct open government processes.

This article offers an assessment framework for participatory aspects of judicial websites, based on “availability of information but also the participatory mechanisms related to e-justice and open justice,” piloted on 32 state judicial websites in Mexico (2014-2016).

Cases:

This report from the Web Foundation summarizes research from 12 countries and spells out the obstacles preventing African women from using open data.

There’s also new research on:

Community & Curation

Peer review

@TheGovLab is looking for peer review on its Blockchange Field Report, and @mySociety has a new research roundup section in their newsletter. The latest includes research on “the emotional content of parliamentary debates.”

Arguments

Feedback Labs’ massive lit review on the conditions under which information is empowering is out, asserting eight principles: interpretation is social; leaders affect interpretation of vaccination information; reinterpretation is power; demand rules; vivid, emotional narratives persuade; information must rise above the noise; incentives and repetition cement new behaviors; and ice cream melts (re: the timeliness of information).

The Skeptic’s Guide to OGP is out, purporting to provide evidence for how open government supports five kinds of outcomes (“Public Engagement Improves Public Service Delivery”, “Save Public Money through Open Procurement”, etc.).

The evidence, however, seems to be constituted by a series of endnotes attached to broad claims, like “Public engagement has had significant positive impacts on the education, health, water, and public works sectors.” That claim, for example, is endnoted to Gaventa & Barrett’s “So What Difference Does it Make? Mapping the Outcomes of Citizen Engagement,” Fox’s “Social Accountability: What Does The Evidence Really Say?” and Kosack & Fung’s “Does Transparency Improve Governance?”

Strictly speaking, these articles do support that broad claim, but it feels a bit disingenuous, as the claim seems to imply more, and the articles don’t offer definitive evidence or broad heuristics on how and when public engagement improves service delivery. Engagement with research and evidence appears just as shallow across the 82-page PDF.

Collections and Catalogues

The Gender & Development journal has a new special issue on tech, gender and development, including articles with a focus on “how digital spaces are enabling new debates both on and offline which bring spotlight and attention to sexual harassment and toxic masculinities.”

@opengovpart has also launched CitizEngage, a web portal highlighting stories of citizen engagement, including 12 country case studies.

The CrowdLaw project has published a catalogue of 100+ examples of citizen engagement in policy-making around the world.

In the Methodological Weeds

Contrary to common belief, randomised controlled trials inevitably produce biased results (LSE Impact Blog)

This review of standards for open data proposes a 10-level “standards stack,” applies that framework to 12 “high quality” civic data sets (building permits, election results, etc.), then suggests metrics for assessing both content and availability.

Real Geeks on using digital tools to improve the quality of survey data.

There’s a new Practical Guide to Measuring Women’s and Girls’ Empowerment in impact evaluations.
