Methodical Snark: critical reflections on how we measure and assess civic tech

research links: weeks 50-52


Papers and Findings

Do global norms and clubs make a difference? A new dissertation assesses the implementation of EITI, CSTI and OGP in Guatemala, the Philippines and Tanzania, concluding that multi-stakeholder initiatives can strengthen national proactive transparency but have little impact on demand-driven accountability. There are interesting insights on open washing and the importance of high-level political ownership.

Meanwhile, MySociety’s @RebeccaRumbul assessed civic technology in Mexico, Chile and Argentina (interviews w/ gov and non-gov, n=47), to conclude that the “intended democratising and opening effects of civic technology have in fact caused a chilling effect,” prompting Latin American governments to seek more restrictive control over information. In Brazil, researchers assessed 5 municipalities to see whether strong open data initiatives correlated with strong scores on the digital transparency index: they don’t.

Austrian researchers reviewed the literature on gamification strategies in e-participation platforms globally, concluding that gamification of democracy doesn’t happen often, and when it does, it’s often rewards-based, a strategy they expect to “decrease the quality of participation.” This conference paper by computer scientists proposes an e-government maturity model, based on a literature review of 25 existing models, and the International Budget Partnership has released a report on how civil society uses fiscal transparency data. Spoiler: they don’t have the data they want.

A number of global reports and releases were published. The DataShift has a new guide on Making Citizen-Generated Data Work, based on a review of 160 projects and interviews with 14 case studies, which presents some useful classifications and typologies. Creative Commons has released the 2016 Global Open Policy Report, with an overview of open policies in four sectors (education, science, data and heritage) across 38 countries. The White House has released a report on the performance of its public petition site, We the People, highlighting four cases where e-petitions arguably impacted policy in the platform’s first five years of operation.

Meanwhile, the Governance Data Alliance has released a report entitled “When is Governance Data Good Enough?” based on snap polls with “500 leaders” in 126 countries, which suggests among other things that credibility and contextualization are important to governance data users, and that governance data is used primarily for research and analysis. The general impression seems to be that yes, in many countries, the governance data that exists is in fact good enough “to support reform champions, inform policy changes, and improve governance.” A launch event was held on Dec 15.

Flow Journal has a special issue on Media activism politics in/for the age of Trump. International Political Science Review has a special issue on measuring the quality of democracy.

Community and Commentary

GovLab sought Peer Reviewers for open gov case studies on Cambodia, Ghana, India, Jamaica, Kenya, Paraguay and Uganda, but there were only 9 days to sign up (in late Dec) and 2 weeks to review (during the holidays). Hope they found someone. There must be a happy medium between the glacial grind of academic peer review and… this.

A Freedominfo.org post highlights the Access to Information component in the World Bank’s Open Data Readiness Assessment Tool, and suggests how it can be a useful tool for advocates and activists.

The World Bank has released a new guide on crowdsourcing water quality monitoring (nicely summarized here), with a focus on program design rather than measurement.

Mike Ananny and Kate Crawford’s new article in New Media & Society critiques the “ideal of transparency” as a foundation for accountability, identifying 10 limits of transparency and suggesting alternative approaches for pursuing algorithmic accountability.

The LSE blog re-posted a piece describing novel metrics for the social media influence of research, distinguishing between aspects of “influence” such as amplification, true reach and network score, but failing to link to that research. In Government Information Quarterly, a troika of international researchers has suggested an uninspired research agenda for “open innovation in the public sector”, with a focus on domain-specific studies, tools other than social media, and more diverse methods.

This TechCrunch article attributes civic innovation in US cities to governmental gridlock at the federal level, the NYT describes research suggesting that price transparency in the US health sector has failed to drive prices down, and the Results for Development Institute is developing a framework to help governments “cost” open government initiatives before they pursue them.

In the Methodological Weeds

The Development Impact Blog has a great post on differences in life satisfaction reporting between women and men. The discussion begins with the assertion that “women definitely say they are happier” and moves quickly to debunk that assertion, using hypothetical vignettes anchored to common response scales. The methods are smart, and highly relevant to response bias problems in any social survey setting, especially in assessing the political and social impacts of media and information.
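
For readers who haven’t met the technique, here is a minimal sketch of the anchoring-vignette idea (in the spirit of the nonparametric approach associated with King and colleagues); the respondents, ratings, and scoring rule below are hypothetical and deliberately simplified, not drawn from the post itself.

```python
# Minimal, hypothetical sketch of anchoring-vignette recalibration.
# The real literature handles ties and multiple self-assessments
# more carefully; this only shows the core intuition.

def anchored_score(self_rating, vignette_ratings):
    """Re-express a respondent's self-rating relative to how that same
    respondent rated a set of hypothetical vignettes on the same scale.
    Returns a rank position: 1 = below the lowest vignette,
    len(vignette_ratings) + 1 = above the highest."""
    return 1 + sum(1 for v in sorted(vignette_ratings) if self_rating > v)

# Two respondents give identical raw life-satisfaction answers (4 of 5),
# but use the scale very differently when rating the same three vignettes.
respondents = {
    "A": {"self": 4, "vignettes": [2, 3, 5]},  # generous scale use
    "B": {"self": 4, "vignettes": [1, 2, 3]},  # stingy scale use
}

for name, r in respondents.items():
    print(name, anchored_score(r["self"], r["vignettes"]))
# A -> 3 (between the 2nd and 3rd vignette); B -> 4 (above all vignettes).
# Same raw answer, different anchored positions once response style is held fixed.
```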

An article in JeDEM presents a model for multidimensional open government data, which focuses on the integration of official and unofficial statistics. The proposed method builds on the data cube model from business intelligence and relies entirely on linked data technology. This paper goes a bit beyond my technical expertise, but at bottom it promises to harmonize indicators from different data sources (with different, but overlapping, metadata and data context) on the basis of shared attributes. Kind of a lowest common denominator approach. This is intuitive, and the type of thing I’ve seen attempted at data expeditions via Excel, but having a rigorous method could be a huge advantage. Especially if demonstrated with the participation of governments in the pilots this article references, a solid methodology for this could be hugely useful to initiatives like DataShift, which talk a lot about merging citizen-generated data with official statistics, but struggle to make that happen either politically or technically.
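
To make the “shared attributes” intuition concrete, here is a rough sketch in spreadsheet terms rather than the linked data cubes the article actually uses; all data frames, column names, and figures are hypothetical and only illustrate the lowest-common-denominator join, not the paper’s method.

```python
# Hypothetical sketch: align an official statistic and a citizen-generated
# indicator on whatever dimensions they share (here: region and year),
# setting aside the metadata that only one source carries.
import pandas as pd

official = pd.DataFrame({
    "region": ["North", "South", "South"],
    "year":   [2015, 2015, 2016],
    "sector": ["water", "water", "water"],      # only in the official data
    "budget_spent_pct": [78, 64, 71],
})

citizen = pd.DataFrame({
    "region": ["North", "South", "South"],
    "year":   [2015, 2015, 2016],
    "collector": ["NGO-A", "NGO-B", "NGO-B"],   # only in the citizen data
    "facilities_functional_pct": [62, 55, 60],
})

# The shared dimensions are simply the columns both sources have in common.
shared = [c for c in official.columns if c in citizen.columns]  # region, year
merged = official.merge(citizen, on=shared, how="inner")

print(merged[shared + ["budget_spent_pct", "facilities_functional_pct"]])
```

The hard part, which this toy example dodges entirely, is agreeing on what those shared attributes actually mean across sources; presumably that is where the linked data vocabulary in the article carries the weight.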

Academic Opps

Calls for Papers:

Miscellanea and Absurdum

  • America’s most common Christmas-related injuries, in charts (from Quartz)
  • The Hate Index “represents a journalistic effort to chronicle hate crimes and other acts of intolerance since Donald Trump’s presidential election victory.”
  • DataDoesGood is asking you to donate your anonymized shopping data, which gets sold, and profits donated to charity.
  • Academic article: “Tinder Humanitarians”: The Moral Panic Around Representations of Old Relationships in New Media
  • The Association of Internet Researchers has a YouTube channel (!)
  • 4% of U.S. internet users have been a victim of “revenge porn” (via Data & Society)
  • CFP: Women’s Head Hair as a tool of communication, in media outlets and social media activism
