research links w25 – 17

Findings

From the duh desk: 
A white paper from Cornell Law reviews e-government and rulemaking processes in the US, finding that an institutional “culture of risk adverseness” is much more obstructive to e-participation than a lack of technological solutions.

What difference does it make?:
An article in Telecommunications Policy documents how mobiles have dramatically reshaped the political communication ecology in Ghana and deepened civic engagement, without affecting “the fundamental structures of political power and the levers of control.” Things look slightly better in a series of research briefs on open data and OGP processes produced by @ITforChange and @AllVoicesCount. The briefs describe incremental progress in all three countries, with significant reservations. Despite increasingly progressive open data practice and policy in the Philippines, for example, “the benefits to individual democratic citizenship are far more conclusive than the benefits to democracy as a whole.” Similarly, the increasingly participatory and inclusive nature of Uruguay’s OGP action plans is described as “gradually modifying” governance processes through increased interaction and deliberation (though the research brief provides neither a narrative nor a theory to explain how this might be happening). Most optimistically, the brief on inclusive municipal technologies in Spain describes not only specific instances of “engaged and transformative citizenship,” but a proliferation of knowledge sharing and participatory strategies across the country. Here too, however, details are light.

In other news, sorry, democracy does not cause innovation.

research links w 4/17

Papers & Findings

The world is ending. The 2016 Corruption Perceptions Index finds links between corruption and inequality, and notes falling scores for countries around the world. The Economist Intelligence Unit’s Democracy Index is titled Revenge of the “deplorables”, and notes a worsening of the worldwide “democratic recession” in 2016.

Civic techs. What are the most important characteristics for civic apps? A low threshold for use, built-in feedback, and visible change and engagement across users. This according to a paper presented at a recent Cambridge conference. Meanwhile, research on Twitter use in the 2016 Ugandan elections finds that the social media platform “provides minority groups important access to public space otherwise denied on traditional media platforms,” and a Yale study suggests that city use of citizen reporting platforms correlates with lower levels of crime, potentially due to increased social cohesion, though the authors are careful not to assert a causal relationship.

research links w 50-52

Papers and Findings

Do global norms and clubs make a difference? A new dissertation assesses implementation of EITI, CSTI and OGP in Guatemala, the Philippines and Tanzania, concluding that multi-stakeholder initiatives can strengthen national proactive transparency but have little impact on demand-driven accountability. There are interesting insights on open washing and the importance of high-level political ownership.

Meanwhile, MySociety’s @RebeccaRumbul assessed civic technology in Mexico, Chile and Argentina (interviews w/ gov and non-gov, n=47), concluding that the “intended democratising and opening effects of civic technology have in fact caused a chilling effect,” prompting Latin American governments to seek more restrictive control over information. In Brazil, researchers assessed 5 municipalities to see whether strong open data initiatives correlated with strong scores on the digital transparency index: they don’t.

Austrian researchers reviewed the literature on gamification strategies in e-participation platforms globally, concluding that gamification of democracy doesn’t happen often, and when it does, it’s often rewards-based, a strategy they expect to “decrease the quality of participation.” This conference paper by computer scientists proposes an e-government maturity model, based on a literature review of 25 existing models, and the International Budget Partnership has released a report on how civil society uses fiscal transparency data. Spoiler: they don’t have the data they want.

A number of global reports and releases were published. The DataShift has a new guide on Making Citizen-Generated Data Work, based on a review of 160 projects and interviews with 14 case studies, which presents some useful classifications and typologies. Creative Commons has released the 2016 Global Open Policy Report, with an overview of open policies in four sectors (education, science, data and heritage) across 38 countries. The White House has released a report on the performance of its public petition site, We the People, highlighting four cases where e-petitions arguably impacted policy in the platform’s first five years of operation.

Meanwhile, the Governance Data Alliance has released a report entitled “When is Governance Data Good Enough?”, based on snap polls of “500 leaders” in 126 countries. It suggests, among other things, that credibility and contextualization matter to governance data users, and that governance data is used primarily for research and analysis. The general impression seems to be that yes, in many countries, the governance data that exists is in fact good enough “to support reform champions, inform policy changes, and improve governance.” A launch event was held on Dec 15.

Flow Journal has a special issue on Media activism politics in/for the age of Trump. International Political Science Review has a special issue on measuring the quality of democracy.

Community and Commentary

GovLab sought Peer Reviewers for open gov case studies on Cambodia, Ghana, India, Jamaica, Kenya, Paraguay and Uganda, but there were only 9 days to sign up (in late Dec) and 2 weeks to review (during the holidays). Hope they found someone. There must be a happy medium between the glacial grind of academic peer review and… this.

A Freedominfo.org post highlights the Access to Information component in the World Bank’s Open Data Readiness Assessment Tool, and suggests how it can be a useful tool for advocates and activists.

The World Bank has released a new guide on crowdsourcing water quality monitoring (nicely summarized here), with a focus on program design rather than measurement.

Mike Ananny and Kate Crawford’s new article in New Media & Society critiques the “ideal of transparency” as a foundation for accountability, identifying 10 limits of transparency and suggesting alternative approaches for pursuing algorithmic accountability.

The LSE blog re-posted a piece describing novel metrics for researchers’ social media influence, distinguishing between aspects of “influence” such as amplification, true reach and network score, but failing to link to that research. In Government Information Quarterly, a troika of international researchers has suggested an uninspired research agenda for “open innovation in the public sector”, with a focus on domain-specific studies, tools other than social media, and more diverse methods.

This TechCrunch article attributes civic innovation in US cities to governmental gridlock at the federal level; the NYT describes research suggesting that price transparency in the US health sector has failed to drive prices down; and the Results for Development Institute is developing a framework to help governments “cost” open government initiatives before they pursue them.

In the Methodological Weeds

The Development Impact Blog has a great post on differences in life satisfaction reporting between women and men. The discussion begins with the assertion that “women definitely say they are happier” and moves quickly to debunk that assertion, using hypothetical vignettes anchored to common response scales. The methods are smart, and highly relevant to response bias problems in any social survey setting, especially in assessing the political and social impacts of media and information.
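
For intuition, here is a minimal sketch of the anchoring-vignettes idea the post draws on (my own illustration with made-up respondents, not the blog’s analysis): each person rates hypothetical vignette characters on the same scale as their self-report, and the self-report is then recoded relative to their own vignette ratings, which strips out individual differences in how the scale is used.

```python
# Minimal sketch of non-parametric anchoring vignettes (illustrative only).
# Assumes each respondent rates the same hypothetical vignettes on the same
# 1-5 satisfaction scale as their self-report, and ranks the vignettes
# consistently with their intended ordering (ties are ignored for simplicity).

def recode_against_vignettes(self_report, vignette_ratings):
    """Rank a self-report among the respondent's own vignette ratings:
    1 = below all vignettes, len(vignette_ratings) + 1 = above all of them."""
    position = 1
    for rating in sorted(vignette_ratings):
        if self_report > rating:
            position += 1
    return position

# Two hypothetical respondents who place themselves identically relative to
# the vignettes, but use the raw scale differently.
respondents = {
    "A (generous scale use)": {"self": 4, "vignettes": [2, 3, 4]},
    "B (stingy scale use)":   {"self": 3, "vignettes": [1, 2, 3]},
}

for name, r in respondents.items():
    print(name, "raw:", r["self"],
          "recoded:", recode_against_vignettes(r["self"], r["vignettes"]))
# Both respondents get the same recoded position despite different raw scores.
```

The post’s actual treatment is more careful than this, but the core move is the same: use the vignettes as shared anchors so that group differences in reported happiness aren’t just group differences in scale use.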

An article in JeDEM presents a model for multidimensional open government data, which focuses on the integration of official and unofficial statistics. The proposed method builds on the data cube model from business intelligence and relies entirely on linked data technologies. This paper goes a bit beyond my technical expertise, but at bottom it promises to harmonize indicators from different data sources (with different, but overlapping, metadata and data context) on the basis of shared attributes: a kind of lowest common denominator approach. This is intuitive, and the type of thing I’ve seen attempted at data expeditions via Excel, but having a rigorous method could be a huge advantage. Especially if demonstrated with the participation of governments in the pilots this article references, a solid methodology for this could be hugely useful to initiatives like DataShift, which talk a lot about merging citizen-generated data with official statistics but struggle to make that happen either politically or technically.
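
As a rough illustration of that lowest-common-denominator idea (my own sketch with hypothetical data and field names, not the paper’s RDF Data Cube / linked data implementation): keep only the dimensions the official and citizen-generated datasets share, aggregate each dataset down to those dimensions, and compare the resulting cubes side by side.

```python
# Hypothetical example: align an official indicator and a citizen-generated
# indicator on their shared dimensions (here: region and year), discarding
# dimensions only one source records (here: the citizen data's "method").

from collections import defaultdict

official = [
    ({"region": "North", "year": 2016}, 42.0),
    ({"region": "South", "year": 2016}, 37.5),
]
citizen = [
    ({"region": "North", "year": 2016, "method": "sms"}, 39.0),
    ({"region": "North", "year": 2016, "method": "web"}, 41.0),
    ({"region": "South", "year": 2016, "method": "sms"}, 35.0),
]

def shared_dimensions(*datasets):
    """Dimensions present in every observation of every dataset."""
    keysets = [set(dims) for ds in datasets for dims, _ in ds]
    return set.intersection(*keysets)

def cube(dataset, dims):
    """Average values per combination of the shared dimensions."""
    sums, counts = defaultdict(float), defaultdict(int)
    for obs_dims, value in dataset:
        key = tuple(obs_dims[d] for d in sorted(dims))
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

dims = shared_dimensions(official, citizen)        # {'region', 'year'}
official_cube = cube(official, dims)
citizen_cube = cube(citizen, dims)

for key in sorted(official_cube):
    print(key, "official:", official_cube[key], "citizen:", citizen_cube.get(key))
```

The paper does this over linked data structures rather than in-memory dictionaries, but the alignment logic is similar in spirit, which is what makes a rigorous version of it so appealing.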

Academic Opps

Calls for Papers:

Miscellanea and Absurdum

  • America’s most common Christmas-related injuries, in charts (from Quartz)
  • The Hate Index “represents a journalistic effort to chronicle hate crimes and other acts of intolerance since Donald Trump’s presidential election victory.”
  • DataDoesGood is asking you to donate your anonymized shopping data, which it sells, with the profits donated to charity.
  • Academic article: “Tinder Humanitarians”: The Moral Panic Around Representations of Old Relationships in New Media
  • The Association of Internet Researchers has a YouTube channel (!)
  • 4% of U.S. internet users have been victims of “revenge porn” (via Data & Society)
  • CFP: Women’s Head Hair as a tool of communication, in media outlets and social media activism

research links w 48-49

Papers and Findings

A new Brookings report aims to answer the question “Does Open Government Work?” NBD. Not surprisingly, the report doesn’t provide a definitive answer. It does suggest six structural conditions for open government initiatives to achieve their objectives. The framework is nuanced and useful, but it’s not at all clear how the authors came up with it. It would be nice to know more about their “analysis of hundreds of reports, articles, and peer-reviewed academic studies discussing the effectiveness of particular programs.” Presumably they looked at evidence internationally, but there are no clear distinctions made between different political and cultural contexts…

Meanwhile, an article in the ARPR assesses the implementation of the OGP in the US (OGP didn’t do much to change the way the US does transparency) and Portuguese researchers have proposed a “transparency ontology” to guide the development and implementation of open data initiatives, in order to make them more relevant for citizens. The paper relies on journalists’ role as “information brokers,” which is reflected in their method. They don’t seem to have interviewed any actual citizens.

Globally, the OECD has a new book out summarizing the future of Open Government, while the 2016 UN E-government survey paints a rosy picture. It finds that 90 countries have a portal for open data or services, 148 countries provide at least one form of “online transactional services” and “an increasing number of countries are moving towards participatory decision-making.” #devilinthedetails