research links w 21-17

Findings

E-government projects are more successful when formal decision-making processes include stakeholders and actively manage risk, according to a survey of Swedish national government agencies and municipalities (N=550). Meanwhile, @timdavies is coauthor on a paper in Science & Technology Studies that tracks how data standards influence bureaucratic processes for opening government data. The paper warns that standards can in some ways obstruct actual engagement with users, and puts a useful focus on people in institutions just trying to get things done.

Mixed findings on social media effects this week. Chinese participants in political discourse on Weibo experience that discourse as deliberative, despite interactions that are “mostly non-dialogical and non-creative in nature, and characterised by homophily and polarisation” (new study, n=417). In the US, social media played a definitive role in determining how the Tea Party negotiated its identity and relationship with the Republican party in the course of Trump’s rise to power. Not least, it allowed for quick differentiation among activists’ perceptions of appropriate degrees of openness, which seem to correspond with political objectives and conceptions of political efficacy. This is described in a new paper in Social Media + Society (not to be confused with New Media & Society; I recently made that mistake, facepalm), which offers a fascinating case without clearly actionable findings.

Continue reading “research links w 21-17”

research links w 9-17

Findings

All the reports:
A @datasociety report finds low trust in media among US youth, who often find news by accident, and demonstrate a variety of innovative verification strategies. Meanwhile, a University of London report finds that whistleblowing is more dangerous in the digital age, and a new OECD report finds that the resurgence of single bidding significantly increases risks of corruption in European procurement. Take note, #opencontracting strategists. Perhaps most happily, new research described in @SSIReview suggests that funders do use knowledge! In fact, they get it primarily from peers and grantees, but it’s not enough to provoke change.

Continue reading “research links w 9-17”

research links w 50-52

Papers and Findings

Do global norms and clubs make a difference? A new dissertation assesses implementation of EITI, CSTI and OGP in Guatemala, the Philippines and Tanzania to conclude that multi-stakeholder initiatives can strengthen national proactive transparency, but have little impact on demand-driven accountability. There are interesting insights on open washing and the importance of high-level political ownership.

Meanwhile, mySociety’s @RebeccaRumbul assessed civic technology in Mexico, Chile and Argentina (interviews w/ gov and non-gov, n=47), to conclude that the “intended democratising and opening effects of civic technology have in fact caused a chilling effect,” prompting Latin American governments to seek more restrictive control over information. In Brazil, researchers assessed 5 municipalities to see whether strong open data initiatives correlated with strong scores on the digital transparency index; they don’t.

Austrian researchers reviewed the literature on gamification strategies in e-participation platforms globally, concluding that gamification of democracy doesn’t happen often, and when it does, it’s often rewards-based, a strategy they expect to “decrease the quality of participation.” This conference paper by computer scientists proposes an e-government maturity model, based on a literature review of 25 existing models, and the International Budget Partnership has released a report on how civil society uses fiscal transparency data. Spoiler: they don’t have the data they want.

A number of global reports and releases were published. The DataShift has a new guide on Making Citizen-Generated Data Work, based on a review of 160 projects and interviews across 14 case studies, which presents some useful classifications and typologies. Creative Commons has released the 2016 Global Open Policy Report, with an overview of open policies in four sectors (education, science, data and heritage) across 38 countries. The White House has released a report on the performance of its public petition site, We the People, highlighting four cases where e-petitions arguably impacted policy in the platform’s first five years of operation.

Meanwhile, the Governance Data Alliance has released a report entitled “When is Governance Data Good Enough?” based on snap polls with “500 leaders” in 126 countries, which suggests, among other things, that credibility and contextualization of governance data matter to its users, and that governance data is used primarily for research and analysis. The general impression seems to be that yes, in many countries, the governance data that exists is in fact good enough “to support reform champions, inform policy changes, and improve governance.” A launch event was held on Dec 15.

Flow Journal has a special issue on Media activism politics in/for the age of Trump. International Political Science Review has a special issue on measuring the quality of democracy.

Community and Commentary

GovLab sought Peer Reviewers for open gov case studies on Cambodia, Ghana, India, Jamaica, Kenya, Paraguay and Uganda, but there were only 9 days to sign up (in late Dec) and 2 weeks to review (during the holidays). Hope they found someone. There must be a happy medium between the glacial grind of academic peer review and… this.

A Freedominfo.org post highlights the Access to Information component in the World Bank’s Open Data Readiness Assessment Tool, and suggests how it can be a useful tool for advocates and activists.

The World Bank has released a new guide on crowdsourcing water quality monitoring (nicely summarized here), with a focus on program design rather than measurement.

Mike Ananny and Kate Crawford’s new article in New Media & Society critiques the “ideal of transparency” as a foundation for accountability, identifying 10 limits of transparency and suggesting alternative approaches for pursuing algorithmic accountability.

The LSE blog re-posted a piece describing novel metrics for the social media influence of research, distinguishing between aspects of “influence” such as amplification, true reach and network score, but failing to link to that research. In Government Information Quarterly, a troika of international researchers suggests an uninspired research agenda for “open innovation in the public sector”, with a focus on domain-specific studies, tools other than social media, and more diverse methods.

This TechCrunch article attributes civic innovation in US cities to governmental gridlock at the federal level, the NYT describes research suggesting that price transparency in the US health sector has failed to drive prices down, and Results for Development Institute is developing a framework to help governments “cost” open government initiatives before they pursue them.

In the Methodological Weeds

The Development Impact Blog has a great post on differences in life satisfaction reporting between women and men. The discussion begins with the assertion that “women definitely say they are happier” and moves quickly to debunk that assertion, using hypothetical vignettes anchored to common response scales. The methods are smart, and highly relevant to response bias problems in any social survey setting, especially in assessing political and social impacts of media and information.
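
The post itself doesn’t include code, but the core move of the vignette method is easy to sketch. Below is a minimal, hypothetical illustration of the standard nonparametric anchoring-vignette recoding (in the spirit of King et al.): each respondent rates fictional vignette characters on the same scale as their self-assessment, and the self-rating is then recoded relative to their own vignette ratings, which strips out scale-use differences. The function and data values are my own invention, not taken from the post.

```python
# Minimal sketch of nonparametric anchoring-vignette recoding.
# Hypothetical example; not the blog post's actual code or data.

def recode_against_vignettes(self_rating, vignette_ratings):
    """Recode a self-rating relative to a respondent's own vignette
    ratings, yielding an interpersonally comparable score.
    Vignettes are sorted from worst to best hypothetical case."""
    score = 1
    for v in sorted(vignette_ratings):
        if self_rating < v:
            return score          # rates self below this vignette
        if self_rating == v:
            return score + 1      # ties this vignette
        score += 2                # rates self above it; skip the tie slot
    return score                  # above every vignette

# Two respondents give the same raw rating (4 on a 1-5 scale), but
# anchor the scale differently, so their adjusted scores diverge.
print(recode_against_vignettes(4, [2, 3]))  # -> 5 (above both vignettes)
print(recode_against_vignettes(4, [4, 5]))  # -> 2 (only ties the lower one)
```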

An article in JeDEM presents a model for multidimensional open government data, which focuses on the integration of official and unofficial statistics. The proposed method builds on the data cube model from business intelligence, and relies entirely on linked data technologies. This paper goes a bit beyond my technical expertise, but at bottom it promises to harmonize indicators from different data sources (with different but overlapping metadata and data context) on the basis of shared attributes: a lowest-common-denominator approach, of sorts. This is intuitive, and the type of thing I’ve seen attempted at data expeditions via Excel, but having a rigorous method could be a huge advantage. Especially if demonstrated with the participation of governments in the pilots this article references, a solid methodology for this could be hugely useful to initiatives like DataShift, which talk a lot about merging citizen-generated data with official statistics, but struggle to make that happen either politically or technically.
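
To make the lowest-common-denominator idea concrete, here’s a toy sketch of what those Excel-style data expedition attempts amount to: keep only the dimensions two indicator tables share, aggregate each source to those dimensions, and join. This is my own simplified illustration with invented column names and data, not the paper’s actual data-cube/linked-data method (which works over RDF rather than dataframes).

```python
# Toy illustration of harmonizing two indicator sources on shared
# attributes. Invented data; not the JeDEM paper's method.
import pandas as pd

official = pd.DataFrame({
    "region": ["North", "South"], "year": [2015, 2015],
    "ward": ["A", "B"],               # dimension only the official data has
    "literacy_rate": [0.91, 0.84],
})
citizen = pd.DataFrame({
    "region": ["North", "South"], "year": [2015, 2015],
    "collector": ["NGO-1", "NGO-2"],  # dimension only the citizen data has
    "school_visits": [14, 9],
})

# The "lowest common denominator": dimensions present in both sources.
shared = [c for c in official.columns if c in citizen.columns]
print(shared)  # ['region', 'year']

# Aggregate each source down to the shared dimensions, then join.
merged = pd.merge(
    official.groupby(shared, as_index=False)["literacy_rate"].mean(),
    citizen.groupby(shared, as_index=False)["school_visits"].sum(),
    on=shared,
)
print(merged)
```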

Academic Opps

Calls for Papers:

Miscellanea and Absurdum

  • America’s most common Christmas-related injuries, in charts (from Quartz)
  • The Hate Index “represents a journalistic effort to chronicle hate crimes and other acts of intolerance since Donald Trump’s presidential election victory.”
  • DataDoesGood is asking you to donate your anonymized shopping data, which it sells, with the profits donated to charity.
  • Academic article: “Tinder Humanitarians”: The Moral Panic Around Representations of Old Relationships in New Media
  • The Association of Internet Researchers has a YouTube channel (!)
  • 4% of U.S. internet users have been a victim of “revenge porn” (via Data & Society)
  • CFP: Women’s Head Hair as a tool of communication, in media outlets and social media activism

The long haul towards evidence: information in elections edition

Civil society groups emphasize the need for high-quality public information on the performance of politicians. But does information really make a difference in institutionally weak environments? Does it lead to the rewarding of good performance at the polls, or are voting decisions dominated by ethnic ties and clientelistic relations?

Enter the Metaketa project’s first phase, running 7 experimental evaluations in 6 countries to answer that question: does more information change voter behavior? The results and synthetic analysis are all coming out early next year, which is exciting, but a long way away. I was also happy to see that they have a pre-analysis plan for that synthesis work (basically a self-control mechanism to ensure that data doesn’t get fudged during analysis to support preferred outcomes; unfortunately, such plans don’t really get used). Continue reading “The long haul towards evidence: information in elections edition”