Methodical Snark: critical reflections on how we measure and assess civic tech

research links w 48-49


Papers and Findings

A new Brookings report aims to answer the question “Does Open Government Work?” NBD. Not surprisingly, the report doesn’t provide a definitive answer. It does suggest six structural conditions for open government initiatives to achieve their objectives. The framework is nuanced and useful, but it’s not at all clear how the authors came up with it. It would be nice to know more about their “analysis of hundreds of reports, articles, and peer-reviewed academic studies discussing the effectiveness of particular programs.” Presumably they looked at evidence internationally, but there are no clear distinctions made between different political and cultural contexts…

Meanwhile, an article in the ARPR assesses the implementation of the OGP in the US (OGP didn’t do much to change the way the US does transparency), and Portuguese researchers have proposed a “transparency ontology” to guide the development and implementation of open data initiatives, in order to make them more relevant to citizens. The paper leans on journalists’ role as “information brokers,” which is reflected in the method: they don’t seem to have interviewed any actual citizens.

Globally, the OECD has a new book out summarizing the future of Open Government, while the 2016 UN E-government survey paints a rosy picture. It finds that 90 countries have a portal for open data or services, 148 countries provide at least some “online transactional services” and “an increasing number of countries are moving towards participatory decision-making.” #devilinthedetails

In the world of activism, Jonathan Fox is back with another working paper on accountability. This one presents scaling strategies for monitoring and advocacy work, backed up by nine case studies. Lots to think carefully about here. The Engine Room has a new report on tech tools for human rights documentation, which identifies functionalities and obstacles to effective use, and maps out available dedicated platforms.

A new paper in Public Culture traces the history of participatory mapping back to the 1930s, through its use by indigenous communities in the 1970s and the global hype of the 1990s, and shows how the enthusiasm for going online has undermined an underlying principle of community control (#westerntechies). Meanwhile, a mixed methods case study of OpenStreetMap in Israel/Palestine blames OSM’s “ground truth paradigm” for the dominance of Israeli contributors.

In the world of organized politics, Fenwick McKelvey looks at NationBuilder from an Actor-Network Theory perspective, to see how the platform facilitates flows of information and political capital between different groups in the service of political campaigns. He makes some interesting observations about the relationships between national campaigns and international politics. A conference paper looks at smart city platforms in two Finnish cities, and suggests that they represent a departure from party-based politics (#casestudies).

In the world of development, Development Gateway ran research in Ghana, Tanzania and Sri Lanka, to see how development data gets used in the health and agriculture sectors. I couldn’t find any info on their methods, only the assertion that they “are interviewing up to 200 government officials, donor staff, NGO operators, and others in each country.” Looks like the next step is to link result indicator data from 15+ donors. Would love to know what on earth that means. In any case, favorite quote from a blogpost on the research: “The main issue is not that people don’t have data skills (even though they often don’t)—it’s that they don’t have incentives to worry about data.”

For extended reading, Socrates has a special issue on e-government, with a broad selection of case studies from developing countries, and the Journal of Community Informatics has a special issue on Data Literacy, including a case study on Rahul Bhargava’s data mural work. There’s also a new Handbook of Research on Citizen Engagement and Public Participation in the Era of New Media (link) and Iannelli’s new book, Hybrid Politics: media and participation (link).

And lastly, #whoa. An experiment described in Political Behavior used Twitter bots to demonstrate that trolls use fewer racist slurs if shamed for doing so by members of their own demographic.

Community and Commentary

Toby McIntosh reports that international efforts to agree on a method for assessing progress on SDG 16 are faltering. The chief challenges are not only agreeing on appropriate metrics, but also on individual countries’ reporting protocols. The politics of measurement, in a blogpost.

MuckRock has a new FOI newsletter and the Open Government Research eXchange (OGRX, love that acronym) has been revamped, with a blog and reading lists. Open Knowledge has started a reading group to think about the politics of open. The first round read Hayek and de Soto, and blogged their thoughts here.

In the world of data and resources, @martinchek released a data set of all Facebook posts from 15 mainstream media sources covering 2012-2016, the Mo Ibrahim Foundation released a new portal for data on governance in Africa (the data isn’t new, but the interface is a lot smoother) and an EU project on media conflict and democratization has released a new open source tool for Twitter analysis. The tool does extraction, analysis and visualization.

There’s a friendly spat afoot regarding the language and utility of accountability research. Duncan Green complained about the latest IDS Bulletin on Power, Poverty and Inequality, noting that the analysis is thoughtful, but the conclusions not useful. As he put it: “Okaaayyy.” John Gaventa responded with a story about how some of his research got used once, despite being abstract and inaccessible, but failed to respond to the critique.

More provocatively, “Consider impact.” That’s the first sentence in Julie Bayley’s powerful reality check on research impact, while an ICTworks blogpost argues that “indirect data” can be used to assess external validity of ICTs. By this the author means we can choose specific indicators from “big data” to determine whether the results of an RCT in one context will be valid in another. It’s true that you never know what you’re going to find in “big data,” but we can’t pick the right indicator without already having solved the transportability problem in some form. And even if we do land on the right indicator, the act of picking it introduces a bias that undermines the methodological rigor of the RCT, which is THE WHOLE POINT.
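To make that selection-bias worry concrete, here’s a toy numpy sketch. It’s my own illustration, not anything from the ICTworks post, and every number in it is invented: generate a pile of “indirect” indicators that are pure noise, then keep whichever one best matches the observed effect estimates. The winner always looks predictive, because the selection step manufactured the correlation.

```python
# Toy illustration of post-hoc indicator selection (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)

n_sites = 30        # hypothetical sites where an RCT effect was estimated
n_indicators = 200  # candidate "indirect" indicators, all pure noise here

true_effect = np.full(n_sites, 0.2)                          # the real effect is constant
observed_effect = true_effect + rng.normal(0, 0.1, n_sites)  # noisy site-level estimates

indicators = rng.normal(size=(n_indicators, n_sites))        # none of these actually matter

# Post-hoc selection: keep whichever indicator best "explains" the estimates.
corrs = np.array([np.corrcoef(ind, observed_effect)[0, 1] for ind in indicators])
best = int(np.argmax(np.abs(corrs)))

print(f"best 'indirect' indicator: #{best}, correlation {corrs[best]:.2f}")
# Typically prints a correlation around 0.4-0.5 even though every indicator is
# noise: the apparent predictor of transportability is a selection artifact.
```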

Otherwise, danah boyd argues that algorithmic transparency is impossible, which makes algorithmic accountability all the more important, Paul Webb reviews Daniels and Thistlethwaite’s Being a Scholar in the Digital Era (my description here), and BBC Media Action blogged a reminder about their research on media and politics, which uses quantitative data from seven countries to assert that people who consume political media are more politically active.

In the Methodological Weeds

Big Data & Society has a new paper describing a participatory method for generating “big” environmental sensor data. It focuses on working with sensors to create “data stories”, which include observations about the relationship between the data and its context. They also suggest a concept of “good enough data,” which rubs the methodological fetish of this blog the wrong way, but is a nice approach of anchoring research and data generation in it’s utility for actual people.

The Open Contracting Partnership has released a methodological guide for researching procurement data that is published according to the OC data standard. Useful.

This PAR article concisely explains the limits of big data representativeness, and the practical methodological problems this poses for policymakers.
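For a feel of the problem, here’s a throwaway simulation of my own (not the article’s; the population shares and opinion scores are made up): a million-row convenience sample that over-represents heavy platform users lands confidently on the wrong answer, while a thousand-person random sample does fine.

```python
# Toy sketch of the representativeness problem (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)

# Invented population: 70% offline citizens (mean opinion 0.3),
# 30% heavy platform users (mean opinion 0.7).
population = np.concatenate([
    rng.normal(0.3, 0.1, 700_000),
    rng.normal(0.7, 0.1, 300_000),
])

# "Big data": a million-row convenience sample dominated by platform users.
big_skewed = np.concatenate([
    rng.normal(0.3, 0.1, 100_000),
    rng.normal(0.7, 0.1, 900_000),
])

# Small, boring probability sample.
small_random = rng.choice(population, size=1_000, replace=False)

print(f"true mean          : {population.mean():.3f}")
print(f"big skewed sample  : {big_skewed.mean():.3f}  (n = 1,000,000)")
print(f"small random sample: {small_random.mean():.3f}  (n = 1,000)")
```

Sample size buys precision, not representativeness: the skewed sample’s error bars are microscopic and still wrap around the wrong number.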

Academic Opps

Miscellanea and Absurdum
