Whoa, week 26, halfway through 2017. That went quick.
There are serious transparency and participation shortcomings in international transparency review mechanisms (like the UNCAC Implementation Review Mechanism and the OECD Working Group on Bribery), according to a new report from Transparency International. And a report on global internet censorship from @BKCHarvard finds “evidence of filtering in 26 countries across four broad content themes: political, social, topics related to conflict and security, and Internet tools (a term that includes censorship circumvention tools as well as social media platforms).” Continue reading “research links w 26”
Papers & Findings
The world is ending. The 2016 Corruption Perceptions Index finds links between corruption and inequality, and notes falling scores for countries around the world. The Economist Intelligence Unit’s Democracy Index is titled Revenge of the “deplorables”, and notes a worsening of the worldwide “democratic recession” in 2016.
Civic techs. What are the most important characteristics for civic apps? Low threshold for use, built-in feedback, and visible change and engagement across users. This according to a paper presented at a recent Cambridge conference. Meanwhile, research on Twitter use in the 2016 Ugandan elections finds that the social media platform “provides minority groups important access to public space otherwise denied on traditional media platforms,” and a Yale study suggests that city use of citizen reporting platforms correlates with lower levels of crime, potentially due to increased social cohesion, though the authors are careful not to assert a causal relationship. Continue reading “research links w 4/17”
Papers and Findings
A new Brookings report aims to answer the question “Does Open Government Work?” NBD. Not surprisingly, the report doesn’t provide a definitive answer. It does suggest six structural conditions for open government initiatives to achieve their objectives. The framework is nuanced and useful, but it’s not at all clear how the authors came up with it. It would be nice to know more about their “analysis of hundreds of reports, articles, and peer-reviewed academic studies discussing the effectiveness of particular programs.” Presumably they looked at evidence internationally, but no clear distinctions are made between different political and cultural contexts…
Meanwhile, an article in the ARPR assesses the implementation of the OGP in the US (OGP didn’t do much to change the way the US does transparency) and Portuguese researchers have proposed a “transparency ontology” to guide the development and implementation of open data initiatives, in order to make them more relevant for citizens. The paper relies on journalists’ role as “information brokers,” which is reflected in their method. They don’t seem to have interviewed any actual citizens.
Globally, the OECD has a new book out summarizing the future of Open Government, while the 2016 UN E-government survey paints a rosy picture. It finds that 90 countries have a portal for open data or services, 148 countries provide “online transactional services” and “an increasing number of countries are moving towards participatory decision-making.” #devilinthedetails Continue reading “research links w 48-49”
Papers / Findings
- Citizen engagement in rulemaking — evidence on regulatory practices in 185 countries (from the World Bank). TL;DR: opportunities for engagement are greatest in developed countries with strong regulatory systems, as is the use of ex post impact assessments. Paper includes an incredibly brief literature review and the study itself is based on e-questionnaires (word docs, expert perception only, no data on actual participation), which were sent to 1,500 individuals in 190 countries. The researchers also conducted follow-up interviews for clarification, but there is no information on how many questionnaire responses were received. Most strikingly, the report advances a composite scoring mechanism for engagement in rulemaking, for application across all country contexts. It’s clunky, with 4 scoring options for most metrics, each of which begs a million questions about comparability and the applicability of the scores to individual political contexts. I’d love to read some reflections on the challenges in actually applying this. Methods and questionnaire available here.
- User research on UK parliamentary data from the ODI. Contains 4 detailed recommendations plus user journeys, but very sparse info on the methods or users interviewed. Also, @ODIHQ, stop using Scribd, we’ve been through this.
Continue reading “research links w42”