Methodical Snark critical reflections on how we measure and assess civic tech

research links w5-17


Papers & Findings

What makes for a strong and democratic public media? According to comparative research on “12 leading democracies,” it’s all about multi-year funding, legal charters limiting government influence, arms-length oversight agencies and audience councils. Compelling, but not shocking. Similarly, we know that the internet doesn’t drive democracy, but increased digital media penetration and demand are part of the complex processes that do. These findings are confirmed by new replication research comparing data on 72 countries from 2004-2014.

E-government and open budget practices correlate strongly with good governance and anti-corruption, according to panel data on 48 countries from 2004-2015, reviewed by Turkish researchers in a Romanian journal. At least that’s my best reading; the authors’ English isn’t great, and their prose actually seems to argue consistently that the existence of these comparative indices leads to less corruption.

A new Brookings report assesses the literature on information for accountability in the education sector, emphasizing the importance of infomediaries and the careful selection of dissemination tools. Transparency International has produced some not-so-rigorous case studies suggesting that open procurement has reduced corruption in Honduras, Ukraine and Nigeria, and research from The Engine Room shows what the humanitarian community thinks about messaging apps. Based on desk research and interviews (n=45), the report compares functionalities and suggests rules of thumb for use.

Commentary & Community

A French think tank is organizing a European debate on political tech, including a session on the role of research. It’s about halfway through its two-week run and no one seems to be showing up.

Do valid critiques of bad research do more harm than good? Sometimes. Andrew Gelman gives a thoughtful exploration of situations in which snarksters and methodological zealots might want to stand back and let bad research do social good.

The World Bank’s 2017 World Development Report reaffirms links between the media, transparency, accountability and policy effectiveness. There are some nice authoritative formulations for citing in both arguments and research. Good summaries from @freedominfoorg, Global Integrity and BBC Media Action.

The GovLab has published Selected Readings on Algorithmic Scrutiny, focusing on the impact of algorithms on Information Intermediaries, Governance, Finance and Justice.

In the Methodological Weeds

More research shows that RCTs are neither as reproducible nor as neutral as we’d like to think (h/t @p2p), while a World Bank blog post argues that system maps are a good way to handle attribution for policy interventions caught up in messy causal relationships.

When do protests induce policy change?

Academic Opps

Miscellanea and Absurdum

  • In Britain, e-petitions are popular if mostly pointless (the title of a daily chart in the Economist’s Graphic Detail section)
  • Twitter has manifested an academic Journal of Alternative Facts, because Twitter.
  • In 2016, Perception Institute conducted the “Good Hair” Study, the first study to examine implicit and explicit attitudes related to black women’s hair.
  • “Bing, bing, bong, bong.” One of the track titles for the upcoming academic conference on “Trump’s America.”
  • From culture jamming to cultural acupuncture (book chapter from Henry Jenkins)

 
