Methodical Snark critical reflections on how we measure and assess civic tech

research links w 4/17


Papers & Findings

The world is ending. The 2016 Corruption Perceptions Index finds links between corruption and inequality, and notes falling scores for countries around the world. The Economist Intelligence Unit’s Democracy Index is titled Revenge of the “deplorables”, and notes a worsening of the worldwide “democratic recession” in 2016.

Civic tech. What are the most important characteristics of civic apps? A low threshold for use, built-in feedback, and visible change and engagement across users, according to a paper presented at a recent Cambridge conference. Meanwhile, research on Twitter use in the 2016 Ugandan elections finds that the social media platform “provides minority groups important access to public space otherwise denied on traditional media platforms,” and a Yale study suggests that city use of citizen reporting platforms correlates with lower levels of crime, potentially due to increased social cohesion, though the authors are careful not to assert a causal relationship.

Old dogs, new tricks? A survey of national governments on Facebook (n=147) finds that they achieve more interaction when avoiding links and using outward-focused language, which in turn correlates with more fans. In Canada, however, middle-manager resource constraints are part of the reason why governmental agencies use social media much more than GitHub, according to research from the Centre for Policy Informatics, even when there is explicit support from senior political leadership. Of course, technical and functional hurdles also play a role, and the authors conclude that GitHub is “ill-suited as a collaboration tool to support document writing.”

Connecting with governments. Summarizing research on open budget advocacy, the IBP’s @Halloran_B notes the importance of relationships between CSOs and other actors in “accountability ecosystems,” and offers several insights for advocacy. New research suggests that the availability of tools and network connectors are key for the spread of participatory budgeting among local governments in Estonia. An article in IMODEV’s International Journal of Open Governments (?) finds that e-rulemaking efforts in the US have largely failed, both in attracting users and in informing them enough to make their input useful. Some solutions hop right over government, however. RCTs on Indian cash transfer programs find that the introduction of banking tech like biometric smart cards reduces corruption significantly (over 40% reductions in cash “leakage” along the transfer chain).

How do crowdsourcing and citizen science projects actually work? Last week saw a spate of publications looking at that question: a peer-reviewed stakeholder analysis for crowdsourced satellite observations found that participants would benefit from increased trust, support and interaction. This is in line with James Wynn’s Citizen Science in the Digital Age, which uses a broad collection of case studies to examine the communication challenges between scientists and citizen scientists, and how crowdsourcing projects tweak their rhetoric to appeal to potential recruits. Almost as a case in point, Chandra Clarke’s Be The Change is kind of like a recruiting handbook for citizens.

Lastly, sad programmers make bad software. New research identifies “49 consequences of unhappiness” among software engineers.

Commentary & Community

OGP reports on a consultation regarding how SDG measurement can support national accountability efforts, advocating for the creation of national indicators and data sets, and the contextualization of global indicators. Meanwhile, methods training for social sciences should include methods for influencing policy, argues an LSE blog.

There was lots of new stuff last week. The International Bar Association released an app for recording evidence of human rights violations, designed for admissibility in legal proceedings, but reinforcing credibility of evidence more generally as well. There’s now a textbook on government analytics, a new IGI book on e-government principles in public procurement has a chapter on anti-corruption frameworks, and Data & Society’s new podcast is great. It has all the researchers.

In the Methodological Weeds

International Rescue Committee has released a step-by-step guide to conducting social network analysis, including references to specific software and people you can email if you need help (!). @fp2p reviews it here.
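For readers curious what that kind of analysis boils down to, here is a minimal sketch of one common step, computing degree centrality to spot the best-connected actors in a network. The actors and ties below are hypothetical examples, not drawn from the IRC guide, which should be consulted for its actual methodology and tooling.

```python
from collections import defaultdict

# Hypothetical ties among community actors (an edge list)
edges = [
    ("NGO_A", "Clinic"), ("NGO_A", "School"),
    ("Clinic", "Elder"), ("School", "Elder"),
    ("Elder", "NGO_B"),
]

# Build an undirected adjacency map
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree centrality: an actor's number of ties, normalized by the
# maximum possible number of ties (n - 1)
n = len(adj)
centrality = {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Rank actors from most to least connected
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

In this toy network, "Elder" comes out on top, the kind of finding that would flag a key broker worth engaging in an accountability program.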

A special issue of Social Science Computer Review digs into the practical and ethical challenges of big data methods for social science. The intro article describes the main challenges as “deriving valid interpretations and treating subjects with care.” Illustrative articles tackle extrapolation of policy positions, identifying civic engagement in Facebook data, and audience diversity measures. More concretely, ODI offers 10 guidelines for the same.

Academic Opps

Miscellanea and Absurdum

The US has been downgraded from a “full democracy” to a “flawed democracy” in the latest edition of EIU’s Democracy Index (but not because of Trump).

16 articles from The World Hobbit Project

CNN put 9 men and 1 woman on a panel to discuss the women’s march.

A We The People petition for the release of Trump’s tax returns has gathered more than double the number of signatures it needs to receive a formal response.

Research on traveling baseball teams confirms: east-west jet lag is the worst.

Tinder for cities: how tech is making urban planning more inclusive (headline from the Guardian)

How to give your Member of Congress a spine (headline from Mr Sifry)

MIT Media Lab and the Nieman Foundation are organizing a workshop/clinic/idea lab on how to deal with misinformation in media (24-26 Feb, Boston).
