Or at least the three I had in my bookmarks. But I feel like there’s been a lot in recent weeks. Are there others to add to this list? Being a Scholar in the Digital Era: Transforming Scholarly Practice for the Public Good (Jessie Daniels and Polly Thistlethwaite, Eds). Strong normative bent in this one, for open research as well as social impact. Explicit focus on collaborating with...
Research Links (w 38)
Papers and Findings Text analysis of Swiss media during national referenda on smoking bans finds that the use of evidence in political debates is rare, and typically serves only to bolster speakers’ credibility. The activity of Swiss parliamentarians, meanwhile, is directly and positively affected by monitoring (specifically, the video recording of parliamentary sessions), according to a...
The problem with the problem with input transparency
This isn’t about research or methods, so I’ll be brief. Cass Sunstein, US policy veteran and eminent scholar, recently released a draft article distinguishing between input and output transparency, suggesting that arguments are weaker for the former, and offering reasons why input transparency might often not be a good thing. (To grossly oversimplify: there are too many inputs to policy-making...
Password security tools in an age of constant breaches
I keep a fairly close eye on Pindrop’s On the Wire and Hacker News, which means I’m bombarded by a constant stream of news about hacks and exploits and data leaks. I’m also lucky enough to get notified whenever one of them affects me, thanks to Have I been pwned?, which checks your email against any lists of hacked credentials made public, then sends you a...
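For the curious, here’s a minimal sketch of the kind of lookup such a service automates, written against the public Have I been pwned? API (the v3 endpoint and headers are as documented by HIBP; the key value and helper name are my own placeholders):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def check_breaches(email: str, api_key: str) -> list[str]:
    """Return the names of known public breaches that include this email."""
    req = urllib.request.Request(
        HIBP_URL + urllib.parse.quote(email),
        headers={
            "hibp-api-key": api_key,              # v3 of the API requires a key
            "user-agent": "breach-check-sketch",  # HIBP rejects blank user agents
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return [breach["Name"] for breach in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:  # 404 means the account appears in no known breach
            return []
        raise
```

A notification service like HIBP’s is essentially this check run for you automatically whenever a new breach list surfaces, which is what makes it worth signing up for.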
Crimes against data, talk by Andrew Gelman
Andrew Gelman gives a great talk on how data gets abused in research and politics. He goes a bit into the statistical weeds at times with t- and p-values and the like, but he’s also a pleasure to listen to. And he gives some great examples of both academics and public figures who either “treat statistics as a means to prove what they already know, or as hoops to be jumped through...
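To make the “hoops to be jumped through” point concrete, here’s a toy simulation of my own (not from the talk): run enough noise-only comparisons against the conventional p < 0.05 threshold and some will clear it by luck alone.

```python
import random
import statistics

def t_statistic(a: list[float], b: list[float]) -> float:
    """Two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(0)
hits, trials = 0, 1000
for _ in range(trials):
    # Both groups are drawn from the SAME distribution: any "effect" is noise.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    # |t| > 2.0 roughly corresponds to p < 0.05 with 58 degrees of freedom.
    if abs(t_statistic(a, b)) > 2.0:
        hits += 1

print(f"{hits / trials:.1%} of pure-noise comparisons cleared the p < 0.05 hoop")
# Expect roughly 5%: run twenty noise tests and one will usually "work".
```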
Research Links (w 36-37)
Papers / Findings Don’t trust the crowd! A paper in PLOS ONE finds that the subjective experiences of contributors to crowdsourced map-making are influenced by their nationality and degree of expertise. Moreover (surprise!), this influences what they report to the map. Data is sampled from Geo-Wiki; the methods are a bit too technical for me to assess. New empirical analysis in Administration & Society shows...
New Research Guide on Open Data and Accountability
The GSDRC is a resource centre that synthesizes and summarizes research for use in international development programming. It’s a great initiative for making scholarly work relevant and useful in the real world, and last week they released a new topic guide on open data and accountability. I was excited to take a look, as I’ve previously found their guides and responses to research help desk...
Methods for Measuring Open Data
Back in 2014, the Web Foundation and the GovLab at NYU brought together open data assessment experts from Open Knowledge, the Organisation for Economic Co-operation and Development, the United Nations, Canada’s International Development Research Centre, and elsewhere to explore the development of common methods and frameworks for the study of open data. It resulted in a draft template or framework for...
Measurement always goes bad
What Flowers found was that even the best performance measurement systems tend to ossify. In 2010, 11 state and local public interest associations joined together to form the National Performance Management Advisory Commission. In its report, A Performance Management Framework for State and Local Government, the commission singled out Nashville, Tenn.’s Results Matter system as an example of a...
Research Links w 35 (back from summer)
So I’m back in the office and finally done wading through all the interesting stuff that piled up in August. There’s too much to put here, so I’m popping right into September… Papers/Findings A comparison of FOI requests in 11 jurisdictions concludes that everything depends and that comparison is hard (I agree), and references some common indicators for measuring implementation...