Methodical Snark critical reflections on how we measure and assess civic tech

research links w42



Papers / Findings

  • Citizen engagement in rulemaking — evidence on regulatory practices in 185 countries (from the World Bank). TL;DR: opportunities for engagement are greatest in developed countries with strong regulatory systems, as is the use of ex ante impact assessments. The paper includes an incredibly brief literature review, and the study itself is based on e-questionnaires (Word docs, expert perception only, no data on actual participation) sent to 1,500 individuals in 190 countries. The researchers also conducted follow-up interviews for clarification, but there is no information on how many questionnaire responses were received. Most strikingly, the report advances a composite scoring mechanism for engagement in rulemaking, intended for application across all country contexts. It’s clunky, with 4 scoring options for most metrics, each of which begs a million questions about comparability and the applicability of the scores to individual political contexts. I’d love to read some reflections on the challenges of actually applying this. Methods and questionnaire available here.
  • User Research on UK parliamentary data from the ODI. Contains 4 detailed recommendations plus user journeys, but very sparse information on the methods or the users interviewed. Also, @ODIHQ, stop using Scribd, we’ve been through this.

Commentary and Community

In the Methodological Weeds

  • The recent OpenDemocracy post on international human rights research (I commented here) has provoked something approaching a spat. The eminent Todd Landman, who has driven methodological development in comparative human rights research for more than a decade, takes umbrage at the post’s general claim that comparative research obscures injustice. He gets into the methodological weeds (alternative coding mechanisms, additional variables for cross-national time-series analysis, mixed-method and small-n comparative analysis) to argue that the author’s main assertion is “simply fallacious.” Worth the read. Meanwhile, the co-director of one of the data sets criticized by the post takes a higher-level view. He argues that comparative data sets might not capture lived experience, but, well, that’s not what they’re for.

Academic Opps

Absurdum / Miscellanea

