Methodical Snark critical reflections on how we measure and assess civic tech

research links w40


Papers / Findings

  • Squeaky wheels get the grease. An analysis of policy crowdsourcing for urban planning in California uses natural language processing to show that (1) whether citizen contributions were included in policy depended on their “volume and tone,” (2) the contributing crowd was more representative of the community than its elected representatives, and (3) NLP analysis still requires a lot of human effort.
  • Bloomberg Philanthropies’ What Works Cities project released a conference paper claiming to assess data use in 67 participating smart cities. They find that (1) government leaders are constrained, (2) stated commitment is the “strongest indicator of overall performance,” and (3) doing well in one area correlates with doing well in all areas (except regarding open data, where city size is the most important contextual factor, independent of performance in other areas). I hesitate to post this. The methods for data collection and scoring are not described, though the authors acknowledge some disparate data and refer to a “rigorous” collection process.

  • Also from Bloomberg, a new report on privacy laws around the world (n=61).

Commentary and community

  • Umm, I think the World Bank just made a dashboard that actually helped with something. Specifically, it collected pretty novel data, and facilitated smart economic policy adjustments on tight timescales by the government of South Sudan.
  • Paz Concha reviews The Creative Citizen Unbound: How Social Media and DIY Culture Contribute to Democracy, Communities and the Creative Economy, a book summarizing results from a 30-month research project on digital communication in community organizing.
  • An article in the Journal of Clinical Oncology argues that research aiming to influence policy should have more rigorous methodological standards. Apparently this hasn’t been the case in medicine either.
  • Last week saw the International Open Data Conference, which included a one-day research symposium on open data (my review and summary here). Research and measurement also headlined in a few of the main conference sessions, which in practice seemed geared towards helping conference organizers get feedback on specific measurement challenges and initiatives. That’s one way to bridge the researcher/practitioner divide, but I found more deliberate efforts conspicuously absent. There are worthwhile summaries from Sunlight Foundation, ODI, and MAVC. My biggest takeaway? I learned about 5 open data indices that I didn’t know about. Coordination, anyone?

In the methodological weeds

  • Natural language processing of legal and policy docs is used to assess the policy of US political institutions, and, wait for it, to identify policy similarities and alignments across institutions. “Our work illustrates vector-arithmetic-based investigations of complex relationships between word sources based on their texts.” I can’t even.
  • Charalabidis and co-authors suggest a “multi-perspective evaluation framework” for assessing government social media monitoring for policy development. It’s a weighty read, but manages a compelling balance of 3 assessment perspectives (grossly simplified: the political utility of the data, the representativeness of the data, and the implementability of the method, all heavily theorized). They apply the assessment method to a multi-country EU project, and pitch it as good for promoting policy innovation. On a quick read I saw no discussion of the normative implications of balancing and prioritizing these perspectives.
  • UK researchers propose a measurement technique for “Data Exploitability” in smart cities, “to specify the compatibility of the policies attached to the delivered data – obligations, permissions and prohibitions – with the requirements of the user’s task.” This is intended for broad and diverse data sets (i.e. sensor data, crowdsourced data, administrative data), and is applied to the British city of Milton Keynes. The method is hypothetical and procedural, and the authors promise a follow-up paper on “computationally handling data usage policies.” The big challenge in application of course has to do with the political allocation of resources, which is not mentioned here, but it would be exciting to see the method applied in their case city.
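For readers wondering what “vector-arithmetic-based investigations” of policy texts even look like: the basic move (in word-embedding work generally, not necessarily in that paper’s exact pipeline) is to represent each institution’s documents as a vector, then compare vectors with cosine similarity to find alignments. A minimal sketch, with made-up toy word vectors standing in for embeddings trained on real legal corpora:

```python
import numpy as np

# Hypothetical 3-dimensional word vectors; real work trains
# high-dimensional embeddings on the policy corpora themselves.
vecs = {
    "housing": np.array([0.9, 0.1, 0.2]),
    "zoning":  np.array([0.8, 0.2, 0.1]),
    "transit": np.array([0.1, 0.9, 0.3]),
    "budget":  np.array([0.2, 0.8, 0.4]),
}

def doc_vector(words):
    """Represent an institution's text as the mean of its word vectors."""
    return np.mean([vecs[w] for w in words], axis=0)

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction in embedding space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

city_a = doc_vector(["housing", "zoning"])   # land-use-heavy agenda
city_b = doc_vector(["transit", "budget"])   # transport/fiscal agenda
city_c = doc_vector(["housing", "transit"])  # mixed agenda

# Higher cosine similarity is read as closer policy alignment.
print(cosine(city_a, city_c))
print(cosine(city_a, city_b))
```

In this toy setup, city_a scores as more aligned with city_c than with city_b, because their agendas share a topic. The hard parts the papers wrestle with (corpus cleaning, training the embeddings, validating that geometric closeness means substantive alignment) are exactly what the toy example skips.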

Miscellania and absurdum

  • Calls for papers are out for studies on Elvis Presley and Harry Potter (on the latter: “a diversity of scholarship opportunities and […] innovation in approach to research about the Potterverse”).
  • California Senator dabs in televised debate.
  • Psychology Studies’ Replicability Problem: the cartoon




Get in touch

Suggest research to be reviewed, or request mini lit reviews. Ask questions or tell me why I'm wrong.