Methodical Snark: critical reflections on how we measure and assess civic tech

research links w16-17


Findings

Do international norms and evaluations influence country performance? New evidence on the Aid Transparency Index suggests they do. A combination of original panel data and interviews yields some pretty fascinating insights into institutional processes in government.

Community & Resources

A couple of new (and arguably redundant) efforts to open data in the US this week:

  • The US State Department launched the “F Interagency Network Databank (FIND)” for accessing international development data by country.
  • A former Microsoft executive has spent a ton of cash creating USAFacts, which provides an integrated look at revenue and spending across federal, state and local governments (coverage and skepticism).

There’s also now a SAGE Handbook of Resistance, @morganweiland has crowdsourced a lit review on free speech theory and technology in the US context, data from the 2016 Right to Education Index is now live, there’s 1 week left to comment on @SunFoundation’s Tactical Data Engagement Guide, and the eminent Stephen Coleman has a new book coming out to revitalize cyber utopianism

For commentary, @fp2p has a three-part series of posts on academic/INGO collaboration; here’s the final one. There’s also an Oxfam blog on getting data used in policy, Merrill Sovner describes how academics can and can’t use new data on global human rights philanthropy, and @ICT_Works has 3 Lessons Learned in Creating Digital Development Tools. TL;DR: it’s all about the user.

In other news, @TAInitiative’s new strategy maintains an emphasis on learning, describing a desire to “facilitate the exchange and curation of information” and to “work with the donor members and partners to systematically build the evidence base.” But the milestones for this priority area suggest that this will be targeted much more towards the staff of donor members than towards the wider T/A community, which is too bad.

In the Methodological Weeds

One new piece questions the reliability and representativeness of big data from sensors, while @citizenlabco, the Belgian(?) platform/SaaS provider for smart cities, has a blog post suggesting four nice metrics for measuring the quality of online citizen engagement. Broadly applicable, easily implemented.

@TheProZorro kicks off a series on the @opencontracting site on “measuring outcomes and impacts of open contracting and procurement reforms in our projects.” The post presents a combination of survey-based user perception data, usage metrics and output indicators (resolution of cases by government, or changes in the “savings” in procurement processes, represented by the difference between “value estimates” for procurement and contract amounts). It would be useful to go systematically into the weeds on each of these, presenting the indicators in detail and discussing their strengths and weaknesses. Since the Open Contracting Partnership is investing heavily in learning across its partners, this is hopefully being worked out in the background.
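The “savings” indicator, at least, is simple enough to pin down concretely. A purely illustrative sketch (not taken from the post; the field names are made up, loosely in the spirit of open contracting data) might look like this in Python:

```python
from typing import Dict, List


def savings_rate(estimated_value: float, contract_amount: float) -> float:
    """'Savings' as a share of the original value estimate for one tender."""
    if estimated_value <= 0:
        raise ValueError("estimated value must be positive")
    return (estimated_value - contract_amount) / estimated_value


def aggregate_savings(tenders: List[Dict[str, float]]) -> Dict[str, float]:
    """Absolute and relative savings across a list of procurement records."""
    total_estimate = sum(t["estimated_value"] for t in tenders)
    total_contracted = sum(t["contract_amount"] for t in tenders)
    return {
        "absolute_savings": total_estimate - total_contracted,
        "relative_savings": (total_estimate - total_contracted) / total_estimate,
    }


# Two made-up tenders, each awarded below its value estimate.
tenders = [
    {"estimated_value": 100_000, "contract_amount": 87_500},
    {"estimated_value": 250_000, "contract_amount": 240_000},
]
print(aggregate_savings(tenders))
# -> absolute_savings: 22500, relative_savings ≈ 0.064 (6.4%)
```

One weakness worth flagging whenever that strengths-and-weaknesses discussion happens: a savings figure defined this way is only as good as the value estimates it is measured against, and it can be made to look better simply by padding those estimates.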

Academic Opps

Funds

Events

Hires/positions

Calls for Papers/Participation

  • Backward Glances 2017: Mediating Resistance (Deadline: 15 June. Event: 29-30 Sept, Chicago)
  • Participatory Design Conference (Deadline: 10 Nov. Event: Aug 2018, Belgium) Note: the sudden aneurysm their splash page induced suggests that perhaps some of the webpage designers should not have participated.
  • Workshop on Recommendation in Complex Scenarios (Deadline: 6 June. Event: 27-31 Aug, Como, Italy)
  • Submit session ideas for MerlTech 2017 (Deadline: 12 May. Event: 7-8 Sept, Washington DC)
  • Sixth annual International Symposium on Media Innovations (Deadline: 15 May. Event: 16–17 Oct, Tallinn)

Miscellaneous & Absurdum


