Methodical Snark critical reflections on how we measure and assess civic tech

The Open Data Research Symposium 2016: summary and issues


Wednesday saw the second Open Data Research Symposium, convened on the sidelines of the International Open Data Conference (and this year’s IODC was a doozy, with side events and pre-events stretching across five days and different parts of Madrid). Here is a quick summary of the papers and working groups, followed by some hanging questions and challenges for next year’s Symposium.


The bulk of the day was devoted to presenting 33 papers, all of which are available in preprint form and are likely to be released in book form around the turn of the year (kudos). The presentations I was able to follow were strong. I didn’t see any groundbreaking methods, but there were some provocative findings. I’m most interested in the role of intermediaries and incentives; here were some of my favorites:

Working Groups

After a late lunch, the symposium broke into working groups to talk about OD in developing countries, research infrastructure, how to publish the conference papers, and measurement frameworks. I joined the last conversation, which ended up focusing on some of the methodological challenges hounding comparative indices like the Open Data Index and Barometer. I’ve written about these challenges before, and am concerned to see that they might be imported into the Open Data Charter Assessment Framework currently being developed. This was addressed again in a dedicated session in the IODC proper. More on that in a later post.

Hanging Thoughts

Most of the papers were case studies; I didn’t see much conceptual work being done. This is arguably because the program committee is trying to surface and connect with new scholars, who tend to be working on the countries in which they’re based. It might also be because they think that case-study-level findings will resonate best with practitioners at the IODC.

These are reasonable justifications, but I missed conceptual efforts to make sense of the disparate case studies. More ambitiously, the OD Research Symposium seems like a logical place to start trying to draw conclusions from contextually dependent findings. This inductive process is the next step in answering some of the most pressing questions about open data. Next year I’d like to see the Symposium test those waters.

I was also struck by the isolation of academics and practitioners. Of course, some of this happens organically when you present 33 papers on the fringes of a 300-person practitioner conference (and research did feature in the larger conference program). But it’s also an issue that deserves careful and dedicated thinking within an academic setting. There are thorny ethical, practical and empirical questions to be asked about how researchers relate to social impact movements as their research objects. These issues generated a lot of interest in the (ever brief) Q&A sessions that followed presentations, but they deserve a structured conversation. I’d like to see the Symposium address this directly next year.

All in all, the papers were great, and it was nice having a bunch of people to nerd out with over coffee, but I’ve also been left with nagging questions about a few central concepts:

  • Impact
    Why are researchers using the term “impact” colloquially? Sure, there’s an incentive to talk about impact in fundraising, and we all want to know about the positive consequences of projects, but we researchers need to be more deliberate with our language. Projects lead to outputs (like data portals), which hopefully lead to outcomes (civil society uses data portals), which, if we’re really lucky, lead to impact (less corruption, empowered communities, usually 10 or 15 years down the road). Talking about “impact” as if it’s something we can already see leads to sloppy thinking.
  • Engagement
    We talk a lot about citizen engagement with open data, but in both research and practice, we fail to differentiate between ways of engaging. I think there’s a continuum at play: from passivity, where simple awareness of data availability can have an ambient influence on governance practices; to reading data; to analysing and using data to build things; to co-producing and designing data; and all the way to structured interaction and communication between citizens and governments. This far end of interactive engagement resembles the preconditions for accountability outlined by Fox and others. We should begin thinking carefully about these different types of engagement, and how they correspond with different modes of open data.
  • Social capital of infomediaries
    Information intermediaries are critical in most analyses of open data ecosystems. They parse and package government data in ways that make those data useful and usable. But this requires engagement with different kinds of actors, and in many contexts, that requires different kinds of social capital and legitimacy. Francois van Schalkwyk has done some preliminary work to understand the social capital of intermediaries in South African open data ecosystems, and at the symposium, Johanna Walker’s paper on sustainable start-ups looked at the importance of social capital with international support networks. But we don’t know a lot about how this works with in-country dynamics, or how it relates to the professionalization of NGOs.
