Methodical Snark: critical reflections on how we measure and assess civic tech

New Research Guide on Open Data and Accountability


The GSDRC is a resource centre that synthesises and summarises research for use in international development programming. It’s a great initiative for making scholarly work relevant and useful in the real world, and last week they released a new topic guide on open data and accountability. I was excited to take a look, as I’ve previously found their guides and responses to research help desk queries to be insightful and useful. This guide builds on work about infomediaries and CSOs holding governments to account, which the GSDRC produced in the last year or so, and provides a strong overview in an easily accessible format for program designers. I felt that it fell short in a few important ways, though: by relying on the usual academic suspects, by skipping some of the scholarly debates and dynamics most important for transparency programming, and by not directly addressing the still-nascent state of research on the topic.


The good

First, it’s worth pointing out what this guide does well, and there’s a lot that it does very well. The language is clear and accessible for non-experts, and the structure is useful for thinking about whether and how to start open data programming, and where to go for more information. It’s also helpfully peppered with specific case studies which, though they don’t always directly support the larger claims, are useful illustrations for the inevitable dive into greater detail and context. This is exactly the kind of thing the civic tech for accountability crew seems to be consistently clamouring for when it talks about wanting evidence.

The guide also provides some useful synthesis for researchers, and even some conceptual clarity. For example, this is the first time I’ve seen a model for considering open data and transparency programming as parallel and comparable programming modalities, each anticipated to produce accountability in similar ways (pp. 4–6, 8). That might seem a modest insight, but it’s a pretty profound one to have spelled out clearly and articulately.

The usual suspects

As a synthesis of existing research, this type of resource has a unique opportunity to start connecting dots across disciplines and fields of practice. I think this is especially important for program design and for how the civic tech for accountability community thinks about evidence, which is why I was a little disappointed to see the guide rely so heavily on literature already very well known and accessible to the tech and accountability community (otherwise defined as those who attend and speak at practitioner conferences?). Here I’m thinking of MAVC/IDS researchers and their associates like Jonathan Fox, the Open Data for Development research contingent, and the GovLab contingent. Somewhat peripheral, but rarely left uncited by the above, is the Harvard governance school contingent, represented by Archon Fung and co-authors. These people do excellent work and there’s value in synthesising it, but I was disappointed not to see more reference to some of the cutting-edge research being done in public administration studies, e-governance scholarship or the literature on e-participation.

There are a number of very useful literature reviews out there with valuable insights on the dynamics this guide addresses (Hossain et al, 2016; Ingrams, 2016; Jakobsen et al, 2016; Medaglia, 2012; Reilly & Alperin, 2016; Sorrentino & Niehaves, 2010); there are also a number of contributions from other fields that directly address the question at the heart of this guide (Mabillard & Zumofen, 2016; Reggi & Dawes, 2016; Sandoval-Almazan & Gil-Garcia, 2015; Welch & Fulla, 2005), as well as some excellent experimental work with directly relevant recommendations (Attard et al, 2015; Bertot et al, 2010; Martin, 2014; Zuiderwijk & Janssen, 2014; Zuiderwijk & Janssen, 2015).

To be clear, I’m not arguing for the specific insights in any of these works, or even that the guide loses substance by excluding them. But I think there’s merit in joining literatures for a guide like this, which is explicitly intended to synthesise research and make it accessible. I’m also pretty sure that doing so methodically would produce some insights.

(Some of) the debates

The guide’s section on “key debates” addresses privacy, consent, concerns about exploitation, and impacts on inequality. These are all important issues, but the focus on potential harm (i.e. #responsibledata) is just one kind of debate around open data, transparency and accountability. I missed references to debates about the importance of uptake incentives and user studies (McGee & Carlitz, 2013), to the tension between the different advocacy communities working in these areas and how that can affect program implementation (Fumega, 2015), and to debates about whether and how evidence and learning can influence accountability programming in the first place. These are all active debates of direct relevance to the users of this guide, and though they get referenced peripherally in other sections, a close analysis here would be both useful and appropriate.

It’s also worth noting that almost all of the references in the debates section are to grey literature, blogs or communications. There’s almost no academic literature referenced, except as conceptual foundation, for example “The idea of consent is central to the idea of privacy (Cate, 2006)”. On the one hand, that’s understandable: debates happen in the real and messy world of designing, implementing and reflecting on accountability programming. On the other hand, there are a lot of insightful scholars working on these debates (I’m thinking of Crawford, Taylor, Sunstein, Gray, Casilli and others).

Devil in the details

There were a number of small but important things I missed when reading through the guide. Here are a couple.

  • A fairly limited understanding of the accountability value chain and theory of change.
    The guide primarily synthesises conceptual work by Fox, Fung and GovLab, which is strong and important, but there are a host of other conceptual frameworks out there for understanding the interplay of accountability, data and transparency. They’re often only slightly, but challengingly, different, which makes them difficult to compare. That is exactly why it would be nice to see the comparison done in a research synthesis, so that we could gain their insights about staging and sequencing, about incentives and about interaction, all of which are highly relevant to program design (Charalabidis & Koussouris, 2012; Irvin & Stansbury, 2004; Jakobsen et al, 2016; King et al, 1998; Vigoda, 2002).
  • Incentives, users and uptake.
    Understanding how and why people access and use open data for accountability is key to the theory of change sketched by the guide, and there’s a lot of compelling research with recommendations and micro-level findings to refer to (de Zuniga et al, 2010; Grossman et al, 2015; Huang & Benyoucef, 2014; Shah & McLeod, 2009; Zuiderwijk & Janssen, 2015; Weerakkody et al, 2016; McGee & Carlitz, 2013). This simply doesn’t get treated much in the guide, except peripherally in discussing infomediaries or in describing the debate on inequality.
    Notably, there is a significant discussion of incentives for governments to engage with open data and accountability programming, but it is spread across several sections. Consolidating it, and accompanying it with a citizen perspective, would be useful.
    The question of incentives was, for me, most notably absent from the discussion of the pre-conditions for “open data and transparency initiatives to lead to accountability” (p. 9). For example, “2. Societal actors are able to find, access and use this data” (but they also have to want to access and use it), and “4. Functioning response systems are in place, to impose sanctions or introduce other changes…” (but there must be incentives in place to motivate those changes).
  • Consultations on open data and accountability.
    Closely linked to the above, there’s good reason to think that consulting the users of open data before designing and implementing open data projects can dramatically increase the likelihood of impact. The guide mentions this briefly, almost as an afterthought, when considering the technical aspects of success factors (p. 26). It merits much more attention, and again, there’s a significant amount of research to synthesise and present (Liu, 2016; Swapan, 2016; Sandoval-Almazan & Gil-Garcia, 2015; Weerakkody et al, 2016; and others).
  • Arguments about transparency.
    Perhaps unsurprisingly, this guide assumes that accountability and transparency are unconditionally desirable, either as ends in themselves or as principles. This isn’t always the case, as we’ve been reminded by recent scholarship (Sunstein, 2016) and social commentary. Addressing the rational case against transparency, and more of the research on it, would be welcome.

In summary

The guide is great, but whether due to time and resource constraints or to a very specific mandate, it doesn’t do what it could to link up relevant research from different disciplines. This is too bad, and it may be why a few pieces that could be tremendously useful for project design are missing.

But perhaps more than anything, I wish that the guide took a more critical stance towards the state of the scholarship per se. We need critical acknowledgement that the literature is siloed, but also that it’s spotty, and that there are often just one or two studies evidencing any given claim. We remain largely under the thumb of case studies that don’t help us arrive at general rules of thumb.

Explicitly acknowledging how limited the evidence base is when making broad claims is especially important for presentations of research like this one, which explicitly target practitioners.[FN1] Failing to do so sets practitioners up to draw false conclusions, and it doesn’t help the research field mature. Because at base, despite a recent proliferation of studies, research on technology, data and accountability remains an incredibly young field. The rapid influx of quasi-research institutions, supported by donor interest in evidence and learning among grantees, shouldn’t cover that up. That’s a caveat that should sit prominently at the top of any research synthesis.


References

(Note: these are pulled a little at random, and this blogpost isn’t meant to be a literature review, so treat them as illustrative. Also, there’s lots of super relevant grey research (especially from the World Bank) and research from the usual suspects that wasn’t included here, for reasons that should be clear from the above. Also, because really, how many last names and years do you want to read in the middle of an argument?)

Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of polices on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly, 29(1), 30–40. http://doi.org/10.1016/j.giq.2011.04.004

Charalabidis, Y., & Koussouris, S. (2012). Collaboration for Open Innovation Processes in Public Administrations. In Y. Charalabidis & S. Koussouris (Eds.), Empowering Open and Collaborative Governance: Technologies and Methods for Online Citizen Engagement in Public Policy Making. Heidelberg: Springer. http://doi.org/10.1007/978-3-642-27219-6

de Zúñiga, H. G., Veenstra, A., Vraga, E., & Shah, D. (2010). Digital Democracy: Reimagining Pathways to Political Participation. Journal of Information Technology & Politics, 7(1), 36–51. http://doi.org/10.1080/19331680903316742

Fumega, S. (2015). Understanding Two Mechanisms For Accessing Government Information And Data Around The World. http://webfoundation.org/about/research/understanding-two-mechanisms-for-accessing-government-information-and-data/

Grossman, G., Humphreys, M., & Sacramone-Luz, G. (2015). Information Technology and Political Engagement: Mixed Evidence from Uganda. Working Paper, 1–27.

Hossain, M. A., Dwivedi, Y. K., & Rana, N. P. (2016). State of the Art in Open Data Research: Insights from Existing Literature and a Research Agenda. Journal of Organizational Computing and Electronic Commerce. http://doi.org/10.1080/10919392.2015.1124007

Huang, Z., & Benyoucef, M. (2014). Usability and credibility of e-government websites. Government Information Quarterly, 31(4), 584–595. http://doi.org/10.1016/j.giq.2014.07.002

Ingrams, A. (2016). An Analytic Framework for Open Government Policy Design Processes. In J. H. Scholl, O. Glassey, M. Janssen, B. Klievink, I. Lindgren, P. Parycek, … D. Sá Soares (Eds.), Electronic Government: 15th IFIP WG 8.5 International Conference, EGOV 2016, Guimarães, Portugal, September 5–8, 2016, Proceedings (Vol. 2, pp. 203–214). Cham: Springer International Publishing. http://doi.org/10.1007/978-3-319-44421-5_16

Irvin, R. A., & Stansbury, J. (2004). Citizen Participation in Decision Making: Is It Worth the Effort? Public Administration Review, 64(1), 55–65. http://doi.org/10.1111/j.1540-6210.2004.00346.x

Jakobsen, M., James, O., Moynihan, D., & Nabatchi, T. (2016). Introduction: JPART Virtual Issue on Citizen-State Interactions in Public Administration Research. Journal of Public Administration Research and Theory, 1–8. http://doi.org/10.1093/jopart/muw031

King, C. S., Feltey, K. M., & O’Neill Susel, B. (1998). The Question of Participation: Toward Authentic Public Participation in Public Administration. Public Administration Review, 58(4), 317–326.

Liu, H. K. (2016). Exploring Online Engagement in Public Policy Consultation: The Crowd or the Few? Australian Journal of Public Administration, 0(0), 1–15. http://doi.org/10.1111/1467-8500.12209

Mabillard, V., & Zumofen, R. (2016). The complex relationship between transparency and accountability: A synthesis and contribution to existing frameworks. Public Policy and Administration. http://doi.org/10.1177/0952076716653651

McGee, R., & Carlitz, R. (2013). Learning study on the users in technology for transparency and accountability initiatives: assumptions and realities. https://opendocs.ids.ac.uk/opendocs/bitstream/handle/123456789/3179/IDS-UserLearningStudyonT4T%26AIs.pdf?sequence=1&isAllowed=y

Medaglia, R. (2012). eParticipation research: Moving characterization forward (2006–2011). Government Information Quarterly, 29(3), 346–360. http://doi.org/10.1016/j.giq.2012.02.010

Reggi, L., & Dawes, S. (2016). Open Government Data Ecosystems: Linking Transparency for Innovation with Transparency for Participation and Accountability. In J. H. Scholl, O. Glassey, M. Janssen, B. Klievink, I. Lindgren, P. Parycek, … D. Sá Soares (Eds.), Electronic Government: 15th IFIP WG 8.5 International Conference, EGOV 2016, Guimarães, Portugal, September 5–8, 2016, Proceedings (pp. 74–86). Cham: Springer International Publishing. http://doi.org/10.1007/978-3-319-44421-5_6

Reilly, K., & Alperin, J. P. (2016). Public engagement in open development: a knowledge stewardship approach, 9(1), 51–71. Retrieved from http://www.sirca.org.sg/wp-content/uploads/2015/08/Reilly_WhitePaper.pdf

Sandoval-Almazan, R., & Gil-Garcia, R. J. (2015). Towards an Integrative Assessment of Open Government: Proposing Conceptual Lenses and Practical Components. Journal of Organizational Computing and Electronic Commerce. http://doi.org/10.1080/10919392.2015.1125190

Shah, D. V., McLeod, J. M., & Lee, N. (2009). Communication Competence as a Foundation for Civic Competence: Processes of Socialization into Citizenship. Political Communication, 26(1), 102–117. http://doi.org/10.1080/10584600802710384

Sorrentino, M., & Niehaves, B. (2010). Intermediaries in E-inclusion: A literature review. Proceedings of the Annual Hawaii International Conference on System Sciences, 1–10. http://doi.org/10.1109/HICSS.2010.239

Sunstein, C. R. (2016). Output Transparency vs. Input Transparency. Preprint, 1–16. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2826009

Swapan, M. S. H. (2016). Who participates and who doesn’t? Adapting community participation model for developing countries. Cities, 53, 70–77. http://doi.org/10.1016/j.cities.2016.01.013

Vigoda, E. (2002). From Responsiveness to Collaboration: Governance, Citizens, and the Next Generation of Public Administration. Public Administration Review, 62(5), 527–540. http://doi.org/10.1111/1540-6210.00235

Weerakkody, V., Irani, Z., Kapoor, K., Sivarajah, U., & Dwivedi, Y. K. (2016). Open data and its usability: an empirical view from the Citizen’s perspective. Information Systems Frontiers. http://doi.org/10.1007/s10796-016-9679-1

Welch, E. W., & Fulla, S. (2005). Virtual Interactivity Between Government and Citizens: The Chicago Police Department’s Citizen ICAM Application Demonstration Case. Political Communication, 22(2), 215–236.

Wirtz, B. W., & Daiser, P. (2016). A meta-analysis of empirical e-government research and its future research implications. International Review of Administrative Sciences. http://doi.org/10.1177/0020852315599047

Zuiderwijk, A., & Janssen, M. (2014). Open data policies, their implementation and impact: A framework for comparison. Government Information Quarterly, 31(1), 17–29. http://doi.org/10.1016/j.giq.2013.04.003

Zuiderwijk, A., Janssen, M., & Dwivedi, Y. K. (2015). Acceptance and use predictors of open data technologies: Drawing upon the unified theory of acceptance and use of technology. Government Information Quarterly, 32, 429–440. http://doi.org/10.1016/j.giq.2015.09.005


[FN1]
For example, the guide notes that “MySociety found that the primary beneficiaries of civic technologies built on government data or services are privileged populations (Rumbull, 2015),” but fails to note that this is only directly and obviously true in demographic terms for two of the countries surveyed (the US and UK). The characteristics of users in developing country contexts were much more diverse, and this is reflected in a thoughtful discussion in the study that isn’t noted here (pp. 19–21). More importantly, perhaps, the guide should have noted that this implication is drawn from only one study (perhaps the only study of its kind), based on a single survey conducted in four countries. This is one specific example, but it points to a more general pattern throughout the guide and other comparable commentary.
