Methodical Snark critical reflections on how we measure and assess civic tech

Building on TICTec: more thinking about research pls


Last week I joined the Impacts of Civic Technology Conference 2016, a sort of annual mixer for researchers and the civic tech community, organized by MySociety to “promote and share rigorous and meaningful research into online technologies and digital democracy around the world.”

The event was good (write-ups here, here, here, and here), but notable for being so firmly grounded in the idea of research without talking about it all that much. I left inspired but frustrated, wishing there were a forum for addressing some of the thornier issues surrounding this still fuzzy idea of research and evidence on civic technology. Throughout the event, the idea of “research” influencing programming got mentioned a lot, but never examined. Here’s a quick run-through of some of those issues, and some thoughts about why they aren’t yet getting the attention they deserve.

We name-drop “research.” A lot.

Firstly, there’s a weird disjunction between the idea of a research event and an event dominated by practitioners presenting cool projects. Not to say that there wasn’t research presented (there was, and it was impressive), but it was outnumbered – and the T4T/A usual suspects were well represented; you know who you are. When I first saw this in the agenda I thought I might just be projecting my desire for a research focus, but key presentations throughout the day suggested it wasn’t just that.

“We need research to tell us if civic tech is working, and how to make it work better” was a common refrain in TicTec sessions, but none of the discussions dug into why or how. We came the closest in a “From the funders” section (no coincidence), but still didn’t manage to talk about what research can and can’t do to make for better programming, what we know and don’t know already, or what we need to do to improve the evidence base.

Instead, there was a series of sessions focused on cool new initiatives, peppered with presentations on research. Those research projects were compelling, but often either highly niche, micro-level analyses (incentives for SMS platform uptake in Uganda, user research for budget monitoring in West Africa, sociological studies of civic tech platforms in US city governments) or highly abstract and difficult to operationalize (social movements and power). All in all, I didn’t catch any attempt to assess what kinds of generalizable insights research has and hasn’t produced to date.

We don’t know much

This is at least partly because we don’t have much in the way of learnings. For at least the last five years there has been a widely recognized demand for “research”, “evidence” and “learning” to improve programs that use technology for accountability and transparency. But the vast majority of research conducted in that time has been individual case studies (and, recently, a handful of single-context RCTs). These can be incredibly useful for understanding the nuanced ways in which power and incentives influence the implementation and outcomes of tech for T/A initiatives, but they rarely tell us much that can be easily or readily adapted to other contexts.

The case studies that do offer generalized insights tend to be produced from within the T/A community, often through less rigorous methodologies. The recommendations from this practitioner research are often shrug-worthy, because of course, deep down, we all already knew that engaging a broad group of stakeholders early in program design will increase the chances of platform uptake.

To generate useful insights that can be generalized and applied to programming in different contexts, we need comparative work: studies that review large numbers of initiatives according to common frameworks, in order to identify and understand the influence of common and uncommon factors (micro and macro). This kind of work is hard, so it’s not strange that there are only a few contributions in the civic tech discourse. But there are meaningful recent contributions (see Fox & Peixoto). We should be cognizant of these, and also think carefully about why there are so few. If we’re really keen to answer the big questions about why and how civic technology and tech for accountability work, we need to start looking for answers across contexts.

We have no idea what we know

That said, we in the civic tech community generally have NO IDEA what’s already out there. There is a tremendous amount of rigorous and careful research (both descriptive and experimental) on individual cases or small-n context comparisons, which could likely be of great use to individual program design. But practitioners generally don’t bump into it, and they don’t go looking for it either.

This is due in part to the seemingly unbridgeable gulf between academics and practitioners, but publishing policy and hipsterdom play a role too. The vast majority of relevant academic literature sits behind paywalls, and what’s worse, most of it isn’t called civic tech or tech for transparency and accountability. The academy still has a fetish for e-government, and notwithstanding the fact that the e-government literature is producing hugely useful research on government incentives, citizens’ political efficacy and assessment frameworks for open government: OMG, e-government is like so 2002. The sad fact is that most of the relevant research isn’t being read, perhaps because it’s not being tweeted.

But even when evidence is available, accessible and fashionable, reading and applying rigorous research is hard. I give you this, from Guy Grossman, one of the TicTec keynote speakers, concluding a study on whether promoting civic tech on rural Ugandan radio promotes uptake:

All of these findings suggest that problems are not present on the demand side. In contrast survey evidence suggests weaknesses in the system itself. Moreover, our analysis of player effects; i.e., that the change in the identity of the implementer, which was easily observed by experimental subjects, might have been consequential, suggest that general trust in the responsiveness of politicians is preventing engagement but is also rational. Interestingly in our case, player effects do not stem from motivation differences between implementers (as for example identified by Berge et al. (2012)), but rather from the way player identities interact with citizen expectations.

With the multiple pieces of evidence available to us we infer that the failure of the national system is not simply a function of weak demand on the part of citizens but is a function of larger inequalities that the intervention did not address—but which perhaps parliament may be able to address by tinkering with its outreach strategy—and in part a function of more fundamental weaknesses in the broader political system, which parliament likely cannot, or will not, address easily. (full paper)

It’s a tremendously relevant and useful finding for many civic tech and accountability projects. Apply it to your programming, quick, go.  

Enter the mutants?

So those are some pretty significant obstacles to “evidence-based programming” for the civic tech and T/A communities. But the time seems right to do something about it. As Giulio Quaggiotto notes, one of the most fascinating impacts of technology in development is how it has morphed institutional relationships and opened up space for organizations that can work across sectors and expertise in novel ways. This dynamic is no less applicable to civic tech research, and I’m looking forward to seeing some real mutants arise.

But we’re not seeing a lot of that yet. Formal research interest in these issues remains radically decentralized, and bound within academic funding and publishing incentive structures. Research boutiques (like EGAP), university-hosted think tanks (like GovLab) and institutional development powerhouses (like R4D) have been dominant in the practitioner community for a while, and prominent sector-focused initiatives like Making All Voices Count’s support for practitioner research are important. But there aren’t many efforts that cross the academic/practitioner divide. New(ish) research networks like the MacArthur Foundation’s Research Network on Opening Governance and the Research Consortium on the Impact of Open Government remain pretty firmly on academic and practitioner turf, respectively, and while it will be interesting to watch how the Transparency and Accountability Initiative morphs and takes on new issues, I haven’t seen any efforts to facilitate collaboration between academics and practitioners, which would be the real game changer for actionable “research”.

Trouble is, I’m not sure if it will come; the incentives within the academy and the tech-for-good sectors are so radically different. In the meantime, we can’t be blamed for the perennial turn to a platform to solve intractable social problems. The Open Government Research Exchange could be a fantastic tool if it can manage to pass the user tipping point. I’ll do everything I can to help it out.

Another, slightly different conversation

Thinking about all this, it occurred to me that TicTec was in many ways a perfect reflection of how the civic tech and tech for T/A communities deal with the question of research. We’re a fairly tight-knit group of people who share mostly the same convictions and assumptions about the work we do. We are pressured by our donors to think carefully about learning and evidence, so we enter into strategic relationships with a handful of carefully selected researchers, talk a big game about data (because we know about that and it sounds like it should be useful for learning), but we barely have the time and capacity to run our projects, and can’t prioritize an academic article over donor reporting cycles and an overflowing inbox. That sounds harsh, but I think it’s about right, and barring any significant change in how evidence gets produced, it’s likely to stay that way.

This was also perfectly represented in the last TicTec session I attended. The three presenters agreed to shorten their presentations and forgo questions, in order to finish with an open conversation about accessing and conducting research. The discussion was to be led by IDS’s Rosie McGee, who had time to articulate some thoughtful and pressing issues, including such gems as:

  • Given what we have seen, we can safely assume that not many platforms will actually manage to close feedback loops, and we need to think about the extent to which evidence can address this
  • “Evidence-based policy and practice are a myth in this sector”, and there’s lots of evidence that isn’t being used in project design
  • The tech innovation and research communities seem to be operating on the basis of fundamentally opposed world views (iterate to improve vs. the scientific method)

(She presents her thoughts more carefully and accurately here, I noticed after writing this up.)

There was a lot of nodding in the room. But no time for discussion. This was not Rosie’s fault; she was concise. But we were operating within the established framework of a conference, from which we couldn’t deviate to explore the most fundamental and pressing questions about research on civic tech, in much the same way as we develop and implement programs from within specific frameworks for fundraising, reporting and managing the social conference circuit, without having the time to read the most recent evidence, or to really understand what a p-value is.

But I think this is something we can get over, and the first step might be to have a more thoughtful conversation about the bigger questions. Nobody wants another conference (or at least nobody should), but Skype goes a long way and Rosie gave us a great place to start. If we can gather some thoughtful people from across pre-mutant networks, research boutiques, donors and those NGOs who really are hungry for evidence, we might be surprised by how clear (if not feasible) some of the action items are. We might also be able to inform those, like IDS and the Bank, who actually shape how this field understands research.

This shouldn’t be read as disparaging MySociety’s research work, which is breaking important ground in the field, or as a call to replace TicTec. It’s incredibly important to have an arena where the idea of research gets inserted into the way our community talks about itself, and as Zara Rahman notes, there’s a lot to learn from these conversations about how we do our work.

But we do need something more, something differently geeky, where there’s a premium on thinking carefully about survey design and causal inference. If it exists already, then I don’t know about it, but I’d love to. Please, interwebs, produce.
