Methodical Snark critical reflections on how we measure and assess civic tech

What’s e-gov got to do with it?



Emily Shaw posted a great piece on the relevance of e-governance research for civic technology earlier this month. She argues that academic e-government research dwarfs the nearly non-existent academic interest in civic tech (169,000 vs. 185 hits on Google Scholar), and that civic technologists should care about research on e-government.

And in the civic tech world, we can certainly derive value from the wisdom of our e-government colleagues who’ve been working to understand what happens when government service meets the internet. To the extent that civic tech implementation requires at least an open mind—and better, an enthusiastic partnership—on the side of our government partners, it is best if we know where they’ve been coming from.

I think she’s absolutely right, but I want to challenge a couple of the distinctions she makes, and look for more proactive ways that civic technologists might engage with e-government learnings.

What’s in a name?

Emily draws a firm distinction between e-government and civic tech because the users are different (e-government is developed and implemented by governments per se), but also, she argues, because they operate under different theories of governance. Specifically, she argues that e-government is focused on digitizing government services to make them more efficient, and thus adheres to a traditional governance model, whereby accountability is exercised primarily through elections. Civic technology, on the other hand, facilitates more regular interaction, in which citizens are more equal partners, and this “provides a communication structure that can fuel a different model of governance: a Government 2.0 or open government model.”

This is a fair distinction, but it’s worth noting that civic tech is here defined by its ambitions, and e-government shares some of those ambitions as well. We tend to associate e-gov with things like booking appointments online or electronic tax filing, but for many working in the field, that’s just the start. The literature identifies four stages of e-government, the fourth of which looks a lot like the governance model for civic tech, with regular interaction (Keller, 2000; also Kim et al., 2009). In fact, some of the most common indicators for assessing e-government have to do with regular input and interaction between government and citizens, of the variety that recalls a gov 2.0 model of governance.

For me, this suggests more of a continuum when we think about the objectives and power struggles surrounding e-gov and civic tech. The relationship between the “fields” recalls a Venn diagram more than a sharp distinction.

It’s also worth couching this whole question in the 2013 discussion about how the amorphous “field” of civic-tech/open-government/digital-advocacy names itself (most notably Heller, Peixoto and Steinberg). We’ve known for a while that we have a problem naming our work and understanding the distinctions between the labels we use. In this sense, the distinction between e-government and civic tech isn’t new; it’s a classic example of stuffier practitioners sticking with a dusty label while the kool kids in skinny jeans get all the innovation. Of course that’s hyperbolic, and to the extent it’s true, we also have to acknowledge that different labels have different affordances and limitations. But what I’d like to emphasize, and what’s perhaps most relevant for Emily’s argument, is the implications this has for learning.

Peering over the fence

As Tiago pointed out in 2013, “the lack of terminological consistency in the field is a major obstacle to cumulative learning,” because we don’t tend to pay attention to lessons that don’t share our label. I don’t think we could find a better example of this than Emily noting “the perspective of the civic tech movement: E-government!! Are you serious?!”

We tend to dismiss learnings from e-gov because we assume they are stodgy and irrelevant, likely for many of the same reasons that support Emily’s distinction between governance models. But mostly, I think, it’s because those learnings are bound up in differently labeled social circles and knowledge exchanges, where the cool kids don’t tend to tread.

E-government conferences are stodgy affairs (I’ve been to a few): not a post-it note or lightning talk in sight. And most of the learnings are tied up in academic papers, which civic technologists simply don’t have time to read (though few are thoughtful enough to admit it).

If civic technologists had the time and inclination to sift through the weighty literature on e-government, they’d find a lot of useful lessons there. There are studies suggesting that:

  • The more useful websites are to government managers, the more transparent those websites are likely to be (Yavuz & Welch, 2014)
  • At the community level, increases in participatory governance tend to directly lead to increased transparency and information from government, but increased information does NOT lead to increased participation (“It is possible that the revolution in e-government is increasing interest in participation and transparency by researchers more than new media technologies are actually affecting participation and transparency outcomes.” Welch, 2012)
  • Free time and social networks are just as important drivers of “e-participation” as socio-economic characteristics and political affiliation (Vicente & Novo, 2014)
  • Transparency mechanisms in government are likely to be more effective when managed by individuals who think they are important (Ruijer, 2014)
  • Improved usability of government websites increases the perceived credibility of the information they present (Huang & Benyoucef, 2014)

Those are just a few findings I’ve noted after reading up on the literature for a few weeks, so I’m undoubtedly only scratching the surface. Not all of them are terribly surprising (but, hey, #evidence), and of course none of them should be taken at face value (they’re all couched in specific contexts and caveats, because: science). But there’s lots and lots of this type of thing out there, and it’s relevant for thinking about how to build civic tech, precisely because it carefully explores what has and hasn’t worked. And the vast majority of e-gov empirical research focuses on non-government users (Wirtz & Daiser, 2016), exploring incentives, obstacles and user stories that are directly relevant to civic tech design.

So I’d go further than Emily and suggest that civic technologists should do more than have an open mind. They should make an effort to peer over the fence into the research on e-government, and try to glean what’s useful for their specific projects.

Is the game fixed?

Of course that’s easier said than done. This is dense material to wade through, some of it barely written in English, and it often requires more time and training than most project managers have. And even for those who have both, the vast majority of e-government research is paywalled.

So this, coupled with the funding incentives to produce shiny new program modalities that don’t make the slightest reference to dusty stuff like e-gov, means it’s likely not going to happen much. Unless we get more creative about how we partner with academics.

There are a lot of interesting things bubbling up at the border between research and practice around civic tech: networks, repositories, meta-mapping. I’d like to think that there are opportunities for finding allies and infomediaries just across the border, so that practitioners can ask specific questions about all this: “What does the literature say about increasing uptake of mobile platforms for national accountability in rural communities?” That type of thing. That’s the kind of rabbit hole nerds like me (and Emily, I’m guessing) might go down if asked by someone designing a program. I’d like to think that civic tech practitioners could send a scout to dusty e-gov conferences to report back with juicy tidbits and links. I’d like to think that we could find all kinds of creative, soft-touch ways to engage if we actually knew what kind of “evidence” would be most useful. But then again, I’d also like to think that all this #techforgood stuff will make a difference in the end.

