Methodical Snark | critical reflections on how we measure and assess civic tech

Is OGP asking enough?: an analysis of civic participation norms and policies


This is the third in a series of posts ahead of the OGP Summit in Ottawa, summarizing aspects of my doctoral research on OGP and civic participation. You can find posts on OGP assessment strategies and socialization, plus background on the research.


One of the ways we expect OGP to improve governance in member countries is by telling governments (how) to be more participatory. But a close read of the participation norms and policies promoted and adopted in an OGP context suggests that even if governments did everything they were told, it might not be such a game changer for responsiveness and accountability.

One of the three assessment strategies proposed in my doctoral research is applying more rigorous quality metrics to MSI norm promotion and policy adoption. To test and validate this approach, I took a close look at civic participation norms and policies promoted in an OGP context. This involved an analytical two-step: looking at what specific norms are actually promoted and adopted in an OGP context, and devising quality metrics with which to assess whether civic participation is likely to contribute to OGP outcomes.

Combining those analyses gives a grim read of civic participation norms in OGP. Here’s a quick description.

What are the actual norms and policies at play in OGP?

One can make a couple of distinctions between articulations of civic participation in the OGP context. Firstly, one could distinguish between norms, policies, and initiatives by degree of specificity. But those boundaries are quite fuzzy, so I won’t make that distinction. Then there’s the question of who makes the articulation: whether it’s OGP telling governments what to do, or the participatory things that governments say they will do. Those are rarely the same thing, since OGP is built on the logic that open government means different things in different country contexts, and that governments themselves know best and decide for themselves how to implement OGP. Lastly, the stuff that OGP suggests governments do can be further distinguished between adopting participatory norms in the development of action plans and adopting participation norms in action plan commitments.

To make sense of this, I mapped out all official promotion of participation norms I could find by the OGP Support Unit from 2011 to 2018 (including pre-launch norms promoted between and towards founding members). Here’s a consolidated table that gives an overview of what I found.

This stuff can all be read as norm promotion: OGP telling governments what to do, sometimes more explicitly than others. I found it useful to divide it into the following two data batches:

Norm promotion regarding participation in the development of action plans.
The recent Participation & Co-Creation Standards are the most prominent and elaborate articulation of participation norms for action plan development, but there are lots of others, stretching back a long way.

Norm promotion regarding activities and commitments in action plans
At first blush, it looks as though OGP has less to say about this, but there is a host of exemplary work, largely documenting good practices. One has to dig to find these, however, and their topical focus and policy coverage are pretty uneven.

Lastly, there are all the norms that governments themselves articulate in their National Action Plans. We might not expect those norms to be mirror images of the norms promoted by OGP, but they’re worth looking at to see if they differ significantly in quality or character. NAPs provide a pretty convenient record of this, and the IRM explorer has decently structured data on them, though I have found the internal validity of their coding to be questionable. Notably, I found no easy data source for assessing what governments actually do, and whether this aligns with the commitments in action plans. Anecdotal evidence and a cursory review suggest that it rarely does.
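One simple way to probe concerns about coding quality (strictly speaking a reliability check rather than a validity check) is to independently recode a sample of commitments and measure agreement with the published codes. The Python sketch below is purely illustrative; the code labels and the sample are invented, not drawn from the IRM data.

```python
# Hypothetical illustration: comparing an independent recoding of a sample of
# commitments against the published codes, using Cohen's kappa.
# The category labels and the sample below are invented for the example.

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    categories = set(codes_a) | set(codes_b)
    expected = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

published = ["participation", "participation", "transparency", "transparency", "participation"]
recoded   = ["participation", "transparency",  "transparency", "transparency", "participation"]
print(round(cohen_kappa(published, recoded), 2))  # ~0.62, conventionally "substantial" agreement
```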

What makes civic participation meaningful in regard to government responsiveness and accountability?

Not all participation is created equal. Open washing is real, and can be deliberate and malicious (see Groff et al, 2016 on strategically opaque data). Symbolic e-participation has been widely documented (check out Åström et al’s 2012 comments on the “second wave” of e-participation in developing countries). Nor does participation clearly lead to meaningful policy or governance outcomes, and there’s plenty of research showing that it can have negative consequences. (There’s also a fun analysis showing that participation outcomes are often exaggerated because they are only studied by participation scholars, who want so badly for them to work; see Damnjanović, 2018: 112.)

To figure out when and how participation leads to responsive and accountable governance, I followed the logic of Fung’s participatory cube, noting that different governance outcomes are associated with different “design characteristics” in participatory initiatives. I reviewed the literature on e-participation, communication studies, e-government, and accountability studies to suggest three design characteristics:

  1. Reciprocal, two-way and interferential communication implies that communication between government and non-government actors moves in both directions, or may even be “tri-directional” including a public audience (Ferber et al., 2007). […]
  2. Participant control implies that the citizens participating in civic participation initiatives have some degree of control over either the content or the timing of their participation. […]
  3. Civic participation that is situated in a governance context will enjoy the active representation of government. This implies that the organization of participation is not outsourced to third parties, and that there is institutional “buy-in” and endorsement of participatory processes and outcomes (Liu, 2016; Reddick, 2005). […]

I used Goertz’s analytical model for social science concepts to associate these characteristics with empirical indicators, which allowed me to convert them into quality metrics with which to score OGP participation norms.
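To make that structure concrete, here is a minimal sketch of what indicator-based scoring can look like in practice. The characteristic names follow the list above, but the keyword cues and the binary scoring are hypothetical simplifications for illustration; the actual coding was done by hand against richer indicator definitions.

```python
# Minimal sketch: operationalizing the three design characteristics as
# keyword-based indicators and scoring a commitment text against them.
# The cue words and binary scoring are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    cues: tuple  # textual cues suggesting the characteristic is present


METRICS = (
    Metric("reciprocity", ("respond", "feedback", "dialogue", "two-way")),
    Metric("participant_control", ("agenda", "initiate", "petition", "choose")),
    Metric("governance_context", ("ministry", "agency", "endorse", "institutional")),
)


def score_text(text: str) -> dict:
    """Score a norm or commitment 0/1 on each characteristic via keyword cues."""
    lowered = text.lower()
    return {m.name: int(any(cue in lowered for cue in m.cues)) for m in METRICS}


example = ("The ministry will hold annual consultations and respond publicly "
           "to citizen feedback on the draft budget.")
print(score_text(example))
# {'reciprocity': 1, 'participant_control': 0, 'governance_context': 1}
```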

Results: How meaningful are OGP civic participation norms?

In brief, not very.

OGP’s promotion of participation through action plan co-creation and consultation scores best by far according to the above metrics, but these norms remain vague. The design characteristics of reciprocity and governance context are referenced with some consistency (see, for example, the Government Point of Contact Manual), but without any detail. Participant control, on the other hand, is completely absent from formal SU guidance, but elaborated at length in OGP-affiliated recommendations on designing a multi-stakeholder forum.

This is a very modest win. It is, however, significantly better than the two other types of norms assessed here.

Guidance on the content of NAP commitments is sparse, less explicit, and articulated almost exclusively through simple reference to the core values of open government. Some guidance on the design of reciprocal, participant-controlled, and contextualized participation initiatives can be read out of the smattering of case studies and best practices disseminated by the OGP SU, but these characteristics are neither prominent nor consistent.

As such, it’s not surprising that the participatory initiatives governments actually commit to in their action plans tend not to score well on these quality metrics. Content analysis of NAPs from 61 countries between 2011 and 2014 reveals a body of 1,498 English-language civic participation commitments which, according to these metrics, are not likely to contribute to accountable or responsive government. In fact, the fuzzy language in which most commitments are couched suggests that governments aren’t thinking much about how participation could contribute to such outcomes.
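For a sense of what corpus-level scoring looks like, here is a toy aggregation in the same vein as the earlier sketch. The commitment texts and cue words are invented; the real analysis hand-coded the full set of commitments against fuller indicator definitions.

```python
# Toy aggregation: what share of commitments shows any trace of each design
# characteristic? Texts and cue words are invented for illustration only.

CUES = {
    "reciprocity": ("respond", "feedback", "dialogue"),
    "participant_control": ("agenda", "initiate", "petition"),
    "governance_context": ("ministry", "agency", "endorse"),
}

commitments = [
    "Publish budget data on the national portal",
    "Establish a consultation forum endorsed by the ministry",
    "Citizens may propose agenda items and receive written responses",
]


def share_with_cue(characteristic: str) -> float:
    """Share of commitments whose text contains at least one cue word."""
    cues = CUES[characteristic]
    hits = sum(any(cue in text.lower() for cue in cues) for text in commitments)
    return hits / len(commitments)


for name in CUES:
    print(f"{name}: {share_with_cue(name):.0%}")
# reciprocity: 33%, participant_control: 33%, governance_context: 33%
```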

So what?

This analysis suggests that, according to what the literature on civic communication, participation, and accountability tells us, the civic participation norms and policies promoted and adopted in an OGP context are unlikely to make meaningful contributions to more responsive and accountable government. Even if adopted wholesale by governments, the norms OGP is pushing aren’t likely to open up government.

That’s a grim verdict, but there are a couple of noteworthy caveats and arguments for stubborn optimism.

  1. These metrics might be wrong
    I did a pretty comprehensive literature review, but my takeaways might be wrong, perhaps skewed by over-attention to the use of digital tools in participation. Other reads might identify better metrics, more aligned with the way OGP actually works.
  2. It’s early days
    The most systematic and demonstrably comprehensive aspect of this analysis was looking at a time-bound set of early National Action Plans (2011-2014). It’s entirely possible that government commitments to civic participation are improving, getting less fuzzy, and explicitly describing how they facilitate reciprocity, participant control, and governance contextualization.
  3. Norms are slippery things
    This analysis is premised on the idea that OGP works by telling governments to do stuff, which we hope they will do. But norms are subtle things that move in complicated ways. As demonstrated in my case study research, socialization in government institutions can provoke surprising policy outcomes. That case-based finding was also supported by comparative analysis, showing that OGP membership has a statistically significant causal effect on countries’ open-adjacent policy (more in another post).

So it’s entirely possible that even the promotion and adoption of retrograde norms can have subtle and potentially powerful policy outcomes, which might not be readily visible in the types of data reviewed here.
