Methodical Snark critical reflections on how we measure and assess civic tech

Short summary of OGP research on open gov and trust in Latin America

What is it?: Restoring Trust through Open Government: an analysis of open government initiatives across Latin American subnational cases (a 36-page report presenting research findings from 9 cities, commissioned by OGP).

Main Point: Online interaction is bringing municipal governments and publics closer together, building trust in government in the process.

Where it’s coming from

I read this as part of OGP’s international campaign to position open government as a policy mechanism for building trust in government, given the graphic branding and the lack of any other info. The OGP support unit clarified that though the report was commissioned to inform the Skeptic’s Guide to Open Government, it wasn’t used to that end, for the same quality reasons described below.

Findings

  • Government and civil society leaders responsible for open government believe that it builds trust.
    (Note that the author’s findings that “…trust in institutions has dramatically increased within the organizations actively participating in the open government policies” and that “…organizations have become “trust brokers” between the government and a broader audience of c.” (sic) should be taken with a grain of salt. See the section on shortcomings, below.)
  • “Larger cities with better internet penetration have more interaction”, where interaction seems to be equivalent to social media engagement; see the discussion of confusing measures under Shortcomings, below.
  • “the task of building trust is a difficult road” (21). Yes, also in research.

Insights

  • An interesting link is drawn between the concept of process-based trust-building and co-creation spaces (16-20). Drawing on self-reporting from several initiatives, the report suggests that

“The trust building process is incremental, and when a point of substantive interaction is reached these organizations become “trust brokers” that help the government reach out to other organizations and networks that they would not be able to otherwise. They share information to other organizations, they perform a pedagogical role of sharing information on public policies with actors that are usually reluctant to interact with the government, and finally they bring new actors and skills to collaborative spaces” (17).

Should you care?

No.

This paper is an excellent example of why, for all its failings, researchers still use a peer-review system. Any peer-review process would have required major revisions before this could be published at all. In part, perhaps, because of petty academic politics, but also because, as presented here, the paper is misleading and the findings cannot be trusted.

Methods

The report conceptualizes the link between open government and trust as produced through interaction with government (though loosely; see below), and operationalizes it by looking at three “mechanisms”: a specific programming modality (deliberative participation) and two loosely defined concepts (institution-based trust and process-based trust) (6).

Nine initiatives were selected, each from a different Latin American city, and two interviews were conducted for each city (one with civil society, one with government). An online survey was then distributed to 37 civil society leaders (with no data on country distribution), and network analysis was run on 3 months of tweets mentioning the official accounts of subnational governments. No theoretical framework or method is used to integrate the analysis of these data.

Shortcomings

  • Lack of engagement with the literature. The section allocated to presenting “the debate surrounding trust in institutions” (2) does a fine job of describing trust in government in the abstract, and broad trends in Latin America, but fails to engage the literature linking trust to anything relevant to open government.
  • Sloppy writing and presentation. The prose here is riddled with typos, incomplete sentences, inconsistent spacing, and run-together words. Often this results in confusing equivocation, for example labeling Figure 2 as “Citizen participation” when it presents data from a survey of civil society leaders (whose experiences are bound to be different from those of “citizens”) (14).
  • Citation failures. Failing to include page numbers in references to other works is very bad form when you are attributing a specific claim or finding to another author. This is annoying when a blanket reference to a 700-page book is used to support a very specific attribution that can’t be documented (ref Coleman on page 5; I read the book, scoured treatment of it in other articles, and even emailed the lead author of this paper but got no response). It’s worse for broad statements like “Tollbert and Mossberger explain that an important part of the current crisis of trust is that today’s citizens demand more participation in public institutions than before (2006)” (1). I’ve carefully read each of the 12 times T&M mention participation in their article; they don’t say that. Worse yet, I found at least one example of a misquote (the quote attributed to Keele on pg 5 should begin with “Here,” and instead begins with “Perhaps”, which better supports the paper’s argument). This suggests that sloppy citation practice is linked to sloppy reading of the literature. It makes me worried that the authors are also sloppy with their data, which makes me trust the findings less.
  • Unclear measures. The measures used in this analysis are confusing. “Engagement” seems to be used interchangeably with “interaction” when analysing tweet data, and is alternatively “calculated by dividing the Page’s PTAT with the total number of Likes” (10) and “measured by the amount of likes and retweets to posts” (10). PTAT is defined in footnotes as “how much people have interacted with a page or its content, in any way, over the last seven days” (11, 31), so in a data set of tweets it presumably refers to links. Likes are not defined, but tables suggest the term refers to Facebook page likes (though Facebook data was not mentioned in the methods section).
  • Construct validity. The section on deliberative participation references a handful of initiatives uncovered in interviews, crunches numbers for Facebook engagement and sentiment analysis for tweets from government accounts, and runs network analysis on tweets mentioning government. That’s not deliberative participation. The sections on institution-based trust and process-based trust don’t suffer from such dramatic confusion, but the distinctions between these three “mechanisms” are not clearly identified, and their comparison (21) lacks credibility.
  • Sample bias. The main data sources for this study are interviews with civil society and government leaders, and surveys of civil society leaders, all directly engaged in implementing the initiatives under study. At best, this means the study can hope to document the perspectives of open government enthusiasts; it does not demonstrate an actual effect of open government on anything. After all, it is no surprise that “Every public official interviewed for this study reflected on the importance that governments have vested in to social media for direct communication with citizens” (10). Also, sentiment analysis is run on tweets by government accounts. You know, in case government said anything critical about its own projects.
