Research on nearly three decades of democratic innovation and e-participation in Latin America (covering Brazil, Colombia, Mexico, and Peru) has some interesting findings. According to an Open Democracy blogpost (the actual project’s website is down): civil society participation programming uses tech more often than not, smaller countries are less prolific than large countries in terms of tech-driven innovations, and tech-driven innovations are just as common at the national level as at the sub-national level. Though digital innovations are widespread, they only rarely facilitate decision-making (30%), and fewer than half are formalized in legislation or policy.
University of Maryland research on anti-Trump protests finds digital media commonalities among an exceptionally diverse group, suggesting something that approximates a “movement.”
A review of research on government social media use finds that it is generally quantitative, ignoring both users and impacts; a library study in the UK suggests that Open Data complicates good archiving in the NHS; and a study of service delivery in Kenya (275 respondents across 8 county governments) found that decentralization improved delivery, but that the mediating effects of e-government initiatives were insignificant.
Using a multilevel linear model to account for the hierarchical structure of our survey data, we find evidence that performance assessments yield greater policy influence when they make an explicit comparison of government performance across countries and allow assessed governments to participate in the assessment process. This finding is robust to a variety of tests, including country-fixed and respondent-fixed effects.
Whoa. That’s from a new AIDDATA working paper on global performance assessments (international measures of how well countries do at combating corruption, ensuring fair elections, opening data, or what have you).
Those findings aren’t shocking (that ranking countries can motivate government actors in crude ways has become almost as much a platitude as the idea that participatory research enhances uptake), but they’re exciting because they are so clearly and directly relevant to the design of comparative assessments, and this study “feels” robust enough to carry some weight in conversations with the people who manage the budgets and optics of such projects. The analysis is econometric, based on data from the 2014 Reform Efforts Survey (n = 3,400 government officials in 123 low- and middle-income countries) covering 103 different GPAs, and the use of elite survey data is a smart way to get around the problem of measuring influence by policy outcome.
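For readers curious what a multilevel (random-intercept) model on hierarchical survey data like this looks like in practice, here is a minimal sketch using statsmodels on synthetic data. The variable names (`influence`, `cross_country`, `participatory`) and the data itself are hypothetical stand-ins for illustration, not the paper’s actual specification.

```python
# Illustrative only: synthetic data standing in for hierarchical survey
# responses (officials nested within countries); not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per = 40, 30
country = np.repeat(np.arange(n_countries), n_per)
# Country-level random effect creates the hierarchical structure
country_effect = rng.normal(0, 0.5, n_countries)[country]

# Hypothetical GPA characteristics (binary indicators)
cross_country = rng.integers(0, 2, n_countries * n_per)
participatory = rng.integers(0, 2, n_countries * n_per)

# Outcome: respondent-rated policy influence (continuous for illustration)
influence = (0.6 * cross_country + 0.4 * participatory
             + country_effect + rng.normal(0, 1, n_countries * n_per))

df = pd.DataFrame({"influence": influence, "cross_country": cross_country,
                   "participatory": participatory, "country": country})

# Random intercept per country mirrors the clustered survey design
model = smf.mixedlm("influence ~ cross_country + participatory",
                    data=df, groups=df["country"])
result = model.fit()
print(result.summary())
```

The random intercept absorbs country-level variation, so the coefficients on the (hypothetical) GPA characteristics aren’t confounded by between-country differences in how respondents rate influence overall.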
The treatment of GPA characteristics is a bit crude, though. Eight independent variables are identified at the GPA level (bi-/multilateral, whether governments are involved, whether data is public, etc.), but none of these address policy areas or types of norms. Dan Honig’s research has suggested how important this can be for mobilizing the soft power of GPAs, and my own work on OGP suggests that it’s critical.