📑 The Authoritarian Trade-Off

Bookmarked The Authoritarian Trade-Off (Real Life)

Those in the business of selling smart solutions for the world’s problems have a vested interest in muddling the differences between distinct problems. They provide what they claim is a universal solution — an approach seemingly capable of processing any data and optimizing the desired outcome, regardless of its source or content — so that any problem can be reframed to fit it. Just tweak the parameters, shovel different data into the analytics engine, and out come “actionable insights.” The pandemic isn’t the first time this has happened (just consider the whole “smart city” industry), and it won’t be the last.

Jathan Sadowski argues that exchanging privacy rights for public health is a false compromise. Rather than comparing the current crisis with the Spanish Flu epidemic, Sadowski suggests there is more to be learnt from the political response to 9/11. Companies with vested interests will step in with short-term solutions that deliver long-term gains for themselves. In China, applications initially designed for social credit systems have been commandeered for contact tracing.

Even by the pragmatist’s standards, adopting these intrusive surveillance programs to contain the pandemic is at best a gamble. There’s a strong case, laid out here by legal scholar Susan Landau, that location surveillance of cell phones does not work for contact tracing because of technical limitations. Similarly, privacy researcher Ashkan Soltani has laid out strong critiques of a major partnership between Apple and Google to create a “Bluetooth-based contact tracing platform” that will be interoperable between iOS and Android phones. For Soltani, the data and capabilities from this initiative are “poor proxies” for actual infection rates and distract attention from more widespread testing. The same skeptical eye should be cast over other tech solutions, which too often rely on the same justification, whether used for national security, public safety, or public health: “Trust us.” Why should we? There’s little evidence that digital surveillance tools developed for counterterrorism have prevented attacks (though the information may be classified), and independent studies looking into the effectiveness of these tools for predictive policing are inconclusive, at best. The burden should fall on those designing and deploying such techniques to prove that this time would be different. But instead, the public may be expected to bear the consequences of the shortcomings, side effects, and externalities.
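For context, the Apple/Google platform is an exposure-notification design: phones broadcast rotating Bluetooth identifiers, and any matching against the keys of diagnosed users happens on the device itself. The Python sketch below is a deliberately simplified illustration of that flow; the HMAC-SHA256 derivation, the 16-byte key sizes, and the interval count are assumptions standing in for the actual cryptographic specification.

```python
import hashlib
import hmac
import secrets

INTERVALS_PER_DAY = 144  # one rotating ID per ~10-minute window (assumption)

def daily_key() -> bytes:
    """Each device generates a fresh random key for the day."""
    return secrets.token_bytes(16)

def rolling_ids(day_key: bytes) -> list:
    """Derive the rotating proximity IDs broadcast over Bluetooth.
    HMAC-SHA256 is a stand-in for the spec's actual derivation."""
    return [
        hmac.new(day_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(INTERVALS_PER_DAY)
    ]

# Alice's phone broadcasts; Bob's phone passively records a few of her IDs.
alice_key = daily_key()
bob_heard = set(rolling_ids(alice_key)[40:43])

# Later, Alice tests positive and uploads only her daily key to a server.
published_keys = [alice_key]

# Matching runs locally on Bob's phone, never on the server.
exposed = any(
    rid in bob_heard
    for key in published_keys
    for rid in rolling_ids(key)
)
print("possible exposure:", exposed)  # True
```

Even in this privacy-preserving form, “Bob overheard Alice’s identifiers” is only a rough proxy for a risky contact, which is the core of Soltani’s objection.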

Although there is talk of short-term compromise, the issue with wicked problems like pandemics and terrorism is that they have no clear end date. This is the challenge of narrative: with no defined end point, the story of the emergency can be manipulated by those in control.

Exploiting crisis is a guiding principle — a best practice for good governance, as the consultant class would put it — for how power is enacted and expanded. The long-term consequences of allowing short-term “solutions” to be applied unabated will mean that, even once the pandemic is alleviated, the crisis will never go away. The programs will be in place and, in the name of prevention, they’ll never be shut off.

In addition, the outcomes of such apps remain questionable, as the Brookings Institution has argued: they do not replace the painstaking process of manual contact tracing, are still prone to false positives, do not always reflect actual transmission, and lack clear policy guidelines.
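The false-positive concern can be made concrete with a one-line Bayes calculation. All three input numbers below are illustrative assumptions, not measured values:

```python
# Illustrative base-rate arithmetic; all three inputs are assumptions.
prevalence = 0.005          # fraction of logged contacts that were truly risky
sensitivity = 0.80          # P(alert | risky contact)
false_positive_rate = 0.10  # P(alert | harmless contact)

p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_alert  # P(risky | alert), by Bayes' rule

print(f"P(alert) = {p_alert:.3f}")                    # 0.104
print(f"P(actually risky | alert) = {ppv:.3f}")       # ~0.039
```

Under these assumptions, roughly 96 percent of alerts would be false alarms, purely because genuinely risky contacts are rare.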

no clever technology—standing alone—is going to get us out of this unprecedented threat to health and economic stability. At best, the most visible technical solutions will do little more than help on the margin. At a minimum, it is the obligation of their designers to ensure they do no harm.(source)

The challenge is that decisions about the ‘new normal’ will not be made once the current crisis is over; they are being made today, if not yesterday.

Some, like Maciej Cegłowski, argue that we already accept such intrusions into our privacy, so perhaps we should produce something good from it all. This is a sentiment shared by Dan Donahoo.

For a different introduction to the problem, see the Harvard Ethics Whitepaper, while Patrick Howell O’Neill, Tate Ryan-Mosley, and Bobbie Johnson provide some questions to consider:

  • Is it voluntary? In some cases, apps are opt-in—but in other places many or all citizens are compelled to download and use them.
  • Are there limitations on how the data gets used? Data may sometimes be used for purposes other than public health, such as law enforcement—and that may last longer than covid-19.
  • Will data be destroyed after a period of time? The data the apps collect should not last forever. If it is automatically deleted in a reasonable amount of time (usually a maximum of around 30 days) or the app allows users to manually delete their own data, we award a star. (A minimal purge routine is sketched after this list.)
  • Is data collection minimized? Does the app collect only the information it needs to do what it says?
  • Is the effort transparent? Transparency can take the form of clear, publicly available policies and design, an open-source code base, or all of these.
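On the third question, automatic deletion is technically trivial, which makes its absence in an app a policy choice rather than an engineering constraint. A minimal sketch, assuming a hypothetical in-memory log of observed identifiers and the roughly 30-day window named above:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # the ~30-day maximum the reviewers look for

def purge_expired(log, now=None):
    """Drop every (identifier, seen_at) record older than the window."""
    now = now or datetime.now(timezone.utc)
    return [(rid, seen) for rid, seen in log if now - seen <= RETENTION]

# Hypothetical contact log: one stale record, one recent one.
contact_log = [
    (b"id-a", datetime.now(timezone.utc) - timedelta(days=45)),
    (b"id-b", datetime.now(timezone.utc) - timedelta(days=2)),
]
contact_log = purge_expired(contact_log)
print(len(contact_log))  # 1: the 45-day-old record is gone
```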
