by Marco Emanuele
The Science of Where Magazine meets Sarah Kreps. Sarah is the John L. Wetherill Professor of Government, Adjunct Professor of Law, and Director of the Tech Policy Lab at Cornell University. She is also a Non-Resident Senior Fellow at the Brookings Institution. Her work sits at the intersection of technology, politics, and national security. From 1999 to 2003, she served as an active duty officer in the United States Air Force.
– What is the importance of policies that look at the “technological revolution” as a determining factor in the metamorphosis of politics and of reality as a whole?
There’s considerable debate about whether we’re in a Fourth Industrial Revolution, one qualitatively different from the economy that preceded it, or whether this is more of an evolution. Another question is why the difference matters. It matters because if we are in a Fourth Industrial Revolution, then we need to be implementing major policy changes that address the disruptive nature of the technology.
My view is that there are indeed technologies that have disrupted politics and society, but these are, in most senses, differences of degree rather than kind. We’ve always had misinformation and propaganda, for example. Online misinformation certainly spreads farther and faster than misinformation in the form of leaflets or tabloids. Drones also lower the threshold for conflict by reducing risk, given their pilotless status, but here too the technology is not revolutionary, since in most cases actors are using drones in ways similar to manned aircraft (the exception being counterterrorism strikes). Thus, I would argue that if the technologies are not revolutionary, we do not need a transformation of politics but rather a conscious, thoughtful evaluation of the specific areas where our current policies might be touched by these technologies and need to be updated or tweaked.
A good example in the United States is a 1996 law called the Communications Decency Act, which sought to regulate an internet that looked considerably different in the 1990s than today. Both parties in the United States think that the key part, Section 230, should be updated to reflect the ubiquity of misinformation and polarization on the internet, but it’s a case where the cure in terms of a revised Section 230 probably looks worse than the disease. I think the answer in most cases is society updating collective norms of use and behavior rather than a complete overhaul of existing legislation.
– “Disruptive” technologies have, and will increasingly have, a decisive impact on essential services (I think, but not only, on health) and on the labor market (we have published an interview with Keith Sonderling – a Commissioner on the EEOC, Artificial intelligence against discrimination at work). What are, in your opinion, the risks and opportunities?
Disruptive technologies include a wide array of technologies so I think it’s important to disaggregate rather than group all disruptive technologies into one big bucket.
Let’s take automation, for example. For a long time, there were concerns that automation would put people out of jobs. We shouldn’t dismiss that altogether, but the pandemic brought health risks associated with people being in close contact. People left the workforce to take care of older parents or immunocompromised family members, or to homeschool their children. The combined effect was to make automation more appealing, as both safer and a way to address labor shortages. Digitizing tasks like driver’s license renewals now looks helpful, given that municipal offices have been open only sporadically or face staffing interruptions because of quarantines. The same goes for automating tasks like collecting tolls or locating goods in an Amazon warehouse. Today the picture looks less like a tradeoff between efficiency and employment and more like a win-win, but we need to be sensitive to the possibility that the labor market will shift again, and that jobs once available will no longer exist because those tasks have been automated.
Artificial intelligence, on the other hand, introduces more perils and more tradeoffs. By introducing algorithms that help sift through and make sense of complex decisions, we can increase efficiency, for example by screening job or college applications with an algorithm that has calculated, based on past performance and individual profiles, the “type” of curriculum vitae most likely to succeed. However, we’ve learned that these algorithms often incorporate biases. If the data used to train the algorithm comes from a decade ago, when the population looked considerably different and was much less diverse, the “ideal” candidate will look more homogeneous than the contemporary pool. But this doesn’t mean forgoing AI as a way to make jobs and decisions more efficient. One of the most persuasive developments has been to ensure human supervision over these decisions, so that we have a computer-human interface that keeps an eye on decisions and creates guardrails against the unfettered and potentially biased use of AI.
– Bob Fay of the CIGI think tank, interviewed by us (Towards the Digital Stability Board for a digital Bretton Woods), spoke about the perspective of a “Digital Bretton Woods”. Do you think this is a viable path?
It’s a provocative proposal and provides a nice catchphrase. To engage its merits, though, we should go back to the original Bretton Woods system, which coalesced at the end of World War II to create a liberal international order that would prevent a return to the inward-looking economic policies of the 1930s, which exacerbated the economic disaster of the Depression and arguably fostered the conditions for World War II. It established a gold-exchange standard to create a more stable set of currencies, as well as the General Agreement on Tariffs and Trade, which ultimately led to the World Trade Organization and helped lower trade barriers, in contrast to the protectionist policies of the 1930s.
Two features made the Bretton Woods system possible. One was the disaster of World War II, which created a “never again” international psyche. Another was that the United States was a global hegemon and brought the international parties together to hammer out the agreement.
The current circumstances bear almost no resemblance to the late 1940s and early 1950s. First, we have no embers of war from which we are emerging, and therefore no clear impetus for this type of coordination. Second, the US, rather than being coherent and dominant internationally, is polarized and fractured. Generating domestic consensus would be challenging, to say nothing of generating some sort of international consensus for action. China and Russia would need to be part of such a system, but the preconditions of rapport and trust for an international agreement are far from present.
One thing the United States has tried to do is create digital norms among like-minded advanced industrialized democracies, but the Democracy Summit of December 2021 was widely ridiculed as hypocritical, ill-conceived, and ineffectual. In the US, we are still muddling through what these digital norms should look like domestically, as has been clear in every meeting between the Tech CEOs and the US Congress, which will make it close to impossible to create consensus across borders. Indeed, if anything, the idea of Digital Sovereignty, in which each country crafts its own digital solutions that incorporate its own norms and values, is ascendant, not a Digital Bretton Woods that implies concerted coordination across borders.
– Finally, there is the strategic question of governance and use of data. We deal with the science of where, geolocation of data for prevention and intervention in many sensitive areas of reality (from public services in cities, to climate issues, to precision agriculture, to space). What do you think about it?
Data is such a double-edged sword. The same features of data usage that can make society, business, and government operate more efficiently are the ones that create circumstances ripe for abuse. For example, there were cases of digital contact tracing in which local governments sold location data to third parties for commercial use. And there are still cases where autocratic governments are exploiting the crisis of the pandemic to overreach and conduct surveillance of the private lives of their citizens in the name of public health. The pandemic has created that sort of crisis situation, a permissiveness toward surveillance and a concentration of government power that may see a reckoning after the pandemic, but if past is prologue, the accrual of such powers can be difficult to roll back.
Indeed, the public health crisis opening the door to a concentration of data and government powers resembles the type of dynamic we would once have associated with wartime. Crises create permissiveness in terms of government oversight and regulation, but those powers are often sticky and stay behind after the crisis has passed. The same may be true of the current public health moment, especially since, like many recent wars, it looks like there will be no clear end, no “mission accomplished” moment. In the private sector, the public registers its disapproval by voting with its feet, switching social media platforms, for example from Facebook, if it views that company as insufficiently guarding its privacy, to a more secure platform. The public sector is a bit different. Members of a democratic society will ultimately have to weigh in on whether and when these “wartime” public health measures have gone from keeping us safe to being an unnecessary and overly broad reach into our privacy.