The Science of Where Magazine meets James Brusseau, Professor at Pace University.
– Artificial intelligence is a central theme in the global debate. I ask you to explain to our readers your proposal for a decentralized AI ethics.
Decentralized AI ethics answers the problems of centralized, expert-driven ethics with big data and natural language processing (NLP). The proposal is that NLP, one of the most advanced areas of contemporary artificial intelligence, be used to locate and gather public information about how companies and technologies are performing against the standard metrics of AI ethics, including serving user autonomy, privacy, fairness, and the rest.
The idea is that we do not need a centralized discussion of experts when we can go out into the vast, diverse world of public information – corporate reports, watchdog publications, news reports, journalism, and social media – and find the information required to determine how well companies are performing ethically. The information is already out there! All we need to do is capture it, and then organize it into useful measurements. And this kind of efficient management of dispersed and unstructured linguistic data is just the kind of task that AI may accomplish in its current state of development.
Let’s take a quick example, say how well a company like Facebook protects user privacy. Instead of gathering a group of experts to talk, a decentralized strategy goes out into the world and finds the talking that has already been done in diverse places, and then weighs the results.
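To make the strategy concrete, here is a toy sketch of it in code: publicly available text snippets about a company are scored on one ethics dimension (privacy) and aggregated into a single measurement. A real system would use a trained NLP model; a tiny keyword lexicon stands in for it here, and the word lists, weights, and snippets are all illustrative assumptions, not part of any deployed tool.

```python
# Toy sketch of decentralized AI ethics: score public-text snippets
# about a company on one dimension (privacy) and aggregate them.
# The lexicon below is a stand-in for a trained NLP model.

POSITIVE = {"encrypts", "consent", "transparent", "protects"}
NEGATIVE = {"breach", "leak", "tracking", "sells", "exposed"}

def tokens(text: str) -> list[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return [w.strip(".,;:!?\"'").lower() for w in text.split()]

def score_snippet(text: str) -> int:
    """Crude lexicon score: +1 per positive cue, -1 per negative cue."""
    ws = tokens(text)
    return sum(w in POSITIVE for w in ws) - sum(w in NEGATIVE for w in ws)

def privacy_index(snippets: list[str]) -> float:
    """Average snippet score: the aggregated, decentralized measurement."""
    return sum(score_snippet(s) for s in snippets) / len(snippets) if snippets else 0.0

snippets = [
    "The company encrypts user messages and asks for consent.",
    "Regulators report a major data breach and hidden tracking.",
    "A watchdog says the firm sells location data.",
]
print(privacy_index(snippets))  # → -0.3333333333333333
```

The point of the sketch is only the shape of the pipeline: capture dispersed language, turn it into numbers, and average across diverse sources so no single committee or document decides the result.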
Of course the technological challenge is significant, but some leading companies and academic institutions are already probing this strategy. Databricks, a private company closely tied to the University of California at Berkeley, is doing something analogous under the lead of Antoine Amend, and that work is advancing briskly. Then, in Italy, the University of Trento is arguably the leading academic center for AI, and I am working there in the lab of Giuseppe Riccardi and with Giovanna Meloni to explore the possibilities of this decentralized strategy.
Then, as a last point: assuming this works, will it solve every problem? No. And it will create new problems, some foreseeable and some probably unforeseeable. But that is a different conversation.
– Technological innovation is defined by many analysts as a revolution. As a philosopher, what are the risks and opportunities in this seemingly unstoppable process?
Books could be written on the risks and opportunities – they already have been! But let me concentrate on one part of your question, the qualification of this technology as an “unstoppable process.” I believe that is a key insight because it conditions the way we think about solutions to risks and problems. If the process is unstoppable, then solutions are accelerationist. I mean, the way out of problems is to push ahead and through them. Every AI-created problem is also its own solution in the sense that we just need more AI, and faster.
To use an analogy from climate change: if it is true that industrial advance is causing environmental risks, then the solution is not to go backwards; it is not a return to windmills or something like that. Instead, it is to increase the speed forward. The way out of problems is straight through, so if industrial technology causes a problem, then the solution is more and faster industrial technology. This might mean figuring out how to use nuclear fusion as an energy source, for example, instead of fission.
In any case, and regardless of the problem, in the context of AI the technology will create risks and cause problems, and the solution will not be to more strictly regulate AI, or to constrain AI research or deployment. The solution will be to employ ethics to define the problem, and then to address it with more AI technology, and still faster development. This is accelerationism.
– The fields of application of AI are now very numerous. Some areas are very sensitive: I am thinking of AI in the field of healthcare. How is your research developing on this point?
An exemplary case occurred in Italy involving a very thoughtful information engineer working in healthcare named Alberto Signoroni, who developed an AI tool to read chest X-rays and diagnose Covid. This was important during the pandemic because there was a real risk of having too few radiologists to do the work. If no doctor was available, the AI could step in to read the images.
There was an obstacle, however. Rapid development of the technology required the use of many images from the present and past, and many of those were protected by patient privacy rights. Legally, they could not be used to train the AI. This leads to an AI ethics dilemma about when and why laws may be ignored.
I do not want to get into the question here about whether the Covid emergency justified violating the privacy rights of patients. I do want to say that the pressure of the pandemic created a need to decide, and the decision could be made in a centralized way by gathering experts to discuss, or in a decentralized way by using AI to measure broad social sentiment about the question.
It is my experience that gathering experts to discuss can lead to stalemates. An advantage of data-driven decisions yielded by natural language processing applied across an entire society is that a justifiable conclusion can be reached rapidly. And by justifiable I mean that, within a democratic society, a conclusion that can reasonably be associated with a general social sentiment can be considered legitimate. It is not the only way to reach a legitimate, democratic solution, but it is a fast and defensible way.
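A minimal, hypothetical sketch of such a decision rule: society-wide sentiment scores, as an NLP model might assign them to texts discussing the question, are aggregated, and a conclusion is returned only when the sample is large enough and the sentiment clear enough. The sample-size and margin thresholds are my illustrative assumptions, not part of any existing system.

```python
# Hypothetical decentralized decision rule. Inputs are sentiment
# scores in [-1.0, 1.0], as a trained NLP model might assign to texts
# discussing a question such as whether emergency AI training may
# override patient privacy. Thresholds are illustrative assumptions.

def decide(scores: list[float], min_sample: int = 1000, margin: float = 0.1) -> str:
    """Map aggregated social sentiment to a defensible conclusion."""
    if len(scores) < min_sample:
        return "insufficient data"   # sample not yet representative
    mean = sum(scores) / len(scores)
    if mean > margin:
        return "socially endorsed"
    if mean < -margin:
        return "socially rejected"
    return "no clear sentiment"      # fall back to other procedures

print(decide([0.3] * 1500))   # → socially endorsed
print(decide([0.05] * 1500))  # → no clear sentiment
```

The design choice worth noting is the explicit fallback: where the measured sentiment is thin or ambiguous, the rule declines to decide, rather than manufacturing a mandate.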
– Our magazine deals with the culture of technology and “the science of where”: technological solutions that use geolocation to govern the data we produce every day. In this too, AI plays a fundamental role. How do you judge the prospect of data-driven development and governance of historical processes?
One of the most interesting aspects of AI is that, geographically, it moves in two contrary directions simultaneously. As part of digital and virtual reality, it literally erases place because users can be anywhere and influence reality at every node of the web. But AI also allows hyper-localism. Never has it been so possible to acquire and understand so much data about such limited geographic slices of the world. For instance, we have always been satisfied with the vague cultural idea of the Basques, but now we can see what, exactly, makes them culturally unique: in terms of their purchases as compiled by Amazon, their desires as illustrated by dating apps, their habits as mapped by their phones’ geolocators, and so on. And we can do all these things as a way of sharply defining a certain group of people in a certain physical place.
So, governments will need to deal with these two contradictory forces. On one side, say the libertarian side, it is nearly impossible to limit information access and opportunities for people. Anyone anywhere can be and do anything. That’s not literally true, but take it as a polar extreme. Then, on the other side, as we have seen with the use of facial recognition technology in China, it has also never been easier to track and control citizens. So, the coming difficulty will be about balancing previously unseen and almost limitless powers. It will be very different from balancing the demands of the rich and poor, the old and young, men and women and so on. Instead, it will be about balancing these two uses of technology, one that allows anyone to freely do anything, and the other that constrains people within their own data, their own established algorithms of behavior.