Following the unveiling of a joint book by Mila and UNESCO calling for a better governance of artificial intelligence (AI), Mila Founder and Scientific Director Yoshua Bengio sat down with prominent researcher Kate Crawford to discuss AI’s future in the face of unprecedented change.
Professor Bengio and Dr. Crawford, Research Professor at the University of Southern California Annenberg, shared thoughts on the rise of public generative AI tools like ChatGPT, intellectual property issues, the need to regulate the industry to avoid power concentration, and philosophical considerations on how human interests fit into AI development.
Concentration of power and copyright issues
The two researchers drew parallels between the mining of minerals and the training of AI systems with public data to criticize the current concentration of power and intellectual property issues raised by generative AI tools.
“Fewer than six companies produce generative AI at scale; that is one of the most concentrated industries in the world,” Kate Crawford said.
“Companies like OpenAI are scraping the creative work of not just decades but centuries of artistic practice (images, words, code) and then ascribing it to their own systems, which will profit very few people. So how do we think about all of the labour that went into all of that data? At the moment, we have no system of recompense. People have no consent, they have no credit, and they have no compensation,” she went on.
“What we cannot do is to allow people to simply be used as raw material, to have their labour extracted, and to never have the gains of generative AI shared more generally.”
Yoshua Bengio pointed out that generative AI is trickier to regulate than the music industry, for instance, because it uses data from multiple sources.
“All of that data that humans are producing and feeding into these large AI systems is really contributing to collective wealth and should be treated as such,” he said.
“How do we make sure that technology is developed for good and that the wealth that is created goes back to everyone?” he asked, emphasizing the need for more public investment in AI development because companies are not currently incentivized by market mechanisms to develop socially beneficial tools.
Cooperation and openness
The two researchers discussed the need for better international cooperation and governance by drawing parallels between AI and agreements around nuclear weapons and energy, which required transparency and openness to inspection.
“We have done this in the past but we don’t have anything similar to this at the moment,” Kate Crawford said.
“What we have is a very small number of privately held companies motivated by, in many cases, either shareholder profit or individual profit that are not sharing what they are doing. We have a profoundly opaque system,” she added.
“It is very hard to govern and regulate something that you cannot see into.”
She and Yoshua Bengio shared concerns around openness in AI research.
“We had a good decade of industry developing AI in a much more open-science way but I’m really concerned that we are going to lose a good part of that,” Yoshua Bengio said.
He used CERN, the international particle physics research centre, as an example of how openness and cooperation could be part of the future of AI development.
“We need to make sure that future progress is not owned by just a couple of companies but actually happens in directions that benefit society, and that’s going to happen if governments maybe create something like a CERN of AI,” he said.
“If we want to develop AI that’s going to help with climate change, education and healthcare, we’re going to have to build these large neural nets that require huge amounts of capital, but hopefully that’s going to be owned by us, by society and targeted towards these benefits.”
Regulatory changes and democracy
The researchers agreed that the pace of AI development far outpaces regulatory changes, and that now is the time to regulate AI development to protect society and democracy from its potential pitfalls.
“Neither of us expected that we would see this level of progress this quickly, and while it’s exciting for its potential benefits, it presents a genuine challenge for regulators, who do not move quickly by design: they move slowly in order to listen to their constituents, to actually debate, and to reach a position of consensus on how to regulate,” Kate Crawford said.
Recent advances in regulation in the European Union and Canada are encouraging, but more needs to be done on a global scale, they said.
“We need to invest much more in social sciences and humanities to help us really rethink our societies at the level of each country but globally as well,” Yoshua Bengio said.
“The kind of competition we’re currently seeing between companies, the race to the bottom in terms of carefulness and ethics, is an indication of how bad things can be in a few years when these tools could be doing much more damage.”
The researchers ended their discussion with ideas on how to further stimulate the conversation around the opportunities and risks of AI development.
"We need to think about what kind of world we want to live in, and how we architect that, because right now those decisions around architecting power are not being made by us,” Kate Crawford said.
“They are being made by an extremely small number of non-elected officials who are actually creating systems that could be very general-purpose and could have extraordinary far-reaching effects, some of which could be quite harmful.”
Yoshua Bengio added a hopeful comment about democracy.
“It’s all about democracy. Powerful tools lead to concentration of power. That’s why we need to change our system in small ways and eventually bigger ways to make sure that we preserve that beautiful thing that is democracy.”
“It is because a lot of people think that a problem is important that things move. So we all have a possibility of being part of making a better world.”
The full conversation can be watched here.