5 questions for Mark Brakel
With the help of Derek Robertson
Welcome to our weekly feature: The Future in 5 Questions. Today we have Mark Brakel, director of policy at the nonprofit Future of Life Institute. FLI's transatlantic policy team seeks to reduce the extreme, large-scale risks of AI by advising on near-term efforts to govern emerging technologies. FLI has worked with the U.S. National Institute of Standards and Technology on its AI risk management framework and contributed to the European Union's AI Act.
Read on to hear Brakel’s thoughts on slowing down AI releases, not taking system reliability for granted, and cross-border regulatory cooperation.
Answers have been edited for length and clarity.
What’s one underrated big idea?
International agreement through diplomacy is greatly underestimated.
Politicians and diplomats seem to have forgotten that in 1972, at the height of the Cold War, the world agreed to the Biological Weapons Convention. The convention came about because the US and Russia were really concerned about the risks of proliferation of these weapons – how easy it would be for terrorist groups or non-state militias to produce these types of weapons.
At least for us at FLI, the parallel with autonomous weapons is clear: it would also be very easy for terrorists or a non-state armed group to produce autonomous weapons at relatively low cost, so the proliferation risks are enormous. We were one of the first organizations to alert the public to the dangers of autonomous weapons, with our Slaughterbots video on YouTube in 2017.
Three weeks ago I was in Costa Rica for the first intergovernmental conference on autonomous weapons held outside the U.N. All the states of Latin America and the Caribbean came together to say that we need a treaty. So despite the prevailing narrative of strategic rivalry between the U.S. and China, there are definitely still areas where international agreement can be found. I think that's an idea that has quietly gone out of fashion.
What technology do you think is overhyped?
Counterintuitively, I'll say AI and neural networks.
FLI's founding philosophy is that we care about the long-term potential of AI. But the same week we had all the GPT-4 craziness, we also saw a human defeat AlphaGo's successor at the game of Go for the first time in seven years, long after we thought we had ceded the game to computers.
It turns out that systems based on neural networks are not as good as we thought. If you build a circle around the AI's stones while distracting it in a corner of the board, you can win. That's an important lesson, because it shows these systems are more brittle than we think, even seven years after we assumed they were unbeatable. It echoes an insight that Stuart Russell, the AI professor and one of our advisers, shared recently: in developing AI, we put too much trust in systems that turn out to be wrong when tested.
What book has most influenced your vision of the future?
Professionally, I have to say Life 3.0, because our president Max Tegmark wrote it. But the book that struck me most was To Paradise by Hanya Yanagihara. It's a book in three parts, and the third part is set in New York in 2093, in a world that has lived through four pandemics. You can only buy apples in January, because that's the only time it's cool enough to grow them. Otherwise, you have to wear a cooling suit when you go outside.
It's an eerily realistic look at a world shaped by four pandemics, massive biological risk and the climate crisis. AI doesn't feature in it, so you have to set that thought aside.
What could government be doing regarding technology that it isn't?
Take action to slow down the race. I saw an article earlier today about Baidu putting out Ernie, and I thought, "Oh, here's another example of a company feeling pressure from the likes of OpenAI and Google to come up with something too." And now their stock is down, because the system isn't as good as they claimed.
And you have people like Sam Altman saying they're really concerned about how these systems could change society, and that we should be pretty slow in releasing them so that society and the rules can adjust.
I think government should intervene to make sure that happens: forcing companies, through regulation, to test their systems and do risk-management analysis before releasing anything, instead of giving them an incentive to race one another and push out more and more systems.
What surprised you the most this year?
How little the EU's AI Act comes up in the U.S. debate about ChatGPT and large language models. All this work has already been done, like drafting very specific legal language on how to deal with these systems. Yet I've seen several pieces from various CEOs saying they support regulation, but that it would be very difficult.
I find that narrative amazing, because there's a pretty solid blueprint already out there that you can take bits from.
One of the cornerstones of the AI Act is its transparency requirement: if a person is interacting with an AI system, it has to be labeled as such. That's a basic transparency rule that could work very well in some U.S. states or at the federal level. There are all these good bits that legislators can and should look at.
What do we actually know about the just-released GPT-4?
Besides the fact that it's already been jailbroken, that is. Matthew Mittelsteadt, a researcher at the Mercatus Center, tackled the question in a blog post yesterday, one that also deals directly with the new language model's policy implications.
The early returns: basically, it's early. "What we can say with certainty is that this will serve as a catalyst for increased AI hype and competition," Mittelsteadt writes. "Any predictions beyond that are basically guesswork."
Still, he offers his own policy takeaways: that GPT-4 shows how much, and how quickly, progress can be made on reducing errors and bias, something regulators should keep in mind; that regulators should therefore frequently update their priors with new research when weighing rules; that open criticism and stress-testing of AI tools is a good thing; and that the discourse around "alignment," sentience and potential doom is wildly overheated. — Derek Robertson
The European Commission convened its second citizens' panel on metaverse technology this week, offering a real-time look at the long and complicated process of regulating new technologies.
Patrick Grady, a policy analyst at the Center for Data Innovation, summed up the session in another blog post published today (we covered the first panel last month). He contrasts a comment from Renate Nikolay, deputy director-general of the European Commission's tech department, who said the EU should tackle regulation of the metaverse "in its own way," with one from Ivo Wolman, another Commission official, who said Friday that the EU was open to bringing in other countries.
At the very least, the seeming contradiction is a reminder of how early this regulatory process still is. (Grady additionally notes that, "contrary to Ivo, Renate called the internet the 'wild west' and [said] this initiative is a precursor to regulation.")
Another reminder of how early the technology itself still is, and of how Europe could fall behind: technical problems apparently overshadowed the entire session. "Many participants were unable to join the metaverse platform," Grady writes, and the glitches "meant audience questions had to be skipped and some members experienced major delays in joining," a reminder that the best versions of these products are still some way off. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.