Pausing AI development?
A letter has triggered a global debate about an AI development pause. The responses have been interesting, to say the least.
(This post is not per se related to epidemiology, but the developments around pausing AI seem too important to skip.)
You have probably heard about the letter calling for a pause on developing models better than GPT-4. This has triggered a global debate which, in my view, is very welcome. The reactions, however, have been somewhat baffling.
The letter is specifically limited to work on models more powerful than GPT-4. Unless you think you are building a better model than GPT-4 (which you very, very likely are not), your work is not affected. Of course, it might be difficult to measure what "better" means, but it's clearly doable.
Many people seem incapable of separating the message from the messenger. I'm not a fan of Elon Musk either, but many of the arguments I've seen online are a version of "I don't like Elon Musk." That is your right, but it says nothing about the arguments themselves.
Similarly, a group of people has worked on this stuff for a long time, and they’re now rather angry that their work hasn’t gotten enough attention. Now Elon comes along, and boom - it’s global headlines. While I agree this isn’t fair, it’s not what we should discuss. Please don’t bury your excellent arguments in a frustration sandwich, as many may not read past the first layer. We need your voice more than ever!
In understanding the arguments, I’ve found it helpful to ask what a person believes concerning next-generation AI (i.e., AI that is more powerful than GPT-4). There seem to be three clusters:
1. It’s a generation-changing event (like the smartphone, the internet, etc.)
2. It’s a century-changing event (like computers, electricity, etc.)
3. It’s a civilization-changing event (like agriculture, writing, etc.)
Discussions with people in the first group are challenging. They think it's just like the calculator, and we didn't have to ban that, so what's the problem? Discussions with people in the third group can also be challenging. Some are quite extreme and claim that AI and humans may not be able to coexist (and that AI would win), which triggers a lot of existential angst. I'm in the second category, with some sympathy for the (non-extreme) third. I think the first group gets it quite wrong.
I’ve run a poll about this on Mastodon (non-representative, needless to say): 80% think we’re in the first category, and that this kind of AI is just like the smartphone or the internet. I am aware that I’ve essentially just said that 80% of my Mastodon followers are wrong 😅, but I am going to let the statement stand. I do think next-gen AI is beyond both the smartphone and the internet in terms of the impact it will have. We’re talking about approaching general intelligence and eventually surpassing it. It’s a BFD.

Indeed, the key argument is that AI is now in group 3 (civilization-changing), or at least in group 2 (century-changing), and things are moving so fast that we need to buy some time to react. Remember October 2022, when nobody had heard of ChatGPT? That GPT-4 is just a few weeks old? That people were writing op-eds about “the next AI winter” (a period of reduced funding and interest in AI) as recently as 2022? The speed is utterly unparalleled. So are the consequences. I find it only reasonable that we say: whoa, we need to get some sort of control over this.
Many counter-arguments say a pause is not feasible. On its own, that is an interesting discussion. But it’s quite depressing to read the “a pause would be necessary, but it is not feasible” version. It’s a declaration of bankruptcy. Arguably, our track record in the face of developing disasters is not great (of course, “Don’t look up” comes to mind). Nevertheless, I think we humans are better than that. We can get our act together if we want to. AI is not magic. It’s tech, and tech is human-made. Human-made things can be regulated.
Another line of argument is that we should not stop this, but rather make use of it for education, medicine, climate change, etc. I fully agree that these are great areas of application. But pausing a little to get our heads together, so that we can agree on what we should and should not do, won’t have a dramatic effect on us as a society. Most people agree. The worry is usually the point above: yes, “we” could do it, but “the others” won’t, and therefore we can’t afford to be outcompeted. I see the point, but I would argue that this makes an international agreement even more important.
Ultimately, my biggest concern is that the fear of these technologies will grow so large that we end up with a complete ban. I would regret this because it would be a shame not to use the awesome capabilities of AI to advance human flourishing. That is my main reason for having signed this letter. Having no regulation seems bad (and unlikely). Massive blanket bans seem bad (but not as unlikely as no regulation). Smart regulation seems like the way to go. We can argue whether a pause is the right way to get to smart regulation. But it does feel like we’re driving a car and have just figured out how to make it go 50x faster, while driving. I don’t think that saying “we shouldn’t make this go faster for a while” is a bad idea.
Speaking of technology and regulation: I’m quite glad that we have morphine, nuclear energy, and CRISPR for gene editing. But I’m also glad that people can’t just go and offer this stuff via an API to anyone on the internet. At the end of the day, almost everybody agrees that regulating very, very powerful stuff is meaningful. Well, next-gen AI is now very, very powerful.
“But isn’t it just like a knife? You can use it for good and for bad.” Yes. But knives didn’t all of a sudden get dramatically better. They did not approach general intelligence. They were never connected to the internet. They were never a generic technology that you could apply absolutely everywhere. We never had to worry about knives’ capability for self-improvement. We are in a different regime with respect to the possibilities of this technology. Our old mental models don’t work as well as they used to. Yet another reason to take it more slowly.
I don’t have the answers, and I’m curious to hear other views. This is my current thinking on the letter and its ensuing debate. I will happily update my views upon encountering convincing alternative arguments.