The dread of ChatGPT
That feeling when you realize everything is going to change, permanently. And what to do about it.
The day ChatGPT was released to the world, I started playing around with it. Initially, I was suspicious - “yet another chatbot that will start to say horrible things” - but I quickly realized that this time was different.
I couldn’t quite put my finger on it. But all of a sudden, I began to sleep very poorly. During the day, I had a feeling of dread that I couldn’t quite explain. My mind somehow could not stop thinking about ChatGPT. It wasn’t so much about ChatGPT per se, but the realization that the pace of AI development is much, much faster than most people had anticipated. I had a feeling of something fundamentally shifting underneath my feet.
The last time I had this kind of feeling was in January 2020, when I realized what was about to happen with SARS-CoV-2 (or 2019-nCoV, as it was called at the time).
This is a more personal post than usual, but I want to write it down because others might find it useful in their own reflections about AI.
1. Feeling dread? You are not alone
In the weeks and months after my first encounter with ChatGPT, I shared my sense of dread with close colleagues. Surprisingly, they all felt similarly. This was in stark contrast to the public discussions, where most people - including myself - were rather optimistic, thinking about all the new possibilities of this technology.
I was particularly surprised to see that the feeling of dread was present in people who are relatively close to tech. Indeed, it appears to be fairly common.
So if the implications of ChatGPT give you sleepless nights, or you dread the implications of AI: Welcome. You are not alone.
2. I am almost certainly wrong
Technology revolutions happen in a fairly predictable pattern.
First, early versions of the tech fail. Almost every widespread technology we use today has a long history of failed attempts at mass adoption.
Then, seemingly overnight, a new version of the technology comes along that takes over the world. Personal computers, the internet, the smartphone: All ubiquitous technologies now, with decades of precursors that didn’t quite take off. Until the Mac. Until the WWW. Until the iPhone. Until ChatGPT.
Tech experts then say “nothing new here, folks”. And they are both right, and wrong. They are right because the underlying technology is not new. They are wrong because mass adoption is what changes everything.
In the early days of this transition, you can hear both voices of enthusiasm and voices of concern (often from the same person 👋). The range goes from “the potential is amazing” to “this is the end of humanity as we know it”.
In effect, both groups are saying the same thing: “this changes everything”. For one group, that’s a good thing, because they are thinking of all the hitherto unsolvable problems that could now be solved. For the other group, it’s a bad thing, because they are thinking of all the new problems this could generate.
What we can say looking back is that humanity has never ended. The new technology solved more problems than it created - at least if you accept that indicators like “fewer people in poverty”, “less hunger”, “higher life expectancy”, or “increased literacy” are good things.
Most importantly, if technology is destroying jobs, as it is always accused of doing, it is remarkably bad at it. Unemployment rates are at historic lows almost everywhere (in the US, the rate is at its lowest since the 1970s).
Overall, there are no reasons to think “this time is different”. However:
3. This time is different
So why the feeling of dread? For me, it’s a mix of two things. The first is related to the COVID pandemic, the second to the belief that this time is truly different.
The COVID pandemic has shaken my trust in the world as a system of rational actors capable of resolving rapidly moving problems. Perhaps the trust was never warranted in the first place. But even though things could have been much worse, I simply cannot shake the feeling that the COVID pandemic showed us something about how the world works that is just profoundly unsettling. Maybe it’s the fact that at the highest level of power, people are just as clueless as the rest of us. Maybe it’s the realization that half of the population cannot even agree with the other half on basic facts.
So when a new, potentially game-changing development comes along, I now feel much more strongly than I would have before the pandemic that I have to deal with it alone. Perhaps that’s a good thing (yay individual responsibility!). But it explains part of the dread.
The bigger factor though is a belief that this time is truly different. It is different not in substance, but in speed. The reason why technologies were never the end of humanity is that humanity has always adjusted to the new technology. It will also adjust to AI. But the pace at which this happens is now so fast that things will break in the process. Our institutions are not designed to operate at this speed. Our brains cannot operate at this speed. Our social interactions cannot operate at this speed.
By the time we wake up to the implications of this technology, truly bad things may already have happened that are almost irreversible.
My main, and almost sole, concern is the trend from democracy toward authoritarian regimes. Powerful actors could use this technology to strengthen their grip on power. Mass manipulation and information warfare no longer sound so strange after the pandemic. Indeed, I find most discussions about ChatGPT being a confident bullshit generator beside the point - the point is whether the bullshit is convincing, and looking at the past 10 years, confident bullshit is exactly what convinces people to vote one way or another.
Once the damage is done, and a political system has ratcheted one click further toward authoritarianism, it’s nearly impossible to go back. This is a key difference to almost any other system, social or economic, where mistakes can be corrected by changing course, and delays can be addressed by speeding up. But once decision power has been given away to a few, it’s game over.
These reflections have helped me to channel my dread into a clear goal: to strengthen the systems of democracy and to help them prepare for the AI revolution that is now unfolding.
4. The argument for optimism
Identifying the cause of my AI dread - i.e. a political power grab - allows me to compartmentalize my thinking about AI. If I assume for a moment that a power grab can be avoided, then I see the case for AI optimism much more clearly.
Perhaps not surprisingly, I see the biggest potential in health and education. The case for education is quite apparent. Yes, how we evaluate students will have to change, but that seems feasible. As the quality of chatbots improves, having access to a personal expert tutor that also learns your strengths and weaknesses will be absolutely amazing. I’m quite certain we’ll look back at our pre-AI education system with some form of hindsight horror (“you had to do what in school, grandpa??”).
In health, the case is perhaps less obvious. The main argument I keep hearing is that ChatGPT is useless in health because while it’s fun to generate “almost correct” answers in some areas, it can be deadly in medicine. But this is beside the point - health is so much more than medical decision-making.
First, the dominant health problems of our time - and certainly those in the future - are related to behavior, mental health, and age. There is often not an easy correct answer (“take this pill / do this surgery and you will be fine”). They require long-term, potentially chronic engagement, which humans cannot provide professionally - it would simply be too expensive. AI assistants will be a major development here.
Have you ever spent a day with a lonely elderly person? Have you ever taken care of a person with depression? Have you ever tried to get rid of a bad habit, or adopt a new good habit? If you’ve said yes to any of these, you know very well that these are not issues with a simple right or wrong medical answer. These are issues that need engagement - engagement that AI will increasingly be able to provide.
Perhaps that’s too much for those of us over 35, in the sense of Douglas Adams’ famous observation:
“I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”
But since we’re talking about the future - increasingly populated by people described in rules 1 and 2 - I think the case for AI health assistance is pretty strong. I’ll discuss more on this topic in future posts.