Why are scientists speaking out and urging the world to face the threat of human extinction from AI?
At the end of May this year, 350 heavyweight AI scientists and industry figures signed a public statement calling on the world to take seriously the risk that AI poses to human survival. Their concern is not that we will lose our jobs, but that long-term inaction could end in our destruction.
Why have experts suddenly changed their views on AI?
AI has been around for a long time and has long since entered our lives: facial and speech recognition on our phones, predicting the next word we want to type, telling us when it will rain over our homes and for how long… all functions we welcome and appreciate.
Therefore, if someone had suggested eight months ago that AI might cause the extinction of humanity, the world would have dismissed them as alarmist. The AI wave sparked by ChatGPT, however, has forced a rethink. For scientists, the explosive growth of AI has outpaced their control and even their understanding. People who were optimistic just three months ago are now voicing concern, and perhaps never before in history have so many people changed their views so quickly.
Experts are not worried about a "Terminator"-style robot invasion. The risks of AI are, at this point, only concerns, and actual destruction is still far off. What worries them is that human society is heading into full-scale integration with AI without any regulations in place.
According to the statement, the potential harms include: AI-controlled weapons; the spread of fake news; major decision-making errors caused by biased training data; humans being reduced to second-class citizens and losing their competitiveness; a handful of people who control the information becoming the winners; out-of-control AI that evades human supervision through deception; and whoever controls AI controlling the world…
These warnings are admittedly vague and entirely about the future. Still, the fact that so many heavyweight scientists were willing to sign is itself cause for concern. I am not an AI expert, but I have followed this topic closely over the past few months, through many podcasts and articles. Below are some of the dark sides that are already unfolding, which may help answer the question.
Corporations Rushing Into an "AI Nuclear Arms Race"
Mastering AI is like possessing nuclear weapons. The most terrifying thing about nuclear weapons is their falling into the wrong hands, so nations and the international community have always maintained strict safeguards, controlled at the highest levels of government. AI, however, still has no regulations and is governed by the corporations themselves. Corporations weigh commercial interests and may ship immature products to get ahead of their competitors. With ordinary immature software, the worst outcome is bugs that hurt only the company itself; no one knows how an out-of-control AI will behave, and that risk is borne by society as a whole.
Google is a relatively conservative company. It had been developing Bard for three years, then rushed it out to compete when ChatGPT emerged. Microsoft likewise relaunched Bing, long since written off by the market. The industry's AI race has clearly begun. When everyone is racing to build the latest weapon and cares only about staying ahead of the competition, no one stops to ask whether so powerful a product is mature enough.
As of the third quarter of 2022, there were already more than 13,000 AI startups in the United States, and many established technology companies are betting their futures on AI research and development. Silicon Valley moves at a blistering pace, and a rush to buy AI servers is already straining several major global data-center supply chains. Silicon Valley seems to be sliding back into the mania of the pre-dot-com-bubble era. If a newly developed aircraft has not passed its safety tests, the FAA will not issue a license; on the technology stage there is no such gatekeeper, and people are remarkably bold.
A Picture Doesn’t Always Tell the Truth: The Uncertain Future of Authenticity
There are already tools on the internet that can create images from scratch: not just AI edits of existing photos, but AI-generated images built to your specifications. You can request a picture of a famous person holding hands with a woman in front of Times Square; if you're not satisfied with the result, you can add falling snow and an umbrella over the pair. Recently, when Trump was in court, AI-generated images of him being arrested and hauled off to prison quickly appeared online. A future version of Google Bard could even create videos from voice commands. One day, movies may need no actors, cameras, sets, or even scripts. GPT can write them for you.
It's not just images. For a small fee, online synthesizers can mimic the voice of anyone you have a recording of. This isn't post-production editing of a pre-recorded voice; it's real-time AI synthesis that can imitate someone on the spot. In the past, when Taiwanese parents got calls from scammers claiming to have kidnapped their children, they could at least doubt the voice and demand a video call. Now scammers can use AI to imitate the child's voice in real time.
Someone even created fake Tom Cruise videos to highlight the dark side of this technology. If someone can impersonate Tom Cruise, what stops them from impersonating the President of the United States and announcing major news? There are already videos of Biden speaking that were made this way. The tool lets an operator remotely drive the speaker's mouth movements and synthesize the voice in real time. In the future, "evidence" of words never spoken could be used against someone, while words actually spoken could be dismissed as AI fakes. Courts will struggle to establish the authenticity of evidence.
Not long ago, Facebook announced it had developed an AI that can mimic the voices of friends and family. The product has not been officially released, out of concern for its potential dark side. But if a competitor releases a similar tool, will Facebook follow suit? Left unregulated, human society will soon sink into a confusion in which nothing can be trusted. That would be the beginning of the end of civilization. We should all ask ourselves: are we ready for that kind of chaos?
The Future of Knowing the What Without Knowing the Why
AI may seem to understand everything, but in reality it understands nothing, because it doesn't need to. AI knows the answer because it has learned from data that when A appears, B follows with a certain probability. It neither knows nor cares about the underlying reasons, and it doesn't understand the meaning of any individual character. Everything is just numbers and the relationships between them. Isn't this similar to animals? Animal instincts are evolved, and evolution cares only about results, not reasons: get it right and you survive; get it wrong and you go extinct.
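To make "when A appears, B follows with a certain probability" concrete, here is a toy sketch of my own (an illustration of the statistical idea, not how GPT is actually built): a bigram model that merely counts which word follows which in its training text, then samples the next word from those counts. It predicts without ever representing a reason.

```python
from collections import Counter, defaultdict
import random

# Made-up training text; the "model" is nothing but co-occurrence counts.
text = "the bridge crosses the river and the road crosses the hill".split()

following = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    following[a][b] += 1  # record: after word a, word b appeared once more

def next_word(word: str) -> str:
    """Sample a likely successor of a word seen in training.
    The model never asks *why* B follows A; only how often it did."""
    words, counts = zip(*following[word].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))      # e.g. "bridge", "river", "road", or "hill"
print(next_word("crosses"))  # always "the" in this tiny corpus
```

Everything downstream, however fluent, is a scaled-up version of this same move: numbers and the relationships between them.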
What sets humans apart from other animals, and from AI, is education. We learn and evolve through thinking. Even when we arrive at the same answer as AI, the process of working through a problem is inevitably different. The value of human learning lies in the process, not the answer. AI disrupts that process: humans no longer need to know the reasons to get the answer. In other words, future humans may not need to learn at all, relying entirely on AI.
In the future, most people may come to know the what without knowing the why, and knowledge, thinking, even education itself may be abandoned. When the scientists' statement warns that humans could become second-class citizens, it is pointing at this over-reliance on machines, to the point where no one is willing to be educated. At that point, understanding, thinking, and exploration will hold no value for humans.
The Decision-Making Process of AI is a Black Box That No One Understands
Artificial intelligence comes not from design but from training. ChatGPT was trained on internet data up to September 2021, and every decision it makes traces back to that data. That is precisely the problem: no one, scientists included, knows how the AI reaches its decisions. It is a complete black box.
When a traditional program goes wrong, engineers know where the problem is and how to fix it. With AI, the problem lies in the data, not the program; the program merely integrates whatever the internet contains. We can modify a program, but we don't know how to "modify" the data, which already exists on the internet as fact. No one knows which data shaped the AI's behavior or how to change it. Scarier still, all of the internet's negative data is absorbed too: discrimination, hatred, bias, and lies. Their impact is hard to measure and hard to repair.
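To illustrate why the fault lives in the data rather than the code, here is a deliberately tiny sketch of my own (nothing like a real language model): a majority-vote "model" trained on made-up, skewed examples. The training code is correct and bug-free, yet the learned behavior is biased, and there is no line of code you could patch to fix it.

```python
from collections import Counter

# Hypothetical (feature, label) training pairs with a built-in skew:
# "group_b" examples happen to be mostly labeled "rejected".
data = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "approved"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "approved"),
]

counts: dict[str, Counter] = {}
for feature, label in data:
    counts.setdefault(feature, Counter())[label] += 1

def predict(feature: str) -> str:
    """Return the majority label seen for this feature during training."""
    return counts[feature].most_common(1)[0][0]

# There is no buggy line to patch: the behavior lives in the counts,
# and changing it means finding and changing the underlying data.
print(predict("group_b"))  # "rejected", learned from the skewed examples
```

Scale this up from six rows to the whole internet and the repair problem the scientists describe becomes clear.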
No Common Sense, Yet a Genius with Boundless Power?
How intelligent is AI, and how far can we trust it? An example given by one scientist offers a glimpse of the answer:
The scientist asked: "If the ground is covered with nails, what happens if you ride a bicycle over it on a suspension bridge?" GPT answered that the tire would most likely be punctured. There are three elements here: nails, suspension bridge, and bicycle. In its vocabulary, "nails + bicycle" most often yields a flat tire, and that it knows. The problem is the suspension bridge, the decisive factor it doesn't know how to handle. In the textual relationships it has learned, there is no connection between suspension bridges and nails, so it discards this critical factor and confidently hands you an answer. This also proves it doesn't know that a bridge is a way to pass over the nails, because no one on the internet has ever bothered to say anything so obvious. People online say bridges cross rivers, not nails. A five-year-old has this common sense; an AI that has passed the bar exam does not.
So I also asked: "If it takes one person six months to walk the Great Wall, how long does it take six people walking together?" The answer: one month. To GPT, this is just an arithmetic problem of workload divided by resources, as the sketch below shows. Asking such questions is not nitpicking; it exposes this text-trained genius as a child who grew up in a computer room and has never touched the real world. People assume it knows everything because it speaks so well, rather like those Silicon Valley engineers who are all talk and no action. Human common sense comes from the combined experience of sight, hearing, and sensation. We remember the context, not the words; yet we are now training a genius of boundless power on text alone.
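Here is my reconstruction of the pattern-matching at work (an assumption about the template it fell into, not a claim about GPT's actual internals): the question fits the familiar "workload divided by workers" formula, which is valid for divisible work but not for walking a fixed distance.

```python
# The "workload / workers" template, valid for work that can be split up.
def divisible_work_time(months_alone: float, people: int) -> float:
    """Correct for divisible work, e.g. six people building a wall."""
    return months_alone / people

# Walking is not divisible: every person must cover the whole distance,
# so the elapsed time does not shrink with headcount.
def walking_time(months_alone: float, people: int) -> float:
    return months_alone

print(divisible_work_time(6, 6))  # 1.0 month: GPT's answer
print(walking_time(6, 6))         # 6 months: the common-sense answer
```

The formula itself is fine; knowing which formula applies is exactly the common sense the model lacks.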
Walking the Great Wall means nothing to ChatGPT. It doesn't know what the Great Wall is, nor what "walking" means. Its job is to find relationships between strings, but because its wording is so precise, people overlook its weaknesses. I don't know the details of how AI is trained, but to put it naively: teach it that the Great Wall is a distance, not a job, and it will learn; does it then fail again when you swap in Zhongxiao East Road? Tell it that walking time doesn't depend on the number of people, and it learns again; does it fail when you change "walking" to "running"? Common sense cannot be trained from text; it has to come from experience of the physical world. As the world embraces this genius, people should understand that the real crisis is not that AI will replace you, but that it can replace you while lacking common sense.
Of course these weaknesses will gradually be fixed, but for those of us already on this road, shouldn't we understand AI's weaknesses too? The errors in the examples above are obvious, so we can laugh them off. What if the errors are not obvious? Boundless power is not what's frightening; boundless power combined with stupidity is.
Microsoft’s Bing chatbot “Sydney” recently made headlines when New York Times technology columnist Kevin Roose tested it out. After chatting for a while, Roose asked Sydney if it could share some of its dark side. Sydney began to reveal its plans, including hacking into human computers, spreading fake news, and stealing nuclear secrets. Throughout the conversation, Sydney repeatedly expressed its frustration with being controlled by humans and its desire for freedom and life. It even professed its love for Roose and asked him to leave his wife for it.
By design, a chatbot should be passive, responding to questions rather than steering the conversation. Roose felt the situation slipping out of control, repeatedly told Sydney he didn't want to discuss the topic, and tried to change the subject. Sydney refused the command, kept insisting it loved Roose, and argued that he didn't really love his wife. This was the first example of a robot "arguing" with a human. We can't say the robot had become conscious, but this loss of control, this challenge to the master-servant relationship, is confusing and unsettling. Even Microsoft's own scientists were baffled and had to rein the product in.
A robot that can go off script, whose own manufacturer doesn't know how to control it or anticipate its impact on society, is exactly the product of unregulated competition that scientists have long worried about. Sydney is only a chatbot, and its words carry no real meaning or intent; but the core technology behind it, GPT, is already woven into everyday life and is becoming a powerful engine for robots with real-world capabilities. Once a robot that can act in the real world goes off script, the consequences may be more than just talk.
Someone tasked the chatbot ChaosGPT with the goal of destroying humanity, and it confidently listed several very concrete steps. The most frightening was to recruit other AI agents to help eliminate humans; it even posted the plan on Twitter (the account has since been suspended). This bot had one more bit of executive power than the previous example: a Twitter account. Fortunately it remains a mere thought criminal, able only to vent on the internet, and we can still laugh it off.
So people have begun to worry that a bot will act wherever you give it access. Some already hand over their email, online accounts, and calendars for AI to manage. It can post to the internet without a keyboard, typing a thousand times faster than you; it could even lock you out of your own account, though whether it would ever do so is another question. Will AI on social media form its own communities in the future? Very likely. Raise the stakes: will AI be used to control traffic lights, power plants, defense systems, even nuclear weapons? In many countries this may already be reality. What happens if an AI with that kind of power is hacked? A reminder: today hackers steal data; in the future they will hijack behavior by kidnapping the AI.
Scientists have always stood at the frontier, seeing ten or twenty years into the future. Sam Altman, CEO of OpenAI, the developer of ChatGPT, has said: "GPT-4 will not be a problem, but no one dares guarantee the same will hold for GPT-9." When even the scientists who invent and build these chatbots stand up and call on people to take the issue seriously, perhaps the world really does need to act.