The Genius Who Grew Up in the Computer Room
I asked ChatGPT:
If it takes 1 hour to dry a shirt under the sun, how long does it take to dry 15 shirts?
This brain-teaser isn’t mine; I heard it on a podcast. The speaker was a professional AI scientist who wasn’t opposed to AI (it is, after all, his livelihood), but who urged people to recognize the hidden dangers behind the AI boom. It’s an issue we need to face.
While everyone marvels at this genius who could pass the bar exam the day he was born, we shouldn’t forget that he is a “genius without common sense.” As the world embraces him, we should also recognize that the real danger isn’t that he can replace you, but that he lacks common sense and replaces you anyway.
He may cause you to fail in places you never thought of.
Common sense doesn’t need training. We absorb it unconsciously all the time; it’s biological instinct. If a car is out of control and heading toward a four-year-old child, the child will jump away. He may run in the wrong direction and get hit, but at least he knows he should escape. If the same car is driving slowly, the child may keep playing on the grass. The role of AI in this example isn’t to decide whether we should escape, but to tell us which direction offers the highest chance of survival. These are two different questions, and we shouldn’t confuse them.
No one has ever thought of teaching common sense, and it doesn’t need to be taught. Trying to train it in explicitly is doomed to fail, because the trainer can never cover everything. If a species had to be explicitly trained before it knew how to react, every living thing on Earth would long since have gone extinct.
Therefore, when we look at this issue today, we should shift our focus from “how amazing he is” to “what does he not know that he needs to know,” and ask how we can guard against it.
Someone once extended the middle stroke of the “3” on a 35 mph speed limit sign with a small piece of black tape, causing a Tesla in self-driving mode to accelerate toward 85 mph. The altered “3” looked a bit strange, but it still read as a “3” to any human. Even if it had read as an “8,” our common sense would immediately take over; our brain wouldn’t allow us to confuse the two. No city street anywhere has an 85 mph speed limit – that’s roughly 137 km/h. Such a traffic rule simply doesn’t exist there, and knowing that is common sense. When common sense conflicts with what you’ve read, common sense must win. Otherwise the species cannot survive.
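The arithmetic behind that sanity check is simple to verify. A minimal sketch (the conversion factor is the standard definition of the international mile; the function name is mine):

```python
MILES_TO_KM = 1.609344  # exact definition: 1 international mile = 1.609344 km

def mph_to_kmh(mph: float) -> float:
    """Convert a speed in miles per hour to kilometres per hour."""
    return mph * MILES_TO_KM

# The real limit vs. the misread one.
print(round(mph_to_kmh(35)))  # -> 56
print(round(mph_to_kmh(85)))  # -> 137
```

Any human driver who “reads” 137 km/h on a city street rejects it instantly; the vision system just reads the digits.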
This is not something that can be taught or trained; otherwise you would have to anticipate and teach every possible variation of the number 3. ChatGPT’s training data comes from all over the internet, but no one on the internet is going to treat you like an idiot and spell out that you can hang multiple pieces of clothing in the sun at the same time. If you didn’t say it, forgot to say it, never thought to say it, or assumed it was obvious, then sorry: he just doesn’t know. That kind of idiot-proof training data simply doesn’t exist online, and you can’t teach something that doesn’t exist. Fine, let’s train the AI and tell it explicitly that any number of clothes can hang in the sun at once. But move the problem to a balcony and it fails again; move it to a playground, a roof, a desert… it will keep failing until you have taught it everything.
Okay, let’s not talk about hanging clothes, let’s talk about something else.
So I asked him again: if it takes one hour to fill a bucket with rainwater, how long does it take to fill 17 buckets? Feel free to keep thinking up cases and patiently training.
So the problem is not how to train it, but which tasks can be assigned to it and which cannot. Because you don’t know what it doesn’t know, including the things you would never think of. I don’t know what AI will look like in 5 or 10 years, but we’re talking about today. These major changes are already happening, and we need to figure out how to work with AI, flaws and all, at this stage.
What’s worse: when a traditional program has a bug, engineers know where to look and how to fix it, and once it’s fixed, it’s fixed. AI’s problems live in the data, not the program. It has to read the whole internet to learn what it knows. We know how to patch a wrong program; we don’t know how to patch the data. More troublesome still, that data is scattered across the internet, and no one knows where it all is or how to change it. It isn’t even your data.
And the clothes-drying failure isn’t even a data problem. There is no wrong data on the internet; it’s just that no one ever explicitly stated this piece of common sense that every human knows. Data that doesn’t exist cannot be corrected.
Of course, I don’t claim to understand how AI is trained, and perhaps experts will soon come up with alternatives. But at this stage, AI can only supplement our shortcomings; it should not and cannot replace us. Its role is to enhance, not replace. If you’re worried about being replaced, upgrade yourself first and make AI your assistant. You can’t let it make decisions for you; the roles of master and servant must not be confused. The power to decide, filter, and choose stays in human hands. It’s an assistant; it can’t replace the master.
For example, in the runaway-car scenario, AI can tell you exactly which direction to run for the highest chance of survival, but it cannot decide for you whether to run at all – that is common sense and biological instinct. You can teach knowledge, but not common sense, and certainly not instinct.
This is how the genius grew up: to him, everything is numbers.
AI is a genius who grew up in a computer room. He has never seen the outside world; everything he knows lives inside that room, and once unplugged, he is gone. Everything he knows is theoretical. He doesn’t know what the sun is or what a balcony is. Of course he recognizes the words, but he doesn’t know what they really are or what they truly mean. ChatGPT answers questions so quickly because he is playing fill-in-the-blank: he grabs the key strings of words from your question and, based on everything he has read, quickly and accurately decides which strings of words to fill in. He is a master of filling in blanks, and everything is a word game to him. His answers are what others have said on the internet, selectively and comprehensively assembled. Behind it all, everything is just numbers to him: the sun is one string of numbers, clothes are another, and beyond the probability that they occur together, there is no other connection between them. If he wasn’t trained on something, he doesn’t know it; if the data doesn’t exist, he doesn’t know it either.
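A crude way to see “filling in the blanks” is a toy next-word model: count which word follows which in some text, then always emit the most frequent follower. This is a deliberately naive sketch, nothing like ChatGPT’s actual architecture, but it shows how plausible-looking text can come out of nothing but co-occurrence counts, with no meaning anywhere:

```python
from collections import Counter, defaultdict

# Toy "fill in the blank" model: for each word, remember which words
# followed it in the training text and how often.
corpus = "the sun dries the shirt and the sun dries the towel".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word: str) -> str:
    # Emit the most frequent follower -- pure statistics, no understanding.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # -> "sun" (seen twice, vs "shirt"/"towel" once each)
print(predict("sun"))  # -> "dries"
```

To this model, “sun” is not warmth or light; it is a key in a counting table, exactly the “string of numbers” described above.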
He just makes you think he knows everything, making you think he is a genius.
Therefore, at this stage, we should use it optimistically but very cautiously, borrowing its strengths and avoiding its weaknesses.
Going back to the clothes-drying problem: he really is a genius, carefully telling me that drying time depends on the weather, humidity, wind direction, and the angle of sunlight… things I hadn’t thought of but he knows, and exactly where I need him. As for 15 shirts taking 15 hours… that’s the kind of answer none of us would think to watch for, and exactly where we must be cautious.
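The shirt mistake is just linear scaling applied where it doesn’t belong. The two mental models differ by a single assumption; a minimal sketch using the essay’s numbers (the function names are mine):

```python
def naive_scaling(hours_per_item: float, items: int) -> float:
    # The pattern-matched answer: total work scales with the count,
    # as it would for sequential tasks like baking cakes in one oven.
    return hours_per_item * items

def common_sense(hours_per_item: float, items: int) -> float:
    # Shirts dry in parallel under the sun: the count doesn't matter.
    return hours_per_item

print(naive_scaling(1, 15))  # -> 15 (the genius's answer)
print(common_sense(1, 15))   # -> 1  (everyone else's answer)
```

Nothing in the question itself says which model applies; knowing that drying is parallel is precisely the unstated common sense the essay is about.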
If I swap the bucket in the rain question for a cup, his answer is still wrong, but he kindly reminds me to be careful not to cause a flood.