AI is NOT the Future

Politicians, big corporations, and billionaires are force-feeding the general population this preposterous narrative that the future has arrived, and that it has arrived in the form of artificial intelligence. We see these cool new tools, large language models like ChatGPT and Claude, and they are being sold to us as an almost godlike force that is either going to save the world or destroy it. But perhaps there is a third option. What if it is just not going to change the world that much?

Why would anyone argue this? Well, I would start by saying that large language models, while certainly an innovative piece of technology, are not that powerful when you actually take a step back and look at what they are.

We the People Are Not at Fault for Being Afraid

First and foremost, I do not blame the people of the United States of America for feeling like AI is either the beginning of the robot apocalypse or the dawn of robot saviors. This technology is still relatively new, and it really does seem like magic in a lot of ways that a machine is able to respond in such a human-sounding way to our questions, speech, or text.

The people at the top want us to believe this narrative, and that is where all this propaganda is coming from, that the future is AI, because it helps their bottom line. Corporations are able to use this to boost their stock price. Billionaires are able to put more in their pockets. Politicians are able to run on it as a platform. All of them win, so they say ridiculous things at press conferences, lay off a bunch of workers while citing AI as the reason, or push a bunch of misleading commercials.

The “I” in “AI” Should Stand for “Illusion”

So what is AI actually? The name itself is misleading, because the I stands for intelligence (or so they say). Large language models, which are what most people mean when they say AI, are not intelligent whatsoever. They are incapable of thinking for themselves by their very nature. You might be saying, “But how could that be? It sounds so real and thoughtful when I talk to it.”

Listen, all it is doing when it talks to you is going over an enormous body of words taken from the internet and from millions of books, articles, and other pieces of text, and then predicting what a human would likely respond with.

So if you ask ChatGPT what happens when you throw a ball in the air and it responds with, “The ball will come down of course,” all it knows is that when a human talks about throwing a ball in the air, they are statistically very likely to talk about the ball coming down. It doesn’t have an actual understanding of physics or gravity.
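That prediction step can be sketched in a few lines of Python with a toy bigram model. This is a drastic simplification of a real large language model, and the three-sentence corpus is made up for the illustration, but the core idea is the same: count what usually comes next, and output it.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the "enormous body of words"
# a real model trains on.
corpus = (
    "you throw a ball in the air and the ball will come down . "
    "if you throw a ball up the ball will come down . "
    "throw a ball in the air and it will come down ."
).split()

# Count which word follows each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict("will"))  # "come" -- the most frequent follower
print(predict("come"))  # "down" -- pure statistics, no physics
```

The model says “down” not because it understands gravity, but because “down” follows “come” most often in its data. A real model predicts over much longer contexts using a neural network instead of a lookup table, but the objective is the same: produce the likely next word.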

That is a simplified example, obviously. It can get extremely elaborate. You can ask ChatGPT to write a five-paragraph essay about how Harry Truman’s childhood shaped his decisions as president, and the large language model will search through its huge body of text, which contains information about Harry Truman as well as information about how a human would be likely to talk about him, and then instantly output an essay that fits the criteria. An unimpressive essay, but an essay nonetheless.

You can take it a step further and ask it to output a picture or a video or have it do some task, and it will search through its data to figure out what the picture is statistically likely to look like, what the video is statistically likely to be, or how the task is statistically likely to be done, but it does not itself have a model of how the world works like a human does. It is all a fugazi.

What makes it so convincing is how polished it is, to the point that we inherently buy the notion that this machine is actually some supercomputer, when really it is just an extremely fancy autocorrect. And that certainly has some cool applications, like analyzing text really well, but that does not make it this omnipotent, all-knowing harbinger of either the golden years or the end times.

AI Output Is Getting Much Sloppier

So not only do I think AI has hit a ceiling, or is close to hitting one, but I also think we are in for a major reality check as we realize what AI actually is and watch its quality deteriorate, because that is what is happening. Have you used ChatGPT recently? Its responses are significantly worse than they were a year ago.

Since AI feeds off the internet and all the text and information on it, and since AI has become more mainstream and more AI-written content has appeared online, these large language models have actually started training themselves on AI-generated content. You might see why that is a problem.

As it starts training on itself and outputting from that, and then training on that, and so on, you get this generic mush of text. That is why AI writing has gotten so bad. That is why the pictures are so uncanny. That is why AI video software like Sora is shutting down. And that is why tools like Copilot are absolutely worthless.

It is like if you plug a sentence into Google Translate, then translate that back into English, then into Spanish, then into Chinese, then into Japanese, and then back into English, and do that over and over again. Pretty soon, you are left with a sentence that makes absolutely no sense.
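That feedback loop can be simulated in miniature. The sketch below is a hand-built illustration with invented word frequencies, not real training data: each “generation” of the model is trained only on a sample of the previous generation’s output, and the rare words steadily vanish.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the example is repeatable

# A made-up vocabulary: a few common words plus a long tail of rare ones.
counts = Counter({"the": 50, "ball": 30, "down": 20})
for i in range(40):
    counts[f"rare{i}"] = 1

def retrain(counts, sample_size=100):
    """'Train' the next generation only on output sampled from the last one."""
    words, weights = zip(*counts.items())
    sample = random.choices(words, weights=weights, k=sample_size)
    return Counter(sample)

generations = [counts]
for _ in range(10):
    generations.append(retrain(generations[-1]))

# Vocabulary size per generation: once a rare word fails to be sampled,
# it is gone from every later generation.
print([len(g) for g in generations])
```

Each round can only keep or lose words, never recover them, so the output drifts toward the most common material: the generic mush described above.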

So, Is This Good News or Bad News?

Both.

It is good news in the sense that I think the fears about AI replacing humans, doing all the creative work, taking all the jobs, or replacing education are not going to come to fruition, even though we are seeing some layoffs right now from massive corporations that cite AI as the reason. That is often just an excuse: they wanted to lay people off anyway, and now they have a scapegoat. And the companies that genuinely are trying to replace their people with AI are in for a rude awakening when they realize they actually do need people.

The reason this is also bad news is that so much of the economic growth in the past couple of years is tied up almost exclusively in AI, not for what AI is, but for what AI could be in people’s minds. There is so much hype behind it that it very closely resembles the dot-com bubble at the turn of the millennium.

By and large, I think things are going to be okay. I think AI and large language models, 10 or 20 years from now, are going to be used as a tool to analyze text, refine grammar, and maybe handle a few other useful applications, but they are not going to bring about the future that is being pushed as inevitable. So continue to invest in your education and your future, because the human race is resilient, we have been around a long time, and we are not going anywhere.

JD Hopper

JD Hopper is a mathematics instructor who taught classrooms at Charlotte Latin School and built Purpose Tutoring from a solo practice into a growing team of exceptional tutors. JD focuses on leading the company, matching families with the right tutors, and building systems that support a consistently high-quality experience nationwide.
