TL;DR - AI does not really think; it just seems like it does. Technological shifts have happened before, and people will adapt as necessary.
Edit 3/8/2025: I initially wrote this a few years ago, in March of 2023. There’s been a lot of change in the field since then, but surprisingly (or rather unsurprisingly) a lot of these points still stand.
Everywhere I go I keep hearing “Oh no, AI is gonna take my job,” “Oh no, is Skynet coming?” “Oh no, is AI gonna steal my husband/wife?”
And in some sense it’s true. It has the potential to upend entire industries and disrupt the next few decades of technology (and probably your relationship too, IDK 🤷🏾♂️).
Every other day you see news about AI: Midjourney, DALLE, Stable Diffusion, Flux, Sora (the list goes on) generating images and videos that look a little TOO good. Every other LinkedIn post declares “[insert job here] is DEAD,” which in my opinion is often fear-mongering at its finest to get you to buy into something.
Hell, one AI chatbot company, Replika, even supplies users with AI companions they can talk dirty to (but then again I have a feeling the people who use this AI seriously aren’t exactly the norm…)
These are crazy times we live in, and it’s hard not to be frightened by all these AI systems that seem like magic in this brave new world.
Any sufficiently advanced technology is indistinguishable from magic.
- Arthur C. Clarke
But once you know how a magic trick works, it ceases to be magic.
At the heart of it, all AI is just a pattern recognition engine. These systems are simply algorithms designed to maximize or minimize an objective function. There is no real thinking involved; it’s just math. If you think that somewhere in all that probability theory and linear algebra there is a soul, I think you should reassess how you define sentience. (Author’s note: there are arguments for how things like bananas, rocks, and large enough integers are conscious, but let’s save that for another post.)
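To make that concrete, here’s a minimal sketch of what “minimizing an objective function” actually looks like under the hood. The data is made up and the setup is as simple as it gets (one parameter, plain NumPy), so treat it as an illustration of the idea, not how any real model is trained:

```python
import numpy as np

# Hypothetical toy data: y is roughly 3*x plus a bit of noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)

w = 0.0  # the single parameter the "AI" gets to learn

def objective(w):
    # Mean squared error: the number the algorithm is told to make small
    return np.mean((w * x - y) ** 2)

# Gradient descent: repeatedly nudge w downhill on the objective
for step in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # derivative of the MSE with respect to w
    w -= 0.1 * grad

print(w, objective(w))  # w ends up near 3 and the loss shrinks: no thought, just calculus
```

That’s all “learning” means here: turn a knob until a number goes down. Scale the knobs up to billions and you get modern AI, but the principle doesn’t change.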
Chapter 1: Why AI is Still Dumb
“Humans don't need to learn from 1 trillion words to reach human intelligence. What are LLMs missing? How do we compare the scale of language learning input for large language models vs. humans? I've been trying to come to grips with recent progress in AI. Let me explain these two illustrations I made to help. 🧵 https://t.co/hayhUU5Iv6”
- Michael C. Frank (@mcxfrank)
Formally, what these models are doing is learning the probability distribution of the data. Oversimplified: these models learn patterns from past data and then make predictions. And yes, it just so happens that text and images have probability distributions too, which an AI can learn from and then predict (how exactly these models get from that simple statement to things like ChatGPT or DALLE is another post or three, as the process gets very involved, very fast). Large Language Models (LLMs), for the most part, do next-word prediction, and I think it’s fair to say that this alone has given us massive results.
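As a toy illustration of “learn the distribution, then predict” (nothing like how a real LLM is built, and the mini corpus below is made up), here’s a bigram-style next-word predictor in a few lines of Python:

```python
from collections import Counter, defaultdict

# Hypothetical "training corpus"; a real LLM sees trillions of tokens
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which: a (very crude) probability distribution
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    counts = following[word]
    total = sum(counts.values())
    # Estimated probability of each candidate next word
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(predict_next("sat"))  # {'on': 1.0} -- pure pattern matching on the past
```

Swap the word counts for a transformer with billions of parameters and the corpus for a large chunk of the internet, and you have the core loop: predict the next token, over and over.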
Learning patterns from text and images across the span of human civilization is, in and of itself, a monumental achievement, but there is no real thought involved. That’s just an algorithm trying to maximize its objective function, trying its best to mimic data it’s seen in the past. It’s very good at picking up on patterns that already exist, but truly out-of-distribution, original thought is difficult if not impossible at the moment. And even when the output seems original, I’m heavily inclined to believe it’s mimicking some piece of training data from some point, somewhere.
What’s the end goal of increasing the complexity of these models? Every few months or so we get a new model that’s supposedly even bigger and better. But building increasingly complex pattern recognition engines on ever more data still falls short of actual reasoning via the manipulation of logic (see some interesting work on this in Leon Bottou’s paper, From Machine Learning to Machine Reasoning), and most importantly, it seems to lack understanding. Even some of the top models still can’t do multiplication:
Sure, with millions upon millions of training examples, of course you can mimic intelligence. If you already know what’s going to be on the test, the common patterns for answers on the test, or even the answer key itself, then are you really intelligent? OR are you just regurgitating information from billions of past tests? I think the sentiments here echo those of John Searle and his Chinese room argument.
Let’s also not forget that machine learning and AI models are not always right. We’ve all seen examples of LLMs “hallucinating,” confidently talking nonsense about things that don’t exist. In fact, it would be suspicious if a model were always right, because that probably means the test data is leaking into the training data, i.e. the AI gets a “peek” at the answer key. AI image generation is good but far from perfect. There are no doubt some stunning results, but qualitatively, even the layman can tell something is off. There’s still some uncanny valley quality to most artificially generated images.
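For a sense of why that “peek at the answer key” matters, here’s a toy sketch of data leakage. It uses a 1-nearest-neighbour classifier on random made-up data (not any real benchmark or LLM evaluation), so take it only as an illustration of how leaking test data into training data produces suspiciously perfect scores:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random points with random labels: there is genuinely nothing to learn here
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

train_X, train_y = X[:150], y[:150]
test_X, test_y = X[150:], y[150:]

def knn_accuracy(train_X, train_y, test_X, test_y):
    # 1-nearest-neighbour: predict the label of the closest training point
    preds = []
    for point in test_X:
        dists = np.linalg.norm(train_X - point, axis=1)
        preds.append(train_y[np.argmin(dists)])
    return np.mean(np.array(preds) == test_y)

# Honest evaluation: accuracy should hover around 50%, i.e. coin-flip, since labels are random
print("clean split:", knn_accuracy(train_X, train_y, test_X, test_y))

# Leaky evaluation: the test points are also in the training set (a "peek" at the answer key)
leaky_X = np.vstack([train_X, test_X])
leaky_y = np.concatenate([train_y, test_y])
print("leaky split:", knn_accuracy(leaky_X, leaky_y, test_X, test_y))  # 100%, suspiciously perfect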
This video by Dimitri Bianco (a quant finance guy I follow on YouTube) does a good job of intelligently outlining the main points and criticisms of the whole AI hype cycle.
Chapter 2: Why it’s not scary
Unless something fundamentally changes in the ways these models evolve, I do not see them becoming a genuine threat to most jobs.
Let’s look at the history of past technological change. The mechanized loom was one of the key developments that kicked off the Industrial Revolution, and the Luddites reacted negatively to it. Painters reacted negatively to photography. Surprisingly, an anime review channel has a pretty good breakdown of these points:
The analogy is there. Mechanized looms simply automated work, just as most AI automates work or will automate it. There used to be human alarm clocks too: people who would go up to homes and knock on windows to wake workers up in time for their shifts. Do we mourn the loss of that field of work? We used to wash dishes and clothes by hand too, and that also got automated. The list goes on.
Take the field of software, for example. Nowadays we have AI tools like Windsurf, Cursor, Copilot, etc. that can write boilerplate code in seconds. This just automates the boring boilerplate and menial stuff, and wasn’t that what we wanted in the first place? Not to have to (metaphorically) wash the dishes and the clothes?
Sure, Anthropic’s models can build a really cool TODO app or fabric simulator, but that was never the truly hard part of software in the first place. The hard parts are integrating with other software systems and codebases, robust testing, and making sure the thing should have been built at all, among many other things. AI tends to miss the bigger picture in these scenarios: knowing the right questions to ask and the right problems to solve. It’s not going to replace software engineers anytime soon, in my opinion. Also consider Jevons paradox: even if AI gets good enough to automate much of a job, the resulting efficiency may actually increase demand for that kind of work.
Industry experience leads me to believe that there’s still a fundamental lack of trust in black-box systems, and that we will never completely eliminate humans from the loop. Having AI instead of a human in certain roles misses the entire point of the role. Sure, you could write an apology letter for a tragedy or some other sensitive subject using AI (see Vanderbilt), but should you? Sure, you could have an AI handle customer support, but does a customer want that?
Hot take: if your job can be partially or wholly eliminated by AI, that’s a GOOD THING. If your job involves patterns that predictable or labor that routine, AI automation is a GOOD THING. Change is inevitable; you can’t stop the railroad, as they used to say. It’s going to kill some jobs, but not all of them. As a whole, I’m sure it’ll give birth to entirely new systems we haven’t even conceived of yet and, one can hope, free up that time and energy for more meaningful or creative pursuits.