
Matt Shumer’s viral essay has been read by over 80 million people. The gist: AI just replaced programmers, and everyone else is next. Stockpile your toilet paper.
I’ve spent my career building financial models, teaching people to build models, and running the world’s only financial modeling accreditation body. I live in the world Shumer is warning you about. And I think he’s made a critical error in logic.
He’s right that something big is happening. He’s wrong about what it means.
The Human Robot Problem
Shumer’s argument boils down to a single leap: AI can now do what programmers do, therefore AI will do what everyone does. It sounds logical. But it’s built on a flawed comparison.
Let me offer a different one.
At its peak in 1925, Ford’s Highland Park plant employed nearly 70,000 workers to build cars. Their jobs were entirely physical and almost perfectly repetitive. Attach this bolt. Weld this seam. Lift this panel. Repeat. Eight hours a day, six days a week.
These weren’t people doing human work. These were humans doing robot work, before the robots were ready.
The photographer Edward Burtynsky was recently allowed inside an electric car factory in China. His images show a facility producing vehicles with almost zero human workers. Robots everywhere. People almost nowhere.
This didn’t happen overnight. It took a century. But it happened, inevitably, because the work was always robotic in nature. The humans were placeholders.
Now look at what happened with computer programming. What does most programming actually consist of? It's translating precise logical instructions into precise logical syntax. It's pattern-matching. It's repetitive, structured, and rule-bound. Programmers, in many cases, were doing machine work with human hands.
So when AI got good enough to write code, it didn't replace "human work." It replaced the robotic component of what humans were doing. It's the same story as the welding robots at the Ford plant: they didn't replace human creativity, they replaced the repetitive motion of a human arm.
This is the distinction Shumer misses entirely.
The Leap That Doesn’t Land
Here’s where the argument falls apart. Shumer takes the displacement of programmers and auto workers and projects it across the entire white-collar economy. Lawyers. Doctors. Financial analysts. Accountants. Managers. All of them, he implies, are next.
But this only works if you believe that what a lawyer does is fundamentally the same kind of work as what a programmer does. And it isn’t.
When a doctor sits across from a patient who just received a cancer diagnosis, that isn't an information transfer problem. When a financial advisor helps a couple navigate the most stressful financial decision of their lives, that isn't a spreadsheet problem. When a lawyer sits with their client before a trial and says, "Here's what I think we should do," that isn't a pattern-matching problem.
These are human problems. They require judgment, empathy, trust, and presence. They require the thing that we’ve spent millennia evolving to do: connect with each other.
Shumer’s mistake is assuming that because AI can handle the technical substrate of these jobs, the jobs themselves will disappear. But the technical substrate was never the whole job. It was never even the most important part.
The Humanity Premium
Here’s what I think Shumer, and many of the people panicking about AI, fundamentally misunderstand about human nature.
For thousands of years, humans have thrived on three things: community, storytelling, and purpose. We are wired to seek connection. We want to be seen, heard, and understood by other humans. This isn’t a preference. It’s biology.
Yes, an AI doctor might achieve a higher diagnostic accuracy rate than a human doctor. I’ll grant that. But will a patient feel cared for by a chatbot delivering their biopsy results? Will they trust a machine to help them weigh the agonizing tradeoffs of treatment? Will they feel the thing that actually helps people heal, which is the sense that another human being gives a damn?
No. They won’t. No machine can give them that.
Now, and this is crucial, I’m not arguing that we should reject AI in medicine, or finance, or law, or anywhere else. The correct response isn’t to dig in and pretend the technology doesn’t exist.
The correct response is to combine them.
The Real Future: Augmented Humanity
The people who will thrive in the next decade won't be the ones who are replaced by AI. And they won't be the ones who ignore it. They'll be the ones who use AI to get better at the human parts of their jobs.
A financial modeler who uses AI to build the first draft of a model in minutes instead of hours, and then applies years of judgment to stress-test it, challenge its assumptions, and present it to a board of directors who need to trust the person standing in front of them? That’s the future.
A doctor who uses AI to cross-reference symptoms across millions of cases in seconds, and then sits with their patient, looks them in the eye, and helps them make the hardest decision of their life? That’s the future.
A lawyer who uses AI to review 10,000 documents overnight, and then walks into the courtroom with the kind of persuasion and presence that only a human can deliver? That’s the future.
The pattern is the same every time: AI handles the robotic component. The human handles the human component. Together, they’re better than either alone.
The Fork in the Road
Shumer tells you to spend an hour a day with AI so you’re not left behind. On that, we agree. Where we disagree is on why.
Shumer thinks you should practice with AI because your job is about to be automated. I think you should practice with AI because it’s about to make you dramatically better at the parts of your job that actually matter: the parts that require you to be human.
The car factory workers didn’t lose their jobs because robots were better than humans. They lost their jobs because they were never doing human work to begin with. They were human robots, waiting to be replaced by real ones.
The question you should be asking yourself isn’t “will AI take my job?” It’s “how much of my job is actually human?”
If the answer is “not much,” then yes, you should be worried. But if your work involves judgment, trust, relationships, persuasion, empathy, creativity, or the ability to sit across from another person and help them navigate complexity? Then your job isn’t going away. It’s about to get a serious upgrade.
The doomsday crowd wants you to believe that humanity is a liability in the age of AI. I think it’s the ultimate asset.
Double down on it.
Ian Schnoor is the Executive Director of Financial Modeling Institute, the world’s only financial modeling accreditation organization.
Ian created the Financial Modeling Practical Skills Module for CFA Institute, a module that is now a mandatory part of the CFA Program.
In 2002, Ian founded The Marquee Group, a leading provider of financial modeling training and consulting, which was sold to Training The Street in 2023. Ian is a passionate teacher who has trained thousands of students and professionals all over the world in financial modeling, valuation, and Excel.
Ian teaches a course called Advanced Financial Modeling at Queen’s University and is a past recipient of the Professor of the Year award in the Master of Finance program at the Smith School of Business.
Ian began his career in the Investment Banking departments at Citigroup and BMO Capital Markets. He holds a Bachelor of Commerce Honours degree with academic distinction and is also a CFA charterholder.
Financial Modeling Institute (FMI) promotes excellence and discipline in financial modeling through rigorous accreditation programs and thought leadership.
The Advanced Financial Modeler (AFM) accreditation is the only exam that requires candidates to build a 3-statement financial model of a company from scratch under time pressure, demonstrating their ability to translate data into actionable insights.