Mike Sweatt Blog

Reshaping the Matrix

A person standing at a crossroads. One path glows with digital lines (code/AI), leading into a sterile, automated-looking city. The other path is more organic, leading toward a horizon where human figures are actively building and collaborating, with subtle digital overlays. The sky is split between them, representing the two realities.
Michael Sweatt
Future of AI

I distinctly remember watching The Matrix as a pre-teen. I watched Neo choose between two pills—one offering comfortable ignorance, the other painful truth. The choice was binary. But here's what I now understand that I initially missed: the real choice wasn't between the pills. It was what Neo did after swallowing the red one.

Neo didn't just see the Matrix. He learned to reshape it.

Twenty-five years later, we're living through our own Matrix moment. AI systems are being deployed across every sector of the economy. Venture capital has poured over $100 billion into the space. Tech leaders openly discuss a future where AI handles most white-collar work. The narrative is everywhere: this future is inevitable, resistance is futile, adapt or perish.

But here's the thing about inevitability—it's usually a story told by people building the future they claim is predetermined.

The Automation Paradox

A startup called Mechanize recently published an article titled "Technological Determinism." Their argument? Technology doesn't autonomously determine social outcomes. Rather, "technology is shaped by social, economic, and political forces." Human choices, they argue, drive how technologies develop and get adopted.

It's a compelling thesis. There's just one small detail.

Mechanize's mission is to build the infrastructure to automate the economy—explicitly targeting white-collar work that accountants, lawyers, and other professionals perform. Their investors and leadership believe they're creating a future where AI replaces vast swaths of human labor.

So we have builders of automation infrastructure claiming technology isn't deterministic while actively constructing systems designed to make certain outcomes... well, pretty damn deterministic for the people whose jobs get automated.

Remember the Architect from The Matrix? He designed a system that offered "choice" within parameters he controlled. Similarly, when a small group of founders and investors build automation infrastructure, and millions of workers must simply adapt to it, is that really "social shaping"?

The question isn't whether Mechanize or companies like them will succeed. The question is: whose choices get to matter?

The Illusion of Binary Options

Most conversations about AI and work present a false binary, just like Neo's pill choice appeared to be:

The Blue Pill: Ignore AI, pretend it's not happening, get displaced when it arrives.

The Red Pill: Accept that AI will replace most jobs, prepare for mass unemployment, advocate for UBI as the only solution.

Both positions share a critical assumption that the outcome is predetermined. The only difference is whether you see it coming.

But this framing is exactly wrong. The real lesson of The Matrix isn't about choosing awareness over ignorance—it's about what you do with that awareness. Neo didn't just wake up to reality. He learned the rules of the system well enough to bend them, organized others, and challenged the fundamental architecture.

The real red pill isn't accepting a predetermined future. It's recognizing that the future is still under construction—and claiming a place at the construction site.

The Deterministic Pressure

To be clear, there are powerful forces pushing toward a specific vision of the AI-automated future:

  • Capital concentration—over $100 billion in funding flowing to a small number of AI companies.
  • Platform control—a handful of tech giants controlling the foundational models.
  • Market logic—quarterly earnings pressures incentivizing automation for cost savings.
  • Narrative dominance—"AI is coming for your job" has become accepted wisdom.

These aren't abstract forces. They're choices being made by specific people with specific incentives. When Mechanize builds automation infrastructure, they're not discovering an inevitable future—they're constructing one possible future. This is where the technological determinism argument gets truly important. Because if we accept that technology is socially shaped, then we have to ask: who's doing the shaping?

Four Domains of Agency

The good news is that the future isn't written. The infrastructure is being built right now, which means it's still contestable. Drawing from my own experience, labor history, and technology adoption patterns, I see four domains where choices still matter:

Individual Agency: Know the Terrain

This doesn't mean learning to code or becoming an AI engineer. It means developing "AI literacy"—understanding how these systems actually work, what they can and cannot do, and where the business case for automation is strong versus weak. Not everything that can be automated will be automated. Economics matter. Liability matters. Quality matters. Customer preference matters. Your individual agency starts with mapping your own skills: which tasks in your role are commodity processing versus nuanced judgment? Where does context matter that AI can't capture?

Collective Power: You’re Not Alone

The Writers Guild of America strike in 2023 included specific provisions about AI use in scriptwriting. They didn't stop AI from existing. They negotiated how and when it enters their profession, and who benefits from productivity gains. Whether through unions, professional associations, or worker organizations, collective power changes the equation. A single accountant has little leverage against an AI deployment. But ten thousand accountants with a professional association can demand transparency in how AI systems make decisions, human review requirements for high-stakes judgments, and retraining programs funded by automation savings.

System Design: A Seat at the Table

Systems designed without end-user input fail. They become misaligned with actual needs, get worked around, or create new problems worse than what they solved. When AI systems get deployed in your workplace, are the people doing the work involved in requirements definition? In testing? In evaluation? The participatory design movement has decades of evidence showing that systems co-designed with users outperform systems imposed from above. Demanding a seat at the design table is the difference between AI as a tool you use versus AI as surveillance you're subjected to.

Policy and Economics: Rewriting the Rules

If AI increases productivity by 30%, who captures that gain? Shareholders through increased profits? Or workers through higher wages, shorter hours, or better conditions? Right now, the default assumption is shareholder value. But that's a choice, not a law of nature. The policy domain is where we answer questions like: Should AI systems be required to disclose when they're making decisions about employment, credit, or healthcare? Should productivity gains from automation be taxed differently to fund retraining? Should workers have representation on boards making decisions about AI deployment? These are open questions determined by political choices we make collectively.

The Real Choice

Let me bring this back to Mechanize and their technological determinism argument. They're right that technology isn't deterministic. But that truth only matters if we act on it. The real test of whether technology is socially shaped is this: are the shaping forces distributed or concentrated?

Right now, they're heavily concentrated. A small number of people are making decisions that affect millions. That's not a technical problem—it's a political one.

This isn't a call to stop AI development. It's a call to democratize it. Neo didn't destroy the Matrix. He transformed the relationship between humans and machines. The goal isn't to halt automation—it's to ensure automation serves human flourishing rather than replaces human agency.

Your Next Move

Here's your toolkit:

  • Master the Architecture—understand how AI systems in your domain actually work.
  • Forge Your Alliance—join or form organizations in your profession.
  • Claim Your Seat at the Table—when AI tools get deployed in your workplace, insist on involvement in the process.
  • Engage Governance—support policies that create transparency, accountability, and democratic oversight.
  • Most importantly: reject the narrative of inevitability. Every time someone says "AI is coming for your job," ask: which AI? Deployed how? Governed by whom? Benefiting whom?

The Matrix ends with Neo calling the machines on a payphone, promising to show people "a world where anything is possible." We're in a different kind of movie now. The machines we're negotiating with aren't sentient (…yet)—they're built by humans, funded by humans, deployed by humans. Which means the outcomes are up to humans.

The infrastructure to automate the economy is being built. The question is whether it's built without you or with you.

The future isn't predetermined. But it won't wait for us to decide, either.

The red pill isn't knowledge. It's action.

What will you do with yours?


Are you taking the blue or red pill? Reach out at mike@mikescorner.io