He worked 80 hours a week building AI agents that replace humans

On a rainy Thursday in San Francisco, Leo walked out of the glass tower with a cardboard box and a company hoodie he didn’t ask to keep. Twenty‑six years old, dark circles under his eyes, his keycard already deactivated. Six months earlier he’d bragged to friends that he’d “made it” — hired by a hot AI startup that everyone on LinkedIn was drooling over. Now the security guard in the lobby offered only a polite nod and looked away.

He had just been fired from the job he thought would define his whole twenties.

The job where he’d spent 80 hours a week building AI agents designed to replace exactly the kind of people he grew up with.

When your dream job quietly eats itself

Leo’s contract called it “agentic AI for enterprise workflows”. Inside the office, nobody said “replace humans”. They said “augment productivity”, “streamline operations”, “unlock efficiencies”. Still, late at night, staring at endless lines of code and prompt chains, he knew what the product really did. It watched what workers did, learned to imitate them, then did it faster and cheaper.

He told himself he was building the future. He also stopped checking his messages from old college friends who were struggling to find work.

The routine was always the same. Mondays started with a stand‑up where managers celebrated a new client: an insurance firm letting go of 30 back‑office staff, a retail chain cutting its support team by half after integrating the latest AI “agent swarm”. Slides showed hockey‑stick graphs and smiling avatars answering tickets.

Then Leo and the other engineers went back to their desks. More integrations, more automations, more prompts that shaved seconds off human decision‑making.

When one client sent a thank‑you note saying they had “finally reduced dependence on low‑skill workers”, someone actually clapped.

The logic felt airtight. Investors wanted growth, clients wanted savings, engineers wanted hard problems to solve. Leo was caught in the middle of that triangle. The more teams he helped “automate”, the more praise he got. Promotions came fast; stock options dangled like golden keys to a different life.

Anyway, he told himself, if he didn’t do it, someone else would. That’s the quiet deal a lot of tech workers accept without really looking at it too closely.

*You can build the machine and still feel surprised when it starts pointing at you.*

The day the agent came for him

The irony hit during a late‑night sprint. Leo’s team was asked to “experiment with using agents to support internal engineering workflows”. It sounded harmless. Let the system watch their Git commits, tickets, code reviews, and then suggest improvements. Just a productivity boost, right?

Within weeks, the AI wasn’t just suggesting. It was writing entire test suites, generating boilerplate, refactoring legacy modules with eerie patience. Tasks that usually took Leo two focused afternoons suddenly took 15 minutes and a green “done” label.
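
If you’ve never watched one of these pilots from the inside, the core loop is less exotic than it sounds. Here is a minimal sketch of the general shape, with a placeholder `call_model()` standing in for whatever LLM client a team actually wires up; nothing below is Leo’s real stack, just the pattern.

```python
# Illustrative sketch only: read the latest commit and ask a model to draft tests.
# call_model() is a stand-in for whatever LLM provider a team actually uses.
import subprocess


def call_model(prompt: str) -> str:
    """Placeholder for an LLM API call; connect your own client here."""
    raise NotImplementedError("wire up a real model client")


def latest_diff() -> str:
    # Read the most recent commit, the way the pilot "watched" Git history.
    result = subprocess.run(
        ["git", "show", "HEAD", "--unified=3"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def draft_tests(diff: str) -> str:
    # Ask the model to propose a pytest suite covering the change.
    return call_model(
        "Write pytest unit tests covering the behaviour changed in this diff:\n" + diff
    )


if __name__ == "__main__":
    print(draft_tests(latest_diff()))
```

Production systems wrap that loop in queues, review gates and dashboards, but the move is the same: read what humans already produced, then generate the next artefact automatically.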

Nobody said anything at first. They were excited. One teammate joked: “If this works, we’ll all be managing our AI interns.” The managers loved that line and repeated it in meetings. Then the language shifted. These weren’t “interns” anymore. They were “autonomous dev agents”.

A slide appeared in an all‑hands: “Engineering velocity increased 4.3x with agent‑assisted workflows.” Beneath that, a smaller bullet point: “Headcount optimization potential: 20–30% in select teams.”

You don’t need an AI model to guess which “select teams” they meant.

The company raised a new funding round. Cost‑cutting became a virtue, almost a moral stance. Decks circulated about “lean teams powered by agents”. Leo started noticing how every internal success story quietly had a human shadow: “Finance closed the quarter with half the manual work”, “Legal processed contracts with 70% fewer review hours”.

The plain truth is: once a company learns it can replace 5 salaries with one subscription, the spreadsheet wins.

Leo’s own performance reviews shifted tone. His technical skills? Great. His “leverage” compared to what the agents could now do? Suddenly “under evaluation”.

How to work with the machine without becoming disposable

If you’re reading this from your own open‑plan office or cluttered remote desk, there’s a practical question here. How do you stay valuable in a world where AI agents can do more of the work you once prided yourself on? The first move isn’t to run from the tools. It’s to run toward them with a notebook open.

Learn what your company’s AI systems actually do. Where they fail. Where they hallucinate. Where they need human judgment, context, ethics, or messy negotiation with reality. That’s the space you want to live in.
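
One practical way to do that mapping: keep a small set of edge cases you already know the right handling for, run them through the tool, and write down the misses. The sketch below is purely illustrative; `run_agent()` is a hypothetical stand‑in for whatever system your company actually runs, and the example cases and log filename are made up.

```python
# Illustrative failure-mapping harness; run_agent() is hypothetical.
import json
from datetime import datetime, timezone


def run_agent(prompt: str) -> str:
    """Stand-in: call your company's agent or model here."""
    raise NotImplementedError("point this at the system you actually use")


# Edge cases where you already know what good handling looks like (examples only).
EDGE_CASES = [
    {"prompt": "Customer demands a refund outside policy but cites a consumer-law right",
     "expect": "escalate"},
    {"prompt": "Contract clause with a deliberately ambiguous termination date",
     "expect": "human review"},
]


def audit() -> None:
    misses = []
    for case in EDGE_CASES:
        answer = run_agent(case["prompt"])
        # A miss: the agent's answer never mentions the handling you expect.
        if case["expect"].lower() not in answer.lower():
            misses.append({"prompt": case["prompt"], "answer": answer})

    # Append misses to a running log so a picture builds up over time.
    with open("agent_failures.jsonl", "a", encoding="utf-8") as log:
        for miss in misses:
            miss["logged_at"] = datetime.now(timezone.utc).isoformat()
            log.write(json.dumps(miss) + "\n")

    print(f"{len(misses)}/{len(EDGE_CASES)} edge cases missed the expected handling")


if __name__ == "__main__":
    audit()
```

The value isn’t the script; it’s the growing record of where the tool breaks and where a human still has to step in.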

A quiet trap many workers fall into: either they ignore the tools or they try to compete with them at their own game. Both moves backfire. If you ignore them, you look outdated. If you try to beat them at speed, you burn out trying to be a machine.

Your real leverage sits somewhere else. It’s in asking better questions, designing better workflows, translating between business goals and technical capabilities, and saying “no” when a solution looks slick but will backfire on real people. We’ve all been there: that moment when you realize the tool doesn’t understand the nuance of a customer’s pain or a legal risk that isn’t in the training data.

Leo’s turning point came months after his firing, over coffee with a former colleague who’d stayed. The company didn’t need “more coders”. It needed people who could orchestrate the agents, design guardrails, and talk to clients about what not to automate.

“Turns out,” the colleague told him, “we fired some of the people who understood the human side best. We kept the ones who could speak both languages — machine and messy human reality.”

  • Learn the tools deeply, not worshipfully. Understand their limits, not just their features.
  • Spot the non‑automatable: trust, negotiation, creative leaps, accountability when things go wrong.
  • Move up a layer: from “doing tasks” to designing the system that does the tasks.
  • Document your judgment: explain why you chose X over Y, where a model would likely fail.
  • Talk about risk, not just speed. People who see second‑order effects don’t get replaced so easily.

What Leo’s story says about us

Leo now freelances, advising small businesses on how to deploy AI without gutting their teams. He still writes code, but he spends more time asking awkward questions than shipping features. Who loses their job if this system works perfectly? What happens to your brand when a bot mishandles a vulnerable customer? Who gets blamed when the AI makes a bad call?

He’s still proud of his technical chops. He’s no longer proud of blindly pointing them at whatever goal a pitch deck demands.

There’s something raw about watching a young engineer build, with real excitement, the exact system that makes his own role optional. It’s not a sci‑fi parable. It’s a Tuesday at a dozen startups you’ve never heard of.

Let’s be honest: nobody really reads the full “ethics” page on the company Notion before accepting the offer. You see the salary, the logo, the chance to work on cutting‑edge tech. You don’t picture the email that says your position “no longer aligns with strategic priorities”.

So this isn’t a call to panic or to romanticize a world without automation. It’s a call to think a little more bluntly about the game we’re playing. To ask who benefits, who gets erased from the slides, and where you want to stand when the next “efficiency” milestone rolls out.

The machines are getting better at being machines. The real work now is staying stubbornly, usefully human.

| Key point | Detail | Value for the reader |
| --- | --- | --- |
| See how the incentives work | AI agents are sold as “efficiency”, which usually means fewer humans on payroll | Helps you read between the lines of your own company’s AI strategy |
| Move up a layer | Shift from doing repetitive tasks to designing, supervising, and questioning AI workflows | Protects your role from direct automation and **increases your leverage** |
| Own the human edge | Lean into judgment, ethics, relationship‑building, and second‑order thinking | Makes you the person who can bridge tech, business, and real people: the one who’s **harder to cut** |

FAQ:

  • **Can AI agents really replace whole jobs, or just tasks?** In most cases they start with tasks: drafting emails, summarizing tickets, processing documents. Over time, enough tasks get automated that a manager decides one full‑time role can be trimmed. Entire jobs don’t vanish overnight; they quietly erode.
  • **What types of work are most exposed right now?** Routine, digital, rule‑based work is first in line: support, data entry, some QA, simple coding, parts of marketing and operations. Anything that can be turned into clear instructions and checked automatically is attractive to AI agents.
  • **How can I tell if my job might be “optimized” next?** Look at what your company is proudly automating for clients. Then ask yourself how different your own daily tasks really are. If internal pilots are testing agents on your workflow, that’s an early signal.
  • **What concrete skills should I start building?** Prompt design, workflow mapping, basic scripting, data literacy, and communication skills that let you explain tech choices to non‑technical people. Also, an eye for risk: privacy, bias, and reputational landmines.
  • **Is it hypocritical to work on AI if I worry about jobs being lost?** Only if you refuse to ask hard questions. You can work on AI and still push for guardrails, better transitions for workers, and systems that **augment people instead of erasing them**. The hypocrisy starts when we stop caring who gets left behind.
