Fired by a Machine: What Happens When AI Runs HR
What Hilke Schellmann’s book gets right about AI, hiring, and the quiet takeover of workplace decisions
I. The Story That Shouldn’t Be Possible
Lizzie followed all the instructions. She completed her job application, recorded the required one-way video interview, and waited. Then she received her result: a score of zero. Not a low score—a complete disqualification. The reason? The hiring software couldn’t detect her face in the video. No face, no score. No score, no interview.
She never spoke to a human being.
That story opens The Algorithm: How AI Can Hijack Your Career and Steal Your Future, and it sets the tone for everything that follows. Journalist Hilke Schellmann doesn’t lead with a statistic. She leads with a real person whose career trajectory was changed by software that made a mistake—and offered no appeal.
It’s not a glitch. It’s a warning.
A Book About Decisions We Can’t See
Most people still assume that hiring and workplace decisions are made by people. You apply. A recruiter reads your résumé. A manager interviews you. Someone makes a judgment.
But increasingly, that’s not what happens.
From résumé screeners to predictive firing algorithms, Schellmann shows that AI is already making decisions that affect your job prospects, your work experience, and even your employment status. And it’s doing so in ways that are largely hidden from view—sometimes even from the managers who rely on these tools.
“If you submit your résumé online, chances are your application is reviewed by a machine and not a human.”
That’s not speculation. It’s the new default for millions of applicants.
Why This Story Matters Now
This book lands at a moment when AI is being adopted across HR systems—not just by tech giants, but by governments, hospitals, retailers, and logistics firms. In one estimate cited by the author, more than 80% of Fortune 500 companies are now using some form of automation in hiring.
These tools promise efficiency and fairness. But as Schellmann’s reporting shows, they often deliver something else: pseudoscience, black-box decision-making, and unaccountable harm.
And it’s not just about getting the job. It’s about what happens once you’re in—how you’re evaluated, watched, scored, and potentially let go.
In the next section, we’ll look at who Hilke Schellmann is, and why she’s uniquely qualified to investigate this new world of invisible employment systems.
II. Why This Book—and Why Hilke Schellmann?
Hilke Schellmann isn’t just reporting on a trend. She’s documenting a transformation.
She’s an Emmy-winning investigative journalist, a professor at NYU, and a contributor to outlets like The Wall Street Journal and The Guardian. But what sets her apart in this book is how deeply she embeds herself in the systems she investigates.
This isn’t a pundit’s book. It’s a reporter’s book—with sources, data, FOIA requests, and firsthand testing.
The Moment That Sparked It
Schellmann’s journey began in 2018 at a business psychology conference. She watched as HR vendors demoed tools that promised to revolutionize hiring. One system claimed to detect “grit” from a video game. Another said it could assess honesty by analyzing facial expressions.
What shocked her wasn’t the ambition—it was the lack of scientific rigor. These tools were already in the market. Companies were already using them. But no one was asking for proof.
Then came a conversation with a Lyft driver. He told her he'd been rejected for a job after being assessed by a computer system. He had no idea what it had evaluated, or why he didn’t pass.
“That story stuck with me,” she writes. “There was a system making decisions about him—and he had no visibility into it.”
That’s the thread she follows in this book: What happens when humans no longer control the hiring process—and don’t even know what’s controlling it instead?
A Reporter Who Went Inside the Black Box
Over four years, Schellmann spoke to engineers who built these tools. She interviewed HR leaders, whistleblowers, psychologists, and rejected candidates. She tested the systems herself. She filed Freedom of Information Act requests. She got her hands on documentation that most users will never see.
The result is a rare thing in the AI discourse: a deeply human, meticulously sourced, and quietly devastating exposé.
And perhaps most importantly, she doesn’t let the story drift into abstraction. Every chapter ties back to real people navigating real systems: people like Lizzie, who are rejected or let go and never learn why.
In the next section, we’ll unpack how these tools work—and what Schellmann reveals about the unseen machinery behind hiring decisions.
III. How AI Shapes the Employment Lifecycle
3A. Résumé Screening: Where the Gatekeeping Begins
The job application is still one of the most familiar rituals in modern work. You write a résumé, you upload it, you hit submit. But here’s what most applicants don’t realize: your résumé probably won’t be seen by a human.
Instead, it will be scored, filtered, and possibly discarded by software.
“These systems are pitched as objective,” Schellmann writes, “but they inherit the biases of the humans and histories they’re trained on.”
AI résumé screeners are designed to find “the best match” using keyword matching, employment history patterns, and even personal details like hobbies or zip code. But what Schellmann uncovers is how fragile—and discriminatory—those systems can be.
A System That Penalizes the Wrong Details
In one case she investigates, researchers submitted fake résumés with identical qualifications. One listed “baseball” as a hobby; the other listed “softball.” Only one scored highly. The AI learned from biased hiring histories that baseball—more associated with men—was a stronger signal of “leadership potential.”
This isn’t an isolated example. Applicants with gaps in employment history, addresses in low-income zip codes, or non-traditional credentials are often downgraded—not because of skill, but because the system interprets those patterns as risk.
And the most striking part? Candidates are never told. You don’t receive a reason, a rubric, or a rejection that makes sense. You just vanish from the funnel.
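To make that mechanism concrete, here’s a minimal sketch of a keyword-weighted résumé scorer, written in Python. The weights, tokens, and candidates are invented for illustration; they come from neither Schellmann’s reporting nor any real vendor. The point is how proxy signals “learned” from a company’s past hires quietly tilt the score.

```python
# Minimal sketch of a keyword-weighted resume screener.
# Weights are invented: imagine they were "learned" from past hiring data,
# so proxies for gender (baseball vs. softball) and for career breaks
# (employment gaps) leak into the score.

LEARNED_WEIGHTS = {
    "python": 2.0,
    "sales": 1.5,
    "baseball": 1.2,        # correlated with (mostly male) past hires
    "softball": 0.1,        # nearly ignored, though it signals the same trait
    "employment_gap": -2.5, # career breaks read as "risk"
}

def score_resume(tokens: list[str]) -> float:
    """Sum the weight of every token the screener recognizes."""
    return sum(LEARNED_WEIGHTS.get(token, 0.0) for token in tokens)

candidate_a = ["python", "sales", "baseball"]
candidate_b = ["python", "sales", "softball"]  # identical except the hobby

print(score_resume(candidate_a))  # 4.7
print(score_resume(candidate_b))  # 3.6 -- same qualifications, lower rank
```

Nothing here is malicious, and that’s what makes it hard to catch: the bias lives in the weights, and nobody downstream ever sees them.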
Employers Often Don’t Understand the Tools Either
Schellmann goes further: she speaks with HR teams and hiring managers who don’t even know how their screening algorithms work. Many use third-party vendors with proprietary scoring systems. They buy the promise—faster hiring, better matches—but they can’t explain how decisions are made.
It’s not just candidates who are in the dark. Often, the companies using the tools are too.
And unlike a bad interviewer or a biased manager, algorithms can’t be questioned, reasoned with, or appealed. Once the score is generated, the decision is locked in.
As Schellmann puts it: “We’ve replaced flawed human judgment with flawed machine judgment—only now, no one knows how to fix it.”
In the next section, we’ll look at what happens when job interviews themselves are handed over to algorithms—and how video analysis tools can fail in ways no human would.
3B. Video Interviews: Judged by a Webcam, Scored by a Black Box
Let’s say you make it past the résumé screen. You’re invited to an interview—but it’s not a conversation. It’s a one-way video. No interviewer. No human response. Just a blinking red light and a software timer.
This is what more and more candidates now experience: automated video interviews analyzed by artificial intelligence. The system watches your facial expressions, listens to your tone, and scores your “enthusiasm,” “confidence,” or “engagement.”
“These tools claim to read emotion,” Schellmann writes. “But many are built on discredited behavioral science.”
One of the most powerful stories in the book is Lizzie’s, the one that opens it: she completed her interview, answered every question, and was rejected. The AI had scored her a zero. Why? It couldn’t detect her face.
There was no retry. No explanation. Just a silent algorithmic disqualification.
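It’s worth seeing how small the failure really is. Below is a hypothetical sketch, not any vendor’s code, of how a scoring pipeline can turn “face not detected” into “score: zero” without ever routing the case to a human.

```python
# Hypothetical sketch of a one-way video interview scorer's failure mode.
# None of this is vendor code; it shows how a detection failure can silently
# become a disqualifying score with no flag for human review.

def detect_face(video_frames: list) -> bool:
    """Stand-in for a face detector; real ones fail on lighting, framing, or skin tone."""
    return len(video_frames) > 0 and video_frames[0].get("face_found", False)

def analyze_expressions_and_tone(video_frames: list) -> float:
    """Stand-in for a proprietary scoring model."""
    return 0.72  # pretend score

def score_interview(video_frames: list) -> float:
    if not detect_face(video_frames):
        # "We couldn't see you" and "you answered every question badly"
        # collapse into the same output: zero.
        return 0.0
    return analyze_expressions_and_tone(video_frames)

print(score_interview([{"face_found": False}]))  # 0.0 -> silent disqualification
```

A more defensible design would treat a detection failure as something to escalate rather than something to score. As Schellmann shows, no standard or regulation currently requires that choice.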
The Pseudoscience Behind Emotion Detection
Schellmann digs into how companies like HireVue and others have pitched emotion analysis as a breakthrough in hiring. These systems claim to detect honesty, enthusiasm, or even trustworthiness based on how someone smiles or how long they pause.
But when independent researchers audited these systems, they found something else: inconsistent results, little validation, and a high risk of misreading people—especially those with disabilities, neurodivergent communication styles, or non-Western body language.
Facial expression ≠ personality. But the tools don’t know that.
Candidates Often Don’t Even Know They’re Being Scored This Way
One of the quietest failures of this technology is the lack of transparency. Most applicants assume they’re being judged on what they say. In reality, they may be judged more heavily on how they say it—or how they look while saying it.
And even when candidates do know, there’s no way to understand the scoring process or contest the result. As Schellmann notes, “You can be rejected by a system, and never know what it evaluated—or if it even worked.”
What’s the Cost?
These tools are marketed as reducing bias and increasing efficiency. But what they really offer is standardization at the cost of human understanding. And in the process, they may filter out exactly the people we need most—those who don’t fit the algorithm’s narrow vision of “ideal.”
In the next section, we’ll look at another rising trend: gamified assessments. What happens when your ability to inflate a digital balloon or click fast enough becomes the metric for getting hired?
3C. Gamified Assessments: When Balloon-Popping Predicts Your Potential
Now imagine this. You’ve passed the résumé screen, nailed the one-way video interview—only to find your next challenge is… a game.
You’re asked to pop virtual balloons. Or rotate 3D objects. Or complete quick reaction tasks. What do these have to do with the job you want? That depends on what the algorithm decides.
This is the world of gamified hiring assessments, and Hilke Schellmann dedicates a full section of The Algorithm to showing how flawed—and seductive—these tools can be.
“Companies are replacing logic tests with balloon-pumping games that claim to measure risk tolerance. The problem? No one can prove they work.”
One test she investigates involves inflating balloons on a screen. Each click adds value, but go too far and the balloon pops—you lose it all. The system interprets this behavior as a sign of your risk profile. If you pop too many, maybe you’re reckless. If you cash out early, maybe you’re too cautious.
Seems clever. But there’s almost no peer-reviewed evidence linking this to actual on-the-job performance.
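The mechanic she describes closely resembles the “balloon analogue risk task” from the psychology research literature. Here’s a stripped-down sketch of how such a game turns clicks into a single “risk tolerance” number. The scoring rule (average pumps on balloons that didn’t pop) is a research convention; any vendor’s actual formula is proprietary and may differ.

```python
# Stripped-down sketch of a balloon-pumping assessment.
# The scoring convention (mean pumps on balloons that did not pop) comes from
# research on the Balloon Analogue Risk Task; real vendor scoring is proprietary.
import random

def play_balloon(pumps_attempted: int, pop_threshold: int) -> tuple[int, bool]:
    """Return (pumps completed, popped?) for one balloon."""
    if pumps_attempted >= pop_threshold:
        return pop_threshold, True   # balloon pops, the candidate banks nothing
    return pumps_attempted, False

def risk_score(rounds: list[tuple[int, bool]]) -> float:
    """Mean pumps on unpopped balloons; higher reads as more 'risk tolerant'."""
    kept = [pumps for pumps, popped in rounds if not popped]
    return sum(kept) / len(kept) if kept else 0.0

random.seed(1)
rounds = [play_balloon(random.randint(1, 12), pop_threshold=8) for _ in range(20)]
print(round(risk_score(rounds), 2))  # one number, now standing in for "you"
```

Whatever that number turns out to be, the question Schellmann keeps asking stays the same: where is the evidence that it predicts performance in the job being filled?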
Games Disguised as Science
Schellmann finds that many of these games were designed not by industrial psychologists, but by startups that pivoted into HR tech after building generic apps. They promised objectivity, speed, and “engaging candidate experiences”—but often lacked even basic scientific validation.
In one case, a U.S. government agency pulled out of a vendor contract after discovering the game-based tools it had bought didn’t hold up to scrutiny. The company had no longitudinal studies. No proof that its tools predicted retention or performance. Just marketing decks.
Still, those same tools are widely used in corporate hiring pipelines.
Candidates Are Left Confused—and Powerless
If you’re a job seeker, this process can feel surreal. Instead of talking about your experience or ideas, you’re clicking through abstract tasks and wondering what the software is learning about you. And once again, there’s no feedback, no explanation, and no right of appeal.
“We’ve outsourced job interviews to behavioral proxies,” Schellmann writes, “but we haven’t stopped to ask if they’re actually measuring anything.”
These systems claim to be more neutral than human interviewers. But they often turn hiring into a gamified simulation of judgment—with little transparency and no meaningful consent.
In the next section, we’ll go even further into algorithmic overreach: AI systems that scrape your social media posts to rate your personality—and decide if you’re a fit.
3D. Social Media Profiling: Your Personality, Scored by a Bot
Now let’s say you’ve passed every test. Your résumé made the cut. You aced the one-way video. You didn’t pop the digital balloon too soon.
But the algorithm’s not done with you.
In a growing number of companies, your social media activity—your tweets, Instagram posts, even LinkedIn updates—may be quietly scraped and analyzed. Not for red flags. For personality insights.
“Some hiring platforms claim they can assess your openness, extroversion, or emotional stability—just by looking at your online behavior.”
Hilke Schellmann pulls back the curtain on this emerging practice. Startups are selling AI tools that generate personality scores based on word choice, image use, emoji frequency, and post timing. These scores are then used to make hiring decisions—usually without your knowledge or consent.
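What does “scoring openness from posts” even look like in practice? Here’s a deliberately crude sketch with invented features and weights; Schellmann doesn’t publish any vendor’s model, and the crudeness is part of the point.

```python
# Invented illustration of social-media "personality" scoring.
# Features and weights are made up. The fragility is the point: someone who
# simply keeps their personal life offline scores as "low extroversion".

def extroversion_score(posts: list[dict]) -> float:
    if not posts:
        return 0.0  # posting little or nothing scores lowest of all
    group_photos = sum(p.get("people_in_photo", 0) > 1 for p in posts)
    exclamations = sum(p.get("text", "").count("!") for p in posts)
    emojis = sum(p.get("emoji_count", 0) for p in posts)
    return (2.0 * group_photos + 0.5 * exclamations + 0.3 * emojis) / len(posts)

private_but_sociable_in_person = [
    {"text": "New reading list.", "people_in_photo": 1, "emoji_count": 0},
    {"text": "Quiet weekend.", "people_in_photo": 0, "emoji_count": 0},
]
print(extroversion_score(private_but_sociable_in_person))  # 0.0 -> "low extroversion"
```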
What Is This Supposed to Measure?
These platforms typically market themselves as building a richer candidate profile. Why rely on a résumé, they argue, when someone’s digital footprint reveals so much more?
But Schellmann asks the right questions: How valid are these models? Is there peer-reviewed evidence that what you post on social media correlates with job performance—or even with who you are in person?
The answer, she finds, is no.
In one particularly telling case, a tool flagged a candidate as “low extroversion” because their posts didn’t feature group photos. That person turned out to be a high-performing sales rep. They simply preferred to keep their personal life offline.
A New Kind of Discrimination—One You Can’t See
Social media profiling isn’t just scientifically weak—it’s ethically dangerous.
People curate their online presence for all kinds of reasons. Cultural norms, safety concerns, personal boundaries. Using that to infer job fit can easily reinforce existing inequalities.
It also raises urgent privacy issues. These tools often pull from public data, but public doesn’t mean fair game for predictive modeling, especially when the outcome affects your livelihood.
“Candidates are being judged not just for who they are—but for how they appear online, to an algorithm that’s never met them.”
And the kicker? You’ll probably never know it happened. No employer will say, “We didn’t hire you because your Instagram suggested low agreeableness.”
But the decision gets made all the same.
In the next section, we’ll look at predictive scoring and workplace monitoring—what happens when the algorithms don’t just judge who to hire, but start evaluating you after you’re already in the door.
3E. Predictive Scoring and Monitoring: Judged Long Before You Fail
Even if you land the job, the algorithm isn’t done with you.
More and more companies are using AI not just to screen applicants—but to predict how well employees will perform, whether they’ll stay, and even whether they’re at risk of burning out. In The Algorithm, Hilke Schellmann shows how these systems create an invisible layer of evaluation that follows workers long after onboarding.
“The same logic used to screen résumés is now being turned inward—used to score employees while they’re still on the job.”
These predictive models use behavioral data collected from email, Slack, calendar invites, task systems, and even wearable devices. What time do you log in? How fast do you reply? Are you sending fewer messages? Meeting with fewer people?
That activity gets turned into a number—sometimes called an “engagement score,” “flight risk index,” or “leadership potential rating.” Managers may see these scores in a dashboard. The employee? Often, they’re never told.
One System, Multiple Misreadings
Schellmann shares stories of employees who were flagged as disengaged or unmotivated—not because they were underperforming, but because the algorithm misread their behavior. One person’s score dropped sharply because they’d recently returned from bereavement leave. Another was dealing with a long illness. Neither circumstance was part of the model.
This is what happens when data replaces dialogue.
The AI doesn’t ask, “How are you doing?” It infers. And its guesses—based on patterns trained on past data—carry real consequences.
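A toy version of an “engagement score” makes the failure mode easy to see. The signals, weights, and thresholds below are hypothetical, not drawn from any product the book names.

```python
# Hypothetical "engagement score" built from activity metadata.
# Signals and weights are invented. There is no input for "was on bereavement
# leave", so a drop in activity simply reads as a drop in engagement.
from dataclasses import dataclass

@dataclass
class WeekOfActivity:
    messages_sent: int
    meetings_attended: int
    median_reply_minutes: float

def engagement_score(week: WeekOfActivity, baseline: WeekOfActivity) -> float:
    """Compare a week's activity to the employee's own baseline (0 to 1)."""
    msg_ratio = week.messages_sent / max(baseline.messages_sent, 1)
    mtg_ratio = week.meetings_attended / max(baseline.meetings_attended, 1)
    reply_ratio = baseline.median_reply_minutes / max(week.median_reply_minutes, 1.0)
    return round(min((msg_ratio + mtg_ratio + reply_ratio) / 3, 1.0), 2)

baseline = WeekOfActivity(messages_sent=120, meetings_attended=10, median_reply_minutes=20)
week_back_from_leave = WeekOfActivity(messages_sent=30, meetings_attended=3, median_reply_minutes=90)

print(engagement_score(week_back_from_leave, baseline))  # 0.26 -> flagged "disengaged"
```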
Nudging Culture, or Just Control?
Proponents say these systems help surface issues early. They flag burnout. They catch morale problems. But Schellmann raises a tougher question: are we using software to figure out who needs support, or to justify discipline?
If your score dips below a threshold, or your “risk profile” crosses one, you might be pulled into a check-in. Or passed over for promotion. Or quietly tracked more closely.
“A workplace becomes something different,” she writes, “when you know every keystroke could be interpreted as a warning sign.”
And remember: these scores are generated by tools your manager might not fully understand—and that you can’t question.
In the final part of this section, we’ll look at the most unsettling step of all: when AI systems aren’t just helping manage employees—but helping to fire them.
3F. Algorithmic Termination: Fired Without a Conversation
You’re doing your job. Your metrics seem fine. You haven’t had any major issues. Then one day, you’re pulled into a meeting—or worse, sent an automated notice.
You’ve been let go.
Not by a manager. Not after a performance review. But because the system flagged something, and no one asked questions.
In The Algorithm, Hilke Schellmann documents cases where AI tools weren’t just helping make hiring decisions—they were quietly making firing decisions, too.
“Some employees are being fired by systems their HR teams don’t fully understand—and can’t explain after the fact.”
That line should stop you in your tracks.
These systems, often sold as “productivity optimization” or “early risk detection,” crunch vast amounts of internal data. They identify who isn’t “engaged,” who’s falling behind in communications, or whose performance trend lines are dipping. Then, they flag those individuals as “low performers” or “not a fit.”
In several cases Schellmann investigates, these flags directly led to terminations—with minimal or no human review.
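Sketched end to end, the last step is disturbingly short. This is hypothetical logic, not any company’s actual pipeline, but it captures the pattern Schellmann describes: a score crosses a threshold and a recommendation appears, with no mandatory human step in between.

```python
# Hypothetical final step of the pipeline: a low score becomes a dismissal
# recommendation. Thresholds are invented; the missing piece is any required
# human review before the recommendation is acted on.

def recommend_action(engagement: float) -> str:
    # Note what isn't here: no input for medical leave, bereavement, or any
    # other context that never reaches the model.
    if engagement < 0.3:
        return "recommend_termination"
    if engagement < 0.6:
        return "flag_for_check_in"
    return "no_action"

print(recommend_action(0.26))  # 'recommend_termination', from one noisy number
```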
When the System Doesn’t Know the Context
One story she shares involves an employee whose performance score dropped suddenly. The algorithm had flagged them as a risk. What no one realized—or checked—was that they had just returned from a medical leave. Their email activity had dropped, their response times had changed. And that was enough to trigger a recommendation for dismissal.
The system didn’t understand the context. And no one stepped in to ask.
No Appeals, No Questions, No Accountability
This is where the promise of AI—efficiency, objectivity, scale—starts to collapse under its own weight.
There’s no clear appeals process. No opportunity for the worker to explain. In many cases, not even the HR staff know what specific inputs led to the score. They’re just following the output from a vendor’s dashboard. One where the logic is proprietary and the consequences are irreversible.
“When things go wrong, no one takes responsibility. Not the vendor. Not the company. Not the algorithm. Just silence.”
In Schellmann’s telling, this isn’t just about bad tools. It’s about a shift in power—from managers who once had to justify firing someone, to systems that do it invisibly.
And the result? A workplace where trust erodes, fear increases, and fairness becomes a matter of whether the software got it right.
In the next section, we’ll step back to reflect. What does Schellmann get right—and what should we take away from this book as founders, employers, and professionals navigating an increasingly automated economy?
IV. Strengths, Implications, and Reflections
What makes The Algorithm stand out is not that it critiques AI—it’s that it shows, in case after case, how these systems are already reshaping people’s lives without transparency, consent, or accountability.
Hilke Schellmann is not writing from a place of fear-mongering or anti-tech sentiment. In fact, her approach is impressively calm and methodical. She doesn’t argue that all algorithms are evil. What she argues is far more important: we’re letting these tools make high-stakes decisions without asking the hard questions first.
“You can’t sell an untested medical device to the public. But you can sell an unvalidated hiring algorithm—and companies will buy it.”
That’s the heart of the problem. The bar for deploying AI in employment decisions is shockingly low. There’s no equivalent of the FDA for HR software. No required audits. No standard definitions of fairness or accuracy.
As long as a vendor can claim it’s “objective” or “bias-reducing,” companies will adopt it. Sometimes without even testing it in-house.
What the Book Does Exceptionally Well
It’s grounded in evidence. Schellmann doesn’t rely on hypotheticals. She files FOIA requests, tests tools herself, and interviews people across the AI hiring pipeline—from engineers to victims of algorithmic rejection.
It focuses on real stakes. This isn’t about philosophical debates. It’s about people who lose jobs, get flagged, or are pushed out—without explanation.
It balances systems critique with human storytelling. The book stays readable because it always comes back to lived experience. And that gives it moral weight.
For Employers and Builders, This Book Is a Warning
If you’re a product leader, HR executive, or founder using or building these tools, The Algorithm is essential reading. It asks you to step back and ask:
What exactly are we measuring?
Are we validating our models across different populations?
What happens when the system gets it wrong?
Because when the tools are wrong—and Schellmann shows they often are—it’s not just a bad hire or an overlooked résumé. It’s a real person’s career, sometimes their livelihood, that’s affected.
And the harm is often invisible. No one gets an email that says: “You were rejected by an algorithm trained on outdated assumptions.” They just hear “no.”
In the final section, we’ll share concrete takeaways from the book: what you can do to build, adopt, or question these systems more responsibly.
V. Actionable Takeaways & Closing Reflections
If The Algorithm makes one thing clear, it’s this: AI is no longer a future threat to the world of work. It’s already here, shaping outcomes in ways most people never see.
But this isn’t a call to ditch AI. It’s a call to use it responsibly, with transparency, rigor, and human oversight. Below are key takeaways for anyone building, deploying, or simply navigating modern hiring systems:
If You’re a Founder or Product Builder:
Validate before deployment. Don’t ship scoring systems trained on narrow or biased data. Ask: What are we actually measuring?
Test for fairness. Evaluate how your system performs across gender, race, age, disability, and neurodiversity. Don’t assume neutrality (a minimal audit sketch follows this list).
Make outputs explainable. Users—and candidates—should understand why a score was generated and what it means.
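On the fairness point, one concrete place to start is an adverse-impact check: compare selection rates across groups and flag anything that falls below the “four-fifths” rule of thumb U.S. regulators use as a screening test. The sketch below uses invented counts and is a floor, not a full audit.

```python
# Adverse-impact check using the "four-fifths" (80%) rule of thumb from U.S.
# employment-selection guidelines. Counts are invented; a real audit needs
# per-stage data, adequate sample sizes, and statistical testing.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate is under `threshold` times the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(rate / best, 2) for g, rate in rates.items() if rate / best < threshold}

screened = {
    "group_a": (90, 300),  # 30% pass the screen
    "group_b": (45, 250),  # 18% pass the screen
}
print(adverse_impact_flags(screened))  # {'group_b': 0.6} -> well below 0.8, investigate
```

Run a check like this on every stage of the funnel, not just final offers, and track it over time; a single snapshot can hide a pattern.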
If You’re in HR or Leadership:
Ask vendors for evidence. Don’t adopt AI tools just because they promise efficiency. Demand peer-reviewed validation or third-party audits.
Audit your funnel. Who’s being filtered out? Who’s getting flagged? Patterns of harm can emerge quickly and invisibly.
Build in escalation paths. Algorithms should never be the final word—especially in rejections, promotions, or terminations.
If You’re a Worker or Candidate:
Know the signs. One-way interviews, no-feedback applications, gamified assessments—all suggest AI is in the loop.
Request transparency. You have the right to know how you were evaluated. Ask.
Advocate for regulation. These systems need external standards—just like consumer finance, medical devices, and aviation.
“If we don’t shape these systems,” Schellmann writes, “they will shape us.”
That’s the book’s quiet, urgent message. Not that AI is evil. But that unchecked automation of human judgment leads to invisible harm—and sometimes irreversible loss.
The Algorithm isn’t an alarm bell. It’s a flashlight. It shows us what’s already happening in the corners of the workforce—and asks us to look closer, ask harder questions, and reclaim our right to be seen, heard, and evaluated as more than just a score.
It’s one of the most important books on AI and labor published in the past decade. Not because it’s loud—but because it’s precise.
If you hire people, build tools, or depend on your résumé to tell your story, you need to read this book.