AI Is Eating the Entry-Level Job: What New Grads Are Actually Facing
For decades, the deal was simple: get a degree, get a white-collar job, learn the ropes doing the grunt work, move up. The grunt work mattered. It was how you built judgment. It was also the thing employers paid you to do while they figured out whether you were worth keeping.
AI is quietly canceling that deal, and new graduates are the first ones to feel it.
The College Premium Is No Longer a Shield
The standard assumption about automation was that it threatened factory workers and call center staff. Knowledge workers, the logic went, were safe. Their jobs required judgment, creativity, communication. Hard to automate.
That assumption is now wrong. Research from Eloundou et al. (2023) found that roughly 80% of the U.S. workforce could have at least 10% of their tasks affected by large language models, with the highest exposure concentrated in high-wage, high-education roles. The jobs college graduates actually want are the most exposed.
A follow-up finding makes this sharper. Ozgul and Fregin (2024) showed that AI susceptibility rises with skill level. Higher-skilled workers in analytical, non-routine roles face more automation risk than lower-skilled workers, not less. So the college degree that was supposed to protect you from automation has, in the current moment, put a target on your back.
This isn't displacement in the dramatic, robots-taking-jobs sense. It's subtler. The tasks being absorbed by AI are the specific tasks that entry-level analysts, junior copywriters, paralegals, and financial associates spend most of their first two years doing: drafting, summarizing, researching, formatting, and generating first-pass work. Those tasks aren't disappearing from the economy. Someone is still getting paid to oversee them. That someone just no longer needs to be twenty-three with a fresh diploma.
The Market Is Splitting in Two
Not every new graduate faces the same situation. The labor market isn't collapsing uniformly; it's bifurcating.
Ganuthula and Balaraman (2025), analyzing hiring and wage data from the U.S. and India between 2018 and 2023, found that AI preparedness is fragmenting the college-educated workforce into two distinct tiers. Graduates who can work with AI tools are pulling ahead on wages. Graduates whose primary value comes from tasks AI handles well are seeing stagnating prospects.
The split isn't about who went to a better school. It's about who shows up able to use these tools as a multiplier rather than a replacement.
Employers are signaling this loudly. Ahmadi and Khosh Kheslat (2024), analyzing job postings across major U.S. platforms from mid-to-late 2023, found rapid growth in employer demand for generative AI skills in white-collar listings. ChatGPT proficiency in particular moved from a differentiator to a baseline expectation within a single hiring cycle. That's a remarkably fast shift. Employers aren't asking whether you know about AI. They're assuming you do, and filtering out candidates who don't.
There's also a structural change in how some employers think about degrees at all. Bone, Ehlinger, and Stephany (2023) found that in AI-intensive fields, employers are deprioritizing formal credentials in favor of demonstrated technical skill. If your degree doesn't come with hands-on AI experience attached, it may not carry the weight you're expecting at the application stage.
The Productivity Trap Hidden in All of This
There's a version of this story that sounds reassuring. AI makes new graduates more productive. They can do more with less experience. They can punch above their weight. Some of that is true.
The problem is what gets lost in the process.
Di Santi (2026) draws a useful distinction between cognitive amplification and cognitive delegation. Amplification is using AI to extend your thinking: faster research, better drafts to react to, help catching errors. Delegation is handing your thinking to the AI entirely, skipping the reasoning step because the output looks good enough.
Delegation is the trap. The grunt work that new graduates used to hate (summarizing documents, drafting memos, pulling together research) was also the process through which they built judgment. You learned to recognize a bad argument by writing a lot of mediocre ones first. You developed instincts about what mattered by reading through a lot of material that didn't.
A junior employee who uses AI to skip that process produces cleaner output faster. They also risk arriving at year three of their career without the critical thinking and domain expertise that their title implies. At some point, a manager notices. The AI covered for the skill gap until it couldn't.
This isn't an argument against using AI. It's an argument for being deliberate about which parts of your work you hand off and which parts you protect, precisely because they're building something in you.
What to Actually Do With This
The graduates who will do fine are the ones who treat AI proficiency as table stakes and invest in the things AI doesn't replace: domain expertise, judgment, the ability to tell a client something they don't want to hear, knowing when the model is confidently wrong.
That means learning the tools. It also means not mistaking fluency with the tools for the substance behind them. An AI can draft your analysis. It can't yet be accountable for it.
Employers are watching both sides of this. They want people who can use AI to work faster. They still need people who can think. The graduates who demonstrate both will have more options than any previous cohort. The ones who demonstrate only one of those two things will find the market less forgiving than their parents suggested it would be.