Your AI is not your mentor
Use it relentlessly for code tutoring. But the formation work — being seen, stretched and told the truth — has to come from a person.

A junior developer told me they’re using Claude as a mentor. They like the patience that never wears thin, the availability that never closes, the way it never makes them feel doltish.
Here’s what worried me: they said mentor. Not tutor. Mentor.
I don’t think they’re wrong about what they’re getting. But mentorship isn’t about answers. It’s about being seen — watched while you struggle, nudged when you’re avoiding hard conversations, sent on projects that stretch you in ways you didn’t ask for. Mentors have the nerve to tell you the truth in private because they’re invested in who you’re becoming.
AI can’t do that. Because it isn’t invested in you. Because it’s not “in the room.”
What AI is genuinely good at
Let’s start with what’s actually working, because this isn’t a piece about being suspicious of the tools. The tutoring story is real. It’s the part of the AI hype that’s actually earned.
Benjamin Bloom’s famous 1984 paper described what he called the “2 sigma problem:” students who received one-on-one tutoring performed two standard deviations better than students in conventional classrooms — that’s the difference between an average student and the top 2 percent. The catch was scale. Personal tutors are expensive, and most students will never have one. For forty years that gap has been the unsolved promise of education technology.1
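That "top 2 percent" framing is just arithmetic on the standard normal curve, and it's easy to sanity-check. This sketch only translates a z-score of 2 into a percentile; it makes no claim about Bloom's original data:

```python
from statistics import NormalDist

# An average student sits at the 50th percentile. Shifting up by two
# standard deviations (z = 2) on a standard normal curve lands at about
# the 97.7th percentile -- roughly the top 2 percent of the old class.
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.3%}")  # ≈ 97.7th percentile
```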
A well-configured AI is one way we’re closing it. When Khan Academy rolled out Khanmigo, built on GPT-4, the pitch was exactly this — a patient, infinitely available one-on-one tutor for every learner. In specific narrow tasks, the early returns are hard to dismiss. Harvard published a physics study in 2024 that found students using a properly designed AI tutor learned over twice as much in less time than their peers. They reported feeling more engaged and motivated.23
I see the same thing in software. With the right model, a serious system prompt and a few well-built skills, AI can:
Walk a junior through a debugger session, naming what each frame means and why it matters.
Explain why a piece of code is the way it is — not just what it does — and adjust the explanation when the question lands a level too high.
Run unfamiliar code, watch it fail and reason about the failure with the learner in real time.
Drill someone on a foreign language, a SQL idiom or a regex until it’s automatic.
Take the same question for the third time without sighing.
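Most of the list above is prompt design rather than model magic. As a rough illustration of what "a serious system prompt" means in practice — the prompt wording, model id and helper name here are my own placeholders, not a vetted recipe — a tutoring request payload might be assembled like this:

```python
import json

# Hypothetical tutoring configuration. The system prompt text and model id
# below are illustrative placeholders, not a recommendation.
SYSTEM_PROMPT = (
    "You are a patient programming tutor. Explain why code behaves as it "
    "does, not just what it does. Ask the learner to predict outcomes "
    "before revealing answers, and adjust the depth of your explanation "
    "when a question lands at the wrong level."
)

def build_tutor_request(question: str) -> dict:
    """Assemble a chat-style request payload (shown here, not sent)."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": question}],
    }

payload = build_tutor_request("Why does this frame return None?")
print(json.dumps(payload, indent=2))
```

The point of a prompt like this is the "predict before revealing" instruction: it pushes the struggle back onto the learner instead of handing over answers.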
That last one matters more than people admit. A great deal of human learning gets blocked by the social cost of asking again. Removing that cost is genuinely useful. It’s what the junior in my opening responded to. They felt safe.
How tutoring goes wrong
It’s useful, but not magical. The same AI cheerfully invents APIs that don’t exist, pattern-matches on the wrong context and confidently produces code that looks right but isn’t. Apple researchers showed last year that current LLMs aren’t doing genuine reasoning at all — performance on math problems dropped by as much as 65 percent when the team added a single irrelevant clause to the prompt. Andrew Zuo’s piece on the “-10X developer” makes the harder point: when an LLM writes code at or above your level and introduces a subtle bug, you may be the worst possible person to find it.45
A newer concern is harder to wave off. An MIT Media Lab study published in mid-2025 used EEG to compare brain activity across people writing essays with no help, with a search engine and with an LLM. The LLM group showed the weakest neural connectivity and the worst ability to recall their own writing minutes after producing it. The researchers called it “cognitive debt.” That tracks. Outsource the struggle and the learning never lands.6
So a serviceable tutor, yes. A replacement for thinking, no. None of that, though, is the central point. The central point is that tutoring and mentoring are not the same job, and the public conversation keeps blurring them.
What a mentor actually does
If you ask me what my mentors did for me over the last forty years, almost none of it was answering technical questions. The technical content I could get from books, from peers, eventually from Stack Overflow and now from Claude (as long as I’m careful and watch for hallucinations). What my mentors did was something else entirely.

