Your Technical Interview Is Testing for 2015

I’ve been on both sides of the technical interview table hundreds of times. As a candidate, as an interviewer, as the person designing the interview process. And I need to say something that a lot of people in hiring are thinking but won’t say out loud:

Your technical interview is broken.

Not slightly off. Not “could use some updating.” Fundamentally measuring the wrong things for the world we actually live in.

The ritual

You know the drill. Candidate comes in. Gets a coding problem. Implement a linked list. Reverse a binary tree. Find the longest palindromic substring. Write it on a whiteboard, or in a shared editor, while someone watches and judges.

The logic behind this was always questionable — testing memorization of data structures under artificial stress — but at least you could argue it measured something. Can this person write code? Can they think algorithmically? Can they work through a problem step by step?

The problem is: that “something” isn’t what matters anymore.

What a linked list proves in 2026

When you ask someone to implement a linked list from memory, you're testing exactly three things:

  1. Have they memorized a linked list implementation?
  2. Can they type it correctly under observation?
  3. Have they practiced enough LeetCode to recognize the pattern?

That’s it. You’re not testing for engineering ability. You’re testing for interview preparation. The candidates who pass are the ones who spent weeks grinding algorithmic puzzles, not necessarily the ones who’ll build great systems.

Meanwhile, here’s what you’re not testing:

  • Can they design a system that handles 10,000 requests per second?
  • Do they understand the trade-offs between different architectural approaches?
  • Can they debug a subtle distributed systems failure in production?
  • Do they know when not to optimize?
  • Can they evaluate whether an AI-generated solution is correct?

You know, the actual job.

The AI elephant in the room

Here’s the part nobody wants to deal with: any problem you’d ask in a traditional coding interview can be solved by AI in seconds.

Literally. Right now. Today.

“Implement a linked list with insert, delete, and search.” Done. “Find the shortest path in a weighted directed graph.” Done. “Design an LRU cache.” Done.

These are solved problems. They’ve been solved for decades. The fact that you’re asking a human to solve them from memory, by hand, in 2026, is absurd.
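
To make that concrete, here's roughly what comes back, within seconds, when you paste the LRU cache prompt into a chat window. A minimal sketch in Python, built on the standard library's OrderedDict:

    from collections import OrderedDict

    class LRUCache:
        """Fixed-capacity cache that evicts the least recently used entry."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self._store = OrderedDict()

        def get(self, key):
            if key not in self._store:
                return None
            self._store.move_to_end(key)  # touched: now most recently used
            return self._store[key]

        def put(self, key, value) -> None:
            if key in self._store:
                self._store.move_to_end(key)
            self._store[key] = value
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)  # drop the oldest entry

Twenty-odd lines, straight out of the textbook. If your pass bar is reproducing something like this from memory, your pass bar is a search query.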

If the test can be passed by pasting the question into a chat window, the test isn’t measuring what you think it’s measuring.

What actually matters now

The engineers I want to hire — the ones I’d trust to build critical infrastructure — have a different set of skills entirely:

Systems thinking over algorithm memorization. Can they break down a complex problem into components? Can they identify the bottlenecks before writing a single line of code? Can they reason about how a system will behave under load, under failure, under adversarial conditions?

Evaluating, not generating. Can they look at a solution — whether written by a human or a machine — and accurately judge whether it’s correct, efficient, and maintainable? Can they spot the subtle bugs? The edge cases? The security vulnerabilities?
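
To see what I mean, here's the flavor of snippet I'd put in front of a candidate. It's a hypothetical example, not from any real codebase; it reads clean and passes a quick smoke test:

    def dedupe(items, seen=set()):
        """Return items with duplicates removed, preserving order."""
        out = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

The bug is the mutable default argument: seen=set() is evaluated once, at definition time, so state leaks across calls. The second call silently drops anything the first call ever saw. A strong evaluator catches that in seconds. A pattern-matcher nods it through.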

Understanding why, not just how. Any engineer (or AI) can implement a red-black tree. But do they understand when to use one? What problem it actually solves? What the alternatives are and why you’d pick one over the other?

Domain reasoning. Can they take a real-world problem — “our payment processing pipeline has a race condition that causes duplicate charges under high load” — and reason their way to a solution? Not a textbook solution. A real solution that accounts for the specific constraints, existing architecture, and operational reality.
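
To be clear about what "a real solution" might look like here, one common direction is to stop racing and make the charge idempotent: the client sends a unique key per payment attempt, and a database uniqueness constraint, rather than application-level locking, arbitrates concurrent retries. A sketch only, with hypothetical table and function names, using SQLite so it runs standalone:

    import sqlite3

    # Hypothetical schema; the PRIMARY KEY constraint is what closes the race.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE charges (
            idempotency_key TEXT PRIMARY KEY,
            amount_cents    INTEGER NOT NULL,
            status          TEXT NOT NULL
        )
    """)

    def charge_once(conn, idempotency_key: str, amount_cents: int) -> str:
        """Record a charge at most once per key. Concurrent retries hit the
        uniqueness constraint instead of creating a duplicate row."""
        try:
            with conn:  # transaction: commits on success, rolls back on error
                conn.execute(
                    "INSERT INTO charges (idempotency_key, amount_cents, status)"
                    " VALUES (?, ?, 'captured')",
                    (idempotency_key, amount_cents),
                )
            return "charged"
        except sqlite3.IntegrityError:
            return "duplicate"  # another attempt won the race; don't charge again

    print(charge_once(conn, "order-1234-attempt-1", 4999))  # charged
    print(charge_once(conn, "order-1234-attempt-1", 4999))  # duplicate

The specific code matters less than the shape of the reasoning: locate where the race actually lives (between check and write), then move the arbitration into something that's already serialized. That's the conversation I want to have in an interview, and no whiteboard puzzle gets you there.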

Communication. Can they explain their thinking? Can they disagree productively? Can they articulate a technical trade-off to a non-technical stakeholder?

None of these skills are tested by reversing a linked list on a whiteboard.

A better interview

I’ve been experimenting with a different approach. Not perfect, but closer to what matters:

Give them a real problem from your codebase. Not a toy problem. A real one. Anonymized if needed. “Here’s a service that does X. It has these issues. How would you approach fixing them?”

This tests everything: code reading, problem identification, systems thinking, communication. And it has the nice side effect of showing them what actually working at your company would be like.

Let them use AI tools. Seriously. If you’re banning AI tools in interviews, you’re testing for a work environment that doesn’t exist anymore. Let them use whatever they’d use on the job. Watch how they use it. Do they ask the right questions? Do they evaluate the output? Do they know when the AI is wrong?

This is infinitely more informative than watching someone struggle to remember the syntax for a priority queue.

Pair on a design problem. “Design a system that handles X, with these constraints, at this scale.” Work through it together. See how they think. See what questions they ask. See how they handle ambiguity and incomplete requirements.

Review code together. Give them a real PR — one that was actually merged. Ask them what they’d change. What’s good, what’s risky, what’s missing. This tests code evaluation skills directly, which is arguably the most important skill in an AI-assisted workflow.

The uncomfortable truth for hiring managers

If you’re still running traditional algorithmic interviews, you’re selecting for the wrong candidates. You’re favoring people who are good at interview theater over people who are good at engineering. You’re filtering out experienced engineers who haven’t touched a binary tree since university but can debug a distributed system in their sleep.

Worse, you’re filtering for conformity. The people who pass traditional coding interviews are the people who’ve optimized for traditional coding interviews. That’s a specific, narrow skill that correlates weakly with actual job performance.

The best engineers I’ve hired — the ones who moved the needle on real products — would struggle with a random LeetCode medium under time pressure. They’d also design circles around the LeetCode champion in a real system design conversation.

The transition

I’m not saying toss out everything. You still need to verify that candidates can code. But the how matters.

Stop testing for memory. Test for judgment. Stop testing for speed. Test for depth. Stop testing for textbook solutions. Test for real-world reasoning.

The engineers who will thrive in the next decade aren’t the ones who can implement a linked list fastest. They’re the ones who can look at a complex system, understand it deeply, identify what needs to change, and direct the tools — AI or otherwise — to make it happen.

If your interview process can’t identify those people, your interview process is the bug.