
The Experience Detector: Interviewing in the Age of ChatGPT

I remember clearly how technical interviews used to start: a zip file containing a coding task, a deadline set a few days out, and the solemn expectation that the solution would come back pristine and wholly original. That era is over. The rise of Generative AI has demolished the old rules of assessment, forcing hiring managers to pivot from testing knowledge to validating true, deep-seated experience.

If candidates can now use ChatGPT to generate boilerplate code or quickly look up complex algorithms, how can the data science community accurately discern genuine expertise from well-prompted answers?

The Evolution of the Technical Challenge

The shift away from isolated, asynchronous take-home assignments is both real and necessary. We've moved toward collaborative, real-time shared environments.

The current standard involves tools like Visual Studio Live Share. This extension for VS Code enables instant, real-time collaboration, allowing the interviewer and candidate to share a codebase, debug, and edit files together, all from their own preferred editor setup.

However, even in this shared setting, candidates can easily use a secondary screen for ChatGPT or Copilot. The challenge is no longer whether they can solve the problem, but whether they can explain the solution's soul.

The Illusion of Correctness: Why Verbal Answers Fall Short

In a world where AI can flawlessly articulate the difference between a Type I and Type II error, or the purpose of Gradient Descent, the classic "Define X" verbal question has zero value in assessing seniority. A perfect answer might simply be a perfect paste from an LLM.
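To see the problem concretely, consider the kind of textbook snippet an LLM produces on demand. The following gradient-descent toy (a generic sketch, not tied to any library or to any particular candidate's work) is syntactically flawless and says nothing about whether its author has ever debugged a real training run:

# The sort of textbook gradient-descent answer an LLM pastes verbatim:
# correct, tidy, and zero evidence of hands-on experience.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function given its gradient, using a fixed step size."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

# Minimizing f(x) = (x - 3)^2, whose gradient is 2(x - 3); converges to x ~ 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))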

The core problem is that AI-generated answers often lack the contextual nuance, trade-off rationale, and war stories that define true experience.

🧠 The Experience Detector: Questions Only Peers Can Answer

The only defense against the "perfectly scripted" candidate is to abandon academic trivia and focus on behavioral questions rooted in complex, real-world failure and success. Only another experienced person can recognize the hallmarks of a candidate who has genuinely been in the trenches.

To detect this, interviewers must adopt "Tell me about a time..." or "What was the most difficult..." questions that probe specific past experiences:

Area of Expertise: Model Drift/Failure
The AI-Proof Question: "Tell me about a time a model you deployed to production started producing nonsensical results. What was the first thing you checked, and how did you mitigate the damage before fixing the root cause?"
What the Interviewer Is Listening For: Immediate action (monitoring, rollback, kill-switch); the ability to differentiate data drift from model decay; demonstrated calm under pressure. (See the monitoring sketch after this list.)

Area of Expertise: Design/Trade-offs
The AI-Proof Question: "Describe a time you had to choose between two viable solutions (e.g., a simple linear model vs. a complex deep learning model). What was the non-technical reason (cost, latency, business adoption) that ultimately drove the decision?"
What the Interviewer Is Listening For: Quantifiable trade-off rationale; an understanding of engineering cost; the ability to balance model performance against business constraints.

Area of Expertise: Data Quality/Debugging
The AI-Proof Question: "What was the most obscure data flaw you ever encountered that made a model behave strangely? How did you hunt down the hidden issue, and what specifically was the root cause?"
What the Interviewer Is Listening For: Specific, gritty details (e.g., time-zone alignment issues, subtle data leakage, non-random sampling bias); a methodical debugging process.

Area of Expertise: Communication/Influence
The AI-Proof Question: "Describe a time a technical approach you believed in deeply was challenged by a stakeholder or manager. How did you handle the conflict, and what was the final outcome?"
What the Interviewer Is Listening For: Communication skills (simplifying complexity for non-technical audiences); the ability to influence peers without authority; self-reflection on moments of being wrong.
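To make the Model Drift row concrete, here is a minimal sketch of the kind of monitoring a strong answer describes: a Population Stability Index (PSI) check that compares a live feature distribution against its training baseline and raises an alert before anyone touches the root cause. The 0.2 threshold and the synthetic data are illustrative assumptions, not universal values:

# Minimal drift-check sketch (illustrative, not production code): compare a
# live feature distribution against its training baseline using the
# Population Stability Index (PSI) and alert/roll back when it drifts.
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """PSI between a training-time sample and a live sample of one feature."""
    # Derive bin edges from the baseline so both samples are bucketed the same
    # way; extend the outer edges so out-of-range live values still count.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty buckets to avoid log(0) and division by zero.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values seen at training time
live = rng.normal(0.6, 1.2, 10_000)      # shifted live traffic -> simulated drift

psi = population_stability_index(baseline, live)
if psi > 0.2:  # common rule-of-thumb threshold; calibrate per feature in practice
    print(f"PSI={psi:.3f}: drift detected -- alert and roll back before root-causing")

A candidate who has lived through such an incident will immediately add caveats this sketch omits: which features to monitor, how the threshold was calibrated, and who gets paged.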

The Future of Technical Interviews: Collaboration and Auditing

The interview process must now mirror the modern workplace:

Embrace the Tools: Don't forbid ChatGPT. Instead, set the expectation: "Candidates may use any tool, but they must be able to justify and defend every line of code they produce." This shifts the evaluation from code generation to code auditing and critical thinking.

Focus on Modification: Instead of asking a candidate to write a function from scratch (which AI can do), hand them a buggy or sub-optimal piece of code (AI-generated or otherwise) and ask them to optimize it for a new constraint (e.g., "Optimize this for 10ms latency," or "Rewrite this to be 100% idempotent"). This requires reading, understanding, and modifying—skills AI cannot yet fully fake.
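A hypothetical version of that exercise (function names and scenario invented for illustration): hand the candidate a credit-application helper that double-applies on retries, and ask for the idempotent rewrite.

# Hypothetical "modification" exercise: the candidate is handed this
# non-idempotent credit function and asked to rewrite it so that retrying
# the same request never double-applies the credit.
def apply_credit(ledger: dict, account: str, amount: float) -> None:
    ledger[account] = ledger.get(account, 0.0) + amount  # re-running doubles the credit

# One idempotent rewrite: key each credit by a request ID so replays are no-ops.
def apply_credit_idempotent(ledger: dict, seen: set, request_id: str,
                            account: str, amount: float) -> None:
    if request_id in seen:  # replayed request -> do nothing
        return
    seen.add(request_id)
    ledger[account] = ledger.get(account, 0.0) + amount

ledger, seen = {}, set()
for _ in range(3):  # simulate client retries of the same request
    apply_credit_idempotent(ledger, seen, "req-42", "alice", 10.0)
assert ledger["alice"] == 10.0  # credited exactly once despite retries

The interesting signal is not the final code but the conversation around it: whether the candidate asks where the deduplication state lives, what happens when it is lost, and how the fix behaves under concurrent requests.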

Prioritize Architecture: For senior roles, move the assessment up the stack. System design questions—especially those that require real-time diagramming (whiteboarding)—are harder to cheat. Ask the candidate to design a scalable system for real-time inference, and then probe every minor decision: Why Kafka over RabbitMQ? Why choose a single-master database?

The future of hiring is about finding the human intuition that knows when the AI-generated solution is beautiful but fundamentally wrong for the business.