ChatGPT lets candidates ace verbatim LeetCode questions but falters on custom problems, a strong signal that companies need to replace standard questions with original ones to stop cheating.
Technical leaders need to understand that ChatGPT is not just a productivity tool; it can also be a cheat engine for interview questions lifted straight from public repositories. Interviewing.io ran a controlled experiment with 37 mock interviews in which participants were instructed to rely on ChatGPT. When asked verbatim LeetCode questions, 73% of candidates passed, and with only minor modifications to the same problems the pass rate stayed high at 67%. The model simply regurgitated known solutions, handing candidates polished answers.
The same study showed a stark drop when interviewers asked truly custom problems: only 25% of candidates passed, below the platform's normal 53% success baseline. Interviewers reported no suspicion of cheating in any of the sessions, and candidates felt confident they were getting away with it. The data makes clear that standard algorithmic questions provide little signal when AI assistance is possible.
The actionable insight for engineering managers is to redesign interview question sets immediately. Custom, domain-specific problems force candidates to demonstrate original thinking and cut off AI-driven shortcuts. This shift also improves the candidate experience and yields more reliable hiring decisions, reducing the risk of hiring people who merely memorized or copied solutions.
Check out the full stdlib collection for more frameworks, templates, and guides to accelerate your technical leadership journey.