Interview Security: How to Combat Cheating on AI Interviews
By Jack Ferreri • Hiring insight
One of the newest challenges in remote interviewing is not obvious cheating. It is invisible assistance.
There are now tools that can sit as an overlay on a candidate's screen and provide real-time suggestions while they answer questions. In some cases, these tools are designed to help people respond more confidently during interviews, meetings, or sales calls by giving them prompts without making it obvious to the person on the other side.
That creates a difficult problem for hiring teams.
A candidate may appear to be answering naturally, but they could be reading from an AI-generated response, checking a second screen, or receiving help from someone in the room. The interviewer might not notice because the candidate is still looking generally toward the camera, speaking clearly, and giving polished answers.
This is why interview security has to move beyond basic video recording. Hiring teams need a way to tell not only who is on camera, but whether the answers appear to be genuinely coming from that candidate in the moment.
How eye tracking helps address this
Eye tracking and attention signals can help identify when a candidate may be relying on outside help.
For example, if a candidate repeatedly looks to the same off-screen area before answering difficult questions, that could suggest they are reading from a second monitor, notes, or another person nearby. If their gaze shifts in a consistent pattern during technical or high-pressure questions, that may be worth flagging for review.
The same applies to screen overlays. If a candidate is reading from an on-screen assistant, their eye movement may look different from someone thinking naturally. They may scan text, pause at unusual moments, or maintain a pattern that suggests they are following prompts instead of forming answers independently.
This does not mean eye tracking should automatically accuse someone of cheating. People naturally look away when thinking, and nervous candidates may behave differently under pressure. But when eye movement is combined with response timing, speech patterns, transcript analysis, and video review, it gives hiring teams a stronger picture of whether the interview environment looked trustworthy.
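The multi-signal approach described above can be sketched as a simple review heuristic. This is a minimal illustration, not Evy's actual model: the signal names, data structure, and thresholds are all assumptions made for the example, and a real system would weigh far more evidence.

```python
from dataclasses import dataclass

# Hypothetical per-question signals; field names and thresholds are
# illustrative assumptions, not any vendor's real API or model.
@dataclass
class QuestionSignals:
    off_screen_glances: int    # gaze shifts toward the same off-screen area
    pre_answer_pause_s: float  # silence between question end and answer start
    reading_cadence: bool      # speech rhythm consistent with reading aloud

def flag_for_review(history: list,
                    glance_threshold: int = 3,
                    pause_threshold_s: float = 4.0) -> bool:
    """Flag an interview for human review only when several weak signals
    co-occur and repeat across questions. No single signal -- and no
    single question -- is treated as proof of outside assistance."""
    suspicious = [
        s for s in history
        if s.off_screen_glances >= glance_threshold
        and (s.pre_answer_pause_s >= pause_threshold_s or s.reading_cadence)
    ]
    # Require the pattern on a majority of questions, not a one-off,
    # so a nervous glance or a single long pause never triggers a flag.
    return len(suspicious) > len(history) / 2
```

Note that the function only flags an interview for human review; consistent with the point above, the decision about what a pattern means stays with the hiring team.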
How Evy combats this
At Evy, we are building interview security around the reality of modern AI tools.
Candidates are no longer just preparing with AI before interviews. Some may try to use AI during the interview itself. That is why Evy looks beyond the words in the transcript and pays attention to the behavior around the answer.
By using eye tracking, attention patterns, response timing, and other interview metrics, Evy helps companies detect signs that a candidate may be reading from an overlay, checking a second screen, or getting help from someone off camera.
The goal is not to punish candidates for being nervous or looking away once. The goal is to identify repeated patterns that suggest outside assistance may be influencing the interview.
A strong interview process should protect both sides. Candidates who answer honestly deserve a fair evaluation, and companies deserve confidence that the person they are evaluating is truly demonstrating their own ability.
That is the security challenge AI interviews have to solve. Not just faster interviews. Not just better summaries. More trustworthy interviews.
Related reading:
- Risks, fairness, and fitting AI screening into your process
- AI candidate screening: FAQ and next steps