Want A New Job? You’d Better Impress The AI Interviewer

Image generated via Adobe Firefly AI
- AI may be changing how we act in job interviews, according to new research from the Rotterdam School of Management, Erasmus University (RSM).
- Candidates interviewed by AI highlight their analytical capabilities and downplay more intuitive or emotional qualities because they believe they’ll gain a better score.
- This research raises many questions about the future of recruitment and how companies should use AI going forward.
If you’re applying for a job in 2025, you may be interviewed by an AI bot. In fact, according to a recent study from Resume Builder, 82 percent of firms now utilise AI to review resumes, while 40 percent employ AI chatbots to communicate with candidates. That number is expected to increase drastically in the next few years.
These bots can screen, shortlist and talk to job candidates, before analysing their answers and feeding the results back to the recruiter.
Candidates are flocking to TikTok to share their experiences of AI interviewers – and the results are less than positive. One viral video, from user Leo Humps, showed an AI bot badly malfunctioning during an online interview, with the caption, ‘POV: After months of trying you finally land an interview with your dream job and this happens.’
Others show candidates using ChatGPT on their phones to answer AI bots, in what is being dubbed ‘the AI final boss’.
Even companies that don’t use an AI bot in interviews may still use AI tools to track and analyse an interviewee’s answers, rating the person against set criteria – something that was previously done by humans.
“We are humans, and a significant part of our communication skills are nonverbal,” writes LinkedIn user Diana Silva Head, about her job interview with an AI bot. “We use nonverbal cues to help us understand if our answers resonate with interviewers, evaluate hiring teams, and determine whether this is an organisation where employees are enabled to do their best work.”
These non-verbal skills are in danger of being undervalued in the hiring process, and in some cases abandoned altogether.
How should you act in front of an AI bot?
New research from the Rotterdam School of Management, Erasmus University (RSM) suggests we may be acting differently in job interviews, too.
Candidates who think they are being assessed for a job by AI are more likely to highlight their analytical capabilities and downplay more intuitive or emotional qualities because they believe they’ll gain a better score.
The researchers, Dr Anne-Kathrin Klesse, Professor of Consumer Behaviour & Technology at RSM, and co-researchers PhD candidate Jonas Görgen and Dr Emanuel de Bellis, conducted 12 studies with over 13,000 participants. They recorded how people behaved (or said they would behave) when they were assessed by AI compared with a human assessor in real and simulated assessment settings.
For comparison, the researchers also collaborated with a Rotterdam-based start-up which offers competency-based and fair hiring software that doesn’t use AI. The start-up surveyed applicants after they completed the application process.
By simulating a job selection process and documenting how applicants presented themselves either to AI or human assessors, the researchers were able to document how their behaviour can have consequences for who gets selected – or rejected – for a position.
The results indicated that candidates acted differently if they thought they were being assessed by AI – something which, the researchers state, could result in employers not recruiting the type of candidate they were hoping for.
“The finding that people strategically highlight certain capabilities or characteristics implies that candidates present a skewed picture of who they really are,” said Dr Klesse.
Furthermore, by downplaying or overlooking those emotive skills, employers might be cutting themselves off from harnessing a highly valuable skillset in their staff.
The researchers suggest organisations must take greater care when designing and communicating AI assessment tools to avoid unintentional biases.
“If your organisation uses AI in hiring, promotion, or evaluation, you should be aware that these tools do more than just change the process. They may also influence who gets the job,” Dr Klesse says.
Is AI destroying our ability to think?
But it’s not just the hiring process that’s using AI. If you do get the job, it’s likely you will be encouraged – or told – to use AI tools. They are now used in almost every organisation.
According to a McKinsey Global Survey on AI published earlier this year, more than three-quarters of respondents say their organisations now use AI in at least one business function.
However, in many cases this content is taken at face value, and not subjected to human scrutiny. Just 27 percent of people surveyed say that employees review all content created by gen AI before it is used – for example, before a customer sees a chatbot’s response or before an AI-generated image appears in marketing materials.
A similar share says that 20 percent or less of gen-AI-produced content is checked before use in their organisation.
And the risks of AI are wider than just potential damage to companies’ brands – it may also be impacting the capabilities of employees themselves.
A recent MIT study, which has been trending on LinkedIn, finds that using tools like ChatGPT can lead to ‘cognitive debt’ and a likely decrease in learning skills.
The researchers asked a group of 54 adults to write a series of three essays over four months, using either ChatGPT, a search engine, or their own brains. MIT measured their cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays.
AI had a notable effect. The individuals who used ChatGPT had significantly lower cognitive engagement than the other two groups. They also had a harder time recalling quotes from their essays and felt a lower sense of ownership over the final product.
Some participants then switched roles for a final, fourth essay. Those who previously used AI but were subsequently required to use their brains were found to perform worse, and their engagement was far lower than that of the brain-only group.
It is worth noting, though, that only 18 participants (six per condition) completed the fourth, final session. Therefore, the researchers add, these findings are preliminary and require further testing.
What will the future of work look like with AI?
Much like the public introduction of the internet in the 90s, it’s impossible to know where AI might take us in the future.
At the time of writing, there is very little AI-specific legislation, and its future has recently been up for debate, especially in the US. How that debate resolves is likely to have a significant impact on how AI is able to evolve.
Workplaces are adapting quickly and testing out new ways of using AI in day-to-day processes like recruitment, and for many professionals, it is a time-saver. For others, it’s another step toward removing the humanity from many processes.
And companies that lose their human touch stand to lose their custom, too.
After all, analytics and critical thinking may be valued by robots, but empathetic leadership will always be valued by humans.