Tools

Hiring in the age of ChatGPT: Designing assessments that reveal real skills

By Charlotte Carnehl

“Did the candidate solve this… or did ChatGPT?”
Since generative AI went mainstream, that question has hung over take-home tasks and coding challenges. The reality: AI can already produce passable code, decent marketing copy and solid data analysis. If we keep assessing candidates in the same way, we risk hiring for output that doesn’t reflect genuine skill.

Practical, job-relevant assessments are among the strongest predictors of hiring success. Geoff Tuff and his co-authors emphasize that giving candidates assignments or work simulations is far more effective than interviews alone, as it provides a “minimally viable demonstration of competence”. These insights and findings from other studies underscore why abandoning assessments altogether in the AI era would be a mistake: the question is not whether to use them, but how to adapt them so that they remain predictive and fair.

So what does that mean concretely? You can try to AI-proof your assessments – designing exercises that surface judgment, creativity and lived experience. Or you can embrace AI as part of the process – evaluating how candidates prompt, verify and apply AI output to real problems. Most teams will benefit from a thoughtful mix, tailored to the role.

This article shows you how to adapt. We’ll explain why traditional take-homes are less predictive today, outline two practical paths (AI-proof vs AI-integrated) and point to tools and platforms that help you test skills, not just shortcuts. 

Why AI challenges practical assessments

Generative AI has lowered the barrier to producing “good enough” work. Tasks that once separated strong candidates from weaker ones – writing an essay, building a simple app, designing a presentation or analysing a dataset – can now be solved quickly with a well-crafted prompt.

That creates two main risks for hiring teams:

  1. A false sense of skill: A polished submission doesn’t always reflect a candidate’s true abilities. You might end up hiring someone who can use ChatGPT well, but lacks the underlying knowledge or judgment required to succeed in the role.
  2. Losing predictive power: The point of a case study or coding challenge is to predict on-the-job performance. If AI can generate passable answers, assessments risk turning into a test of who has the best prompt, not who will thrive in your team.

And this isn’t limited to developers. Marketing candidates can ask ChatGPT to draft campaign copy. Analysts can request instant data summaries. Even HR professionals can get ready-made policy documents. Almost every domain is affected.

In short: traditional take-homes are no longer a safe proxy for skill. Without adjustments, they risk telling you more about a candidate’s ability to copy-paste than their ability to think, decide and deliver in your context. So what can you do about that?

Option 1: Make assessments “ChatGPT-proof”

If your goal is to understand what a candidate can do without leaning on AI, the first step is to make sure your assessments can’t be solved with a quick prompt. That doesn’t mean banning AI altogether; it means designing exercises that surface uniquely human perspectives such as creativity, judgment or personal experience.

Here are a few ways to do that:

  • Run your test through ChatGPT first. Before using a case or assignment in your process, feed it to ChatGPT and see what comes back. If the answer looks convincing, refine the task so it requires more nuance, context or originality (see the sketch after this list for a scriptable version of this check).

  • Focus on creativity and judgment. Instead of asking for generic solutions, design tasks that involve trade-offs, contextual decisions or a personal point of view. For example, instead of asking candidates about the pros and cons of a specific programming language, ask what excites or frustrates them in their daily work – or what feature in an app they’d like to see delivered next, and why.

  • Ask for a demonstration of past work. Invite candidates to walk you through something they’ve built or led, be it a piece of code, a UX design or a marketing campaign. Ask them to explain their thinking, what challenges they faced and what they would do differently now.

  • Include live components. Adding a real-time element makes it harder to rely on AI alone. This could be a coding challenge done in a shared environment or a simulated phone call where candidates role-play handling a customer query.
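
One way to make that first check repeatable is to script it. Here’s a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY set in your environment; the model name and the sample task are placeholders, not recommendations:

```python
# Minimal sketch: feed an assessment task to ChatGPT and review the result.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Placeholder task – swap in your own case study or assignment.
assessment_task = (
    "Write a 200-word product description for a fictional fitness app "
    "aimed at busy parents. Mention pricing and include a call to action."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model candidates could access
    messages=[{"role": "user", "content": assessment_task}],
)

print(response.choices[0].message.content)
```

If the first-shot answer already reads like a hireable submission, that’s your cue to add more context, constraints or a personal angle before the task goes out to candidates.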

The aim isn’t to trick candidates, but to create conditions where genuine skill and thought process become visible.

Option 2: Integrate ChatGPT into the process

Instead of trying to block ChatGPT, another approach is to recognise it as part of how work is already done. Developers, marketers and analysts increasingly use AI tools to speed up routine tasks and free time for deeper thinking. So why not design your assessments with this reality in mind?

By allowing ChatGPT (or similar tools) in the process, you can test how a candidate uses it rather than whether they do. The key is to evaluate their ability to:

  • Formulate effective prompts. Good results with ChatGPT start with good inputs. Ask candidates to submit the prompts they used as part of their assignment or walk you through their prompting live. This lets you see if they can translate vague problems into precise instructions – a critical skill in many roles.

  • Interpret results critically. ChatGPT outputs are not always correct, complete or context-appropriate. Evaluate how candidates question what they get back. Do they spot errors in the code? Challenge generalisations in a market analysis? Adjust tone and style in copywriting? A strong candidate won’t just accept the output; they’ll refine it.

  • Apply AI to real-world problems. Ultimately, you want to know whether someone can bridge the gap between generic AI output and your company’s context. A great way to test this: give them an AI-generated draft (a piece of code, a policy, a campaign outline) and ask them to improve it. Look for thoughtful edits, clear reasoning and creativity in making the result fit the situation – an example draft is sketched right after this list.
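
To make that last exercise concrete, here’s a hypothetical example of the kind of flawed AI-style draft you could hand to an engineering candidate – the function and its defects are invented for illustration:

```python
# Hypothetical "AI-generated draft" for an improvement exercise.
# It runs on the happy path but hides issues a strong candidate should spot:
#   - crashes on an empty scores list (division by zero)
#   - silently truncates when scores and weights differ in length (zip)
#   - uses a mutable default argument, a classic Python pitfall
def average_scores(scores, weights=[]):
    if not weights:
        weights = [1] * len(scores)
    total = 0
    for score, weight in zip(scores, weights):
        total += score * weight
    return total / sum(weights)
```

In the debrief, listen not just for the fixes themselves but for whether the candidate can explain why each flaw matters and how they’d prevent it in production code.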

This approach mirrors the real workplace, where AI will increasingly act as a collaborator rather than a competitor. It shifts the focus from “can they beat ChatGPT?” to “can they work smartly with it?”

AI-safe tools to assess candidate skills

You don’t always need to reinvent your assessments from scratch. Instead of sending out tasks through your ATS or email, you can lean on dedicated assessment platforms that provide libraries of skill-assessment tasks and are designed with integrity and reliability in mind. Many of these tools offer built-in safeguards against over-reliance on AI, alongside features like live testing, proctoring and structured evaluations. From coding-specific platforms to broader assessment suites, they can help ensure you’re really testing for the skills that matter.

1. Coding & technical assessment tools

Platforms like CodeSignal, Codility, HackerEarth or HackerRank give you a structured way to evaluate real-world coding skills rather than polished submissions that could easily have been produced with ChatGPT. These tools typically provide a mix of timed challenges and role-specific tasks, and they come with anti-cheating measures such as browser monitoring, copy/paste detection or code tracking.

HackerRank: Various levels of AI integration in assessments

HackerRank makes explicit that the future of software development is “Human plus AI”. That’s why they designed their assessments to mirror this: you decide when in your hiring process candidates can (and cannot) use AI. The platform also has a built-in plagiarism-detection model that monitors signals to identify unauthorized AI usage and other suspicious activity, such as multiple individuals taking the assessment.

2. Psychometric and cognitive assessment tools

Not every critical skill can be captured in lines of code. Platforms like Arctic Shores and Cyquest focus on measuring soft skills, personality traits and cognitive ability. These assessments are especially valuable for roles where problem-solving style, resilience or interpersonal fit are just as important as technical know-how. Because the tasks are interactive and game-like, and grounded in psychometric design, the results are harder to fake and more predictive of how someone will behave and adapt in a real team setting.

Arctic Shores: Interactive, visual tasks 

Arctic Shores uses visual tasks that feel more like games than traditional psychometric tests. Instead of asking candidates to self-report through questionnaires (where AI-generated or rehearsed answers could slip in), the platform captures behavior in action – how a candidate responds to challenges, takes risks or makes decisions under pressure. This creates a richer and more authentic picture of personality and cognitive style, helping employers identify traits that align with role demands and company culture.

3. Platforms for multiple skills and roles

In addition to the more specialized tools above, there’s a wide range of platforms that cover multiple skills and domains. Tools like Adaface, Harver, Selectic, TestGorilla and Xobin offer libraries of tests that span technical, cognitive and soft-skill areas. They’re designed to give hiring teams flexibility: you can assess different roles using a single system, often with built-in safeguards like proctoring, plagiarism checks or question randomization.

TestGorilla: Multi-skill assessments with integrity measures

If you’re looking for a versatile platform that supports role-agnostic assessments while incorporating strong AI-aware safeguards, TestGorilla could be the right choice for you. This tool combines a library of skill-based tests (technical, cognitive, behavioral and role-specific) with several integrity features, such as full-screen enforcement (so candidates cannot switch windows), copy-paste disabling and IP monitoring (e.g. to detect rapid actions that suggest gaming the system).

Hiring with confidence in the age of ChatGPT

There’s no one-size-fits-all answer to making candidate assessments ChatGPT-safe. The right approach depends on your role, your team and your philosophy on AI in the workplace. Some companies will want to keep AI firmly outside the process, focusing on tasks that highlight uniquely human skills like creativity, judgment and experience. Others will choose to embrace AI as part of the assessment, testing how candidates collaborate with the tools that will likely shape their day-to-day work.

What’s clear is that assessments can no longer remain static – they need to evolve. By combining thoughtful design with the right platforms, you can build processes that remain fair, predictive and future-ready, helping you hire with confidence in the age of AI.

🤝 Are you looking for support to find new team members and design great hiring processes? Let us know

September 22, 2025