Overview
Below are the things to look out for when vetting talent: my general thoughts on the three or four biggest criteria I use when vetting high-level engineers who have used AI to intelligently leverage their natural skills.
Of course, that raises the question: what are a candidate's natural skills? It shouldn't be coding alone, but a holistic understanding of software engineering, along with an appreciation of the complex nuances that come up in computer science. It's the bridge between the two that matters.
1. Vetting for genuine AI-assisted development
First off, when it comes to AI for development, it's not just about whether someone can use an AI coding assistant. Frankly, anyone can get Copilot to spit out some boilerplate. The real question is, does it actually make them a better, faster, or more efficient developer? Are they using AI to transcend their previous limitations or just as a crutch for stuff they should already know? You need to look for symbiotic integration. Can they take the AI's suggestions, critically evaluate them, refine them, and ultimately produce code that's cleaner, more performant, or more robust than what they'd write solo, or even what a less skilled developer with AI would produce?
Think about it like this: giving a kid a calculator doesn't make them a mathematician. Giving a seasoned engineer a powerful AI tool should amplify their existing expertise, letting them focus on architectural decisions and complex problem-solving while the AI handles some of the grunt work. So, your vetting should probably involve practical coding challenges where AI use is encouraged, but the focus is on the final output quality and their ability to explain the why behind their choices, including how the AI contributed or where they had to overrule it. It's about intelligent leverage, not blind reliance.
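To make that concrete, here's a small, hypothetical illustration (the task and function names are invented, not taken from any real candidate or codebase): the kind of first-pass code an assistant will happily produce, next to the refinement you'd want a strong candidate to make and be able to justify.

```python
# Hypothetical AI first suggestion: correct and readable, but the membership
# check re-scans the growing list on every iteration, so it's O(n^2).
def dedupe_ids_ai(ids):
    unique = []
    for i in ids:
        if i not in unique:  # linear scan per element
            unique.append(i)
    return unique

# The refinement a strong candidate should reach for: same behavior and
# ordering, but O(n) via a seen-set, with explicit types for the reviewer.
def dedupe_ids_refined(ids: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for i in ids:
        if i not in seen:
            seen.add(i)
            unique.append(i)
    return unique
```

The snippet itself isn't the point; the point is whether the candidate can explain why the second version is better, and how they directed the assistant (or their own edits) to get there.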
- Ask these: "Walk me through a specific, complex coding task where you integrated an AI assistant. Where exactly did it accelerate the process? What kind of prompts did you use? More importantly, where did the AI screw up or give you suboptimal code, and how did you identify and correct that? What was your intellectual contribution beyond just accepting suggestions?"
- Look for these: Evidence of critical evaluation. They shouldn't be blindly copying and pasting. They should be able to articulate how they directed the AI, wrestled with its outputs, and ultimately produced superior code because of their skilled interaction with the tool, not just because the tool exists. Are they still the chief architect of the solution?
- Red flag: If they just say "it wrote the code for X feature" and can't detail the nuanced interaction, the iterations, or how they pushed the AI beyond its first, often mediocre, suggestion. If they treat AI like a magic black box, they don't get it.
2. Distinguishing AI sparring partners from glorified search users
Now, this idea of AI as a sparring partner for ideas, especially at a supposed "PhD level," is where things get really interesting and, frankly, where most will fall short. Is the candidate truly engaging in a dialectical process with the AI, pushing its boundaries and using it to forge novel insights? Or are they just treating it like a super-Oracle, asking questions and taking the first answer as gospel? That's the crucial difference. A PhD-level intellect doesn't just ask questions; it formulates hypotheses, critiques information, synthesizes disparate concepts, and generates new knowledge.
If the AI is "PhD level," the human needs to be its research advisor, guiding it, challenging its assumptions, and steering it away from plausible-sounding nonsense. Are they asking incisive questions that force the AI beyond its canned responses? Can they identify the AI's biases or limitations in a given context and work around them? This isn't about the AI having all the answers; it's about the human's ability to use the AI to explore complex problem spaces more effectively. Forget canned questions. You need to observe them tackling an ambiguous, multifaceted problem, live, using AI as their collaborator. Watch their thought process. Are they truly thinking with the AI, or just prompting it? The goal is to find individuals who achieve cognitive leverage, using AI to extend their own intellectual reach, not just echo existing information. They should be the ones making the AI look smart, not the other way around.
- Ask these (or better yet, make it a live exercise): "Describe a situation where you used an AI, like a large language model, to explore a really ambiguous or novel problem, something without a clear answer. How did you structure your interaction to elicit genuinely creative or non-obvious lines of thought from the AI? Give me examples of how you challenged its assumptions or guided it towards a deeper analysis."
- For a live exercise: "Here's a thorny strategic challenge we're mulling over. You've got 15 minutes and access to an AI tool. Show me how you'd begin to use it to dissect this problem and brainstorm potential pathways. Talk me through your prompts and your reasoning."
- Look for these: The ability to engage in a dialectical process with the AI. Are they asking layered, sophisticated questions? Are they synthesizing the AI's outputs with their own knowledge? Can they spot biases or hallucinations and steer the AI back on course? You want someone who can make the AI perform at a higher level.
- Red flag: Candidates who ask simplistic questions, take the AI's first response as definitive truth, or can't demonstrate how they iteratively refined their approach with the AI. If their "sparring" looks more like a Q&A with a slightly dim intern, pass.
3. Grading AI-driven experimentation
Using AI for experimentation is where I believe the real acceleration can happen. Whether it's rapidly prototyping code, A/B testing design ideas, or simulating complex systems, AI can compress timelines dramatically. But again, the tool is only as good as the hand wielding it. You're looking for a methodical and creative experimental mindset, amplified by AI.
Can the candidate define a clear hypothesis? Can they design an experiment where AI is used to generate variations, simulate conditions, or analyze results in a way that wouldn't be feasible manually? For instance, can they use AI to explore a dozen different algorithmic approaches to a problem in an afternoon, rather than spending weeks on just one or two? It's about the velocity of iteration and the ability to learn from these rapid experiments. You'd want to see if they can not only set up these AI-powered experiments but also critically interpret the outputs, understand the limitations, and then iterate further. Give them a challenge like optimizing a piece of code for an obscure metric or generating a range of creative solutions to a design problem, and see how they employ AI to explore the possibility space. Are they just throwing things at the wall, or is there an intelligent strategy behind their AI-driven exploration? This is about using AI to navigate uncertainty and discover optimal paths faster. It's the difference between randomly digging for gold and using advanced sensors to pinpoint the motherlode.
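As a rough sketch of what that hypothesis-driven loop can look like in practice (the approaches, metric, and numbers here are invented for illustration, not a prescribed setup): a minimal harness that runs several candidate implementations, AI-generated or not, against the same data and one explicit metric, so the comparison rests on evidence rather than vibes.

```python
import random
import statistics
import time

def approach_sort_and_index(xs):
    """Candidate A: sort the data, take the middle element."""
    s = sorted(xs)
    return s[len(s) // 2]

def approach_stdlib_median(xs):
    """Candidate B: lean on the standard library's median_low."""
    return statistics.median_low(xs)

CANDIDATES = [approach_sort_and_index, approach_stdlib_median]

def run_experiment(n=50_000, trials=5, seed=0):
    """Hypothesis: candidate B is no slower than A on large unsorted input.
    Same data, same trial count, one metric: median wall-clock time."""
    random.seed(seed)
    data = [random.randint(0, 1_000_000) for _ in range(n)]
    for fn in CANDIDATES:
        times = []
        for _ in range(trials):
            start = time.perf_counter()
            fn(data)
            times.append(time.perf_counter() - start)
        print(f"{fn.__name__}: median {statistics.median(times) * 1000:.2f} ms")

if __name__ == "__main__":
    run_experiment()
```

In the debrief, what matters is whether they can state the hypothesis up front, say what the metric does and doesn't capture, and explain what they'd try next based on the numbers.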
- Ask these: "Tell me about a time you used AI to quickly prototype an idea, test a hypothesis, or explore multiple solution variants. What was the core question you were trying to answer? How did AI allow you to conduct these experiments faster or at a greater scale than traditional methods? What did you learn from the process, even if the experiments 'failed'?"
- And these: "Let's say we need to radically improve [specific product feature] for [a niche user need]. How would you leverage AI to design and run a series of experiments to find breakthrough solutions, rather than just incremental tweaks?"
- Look for these: A methodical, hypothesis-driven approach. They should be able to articulate how AI can be a core part of the experimental loop, from generating ideas and creating prototypes to simulating outcomes and analyzing results. It's about increasing the velocity of learning.
- Red flag: Vague talk about "trying things." If they can't describe a structured experimental process where AI plays a key role in compressing timelines or expanding the scope of exploration, they're likely not operating at the level you need.
4. Proactivity in tool exploration (nice-to-have)
Think of it this way: some people wait for the company to provide them with a map and a compass. The innovators, the real 10x engineers in this new paradigm, are out there with their own telescopes, spotting new constellations of tools before they even hit the mainstream charts. They're the ones who see a new AI model pop up on a tech blog, a research paper, or even a random YouTube deep-dive, and their immediate instinct is, "Huh, I need to get my hands on that. Now. How can I break it? How can I make it do something amazing? How does this fit into my arsenal?" This isn't about following a training manual; it's about an insatiable curiosity and a drive to continuously upgrade their own capabilities.
This self-initiated tinkering is, I believe, the bedrock of creative problem-solving and genuine experimentation. It's not enough to be proficient with the tools you're given; the real value comes from those who possess an intrinsic motivation to discover, evaluate, and integrate emerging technologies into their workflow, often before anyone asks them to. They see a new AI painting tool and wonder if its core architecture could be adapted for, say, anomaly detection in sensor data. That's the kind of lateral thinking and proactive engagement that separates the doers from the true innovators.
- Ask these: "What's the most interesting or powerful AI tool or technique you've explored on your own time in the last few months? What made you look into it? How did you kick its tires? Did you see any unexpected potential applications, even if they're not directly related to your current work?"
- And these: "How do you personally keep pace with the insane speed of AI development? Can you give me a concrete example of something you learned from a source like a research paper, a tech community, or even a YouTube video that has since changed how you approach problem-solving or development?"
- Look for these: Genuine, unprompted curiosity. They should light up when talking about new tools. They should have specific examples of self-initiated learning and tinkering. This demonstrates a passion and a drive to continuously upgrade their own capabilities. This is about tool scouting as a habit.
- Red flag: Candidates who only know the standard corporate-approved tools, seem unaware of recent breakthroughs, or show no personal initiative in exploring the AI frontier. If they're waiting to be told what to learn, they're already obsolete.
Quick cheat sheet: green lights vs. red flags
- Green lights:
- Deep, nuanced understanding of AI's capabilities and current limitations.
- Concrete, impressive examples of how AI has tangibly improved their work or thinking.
- Clear evidence of critical thinking with AI, not just reliance on AI.
- A palpable passion for exploring new AI frontiers and tools proactively.
- Ability to articulate a compelling vision for how AI will reshape their domain.
- They challenge your thinking about AI, in a good way.
- Red flags (proceed with extreme caution, or just don't):
- Heavy on buzzwords, light on specific, verifiable examples.
- Over-reliance on AI for tasks they should be able to do themselves; using it as a crutch.
- Inability to critically evaluate or discuss the flaws/biases in AI outputs.
- A shocking lack of curiosity about new tools or developments in the AI space.
- Defensive or dismissive when discussing AI's current limitations or ethical concerns.
- They sound like they just read an "AI for Dummies" book yesterday.
These individuals will be force multipliers. Don't settle for someone who just knows how to prompt. Find the ones who know how to think with AI, experiment with AI, and are constantly driven to find the next AI tool.