Merit, Learning, and Talent Acquisition in the AI Era (Part 1: Hire)
This is Part 1 of a short series I’ll publish as summer winds down; think of it as “reading club meets operating model.” Each installment will pair one phase of the Talent Lifecycle with a research article, essay, or data set that helps us rethink that phase for the AI era. For this installment, the pairing is David Brooks’s Atlantic essay: ostensibly about elite college admissions, it reads like a talent assessment brief for the workplace you may run.
Even when I don’t land on the same conclusion David Brooks does, I appreciate how he invites us to zoom out and ask, “What are we really rewarding?” His piece on elite admissions did that for me – except my brain immediately ran it through the filter of my day job: helping executive teams and HR leaders rethink how their workforce should adapt, grow, and deliver value in an era where AI is quickly taking over the testable parts of knowledge work.
Quick disclaimer before the pitchforks come out: I do not think universities are terrible. I owe much of my personal development to American University & Villanova University. I frequently guest lecture, I’ll teach as an adjunct, and some of my favorite people are academics. This isn’t an anti–higher ed rant; it’s a translation. Brooks argues that our prestige signals (tests, ranks, brand names) often miss what we really need at work: people who exercise judgment, build with others, and create value in messy, changing conditions. That hits especially hard right now, because AI is quickly absorbing the tidy stuff.
So, in this essay, I’m narrowing the aperture to one phase of the talent lifecycle: Hire. I’ll reflect on the Brooks piece and then offer concrete recommendations for updating your talent acquisition strategy for the AI era – how to evaluate candidates for judgment, collaboration, and AI fluency without falling back on… vibes. (As a former vibes hire myself, I say this with love: let’s retire vibes and hire to the job.)
And yes, I’ll sprinkle in a little of my own perspective – lightly humorous, mostly helpful, and with zero shade at your GPA (I’m sure it’s lovely).
First, let’s brief on Brooks.
In How Ivy League Admissions Broke America, David Brooks argues that elite admissions morphed from a college sorting mechanism into a national ideology – one that over-rewards what’s easy to test, favors narrow “spikes” of knowledge over well-rounded capability, and quietly stratifies opportunity. He threads in research (hello, Keith Stanovich’s dysrationalia) and examples of project-based education to make a bigger point: the real premium is on a person’s judgment, character, collaboration, and practical creativity. Brooks himself is a longtime New York Times columnist and commentator (PBS/NPR), and the author of The Road to Character and The Second Mountain. He’s known for mixing social science with moral philosophy in a way that makes you nod, sigh, and reconsider your life choices (in a good way).
Below is a summary of what I found most interesting, connecting education to the business world:
1) Brains don’t always equal better judgment
Brooks reminds us of dysrationalia, a term coined by cognitive psychologist Keith Stanovich: very smart people can be great at defending bad ideas. Also, the traits that schools reward – solo speed, rule-following, pleasing authority – don’t map neatly to modern work, which is collaborative, ambiguous, and customer-shaped. In other words, a transcript is a decent predictor of “can you complete well-defined tasks alone,” not “can you make sound choices with other humans when the problem is fuzzy and the stakes are real.”
2) The “spiky” vs. “well-rounded” trap
Admissions (and, in my experience, many hiring funnels) overvalue “spikes” – narrow, elite-caliber strengths – at the expense of curiosity, breadth, and connective tissue. The result: gorgeous résumés that struggle in cross-functional work. You need people who can learn quickly (what Leslie Valiant, a computer science professor at Harvard, refers to as “educability,” the ability to learn from experience) and play nicely across domains. Think “T-shaped”: deep somewhere, literate everywhere, and socially skilled enough to move the work forward.
3) Assess portfolios, not just scores
A portfolio shows how someone thinks and collaborates; a score shows one-off performance. Guess which one is more useful when you’re deciding who gets the scary client meeting.
4) And yes: AI changes what matters
A huge chunk of what we used to label “elite cognitive work” is precisely what AI is already very good at: summarizing, structuring, pattern-spotting, and first drafts. That doesn’t make those skills irrelevant; it just means they’re increasingly automated. The premium shifts to problem framing, judgment, ethics, creativity, and the ability to wrangle humans and machines together into a result.
5) What to select and cultivate going forward
Brooks argues for valuing a different set of traits, and for the HR leaders reading this, it’s worth spelling out what those traits look like on the job:
Energy: Bias for action. You see tasks move from “we should” to “we did.” Signals: shipped artifacts, documented experiments, the kind of person who shows up with a draft rather than a request for permission.
Initiative: Spots opportunities without being spoon-fed. Signals: self-started projects, role expansion stories, “I noticed X, so I pulled data Y and tried Z.”
Curiosity: Asks better questions, not just more of them. Signals: cross-domain reading, unusual pairings (“we tried a call-center tactic in claims ops”), and most importantly these days: a willingness to be the least smart person in a new room.
Generosity: Makes teammates better. Signals: well-documented work, reusable templates, mentoring, clean handoffs… things that go beyond the individual.
Sensitivity: Reads the room and the context. In some cases, diplomatic. Signals: adaptability with different stakeholders, evidence that they understand power, incentives, and data privacy/regulatory constraints.
Resilience: Processes feedback without either collapsing or ignoring it. Signals: a post-mortem mindset, storylines of setback → adjustment → improved outcome.
Commitment to the common good: Optimizes for the whole, not just their own slice. Signals: decisions that trade personal credit for customer or team value; a sturdy ethical core around how AI is used (fairness, transparency, IP).
Bottom line: If you’re a CEO, CHRO, or business unit leader, this is an invitation to redesign your opportunity structure and assessment methods, so your company stops proxy-hiring for test-taking prowess and starts deliberately growing judgment, collaboration, and initiative.
Now, let’s talk about what this looks like for you.
Because this is a long blog – those of you who know me know I am long-winded – here’s the payoff: a practical playbook with the guidance I give to clients who want to modernize talent acquisition systems for an AI-rich workplace.
Integrate portfolio reviews into your talent acquisition funnel (with AI transparency): Ask for two or three artifacts (memos, analyses, dashboards, code, process maps, training docs… the list goes on) and a 10-minute ‘defense’ of that portfolio from the candidate. To go a step further, require a brief AI appendix. Ask which tools they used, sample prompts/prompt patterns, why they chose them, how they validated outputs (sources, data checks, alt methods), and what they changed after feedback. During the defense, listen for a healthy ‘human-in-the-loop’ cycle – framing → prompting → evaluating → revising → deciding, not copy-paste heroics. You’re testing sense-making and judgment: Do they collaborate with AI, question it, cite and verify, and document decisions? You want someone who treats the tech like a capable teammate, not a shortcut.
Leverage collaborative auditions, not solo puzzles: Run a small-group working session on a case study, with an AI copilot available to everyone and, if your team has the capacity, 2–3 members of the current team in the room. Observe how candidates frame problems, divide labor, prompt the model and their teammates, challenge assumptions, validate outputs, and converge on a plan. (If someone tries to delegate all thinking to the model, that’s… a data point.)
Score what you actually need – with a real rubric, not vibes: Before interviews, write 2–4 concrete outcomes the hire must deliver in the first 6–12 months (e.g., “stand up an HR analytics dashboard,” “cut onboarding cycle time by 20%”). Build a simple rubric against those outcomes that includes the human traits that predict success in an AI-rich workplace; the traits from the Brooks article are listed below to get you started. Use a 1–5 anchored scale with behavioral indicators, not adjectives. Have interviewers capture evidence and score independently before group debriefs, then decide against the rubric. Examples (with a lightweight scoring sketch after the list):
Energy: moves from idea to a low-fi prototype during the exercise.
Initiative: spots an opportunity and proposes a next step without prompting.
Curiosity: asks clarifying questions; tests alternative prompts/approaches.
Generosity: creates reusable docs/templates; invites others into the work.
Sensitivity: adjusts plan for legal/privacy or stakeholder concerns unprompted.
Resilience: incorporates feedback after a miss and improves on the next pass.
Commitment to the whole: optimizes for team/customer value over personal credit.
AI collaboration: frames the problem, shows prompts/patterns used, validates sources, documents decisions – no copy-paste shortcuts.
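For teams that track interview data in a spreadsheet or ATS export, here’s a minimal sketch of what “score independently, then debrief” can look like in practice. It’s illustrative only – the two interviewers, the sample scores, and the spread-of-2 discussion threshold are assumptions on my part, not a prescribed tool – but the structure mirrors the rubric above: one score per trait per interviewer, rolled up so the debrief starts with the disagreements.

```python
from statistics import mean

# Traits from the rubric above, scored on the 1-5 anchored scale.
TRAITS = [
    "Energy", "Initiative", "Curiosity", "Generosity",
    "Sensitivity", "Resilience", "Commitment to the whole", "AI collaboration",
]

# Hypothetical independent scores from two interviewers (evidence notes omitted here,
# but in practice each number should be backed by a captured behavioral example).
scores = {
    "Interviewer A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4, 4, 5])),
    "Interviewer B": dict(zip(TRAITS, [4, 2, 4, 4, 4, 3, 5, 3])),
}

# Roll up: average per trait, and flag big gaps between interviewers for the debrief.
for trait in TRAITS:
    ratings = [person_scores[trait] for person_scores in scores.values()]
    spread = max(ratings) - min(ratings)
    flag = "  <-- discuss in debrief" if spread >= 2 else ""
    print(f"{trait:24s} avg={mean(ratings):.1f} spread={spread}{flag}")
```

Run it and you get eight lines, one per trait, with the average and a flag wherever interviewers landed two or more points apart – a concrete agenda for the debrief instead of “so, what did everyone think?”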
Down-weight pedigree to tie-breaker status: Make school, GPA, and employer tier a tie-breaker at most. Calibrate this change with regular validity checks: Which hiring signals are correlated with 6- and 12-month performance in your context? (Adam Grant’s point stands: academic excellence is a weak long-term predictor.)
Hire for T-shapes, not needles: Look for a visible “major” and a couple of “minors” in the candidate’s skills and abilities. I love this excerpt from the article: “In their book, Talent: How to Identify Energizers, Creatives, and Winners Around the World, the venture capitalist Daniel Gross and the economist Tyler Cowen argue that when hiring, you should look for the people who write on the side, or code on the side, just for fun. ‘If someone truly is creative and inspiring,’ they write, ‘it will show up in how they allocate their spare time.’ In job interviews, the authors advise hiring managers to ask, ‘What are the open tabs on your browser right now?’”
Closing
If Brooks is right (and I think he is), the signals we’ve relied on to predict success don’t line up with the outcomes we need. AI only widens that gap. Hiring is the leverage point: get the inputs right and everything downstream (hopefully!) gets easier. That’s why I started here, with practical changes to how you evaluate portfolios, run collaborative auditions, and measure the human traits that matter in an AI-rich workplace.
This is also the work I do with clients – helping them redesign their talent acquisition strategy, build custom rubrics and interview guides, and run strategy workshops that align leaders on what success looks like in their AI-era workforce. If your hiring still leans on pedigree and pleasant interviews, consider this your gentle nudge: ask for the work, listen for the judgment, and make space for curiosity, initiative, and generosity to show up. The résumé can be lovely. The portfolio is where the truth lives.