Compounding Gains: Tiny Moves That Add Up in an AI Era

This is Part 2 of a series I’m publishing as summer winds down; think of it as “reading club meets operating model.” This round, the theme is HR Strategy (annual planning-season edition), and the “texts” aren’t exactly beach reads: Wharton’s Projected Impact of Generative AI on Future Productivity Growth and the Federal Reserve’s Beige Book. Together, they make a clear case for small, durable changes that capture the AI productivity bump without losing sight of people.

I just got back from vacation and, in the spirit of light reading, cracked open two very different but surprisingly complementary reports: Wharton’s Projected Impact of Generative AI on Future Productivity Growth (published yesterday… yes, even I was very productive this morning) and the Federal Reserve’s Beige Book (nothing says “post-vacation” like anecdotes about input prices). One gives us the macro lens on where productivity is headed; the other shows what’s actually happening on the ground in companies right now. Together, they tell a useful story: the AI productivity bump is coming, but the real change shows up in the small, steady shifts leaders are making today.

First, a quick brief on the books.

  • The AI bump is real, but short-lived. Wharton models a hump-shaped effect: productivity growth rising roughly two-tenths of a percentage point at its early-2030s peak, then settling into a small but permanent lift (~0.04% a year). The Beige Book shows what that looks like in the field: firms piloting AI in back-office tasks, shaving cycle times, letting attrition thin headcount rather than backfilling, and staying cautious on white-collar hiring. Things are moving already!

  • Capacity without backfills is the new normal. The Beige Book points to flat headcounts and cautious white-collar hiring; roles are trimmed through attrition rather than layoffs, with the work covered by automation and temps instead of backfills. Wharton’s outlook explains why: productivity gains from task automation let firms meet demand without rehiring one-for-one. And for my HR Analytics friends: yes, RTO did double as an attrition lever in many places (we were all thinking it).

  • Mid- to high-wage roles are most exposed. Knowledge workers (engineers, analysts, consultants) sit in the ‘high-exposure’ zone for AI. District reports echo slower white-collar hiring even as technical and service roles persist. I should note: these findings run counter to my earlier “disappearing first rung” take that entry-level roles were most at risk. I still believe true subject-matter experts will be fine — and likely capture much of the productivity bump — but the report’s findings add nuance. There’s real uncertainty here, so plan for both: reskill the middle-to-upper wage tier while preserving high-quality early-career pathways.

  • Adaptability beats prediction. Both reports emphasize uncertainty. Resilience won’t come from guessing the exact curve; it’ll come from flexible org designs, clear governance, and pragmatic training.

Now, let’s talk about what this looks like for you.

It’s business planning season for my clients. That makes this the perfect time to step back and ensure your HR strategy lines up with what we’re seeing in the market. Wharton’s projected 0.04% annual productivity lift is a reminder that seemingly small changes, compounded year after year, can reshape an entire economy. Beige Book anecdotes (tweaks to hiring, automation, and contracts) add up to new cultural norms. I think the smart play for HR is to stack small, durable process changes now, not wait for a capital-T ‘Transformation’. And as leaders, we need to bake those shifts into our strategic plans.
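If you want to see that compounding logic in numbers, here’s a back-of-the-envelope sketch. It’s purely illustrative and assumes the 0.04-percentage-point lift applies uniformly every year:

```python
# Back-of-the-envelope: the cumulative level effect of a permanent
# ~0.04 percentage-point lift in annual productivity growth.
# Illustrative only; assumes the lift applies uniformly every year.
for years in (10, 25, 50):
    level_effect = (1 + 0.0004) ** years - 1
    print(f"After {years} years: output is roughly {level_effect:.1%} higher")
```

A fraction of a percent never feels like much in any single year, but on an economy-sized base it compounds into real money, and the same logic applies to stacking small process changes inside your own organization.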

Here are four suggested shifts:

  1. Set “AI drafts, humans decide” rules: Use your annual planning cycle to redraw decision rights and clarify workflows where AI is, or could become, part of the process. Who owns the “first draft” when AI generates it, and who signs off? Build AI governance into your operating model now; think lightweight ethics reviews, oversight boards, and escalation pathways. In practice: Set a policy that when AI drafts a client-facing document or report, the business owner edits and approves, while compliance provides final oversight. This preserves speed without losing accountability.

  2. Make AI fluency a requirement, not a side project: Budget for broad-based AI fluency in 2026 — not just for IT or data teams, but for every function. Pilot micro-skilling programs tied to real business use cases (prompt design for sales decks, data interpretation for HR reports, client-facing communication support, the examples are endless). In practice: Launch a short “AI for Managers” program where leaders practice using AI to summarize meeting notes, generate job descriptions, or analyze engagement survey data. The point isn’t technical depth; it’s confidence and fluency across the organization — and training people to resist the copy-paste urge and think critically about outputs.

  3. Listen, then level the playing field: Start by asking employees how AI is landing (what helps, what hurts, and where it feels unfair) and use that input to avoid creating “haves and have-nots.” Wharton suggests AI-exposed office sectors will grow faster, but the Beige Book reminds us people don’t always feel the upside right away. Your job is to make sure concerns are heard and productivity gains are shared and evenly spread. In practice: Add 3–4 AI-readiness items to your next survey (e.g., empowerment, trust in outputs, perceived fairness), run function-level listening sessions, and publish clear eligibility criteria for tools. Rotate pilots beyond HQ, track adoption and output by team, role, and shift (a minimal analytics sketch follows this list), and close gaps with targeted enablement and shared rewards so no group is left behind.

  4. Train leaders to manage the miss: Incorporate AI into your 2026 leadership programs. Leaders need to know how to run hybrid teams where human and machine contributions blend seamlessly. And they need to know what skills to assess on their teams, doubling down on what AI cannot replicate: judgment, empathy, adaptability, and the ability to inspire trust. In practice: Build a simulation where leaders respond to an AI-generated error in a high-stakes project. The lesson isn’t about the technology; it’s about modeling transparency and showing how leaders handle uncertainty when the “co-pilot” makes a mistake.
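For my HR Analytics friends, here’s what the tracking step in shift #3 could look like in practice. This is a minimal sketch, not a prescribed tool: it assumes pandas and a simple survey export, and the column names (team, used_ai_this_month, trust_in_outputs, perceived_fairness) are hypothetical stand-ins for whatever your own data contains.

```python
import pandas as pd

# Illustrative data standing in for a survey export; swap in your real columns.
df = pd.DataFrame({
    "team":               ["Sales", "Sales", "HR", "HR", "Ops", "Ops"],
    "used_ai_this_month": [True,    True,    False, True, False, False],
    "trust_in_outputs":   [4, 5, 2, 3, 2, 1],   # 1-5 scale
    "perceived_fairness": [4, 4, 3, 3, 2, 2],   # 1-5 scale
})

# Adoption rate and average sentiment by team, to spot "haves and have-nots".
summary = df.groupby("team").agg(
    adoption_rate=("used_ai_this_month", "mean"),
    avg_trust=("trust_in_outputs", "mean"),
    avg_fairness=("perceived_fairness", "mean"),
)

# Flag teams trailing the organization-wide adoption rate for targeted enablement.
summary["needs_enablement"] = summary["adoption_rate"] < df["used_ai_this_month"].mean()
print(summary.sort_values("adoption_rate"))
```

The code is almost beside the point; the habit is what matters: a recurring, function-level view of who is getting value from the tools and who is being left behind, reviewed alongside your other workforce metrics.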

Closing

Wharton’s permanent 0.04% may look tiny on paper, but tiny is where the magic happens. The Beige Book backs that up: organizations aren’t waiting for a big bang; they’re making a thousand small decisions that will set their trajectory for years. That’s our job this season: stack durable, human-centered wins. If we design clean workflows, teach practical skills, listen with intent, and grow leaders who can handle ambiguity, we won’t just surf the AI wave—we’ll come out better on the other side of it.

Next

Merit, Learning, and Talent Acquisition in the AI Era