Editorial

Finding Your Place in the AI-Driven Post-Skills Era

By Owen Chamberlain
Why cultivating judgment, critique and contextual intelligence matters more than skills in an era defined by AI-driven automation.

We are living through a skills renaissance. Or at least, that’s what headlines suggest. Skills mapping, skills-first hiring, skills ontologies — across HR, learning and development (L&D) and edtech, there’s a collective push to reskill, upskill and pre-skill the workforce into future readiness.

But what if we’re aiming at the wrong target?

As artificial intelligence (AI) tools become increasingly capable, not just in execution but in analysis, synthesis and even creative generation, we’re approaching a strange inflection point: skills are being devalued just as we’ve learned to champion them.

This isn’t to say that skills are obsolete. But we are rapidly entering a post-skills era, where the tasks that once defined expertise are outsourced to algorithms, and the remaining human value lies in something much harder to define: judgment, context and critique.

The Automation of Competence

Many of the tasks organizations once prized, such as summarizing data, writing briefs, even generating strategic options, are now well within AI’s wheelhouse. These tools are not perfect, but they are fast, scalable and tireless. A decade ago, you needed a competent analyst to extract insight from noise. Today, all you need is a prompt and a model.

But this efficiency has a hidden cost. What we’re at risk of losing isn’t just skill. It’s understanding: the kind that comes from wrestling with complexity, making mistakes and building fluency from the ground up. When AI handles the middle steps, we’re left with the output, but not always the experience to evaluate its quality. That instinct isn’t something you download. It’s something you cultivate. And that cultivation takes time, exposure and, often, struggle.

Bloom's Taxonomy in Reverse

Bloom’s Taxonomy, a ubiquitous framework in educational theory, presents learning as a hierarchy. It stretches from remembering and understanding, up through applying, analyzing, evaluating and creating.

Most AI tools now handle the bottom half with ease. They “remember” everything, they “understand” patterns, they “apply” rules to generate outputs. Some even encroach on higher levels, producing passable creative work and surface-level analysis. 

[Image: Bloom’s Taxonomy — Information Technology, University of Florida]

But they lack something critical: awareness of context, consequence and care. They do not “know” what is at stake in their outputs. This is where human learning must shift. In the post-skills era, our attention should move to the top of Bloom’s Taxonomy. Not because we can’t automate the bottom, but because we must not automate the top.

Evaluating AI outputs, refining them and knowing when to trust them and when to override them require a human who has walked the terrain, not just been handed the map.

Who Should Be Responsible for Developing Skills?

The rise of skills-based discourse aligns with broader neoliberal trends in policy and work: the shifting of responsibility for development onto the individual. As Stephen J. Ball argues, this occurs through constant individual comparison.

We as learners need to get ahead of each other, and we do this by comparing ourselves, our achievements and our personal growth. Learn faster. Stay relevant. Stack your micro-credentials like digital currency. In this model, the pressure to remain “future-ready” is placed squarely on the worker.

But when AI increasingly handles execution, workers are now expected to interpret, mediate and contextualize technology they had no hand in designing. The burden of making sense of machine-generated outputs, often in high-stakes environments, falls on individuals without adequate time, support or reflective space. 

This is not just a skills gap. It’s a system failure.

A Generational Brain Drain?

Here’s a more uncomfortable possibility: We are about to forget how to think analytically, not because thinking isn’t valued, but because the conditions to develop it are disappearing.

Pre-AI tech natives such as Gen X and Millennials exist in a strange interstitial space. We came of age before widespread AI tools, meaning we were required to do the groundwork: gather, sort, interpret and build arguments from scratch. We developed critical muscles not because we were more virtuous, but because there was no shortcut.

Now, those shortcuts are ubiquitous. And while they offer tremendous advantages, they may also erode the foundation of critical discernment. You can’t develop editorial insight if you’ve never written a first draft. You can’t recognize the nuances of a flawed conclusion if you’ve never constructed a messy analysis.

It is harder to get to the moon today than it was in the 1960s, not because we lack intelligence, but because we let the collective expertise atrophy. We may be entering a period of cognitive de-skilling, where we risk forgetting not only how to do things, but how to know when they are done well.

Learning in the Post-Skills Era

So what does learning look like in the post-skills era?

We need to reframe our L&D approach, moving away from skills checklists and focusing instead on contextual intelligence, which is the capacity to:

  • Evaluate machine outputs in relation to human stakes
  • Recognize patterns of bias, risk or failure
  • Understand how knowledge is shaped by who creates it, and for what ends
  • Exercise ethical and aesthetic judgment, not just functional adequacy

AI could help here by encouraging users to reflect on outputs and by asking probing questions. That reflection, however, requires mentoring, dialogue and lived experience. It requires environments that encourage slow thinking, not just rapid execution. In other words: We need to reinvest in wisdom, not just capability.


The notion of pattern recognition raises another issue. As humans, pattern recognition may be innate, but we do not exercise it with equal competence across everything we encounter; hence the abundance of training on bias, frameworks for risk management and methods for avoiding failure. Where AI-driven and human-driven pattern recognition intersect, could AI train us through its mistakes?

Moving Learning from Competence to Critique

The post-skills era is not a death knell for learning. It’s an invitation to think differently about what learning is for. AI has taken much of the cognitive heavy lifting off our plates. With AI, anyone can take on a different role without prior expertise. But that doesn’t free us from thinking. It raises the stakes of doing it well.

What we need now are not workers with more skills. We need thinkers who can question the output, navigate the ambiguity and bring human judgment to systems that increasingly operate without it. If we fail to cultivate this, we may find ourselves surrounded by tools that can do everything except understand why it matters.



About the Author
Owen Chamberlain

Owen Chamberlain is a strategist, writer and speaker with 15+ years of experience in organizational transformation, remote work culture and the future of leadership. He currently works at a Fortune 500 company, shaping strategy at the intersection of people, systems and power.

Main image: susan y qin | unsplash