Editorial

AI Is Only as Good as Your Knowledge — And That's a People Problem

By Lynda Braksiek
The organizations getting the most from AI aren't just buying better tools. They're doing the unglamorous work of fixing their knowledge first.

AI-enabled search and workplace assistants are now part of everyday work. But organizations are learning a hard lesson: The tools are only as reliable as the knowledge they can access. When content is scattered across hubs, riddled with competing duplicates and tagged with inconsistent metadata, even basic retrieval is unpredictable.

APQC’s research has long positioned content management as a cornerstone of effective knowledge management, and generative AI has raised the stakes by turning content quality into a prerequisite for trustworthy automation.

Why Messy Content Creates Weak AI Outcomes

AI can only work with what it can access and can’t judge “truth” the way people do. If the knowledge environment includes duplicate documents, outdated guidance and orphaned content that hasn’t been reviewed in years, AI will surface conflicting results or generate summaries that blend old and new information that should never be combined, even when the underlying model is strong. 

This is why “out-of-the-box” AI rarely delivers its intended value. Organizations still need content management basics: defined locations for critical content, consistent metadata, lifecycle practices that keep information current and people responsible for curation and validation. Without these basics, content becomes fragmented, duplicates continue to multiply and silos can grow. This leaves AI with no reliable way to determine what is current or authoritative.

Practical Steps to Clean and Organize Knowledge for AI Success

APQC’s content management guidance points to the importance of a defined strategy, mature lifecycle management and a disciplined approach to executing both. The fastest progress usually comes from focusing on a “minimum viable” set of improvements applied to the knowledge that matters most to the organization. The goal isn’t perfection; it’s to make trusted knowledge easier to find, easier to validate and harder to misuse.

1. Start With High-Value Knowledge, Not Everything

Prioritize content tied to high-frequency work (onboarding, service, policies) or high-risk decisions (compliance, safety). Focusing on what people use most helps build momentum and reduces the risk of AI amplifying outdated or conflicting guidance.

2. Simplify What’s There Before Adding New Layers

Before you redesign your taxonomy or ontology, or deploy new AI features, remove content that increases retrieval risk:

  • Retire duplicates and redirect users to a single source of truth
  • Archive obsolete guidance and clearly label outdated versions
  • Eliminate material with no clear audience or purpose

Reducing the noise in your content matters because AI magnifies what’s already there. If duplicates and outdated versions remain, the system will keep surfacing them and erode users’ trust in the results.
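
Exact duplicates are the easiest place to start, and even a short script can flag them. The sketch below is a hypothetical Python illustration (the ./knowledge-base path and function name are invented for the example); it only catches byte-identical copies, so near-duplicates and decisions about which copy is authoritative still require human review:

  import hashlib
  from collections import defaultdict
  from pathlib import Path

  def find_exact_duplicates(root: str) -> dict[str, list[Path]]:
      """Group files under `root` by SHA-256 content hash."""
      groups: dict[str, list[Path]] = defaultdict(list)
      for path in Path(root).rglob("*"):
          if path.is_file():
              digest = hashlib.sha256(path.read_bytes()).hexdigest()
              groups[digest].append(path)
      # Keep only hashes shared by more than one file (duplicate sets)
      return {h: paths for h, paths in groups.items() if len(paths) > 1}

  if __name__ == "__main__":
      # "./knowledge-base" is a placeholder path for this example
      for digest, paths in find_exact_duplicates("./knowledge-base").items():
          print(f"Duplicate set {digest[:8]}: {[str(p) for p in paths]}")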

3. Keep Metadata Simple and Consistent

Over-engineered metadata schemes fail because contributors ignore them. A small set of consistent signals improves both findability and the quality of AI retrieval and output:

  • Plain-language title (matches how people search)
  • Purpose and audience
  • Owner (accountability)
  • Review cadence/last reviewed date
  • “Authority” indicator (source of truth vs. reference)
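
As a purely illustrative sketch, these five signals can be captured in something as simple as a small record type. The Python below is hypothetical; the field names are invented for the example, not a standard schema:

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class ContentMetadata:
      title: str                # plain-language, matches how people search
      purpose: str              # why the content exists
      audience: str             # who it serves
      owner: str                # accountable person or role
      last_reviewed: date       # paired with a review cadence
      review_cadence_days: int
      authority: str            # "source_of_truth" or "reference"

  # Example record (values are illustrative)
  doc = ContentMetadata(
      title="How to submit a travel expense report",
      purpose="Step-by-step guidance for filing expenses",
      audience="All employees",
      owner="finance-operations",
      last_reviewed=date(2025, 1, 15),
      review_cadence_days=180,
      authority="source_of_truth",
  )

Even this much lets a retrieval system prefer the source of truth over references and flag records that are past their review date.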

Metadata is not just tagging; it’s how you communicate authority, intent and relevance throughout the organization.

4. Add Context so AI Uses the Information Correctly

AI can summarize what’s written, but it can’t guess what experts know implicitly. Add lightweight context to your organization’s most critical content:

  • When to use this guidance
  • When not to use it (exceptions)
  • Assumptions and prerequisites
  • Escalation path for edge cases

Context reduces misuse of outputs and helps AI retrieve and synthesize information more accurately.
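
One hypothetical way this context pays off downstream: in a retrieval-augmented pipeline, the context fields can be prepended to a document before it reaches the model, so the AI sees scope, exceptions and escalation paths alongside the content itself. The Python sketch below is illustrative; the field names mirror the list above and are invented for the example:

  def with_context_header(doc_text: str, context: dict) -> str:
      """Prepend lightweight context so the model receives scope and
      exceptions together with the content it is asked to use."""
      labels = [
          ("use_when", "Use this guidance when"),
          ("do_not_use_when", "Do NOT use this guidance when"),
          ("assumptions", "Assumptions and prerequisites"),
          ("escalation", "Escalate edge cases to"),
      ]
      header = "\n".join(
          f"{label}: {context[key]}" for key, label in labels if key in context
      )
      return f"{header}\n---\n{doc_text}"

  # Illustrative usage; values are invented for the example
  chunk = with_context_header(
      "Employees may expense meals up to a daily limit...",
      {
          "use_when": "domestic travel booked through the company portal",
          "do_not_use_when": "international travel or contractor expenses",
          "escalation": "the travel desk",
      },
  )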

The Role of Humans in AI-Powered Knowledge Systems

Organizations often treat AI as a way to reduce human involvement. APQC’s research takes a different view: AI augments knowledge work and improves efficiency, but it still depends on people to train the machine, validate accuracy, curate sources of truth, and manage the health of the entire lifecycle. Without intentional and structured human oversight, AI systems can produce redundant, outdated or inaccurate results and users will stop trusting them. In practice, human expertise shows up in three ways:

  • Subject matter experts make sure critical guidance is accurate and handle exceptions, especially in high-risk areas like compliance, safety or regulated work.
  • Knowledge stewards keep the system healthy by reducing duplication, connecting related content and retiring outdated information.
  • KM and business leaders set priorities, align knowledge efforts to business needs, and ensure accountability doesn’t fade over time.

Governance That Makes Automation Safe to Trust

AI readiness requires governance that clarifies decision rights: who owns critical knowledge, who validates it and where human judgment can override automation. In other words, governance tells the organization (and the AI) what to trust. This includes clear accountability for content validation, lifecycle management to prevent outdated content from influencing AI outputs, and defined roles for who owns and manages the knowledge. 

A practical model includes guardrails, not gates. Let AI recommend and summarize for low-risk knowledge, require human review for high-impact domains (legal, safety, regulated work), and build feedback loops so users can raise issues and knowledge owners can correct the source. This approach enables speed without sacrificing accuracy and protects the credibility that AI adoption depends on.
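
As an illustration of guardrails encoded in a pipeline, the hypothetical Python sketch below routes answers from high-impact domains to a human reviewer while publishing low-risk answers immediately, and keeps a simple feedback log so knowledge owners can correct the source (the domain names and routing are invented for the example):

  HIGH_IMPACT_DOMAINS = {"legal", "safety", "regulatory"}  # illustrative list

  def route_answer(domain: str, ai_answer: str) -> dict:
      """Guardrails, not gates: low-risk answers publish immediately;
      high-impact domains require a human reviewer's sign-off."""
      if domain in HIGH_IMPACT_DOMAINS:
          return {"status": "pending_review", "answer": ai_answer,
                  "reviewer": f"{domain}-sme"}  # hypothetical review queue
      return {"status": "published", "answer": ai_answer}

  feedback_log: list[dict] = []

  def flag_issue(doc_id: str, user: str, issue: str) -> None:
      """Feedback loop: users raise issues so the knowledge owner can
      correct the source, not just the individual answer."""
      feedback_log.append({"doc": doc_id, "user": user, "issue": issue})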

The Bottom Line

Preparing knowledge for AI is not about deploying another tool. It’s about strengthening the human systems behind knowledge — the practices that reduce the noise, validate what to trust, provide context and sustain content throughout its lifecycle. AI doesn’t create trust on its own. The responsibility rests with the people who own, curate and govern the knowledge. When human accountability is clear, AI can accelerate knowledge capture and retrieval within organizations. Without it, automation simply surfaces existing problems.


About the Author
Lynda Braksiek

In her role as Principal Research Lead, Lynda Braksiek develops and executes APQC’s agenda for knowledge management research. She works remotely from her homes in Iowa and Wisconsin and has more than 25 years of experience leading and implementing knowledge management strategies and capabilities in the aerospace, pharmaceutical, and insurance industries.
