Knowledge management initiatives have fallen short for thirty years running. Every new technology wave promises to fix it. Communities of practice would unlock tacit knowledge. Expertise directories would connect people to answers. Collaboration platforms would break down silos. SharePoint would ... well, nobody believed that one.
Now comes AI-powered knowledge management — what Cisco calls Connected Intelligence — and the pitch is identical: knowledge flows across human and machine boundaries, decisions happen at unprecedented speed and collaboration occurs without friction. The only difference is that artificial intelligence does the heavy lifting this time.
Whether that changes anything depends on a question no vendor wants to answer: What if the technology wasn’t the problem?
Automating the Easy Parts, Calling It Progress
Nobody struggles to store information anymore. Enterprises are drowning in it.
"Storing information isn't hard. What's hard is keeping the context intact and making past experience useful when real decisions are on the table," said Yancey Sanford, chief information and research officer at MSTRO. This is why knowledge management keeps failing despite ever-better technology. Organizations capture everything and make almost none of it useful. Context evaporates, confidence drops and repositories become graveyards.
AI knowledge management tools promise to change this dynamic. Mike Clifton, co-CEO at Alorica, described how his company abandoned "static repositories" for "dynamic, in-the-moment knowledge delivery" through its Knowledge IQ platform. Instead of hunting through wikis, Alorica's 100,000 employees receive context-aware knowledge embedded into customer service workflows. The company reports improvements in handle time and first-call resolution.
Alorica can measure the reduction in handle time. What it can't measure is whether that reduction reflects knowledge actually transferring or just scripted responses getting delivered faster.
This is the fundamental measurement problem. Alorica tracks what AI makes easier: speed, accuracy and consistency. What goes unmeasured is what AI makes harder to see: whether knowledge actually transferred, whether understanding deepened and whether capability was built. The metrics improve while the organization learns nothing. That's not a bug in the implementation. That's the business model.
Ambiguity and Complexity Require Human Judgment
Even this success story reveals limits. "AI can't interpret conflicting policies, understand cultural nuance or resolve ambiguous or emotionally charged situations," Clifton acknowledged. Retrieval, summarization, validation and compliance checking can be automated. But ambiguity and complexity require human judgment.
This is a permanent boundary, said Lynda Braksiek, principal research lead for knowledge management at APQC. Her research shows AI handles "technical and operational work in KM such as classifying content, improving search, identifying patterns and accelerating knowledge flow," she said, "but it cannot replace the strategic, contextual and human elements that make KM effective."
High-performing organizations still depend on human judgment to identify important, relevant knowledge and sustain the relationships that make sharing knowledge worthwhile.
“We're automating the mechanical work whilst the genuinely hard problems remain exactly where they've always been,” Braksiek said. “That's not transformation. That's efficiency theater with better special effects.”
Tools Fail Because People Do. AI Doesn't Change That
Peer-to-peer knowledge sharing consistently fails, said Shriram Natarajan, director at global technology research and advisory firm ISG. "The use cases were defined ahead of time, and the actual implementation seemed to impede the sharing of knowledge. The processes and tools were getting in the way."
Cisco's Connected Intelligence promises to strengthen all three relationship types: people to people, people to AI and AI to AI. But if P2P knowledge sharing has been broken for decades, bolting AI onto broken processes creates broken processes with better dashboards.
Braksiek's research also suggests AI strengthens P2P sharing, but only "when paired with strong, intentional knowledge management practices." Without guardrails, AI weakens human connection, reducing informal interactions and encouraging people to bypass colleagues on the assumption that the technology already has the answers.
The theory sounds reasonable. When AI handles retrieval and error detection, people spend time on judgment, empathy and problem-solving instead of system archaeology. Experts coach and mentor instead of correcting avoidable mistakes. Clifton calls this building "super humans, not replacing them."
Layering Sophisticated AI on Poor Knowledge Management Practices
Here's what nobody wants to confront: if Alorica needs to train 100,000 people to work alongside AI effectively, and most organizations can't manage basic change management, scalability becomes a fantasy.
"Peer-to-peer knowledge sharing has struggled for years because it depends on people doing all the right things at all the right times,” said Sanford. “Documenting what they did, updating it later and remembering where to look when they need it again."
"Technology amplifies the environment it's placed in," warned Clifton. "If your culture supports knowledge sharing, AI will accelerate it. If not, AI will simply expose the gaps more quickly."
That's the mechanism that dooms most implementations.
AI knowledge management technologies won't solve poor knowledge quality, weak governance, siloed ownership behaviors, strategic ambiguity, unclear priorities or misaligned goals. Braksiek's research shows even advanced AI ecosystems "cannot resolve challenges rooted in culture, trust and human willingness to share knowledge."
Deploy sophisticated AI into organizations with weak governance and misaligned incentives, and you get faster access to outdated information, more efficient distribution of inaccurate knowledge and automated reinforcement of siloed thinking. The dysfunction just operates at machine speed.
The organizational changes required are substantial. Natarajan outlined the sequence: investments in digitization, training for knowledge capture, governance of storage and processes, identifying applicability and tracking value over time. Most organizations stumble at step one. "AI and collaboration tools just create new silos with nicer interfaces," Sanford said.
AI-powered knowledge management works only when tools support how people learn, decide and work together daily. That requires treating knowledge as something that grows as the organization learns, not static content filed away and forgotten. The technology enables that shift. Organizational politics and misaligned incentives prevent it. AI doesn't change that.
The Need for Algorithmic Accountability
When algorithms become intermediaries, accountability becomes slippery. "Ownership still rests with the organization and its people, particularly the teams responsible for stewarding processes,” Braksiek said. Algorithms speed things up, but "true ownership lies with KM leaders, governance structures and subject matter experts."
Clifton described Alorica's "human-in-command design" where every AI action includes confidence scores, source transparency and audit trails. "Algorithms facilitate knowledge flows, but accountability never moves away from people,” he said.
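To make that principle concrete, here is a minimal sketch of what a human-in-command guardrail can look like: every AI-generated answer carries a confidence score, its sources and an audit entry naming an accountable person, and low-confidence answers are escalated to a reviewer instead of being delivered automatically. The class names, fields and threshold below are illustrative assumptions, not Alorica's actual design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a human-in-command guardrail. Names, fields and the
# 0.8 threshold are illustrative assumptions, not a vendor's implementation.

@dataclass
class AIAnswer:
    question: str
    answer: str
    confidence: float                # model-reported confidence, 0.0 to 1.0
    sources: list[str]               # documents the answer was drawn from
    audit_log: list[dict] = field(default_factory=list)

def route(answer: AIAnswer, reviewer: str, threshold: float = 0.8) -> str:
    """Record an audit entry and decide whether a person must sign off."""
    confident = answer.confidence >= threshold and bool(answer.sources)
    decision = "auto_deliver" if confident else "human_review"
    answer.audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "confidence": answer.confidence,
        "sources": answer.sources,
        "accountable_person": reviewer,  # accountability stays with a named human
    })
    return decision

# Example: a low-confidence answer is escalated rather than sent to the agent.
draft = AIAnswer("Can the customer cancel after 30 days?",
                 "Yes, with a prorated refund.", confidence=0.55,
                 sources=["refund-policy-2024.pdf"])
print(route(draft, reviewer="kb-steward@example.com"))  # -> human_review
```

The point of the sketch is not the code itself but the design choice it encodes: the audit trail always names a person, so the algorithm can facilitate the flow while accountability stays human.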
That's fine in principle, but challenging in practice. Knowledge managers now need new skill sets, becoming "both digitally fluent and deeply human" and combining AI fluency with facilitation, relationship building and change leadership. Most organizations lack people with these capabilities and can't easily acquire them. Training takes years, and the AI-powered systems are running now.
"The impact of Connected Intelligence isn't measured by usage stats or activity levels,” Sanford said. “It shows up in faster decisions, fewer repeated mistakes, less dependence on a handful of experts and more confidence when choices have to be made without perfect information."
The standard needs to be business outcomes demonstrating capability improvement, not just adoption rates or dashboards. Yet most organizations won't track it because most organizations don't want to know the answer.
Solving the Knowledge Management People Problem
Three decades of knowledge management failure have taught us that the problems are rarely technological.
Organizations struggle with silos created by structure and incentives, cultures that don't reward sharing, strategic ambiguity that makes it unclear what knowledge matters and the challenge of preserving context when experience transfers between people. These are human and organizational problems that don't yield to better software.
The new generation of knowledge management software is intended to accelerate knowledge flows and reduce friction. AI excels at both. But accelerate flows in an organization that doesn't know where it's going and you get chaos at higher speed. Reduce friction in systems where people aren't moving in useful directions anyway and nothing meaningful changes; it just happens faster and more efficiently.
Connected Intelligence will work for organizations that have already solved the hard problems. Everyone else gets expensive documentation of their dysfunction, updated in real time with superior analytics. The AI will be faster, smarter and more capable, but the organizations will remain as dysfunctional as before. In three years, when the next technology arrives promising to finally fix knowledge management, we'll have this conversation again.
Editor's Note: Catch up on other takes on enterprise knowledge management and knowledge transfer:
- 5 Key Challenges of Knowledge Management (and How to Conquer Them) — You’re likely sitting on a lot of valuable data and insights. But how you use and share it is everything. Some tips to build a knowledge-sharing culture.
- Don't Let Critical Knowledge Walk Out the Door — When expertise walks out the door, so does competitive advantage. It’s time to make knowledge sharing everyone’s job.
- Knowledge Management Means More Than Just Mining Digital Exhaust — If we want machine intelligence to play a role in knowledge creation, we need conscious design, not lazy accidents.