The Real Reason Knowledge Management Failed
And Why We’re Interested Again
There’s a particular kind of organizational tragedy I’ve watched play out too many times to count.
A senior person announces they’re leaving. Fifteen years of experience, relationships, judgment calls, lessons from projects that failed, shortcuts that worked. The organization scrambles. Someone schedules a knowledge transfer session — usually two hours, usually too late, usually with someone who is already mentally checked out and someone else who doesn’t yet know enough to ask the right questions. A document gets created. It goes into a shared drive. Six months later, nobody remembers it exists.
The knowledge walks out the door anyway.
At Tekdi, we’ve been building systems to prevent exactly this for over twenty years — intranets, knowledge management platforms, cataloging systems, learning repositories. We’ve done it for NGOs, for government projects, for large consulting firms, for enterprises. And if I’m being honest about what I’ve seen across all of it: we got some things right, and the field as a whole got some fundamental things wrong.
I’ve been thinking a lot about this lately, because for the first time in a long time, I think we’re actually in a position to do it right. But to understand why the potential is real now, you have to understand why the delivery went wrong.
The potential was always there. So was the gap.
The 1990s and 2000s saw massive investment in knowledge management. Intranets. Document management systems. SharePoint. Lotus Notes. Organizations hired Chief Knowledge Officers. Consultants built elaborate taxonomy frameworks. Approval workflows were designed. Storage systems were put in place.
The potential was real. The delivery, almost universally, was not.
Not immediately — there was usually an enthusiastic launch, a training session, some early adoption. But eighteen months in, the contribution rate had dropped to a handful of committed people, the search results were three years out of date, and the system that everyone pointed to as “our KM platform” was functionally a digital filing cabinet that nobody could find anything in.
The failure wasn’t mainly a technology failure, though the technology was often inadequate. The deeper failure was an assumption baked into almost every KM initiative: that humans, given a structured system and a process mandate, would behave like disciplined knowledge workers at scale.
They don’t. Not consistently. Not across an entire organization. Not over time.
Taxonomies designed carefully on a whiteboard collapsed under the weight of real content. Contributors used whatever folder was easiest to find, not the one that made sense for retrieval. Metadata fields went unfilled. Approval workflows became bottlenecks — domain owners were already stretched, queues backed up, contributors stopped submitting because nothing happened.
And search — the one thing users actually needed to work — returned results that were close enough to be frustrating and far enough to be useless. If someone searched for “risk assessment” and the document called it “hazard evaluation,” the system returned nothing. Search did improve as a technology over time. But those improvements didn’t make a visible dent in knowledge management at scale. The vocabulary problem, the metadata problem, and the trust problem were all still there.
Once a user searched twice and found nothing useful, they stopped searching. They asked a colleague instead.
That colleague — the one who knew where everything was, who had been around long enough to understand the context, who could tell you not just what the answer was but why it was the answer — that person became the real knowledge management system. And when they left, the knowledge walked out the door again.
This created a vicious cycle: poor content quality led to poor retrieval, which caused loss of user trust, which reduced contribution, which made the content quality worse. The system existed. Nobody used it. And its existence was used as evidence that knowledge management had been done.
The two things that were always broken simultaneously
Looking back at the systems we built and the ones we’ve seen fail, the pattern is clear: KM has always had two distinct problems that need to be solved together, and they almost never were.
The first is the capture and organization side. How do you ensure that knowledge is created, classified, reviewed, and maintained in a way that makes it retrievable? This requires real governance — someone who owns each domain and is accountable for its quality — and it requires taxonomy, workflow, and ongoing maintenance. The supply side of knowledge.
The second is the retrieval side. How do you ensure that the person looking for knowledge can actually find the right thing at the right moment? This requires good indexing, good search, good relevance ranking. The demand side.
The economics of retrieval failure are brutal. If it takes longer to search than to ask a colleague, the system loses every time. Users aren’t being lazy — they’re being rational.
Both sides were historically underbuilt. But retrieval failure is what users felt. It’s what caused abandonment. You could have a perfectly organized knowledge base — which was rare — and still fail completely because nobody could surface anything useful from it.
Here’s the uncomfortable part: getting both sides right required a level of sustained organizational discipline that most organizations were never realistically going to maintain. The people who needed to contribute were busy doing their actual jobs. The people who needed to curate were stretched thin. The people who needed to search found a faster way. The system degraded, slowly, until it became easier to start fresh than to fix it.
Why we’re interested in KM again — and what’s actually different this time
We’ve been building what we’re calling brAIn at Tekdi — an AI enterprise platform that includes knowledge management, skill management, and context extraction. And working on it has clarified something for me about why this moment is genuinely different from previous cycles of KM optimism. The reason we’re interested again isn’t nostalgia. It’s because with generative AI, both sides of the problem become more tractable at once — for the first time.
On the capture side: AI can assist in auto-classifying content on upload, suggesting the right taxonomy tags, extracting metadata from document content, and identifying near-duplicates. The human curator still validates — the human is always in the loop — but the effort drops dramatically. The compliance burden that killed every KM system before — eight required metadata fields before you can submit a document — can be largely automated. The barrier to contribution shrinks.
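To make that concrete, here's a minimal sketch of what AI-assisted capture could look like at upload time. Everything in it is illustrative: `call_llm` and `embed` stand in for whatever model provider you use, the taxonomy is invented, and the 0.92 duplicate threshold is a placeholder rather than a recommendation. It's a shape, not brAIn's actual pipeline.

```python
# Illustrative sketch only: `call_llm` and `embed` are placeholders for
# your model provider, and the taxonomy is invented for the example.
import json
from dataclasses import dataclass

TAXONOMY = ["delivery", "risk", "compliance", "hr", "sales"]  # hypothetical

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in your LLM provider's completion call

def embed(text: str) -> list[float]:
    raise NotImplementedError  # swap in your embedding model

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class CaptureSuggestion:
    tags: list[str]
    metadata: dict
    near_duplicates: list[str]       # ids of suspiciously similar documents
    needs_human_review: bool = True  # the curator always validates

def suggest_on_upload(doc_text: str,
                      index: dict[str, list[float]]) -> CaptureSuggestion:
    # 1. One model pass classifies the document and pre-fills the metadata
    #    fields that contributors used to leave blank.
    prompt = (
        f"Classify this document using tags from {TAXONOMY} and return JSON "
        'with keys "tags", "title", "summary", "audience":\n\n' + doc_text[:4000]
    )
    parsed = json.loads(call_llm(prompt))

    # 2. Flag near-duplicates by embedding similarity against the existing index.
    vec = embed(doc_text)
    dupes = [doc_id for doc_id, v in index.items() if cosine(vec, v) > 0.92]

    # 3. Return suggestions; a human confirms before anything is published.
    return CaptureSuggestion(
        tags=[t for t in parsed.get("tags", []) if t in TAXONOMY],
        metadata={k: parsed.get(k) for k in ("title", "summary", "audience")},
        near_duplicates=dupes,
    )
```

The specifics don't matter; the workflow does. The eight required fields become pre-filled suggestions, and contributing stops feeling like filing taxes.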
More interestingly: AI can begin to capture tacit knowledge through conversation-based extraction. Structured interviews, meeting summaries, decision logs, After Action Reviews conducted in natural language. The expertise that used to walk out the door when a senior person left can now be captured in ways that weren't previously possible. And as more knowledge work moves toward talking to agents — which is already happening at pace — that interaction itself becomes another live source of capture. Knowledge generation and knowledge capture start to converge in the flow of actual work. That's not a theoretical future. It's practical, and it's close.
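In the same spirit, here's a hedged sketch of what conversation-based capture might look like: pull each decision and its rationale out of a meeting transcript into a structured record. The `DecisionRecord` schema is invented for illustration, and `call_llm` is the same kind of placeholder as before.

```python
# Sketch of tacit-knowledge capture from a transcript. The DecisionRecord
# schema is invented for illustration; `call_llm` is a placeholder.
import json
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # your LLM provider's completion call

@dataclass
class DecisionRecord:
    decision: str
    rationale: str           # the "why" that usually walks out the door
    alternatives: list[str]  # options considered and rejected
    owner: str

def extract_decisions(transcript: str) -> list[DecisionRecord]:
    prompt = (
        "From the meeting transcript below, return a JSON list of decisions, "
        'each with keys "decision", "rationale", "alternatives", "owner":\n\n'
        + transcript
    )
    return [DecisionRecord(**d) for d in json.loads(call_llm(prompt))]
```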
On the retrieval side: semantic search resolves the vocabulary problem that broke keyword-based systems. A search for “staff capacity building” surfaces documents that discuss “training and development” and “talent upskilling” because the system understands meaning, not word matching. The user and the document no longer need to use identical language.
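The mechanism is simple to sketch, assuming any off-the-shelf sentence-embedding model behind an `embed` placeholder: queries and documents become vectors, and similarity is computed on meaning rather than on keywords.

```python
# Minimal sketch of semantic retrieval: queries and documents are compared
# in embedding space, so wording no longer has to match. `embed` is a
# placeholder for any sentence-embedding model.

def embed(text: str) -> list[float]:
    raise NotImplementedError  # swap in your embedding model

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: dict[str, str], top_k: int = 3):
    qv = embed(query)
    scored = [(cosine(qv, embed(text)), doc_id) for doc_id, text in docs.items()]
    return sorted(scored, reverse=True)[:top_k]

# A query for "staff capacity building" ranks a document about "training
# and development" highly: the vectors are close in meaning even though
# no keywords overlap.
```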
RAG-based systems can synthesize an answer across multiple documents rather than returning a list of ten files. The user receives a response grounded in organizational knowledge — not a list of search results they have to manually triangulate. This is qualitatively different from anything that came before. That said, RAG has its own problems, and the indexing layer has been evolving fast — Page index, Graph RAG, LLM wiki. It’s not a solved problem, but the direction of travel is clear and the pace of improvement is real.
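A minimal RAG loop, reusing `semantic_search` and the `call_llm` placeholder from the sketches above, looks something like this. The prompt wording is illustrative; production systems put far more care into grounding and citation checking.

```python
# Sketch of RAG-style synthesis on top of the retrieval above: retrieve the
# best passages, then ask the model for one grounded, cited answer instead
# of returning a list of files.

def answer_with_sources(question: str, docs: dict[str, str]) -> str:
    hits = semantic_search(question, docs, top_k=4)
    context = "\n\n".join(f"[{doc_id}]\n{docs[doc_id]}" for _, doc_id in hits)
    prompt = (
        "Answer using ONLY the sources below and cite source ids in brackets. "
        "If the sources don't contain the answer, say so.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {question}"
    )
    return call_llm(prompt)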
The most important shift: knowledge can now surface proactively at the point of execution. Available to the human doing the work, or to the agent executing a task, without requiring a separate search step. But it’s not just availability — the system needs to actively assist the person in surfacing and absorbing the right context, not just make it technically accessible. That distinction matters. The difference between “the knowledge exists somewhere in the system” and “the system helped you get to it at the right moment” is the whole game.
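To sketch that distinction, imagine wrapping task execution itself, so relevant context is fetched, freshness-filtered, and injected before the work starts. The helper names carry over from the earlier sketches, and the one-year staleness window is an arbitrary stand-in.

```python
# Sketch of proactive surfacing: context is fetched, freshness-filtered,
# and injected at the point of execution, with no separate search step.
# Reuses `semantic_search` and `call_llm` from the sketches above.
from datetime import datetime, timedelta

def run_task_with_context(task: str, docs: dict[str, str],
                          last_reviewed: dict[str, datetime]) -> str:
    # 1. Retrieve context relevant to the task itself, not to a user query.
    hits = semantic_search(task, docs, top_k=3)

    # 2. Drop stale entries; a real system would also flag them for re-review.
    cutoff = datetime.now() - timedelta(days=365)
    fresh = [d for _, d in hits if last_reviewed.get(d, datetime.min) > cutoff]

    # 3. Hand the task plus its context to the human or agent doing the work.
    context = "\n\n".join(docs[d] for d in fresh)
    return call_llm(f"CONTEXT:\n{context}\n\nTASK: {task}")
```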
What I think this is actually building toward
I’ve started to think that “knowledge management” is the wrong frame for what’s becoming possible.
What organizations actually need is a living context engine — a structured, continuously updated organizational memory that humans and agents can both query, and that understands its own freshness and relevance. Not a document repository. Not a search index. An organizational memory with active maintenance and intelligent retrieval.
The way I think about it, a true context engine has to hold several distinct types of organizational context — and this is where it starts to look quite different from what we used to call KM.
There are the knowledge layers most people think about first. Foundational context: who the organization is, how it operates, its values and ways of working. Cross-cutting context: client relationships, project history, key decisions and their rationale, lessons from things that went wrong. Domain context: practice area expertise, product knowledge, regulatory and sector intelligence.
But in an AI-native organization, that’s not enough. Three more layers matter just as much.
Skills — not just what the organization knows, but what it can do. What capabilities exist in the human workforce, what skills specific people carry, and increasingly, what agent capabilities have been built and are available to be deployed. An agent trying to execute a task needs to know what skills it can call on — its own, its sub-agents’, and the humans in the loop.
Rules — the guardrails that govern how decisions get made and how agents are allowed to operate. Business rules, compliance requirements, approval thresholds, organizational policies, escalation logic. In a world where agents are executing on behalf of the organization, the rules layer is what prevents agents from doing technically correct but organizationally wrong things. This isn’t optional infrastructure. It’s what makes agent deployment safe enough to trust.
Agents and sub-agents — the AI workforce itself and how it’s organized. What agents exist, what they specialize in, which sub-agents they can delegate to, how they’re orchestrated. As we move toward what I think of as an Internet of Agents — where agents are coordinating with each other to execute complex tasks — the context engine needs to maintain a live map of that workforce, not just the knowledge it operates on.
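To make the shape concrete, here's a rough data-structure sketch of those layers in one place. Every field name is invented for illustration; this is the outline of a context engine, not a spec of brAIn.

```python
# A rough shape for the six layers described above. Field names are
# invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextEntry:
    content: str
    owner: str               # governance: every entry has an accountable owner
    last_reviewed: datetime  # the engine understands its own freshness

@dataclass
class Skill:
    name: str
    holder: str              # a person, a team, or an agent id
    kind: str                # "human" or "agent"

@dataclass
class Rule:
    description: str
    applies_to: list[str]    # which agents or roles this guardrail binds
    escalate_to: str         # who decides when the rule is triggered

@dataclass
class Agent:
    name: str
    specialty: str
    sub_agents: list[str] = field(default_factory=list)

@dataclass
class ContextEngine:
    foundational: list[ContextEntry] = field(default_factory=list)   # who we are, how we work
    cross_cutting: list[ContextEntry] = field(default_factory=list)  # clients, decisions, lessons
    domain: list[ContextEntry] = field(default_factory=list)         # practice, product, regulatory
    skills: list[Skill] = field(default_factory=list)                # what we can do
    rules: list[Rule] = field(default_factory=list)                  # how we're allowed to act
    agents: list[Agent] = field(default_factory=list)                # the AI workforce map
```

The schema isn't the hard part. Keeping `owner` and `last_reviewed` honest is, which is exactly the governance question that comes next.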
Each of these layers serves both humans and agents, but the interface and synthesis differ. What’s constant is that the system has to do the work of bringing the right context to the right moment — not just make it technically accessible somewhere in a repository.
This is why I think “context management” will emerge as a distinct organizational function. Not knowledge management as it was — a system that sat to the side of work. Context management as infrastructure for work, maintained actively, queried constantly, kept fresh. Chief Context Officers aren’t a distant prediction. They’re the natural evolution of the KM function in an AI-native organization.
The organizational discipline that was always required — governance, ownership, taxonomy, maintenance — hasn’t disappeared. AI doesn’t replace that. What it does is make that discipline sustainable. The drudge work of classification gets automated. The retrieval experience improves enough to reward contribution. The vicious cycle can be interrupted.
The underlying insight, I think, is this: the value of organizational knowledge was never in its storage. It was always in its availability at the moment it was needed. That moment is now computable. For the first time, the infrastructure can actually deliver on what was promised — and that’s why we’re paying attention to KM again.
What I’m still working out
This is where I want to hear from people who are wrestling with the same questions.
The hardest part of what we’re building isn’t the technology. It’s the governance design — specifically, how you build a context management function that stays healthy over time without becoming another compliance burden that slowly suffocates under its own weight.
How do you make context ownership feel like a benefit to the people doing the work, not an additional tax on their time? How do you keep organizational memory fresh without creating a new class of maintenance work that nobody wants to do? And who actually owns this function — is it IT, is it strategy, is it something new entirely?
I don’t have clean answers to these yet. If you’ve seen organizations get the governance side right — regardless of whether they used AI — I’d genuinely like to understand what made it work.
Parth Lawate is Co-founder and CEO of Tekdi Technologies, an AI-native technology consulting company working at the intersection of AI, Digital Public Infrastructure, and organizational transformation. Tekdi is building brAIn — an AI enterprise platform designed to make organizations AI-ready from the inside out.