New Information Management Architecture Brings Flexibility and Confusion
I speak with clients every day about their content management programs. Lately, the discussions have increasingly centered on the trouble their users have understanding where they are when they access a document. Users think they’re in a core content 'filing cabinet,' i.e., the content management system itself, where the files are stored, shared and managed. But in reality they’re in a separate collaboration workspace, pulling files from the filing cabinet and editing them in a document staging area. In this use case, three systems are in play: the viewing interface or console, the core content store and the temporary staging area.
Welcome to the future of information management architecture, where the backend repository is separate from the user interface, making it possible for any device or app to pull from the storehouse of documents. This architecture means flexible access for the modern digital and complex workplace, but it also means sometimes feeling adrift in that workplace.
Content Management Gets Flexible
Traditional enterprise content management (ECM) systems tightly couple the presentation layer — the interface where you access and work with documents — and the backend — where the documents are stored. Your document experience lives in one place: you know you’re in the company’s ECM product. Reassuring, yes, but realistically we need to share those documents in other systems and with users who either do not have access to the ECM platform or do not know how to use that technology.
In the new content management architecture, the viewing layer is separate from the backend storehouse, so any device or workspace, such as a smartphone, tablet, voice assistant or website, can access the documents via an application programming interface (API).
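To make the decoupling concrete, here is a minimal sketch of the idea in Python: one backend store exposes a small API, and any number of frontends call the same methods. All class and method names here are hypothetical, invented for illustration; a real deployment would expose the store over HTTP rather than in-process calls.

```python
# Hypothetical sketch of a decoupled ("headless") content architecture:
# one backend repository, many interchangeable frontends.

class DocumentStore:
    """Backend repository: the single source of truth for documents."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, content, metadata=None):
        self._docs[doc_id] = {"content": content, "metadata": metadata or {}}

    def get(self, doc_id):
        return self._docs[doc_id]


class WebConsole:
    """One frontend; it holds no documents itself, only an API handle."""
    def __init__(self, api):
        self.api = api

    def open_document(self, doc_id):
        return self.api.get(doc_id)["content"]


class MobileApp:
    """A second frontend reusing the exact same backend API."""
    def __init__(self, api):
        self.api = api

    def preview(self, doc_id):
        # A phone screen might show only a short preview.
        return self.api.get(doc_id)["content"][:20]


store = DocumentStore()
store.put("plan-2024", "Q3 rollout plan for the new intranet", {"type": "plan"})
print(WebConsole(store).open_document("plan-2024"))
print(MobileApp(store).preview("plan-2024"))
```

The point of the sketch is that neither frontend knows or cares how documents are stored; swapping the console for a voice assistant means writing one more thin client, not migrating content.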
Related Article: What Role Content Services Play in the Digital Workplace
One Interface for All Information
This architecture allows a user to connect with multiple repositories in the backend through a single interface. These connections are easy to develop and maintain. The result is a unified user experience in a familiar interface with access to multiple knowledge bases and systems. Information stays close to where it’s created and managed, while remaining available for a range of different contexts. This also puts an end to the duplication of information currently so prevalent.
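A rough sketch of that federation pattern, with hypothetical repository names: each backend implements the same small search contract, and a unified layer fans the query out and merges the results, so information stays in its home system instead of being copied around.

```python
# Hypothetical sketch: one interface fronting several repositories.

class HRRepository:
    def search(self, term):
        docs = ["employee handbook", "leave policy"]
        return [d for d in docs if term in d]


class ProjectRepository:
    def search(self, term):
        docs = ["project alpha policy", "alpha status report"]
        return [d for d in docs if term in d]


class UnifiedInterface:
    """Single entry point; each hit stays in its source repository."""
    def __init__(self, *repos):
        self.repos = repos

    def search(self, term):
        results = []
        for repo in self.repos:
            results.extend(repo.search(term))
        return results


ui = UnifiedInterface(HRRepository(), ProjectRepository())
print(ui.search("policy"))  # matches from both backends, no duplicated copies
```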
I worked on an information architecture design where the program manager had to 'hop in and out' of each project’s document set to audit and extract the necessary reports to build a summary view of the company’s projects. The project-based information sets made sense for the project teams, but they led to late nights for the program manager. His context was completely different. The information was permanently organized around someone else’s context. The next wave of content management services will be intelligent and personalized for context.
Related Article: Creating a Slice of Content Services Success
Metadata: The GPS for Content Management
No surprise then that the program manager championed the introduction of metadata in the content management solution design. He understood that meaningful labeling and indexing could help him surface exactly the items he needed from across repositories and project teams, giving him back his evenings.
Metadata makes this new information experience possible. Meaningful, standardized labels help users find exactly what they’re looking for and aggregate related information from across their systems. Think of metadata as gateways directing you to the information you want and quickly eliminating the rest.
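The filtering idea can be shown in a few lines of Python. The field names and values below are hypothetical examples; the point is that standardized labels let one query pull related items regardless of which project folder they live in.

```python
# Sketch of metadata-driven retrieval: documents carry standard labels,
# and a filter surfaces only the items that match every requested label.

documents = [
    {"title": "Project Alpha status", "project": "alpha",
     "type": "status-report", "quarter": "Q2"},
    {"title": "Project Beta budget", "project": "beta",
     "type": "budget", "quarter": "Q2"},
    {"title": "Project Beta status", "project": "beta",
     "type": "status-report", "quarter": "Q2"},
]


def find(docs, **labels):
    """Return documents whose metadata matches every requested label."""
    return [d for d in docs if all(d.get(k) == v for k, v in labels.items())]


# The program manager's summary view: all status reports, any project.
for doc in find(documents, type="status-report", quarter="Q2"):
    print(doc["title"])
```

With consistent `type` and `quarter` values, the manager's cross-project report becomes one query instead of a folder-by-folder hunt.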
Some of the biggest challenges in achieving metadata-driven content management include:
- Negotiating with users on the right type and number of metadata descriptors and deciding on standard terminology for the metadata values. (While there are great standards and models to learn from, the metadata must make sense for your specific context.)
- Helping users transition away from folders and sub-folders to populating metadata fields when they upload documents. Keeping the number of metadata fields to a minimum by auto-populating them and providing simple dropdowns can make this less onerous.
- Educating users on how to apply the metadata to create personalized views and filters. Demonstrating the content experience before and after metadata is applied is an effective way to show that the effort and time put into good metadata is worth it.
Related Article: The Secret Sauce Behind Project Cortex: Good Metadata
New Intelligence Technologies Automate Content Capture and Tagging
The information traffic in your workspace comes from both inside and outside the organization. Knowing what it is and where to find it is the critical first step to controlling it. Users can then filter and search with better results. The next big opportunity is automating that capture for users with artificial intelligence (AI).
AI can recognize content types and user patterns and automatically organize and deliver relevant content to people. That program manager who had to hop around his company’s systems can now use intelligent capture to automatically bring together all the information that supports his work and keep it updated for him, ensuring a comprehensive and always-current knowledge base around that team or topic. AI can also be used to analyze content and extract metadata tags for you, further reducing the workload of assigning tags to information.
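Production auto-tagging typically calls a trained NLP model or a cloud AI service, but the shape of the workflow can be sketched with a simple keyword-frequency heuristic using only the Python standard library. This is a stand-in for illustration, not a real extraction model, and the suggested tags would still need human or threshold-based review.

```python
# Illustrative stand-in for AI tag extraction: suggest candidate
# metadata tags from a document's most frequent meaningful terms.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "for", "to", "in", "is", "on", "with", "each"}


def suggest_tags(text, n=3):
    """Suggest the n most frequent non-stopword terms as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]


doc = ("The migration plan covers the migration of project documents "
       "to the new repository, with a repository audit for each project.")
print(suggest_tags(doc))  # -> ['migration', 'project', 'repository']
```

Even this crude version shows the payoff: the tags arrive with the document, so users filter and find content without ever filling in the fields themselves.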
Related Article: Naming of Parts: What Taxonomies Bring to Enterprise Search
You Are Here: Navigating the New Experience
Because our documents can appear wherever we are — and not only when we're plugged directly into the content management system — it’s not always clear where a given document lives or what we should do with it. The question arises so frequently that we’ve developed visual maps for users.
The feeling of confusion is real and can pose an obstacle to user adoption and to the very success of the new system rollout. By making the information findable and demonstrating what’s possible, users will begin to get their bearings.
About the Author
Andrea Malick is a Research Director in the Data and Analytics practice at Info-Tech, focused on building best practices knowledge in the Enterprise Information Management domain, with corporate and consulting leadership in content management (ECM) and governance.
Andrea has been launching and leading information management and governance practices for 15 years, in multinational organizations and medium sized businesses.