
Glean Targets AI Agent Sprawl With New Lifecycle Framework

By Siobhan Fagan
Glean’s latest platform updates focus less on building AI agents and more on governing, measuring and controlling them at scale.

In Brief

  • Glean introduced a structured, seven-stage Agent Development Lifecycle for enterprises.
  • New platform features focus on agent governance, context and measurement.
  • Today's announcement establishes a new AI battleground: agent lifecycle governance.

Glean today introduced its enterprise Agent Development Lifecycle (ADLC) framework, designed to tame agent sprawl. The ADLC aims to give CIOs a repeatable path to deploy, govern and measure AI agents.

Alongside the framework, Glean released new capabilities across its platform spanning agent building, governance and monitoring. Some features are generally available, while others remain in beta or are coming soon.

ADLC, Updated for the AI Agent World

The Application Development Lifecycle has long given teams a repeatable process for building applications from planning through maintenance. Glean is applying that same logic to a newer challenge: getting AI agents to work reliably inside large organizations.

Worth noting before jumping into the details: the framework assumes teams are starting fresh. For organizations already mid-sprawl, it's unclear where the framework fits. During Glean's launch event, one attendee raised the question: should companies audit their existing agents first, or pick a single high-value use case and rebuild it properly? Glean's framework doesn't explicitly address retroactive adoption — a notable gap given that agent sprawl is the very problem it claims to solve.

The framework has seven stages:

Opportunity identifies the business problem before anyone builds anything. Not every task requires an agent, and this stage forces the question early: where will an agent save time or improve outcomes compared with a simpler solution?

Next comes Design, where teams map what an agent should do, which systems it needs to connect to and how it should interact with people.

The company made a deliberate choice to include Performance before development begins. Setting success metrics upfront, rather than after launch, helps organizations avoid the trap of building something technically impressive that no one can prove is useful.

Input gives the agent access to the right company data and context, with the appropriate permissions. An agent that cannot reach the information it needs — or one that can access information it should not — is either ineffective or a liability.

The build and test stage comes next. During Develop, teams use Glean’s platform tools — including the Auto Mode builder, debug views and sub-agent architecture — to implement the workflow. They trace the agent's decision-making step by step to catch problems before they reach users.

Launch involves setting access policies, applying organizational guardrails and distributing the agent through a controlled library. The goal is to avoid the scenario Glean warns about: ungoverned agents proliferating across teams without oversight.

Finally, Monitor & Improve treats the agent as an ongoing task rather than a finished product. Teams track adoption, gather feedback, measure hours saved and refine the agent over time. The phase resembles the maintenance stage of traditional software development, but with a stronger emphasis on business value: did the agent deliver on what the Performance stage promised?

In practice, this final stage may be harder than it sounds. During Glean's live event, a practitioner noted that agents often struggle with the improvement part of the loop — when given feedback, they tend to drift off-task or lose sight of broader context rather than course-correcting cleanly. Glean's upcoming Insights Dashboard tracks performance trends, but tracking and closing the feedback loop are different problems. This is an open challenge across the industry, not just for Glean.

Updates to the Glean Platform

Glean announced a number of new features to support the different stages of the agent lifecycle. It broke them down into three practical needs.

Building Agents Faster

Auto Mode lets users describe what they want an agent to do in plain language instead of configuring it step by step. New Debug and Trace views expose how agents arrive at decisions, giving builders clearer visibility into failures, reasoning paths and execution flow.

Glean also introduced sub-agents, allowing parent agents to delegate specialized tasks to smaller, focused components, keeping increasingly complex workflows manageable. Additional updates expand the agent sandbox with secure file handling and code execution, while new event-based triggers allow agents to respond automatically to content updates, scheduled actions and other workflow events.
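
Glean hasn't published implementation details for sub-agents, but the delegation pattern it describes resembles a simple registry in which a parent routes tasks to specialized components. A minimal Python sketch, with all class and method names invented for illustration:

```python
# A hypothetical sketch of the parent/sub-agent delegation pattern Glean
# describes -- not Glean's actual API. All names here are invented.

class SubAgent:
    """A small, focused component that handles one specialized task."""
    def __init__(self, name: str, handler):
        self.name = name
        self.handler = handler  # callable that performs the task

    def run(self, task: str) -> str:
        return self.handler(task)

class ParentAgent:
    """Routes incoming work to whichever sub-agent owns that task type."""
    def __init__(self):
        self.sub_agents: dict[str, SubAgent] = {}

    def register(self, task_type: str, sub_agent: SubAgent) -> None:
        self.sub_agents[task_type] = sub_agent

    def handle(self, task_type: str, task: str) -> str:
        if task_type not in self.sub_agents:
            raise ValueError(f"no sub-agent registered for {task_type!r}")
        return self.sub_agents[task_type].run(task)

# The parent delegates summarization and ticket filing to focused
# sub-agents instead of handling every step in one monolithic workflow.
parent = ParentAgent()
parent.register("summarize", SubAgent("summarizer", lambda t: f"summary of {t}"))
parent.register("file_ticket", SubAgent("ticketer", lambda t: f"ticket filed for {t}"))
print(parent.handle("summarize", "Q3 sales report"))
```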

Controlling How Agents Are Used

The new Agent Library Controls add verification badges, featured agents and departmental categories so organizations can manage which agents are visible and trusted.

Agent Access Policies, still in beta, lets admins set organization-wide rules, such as blocking sensitive content from being processed or restricting which teams can write to critical systems.
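
Glean hasn't shared what these policies look like under the hood; conceptually, though, they amount to org-wide checks evaluated before an agent acts. A hypothetical Python sketch, where the policy schema and function names are invented:

```python
# A hypothetical illustration of organization-wide agent rules like the
# ones Glean describes for Agent Access Policies. The schema and
# functions below are invented for illustration, not Glean's API.

POLICIES = {
    "blocked_content_labels": {"pii", "payroll"},       # content no agent may process
    "write_allowed_teams": {"it-ops", "platform-eng"},  # teams whose agents may write to critical systems
}

def can_process(content_labels: set[str]) -> bool:
    """Deny processing if the content carries any restricted label."""
    return not (content_labels & POLICIES["blocked_content_labels"])

def can_write_critical(team: str) -> bool:
    """Only explicitly allowed teams may write to critical systems."""
    return team in POLICIES["write_allowed_teams"]

assert can_process({"public", "docs"}) is True
assert can_process({"docs", "pii"}) is False
assert can_write_critical("it-ops") is True
assert can_write_critical("marketing") is False
```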

Measuring Whether Agents Deliver Value

An updated Agent Insights Dashboard, coming soon, will track adoption, top use cases, estimated hours saved and feedback trends over time.

Glean's Agent Insights Dashboard | Image: Glean

AI Agent Lifecycles: Build, Govern, Scale

Enterprises are replacing standalone AI features with structured lifecycle systems to build, govern and scale AI agents across workflows.

As recent Reworked coverage noted, agents are shifting from isolated capabilities to coordinated systems for orchestration, security and workflow management, making governance and lifecycle management foundational requirements.


Google's Four-Pillar Framework

Google's Gemini Enterprise Agent Platform offers one of the clearest lifecycle models, breaking the process into four pillars: Build (low-code to full-code tools), Scale (infrastructure for long-running agents), Govern (centralized identity and access control) and Optimize (testing and automated refinement).

Security & Oversight Models

While vendors increasingly agree on the need for lifecycle governance, they differ sharply in how control should work.

Laserfiche's AI Agents inherit the permissions of the initiating user. OpenAI takes more of a marketplace approach with Frontier, assigning each agent an employee ID with explicit access controls. Wrike embeds its AI Agents directly into collaborative work environments, treating them less as standalone systems and more as operational teammates integrated into day-to-day workflows.
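
The difference between the first two models is easiest to see side by side. A minimal Python sketch, with invented function names and data shapes, contrasting inherited permissions with agent-held identity:

```python
# A minimal sketch contrasting the two permission models described above;
# function names and data shapes are invented for illustration.

def allowed_inherited(action: str, user_perms: set[str]) -> bool:
    """Inherited model (as described for Laserfiche): the agent may do
    only what the initiating user may do."""
    return action in user_perms

def allowed_own_identity(action: str, agent_grants: set[str]) -> bool:
    """Own-identity model (as described for OpenAI's Frontier): the agent
    acts under access granted to it directly, regardless of who invoked it."""
    return action in agent_grants

# Same request, different outcomes depending on the model:
user_perms = {"read:docs"}
agent_grants = {"read:docs", "write:crm"}
print(allowed_inherited("write:crm", user_perms))       # False
print(allowed_own_identity("write:crm", agent_grants))  # True
```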

What's less clear is what happens when agents span multiple platforms with incompatible rules. Glean emphasizes open APIs, MCP interoperability and support for external models, but its approach still centers on a shared governance and context layer controlled through the Glean platform. Will enterprise AI converge around interoperable standards or will it continue to fragment across competing governance models?


About the Author
Siobhan Fagan

Siobhan Fagan is the editor in chief of Reworked and host of the Apex Award-winning Get Reworked podcast and Reworked's TV show, Three Dots.

Main image: Clément Dellandrea | Unsplash