AI generated image of a stressed out futuristic looking worker who has spilled his coffee on the console of his spaceship desk

The Trap of AI Experimentation and How to Move Forward

By Angelina Samoilova
The reasons AI efforts stall aren't mysterious. They’re banal: no shared documentation. Siloed tools. Scattered knowledge.

Every week, I meet with companies to learn about their AI plans. Those teams fall into three categories. The first group is skeptical: smart people who don’t want to break what already works. The second group is dabbling: pilots, proof-of-concepts, a few agents here and there, nothing stitched together. The third, and newest, group is the “AI everything” crowd: good intentions, lots of demos, lots of vendor calls, chasing 10 things at once. Different starting lines, same end state: noise, not lift.

Here’s the uncomfortable bit: the reasons these efforts stall are not mysterious. They’re banal: no shared documentation. Siloed tools. Scattered knowledge. Endless time wasted chasing answers. We keep trying to summit Everest without training, without base camps, without a weather check. Then we blame the rope.

In the tech world, we strive to be solutions-oriented, and that’s absolutely fantastic. Over the last few years, “solutions-oriented” has turned into a habit of purchasing point solutions for everything. We’ve all done it, with the best intentions. Now, let’s take a beat. Before unleashing AI on your workflows like a snowstorm, let’s see how we can clear the path instead.

Start With the Stack You Already Have

One of the first traps I see is the rush to add yet another shiny tool. A wrapper with a nice demo. The temptation of a point solution that promises to automate X or Y. You buy it, try to Frankenstein it into your stack, and soon it under-delivers. Or worse: the vendor is a small team with a big idea but no resources to be a reliable partner. The result is another disconnected tool, another login, another line item on the budget.

Instead, start by talking to your current providers. Set up a call with your customer success manager. Ask them what their AI roadmap looks like. What’s coming in the next quarter? What’s already there that your team just isn’t using? You might be surprised at how much is available, sometimes even included in what you already pay for.

That’s the first lever: squeeze value from your stack before adding new weight. If nothing compelling shows up, then — and only then — look outside.

Experiment With Precision, Not Noise

When considering new tools, resist the urge to “try everything.” Be specific and precise about the use cases you want to test. If your team can’t clearly articulate the pain points, then it’s not a pain worth solving yet. Don’t experiment just to experiment. Anchor it to something acute, where the before-and-after impact can be felt.

And when you do trial, keep it tight. Long trials are almost always a trap. A two-month trial sounds generous but usually leads to drift, half-engagement and no clean results. Instead, run a one- to two-week trial with a small, committed tester group. Four to 10 people is enough, depending on the org. That size keeps feedback manageable and sharp.

Structure the trial: survey before, kickoff meeting, mid-point check-in, live channel for questions, post-survey. Collect feedback against pre-defined criteria. Look at the quantifiable impact, not just opinions. Then decide: does it earn a place in the stack or not?

Encourage Experimentation — With Boundaries

Teams should feel encouraged to bring forward tools they want to test. But experimentation without guardrails turns into tool sprawl fast. One way to balance this: let individuals trial a tool on a single license for one month. After that, if they want to keep it, require them to build a business case. Show the value. Outline the use. Propose how it would fit into existing workflows.

This way you get curiosity without chaos. Teams know they can explore, but there’s also accountability to prevent every new app from becoming a permanent distraction.

Foundations Matter More Than Features

Whether you’re skeptical, dabbling or “AI everything,” the failures usually trace back to the same foundations. Leaders set bold intentions. Teams spin up pilots. There’s a flashy demo. Two weeks later, nobody knows where the prompt lives, permissions are unclear, and the knowledge is stuck in someone’s head. Momentum dies. Another pilot starts. Rinse, repeat.

I recommend taking a different approach: pause new tool conversations and focus on foundational knowledge. AI will continue to generate nonsense if you have no information architecture it can navigate, no blueprints to define scope and no templates that show what good looks like. It might sound boring, but it’s essential. Start by asking some basic questions: How do we create knowledge? How does information move? Where does it live? Who owns it? How easy is it for a new person to find, understand and reuse it? Not in theory, but on a messy Tuesday morning when half the team is remote, a customer is waiting and Slack is on fire.

Try this exercise: pick a simple question like “What’s our pricing for X?” Map the actual steps someone takes to get the answer. Count the pings, the waiting, the context switching. Then try a more complex flow: an RFP, a security review, a post-mortem. Watch the blockers multiply. At some point, the person stops working and starts chasing. That’s the truth of your workflows.

From there, build foundations. Not glamorous, but compounding:

  1. Documentation as a product, not an afterthought. Write the “how,” not just the “what.” Default to links, not files. Make it easy to propose edits. If it’s painful to update, it will rot.
  2. Tool consolidation with intent. Fewer systems, clearer ownership. Decide the system of record for notes, decisions and code. Say it out loud. Enforce it.
  3. Knowledge hygiene as a ritual. Templates, naming, review cadence, pruning. Assign a rotating “librarian.” Small habits, huge payoff.
  4. Access by default, exceptions by policy. Most delays hide in permissions. If someone has to ask five people for a doc, velocity is capped.

Only once these foundations exist does AI start to compound. You can see a clear path to automating otherwise tedious and laborious tasks like RFP responses, legal clause comparison, proposal generation and PRD drafting, all built on an information architecture and solid templates that give AI guardrails and standards. Without them, models are just fancy indexes of chaos. They produce outputs that look impressive but collapse in real use. That is the difference between a few cool demos and reliable outcomes that stand up on a busy Tuesday.

Plays that work in practice:

  • For skeptics, frame AI as search plus memory before automation. Start by making existing knowledge queryable. Use AI to tag, summarize, and surface duplicates. Quiet wins beat loud pilots. The goal here is confidence: if people can find, trust, and reuse what already exists, they will be far more open to the next step.
  • For dabblers, pick one workflow and make it boringly great. Example: security questionnaires. Define the handover template, the prompt, and the DRI. Measure cycle time before and after. Socialize, then repeat. When one team sees a full, end-to-end improvement, others will borrow the pattern, and you will not need to sell it twice.
  • For the “AI everything” crowd, declare a moratorium week. Freeze experiments. Catalogue them. Score on impact, adoption, and cost. Kill half. Assign owners and SLAs to what is left. This creates focus, cuts noise, and turns scattered trials into a small number of well-run services that the organization can rely on.

Leadership, Pace, Everest

Leadership is not about picking the right model. It’s about slowing down, naming the mess and setting standards. Leaders who rebuild the base create conditions where AI compounds. Leaders who chase shiny objects create unpaid volunteer work — and burn their teams.

The pace doesn’t need to be dramatic. One foundation fix per week. One workflow end-to-end per quarter. One new ritual per team. Boring, visible progress.


And yes, the Everest metaphor holds. You don’t summit on day one. You train, acclimatize, learn the gear, respect the weather and build trust. AI at work is the same. Some days will feel like whiteout. That’s fine. If the base camps are strong, you don’t get lost.

The trap is chasing every experiment. The way forward is building a base where experiments turn into durable, compounding wins.



About the Author
Angelina Samoilova

Angelina Samoilova is a tech sales executive and writer focused on AI at work, digital workplace strategy, and knowledge management. At Notion, she works with global teams to rethink how they consolidate tools, document knowledge, and scale AI initiatives without the hype.

Main image: Generated by Angelina Samoilova using nanobanana