Feature

How Businesses Are Turning Generative AI Into Measurable Value

By David Barry
Successful GenAI rollouts share at least one thing in common: the initiative starts by identifying and aligning with a core business goal.

McKinsey predicts generative AI (GenAI) could contribute nearly $20 trillion to the global economy by 2030, yet RAND estimates over 80% of AI projects fail.

Why the disconnect?

The answer often lies in one critical failure — a lack of alignment between GenAI initiatives and core business goals. Without this alignment, promising projects quickly become costly experiments. But a growing number of experts and companies are showing a better path forward: one rooted in clear metrics, ROI-focused planning and cross-functional collaboration.

Why Strategy Must Come Before AI Tools

The foundation of any successful AI initiative is a problem-first approach, not a technology-first one, Flexera's chief architect Jesse Stockall told Reworked.

“AI projects, like all others, must have measurable metrics and clearly defined outcomes,” Stockall said. “Initiatives should begin with a business problem and define a desired outcome — not the other way around.”

He cautions against jumping straight into GenAI without first evaluating whether simpler, cost-effective methods — like traditional machine learning — might achieve the same result.

Stockall also recommends that organizations work with existing AI vendors, instead of developing everything in-house. This frees up internal teams to focus on what they do best: solving business challenges.

Success, he added, depends on defining the right metrics. These should fall into key categories:

  • Financial impact.
  • Operational efficiency.
  • Adoption and engagement.
  • Risk and compliance.
  • Customer experience.

Every potential AI project should be scored across multiple dimensions, including business impact, feasibility, stakeholder readiness and risk level. A high-impact use case that lacks technical readiness or stakeholder buy-in might consume more resources than it returns.
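A weighted scorecard is one simple way to put this multi-dimensional scoring into practice. The sketch below is purely illustrative — the dimensions follow the ones named above, but the weights, 1–5 ratings and project names are hypothetical, not drawn from Stockall's actual process.

```python
# Hypothetical weighted scorecard for candidate AI projects.
# Weights and ratings are illustrative assumptions, not a real methodology.

DIMENSIONS = {
    "business_impact": 0.35,
    "feasibility": 0.25,
    "stakeholder_readiness": 0.20,
    "risk_level": 0.20,  # rated so that a higher score means lower risk
}

def score_project(ratings: dict[str, float]) -> float:
    """Weighted average of 1-5 ratings across all dimensions."""
    return sum(weight * ratings[dim] for dim, weight in DIMENSIONS.items())

candidates = {
    "invoice triage bot": {  # high impact, but low readiness
        "business_impact": 5, "feasibility": 2,
        "stakeholder_readiness": 2, "risk_level": 3,
    },
    "support-ticket summarizer": {  # balanced across dimensions
        "business_impact": 4, "feasibility": 4,
        "stakeholder_readiness": 4, "risk_level": 4,
    },
}

ranked = sorted(candidates, key=lambda n: score_project(candidates[n]),
                reverse=True)
print(ranked[0])  # prints "support-ticket summarizer"
```

Note how the high-impact but low-readiness project (score 3.25) ranks below the balanced one (score 4.0) — exactly the trade-off Stockall describes, where impact alone doesn't justify a build.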

How Orcus Ties AI to Business Outcomes

Orcus founder and CEO Nic Adams goes further: If an AI model isn’t tied to a core business metric, it shouldn’t get built. “Success starts at the top,” Adams said. “If an AI initiative doesn’t improve operational efficiency, boost revenue, reduce costs or increase speed, it gets shut down.”

To stay focused, Adams recommends using financially grounded metrics like:

  • Net present value (NPV)
  • Margin lift
  • Payback period
  • Time-to-close
  • Customer resolution rates
  • Upsell success

These KPIs should be embedded into company OKRs and tracked with the same seriousness as revenue.

Adams also emphasizes choosing high-impact, scalable use cases — especially those involving high-volume, repetitive tasks with bottlenecks. He warns against falling for niche or trendy applications that lack real-world value or scalability.

“Most organizations will get stuck in AI governance,” Adams said. “The winners will be the ones who move fast, deliver outcomes and use tools strategically — not just follow the hype.”

Keeping AI Agents Grounded in Business Reality

AI projects can easily spiral out of control without a structured approach, said CTO and independent consultant Andy Winskill. To address this, he developed the AI Agent ROI Framework, a six-phase model designed to make sure generative AI deployments are directly linked to measurable business value.

His framework emphasizes discipline from the outset. Each project begins with a problem canvas, an ROI hypothesis and clear alignment with business stakeholders. If it can’t be mapped to tangible outcomes, such as cost reduction, time savings, accuracy improvements or revenue uplift, it doesn’t move forward. Each use case is then scored before it advances.

From there, AI agents are launched into real-world workflows — not as replacements for legacy systems, but as augmenting layers of intelligence. Human-in-the-loop testing ensures adoption, while observability, cost controls and error handling keep deployments safe and scalable.

Strategy Over Trends: How Revotech Makes GenAI Work

Revotech Networks treats GenAI as a strategic business lever, not a flashy trend. Real results begin with strong IT leadership and clear planning, Revotech president John Yensen told Reworked.

“Ensuring GenAI actually drives business value starts with embedding it into strategic planning,” Yensen said.

His team focuses on measurable KPIs, such as:

  • Support ticket resolution time.
  • Customer satisfaction scores.
  • Time to deploy.

If AI tools don’t move the needle, they’re either iterated or sunsetted.

Revotech identifies high-impact use cases collaboratively across teams and scores each based on AI ROI potential, ease of integration and data availability.


Importantly, pilots run in cloud-native sandboxes to avoid clashing with legacy systems. Only after a model has proven its value is it integrated into core business platforms.

To manage risk, an AI steering committee oversees every deployment. This committee monitors for data privacy, algorithmic bias and regulatory compliance. Bi-weekly sprints ensure both technical and business teams remain aligned.

The Future of GenAI Is Business-First

One thing is clear across all of the examples above: successful generative AI implementation starts and ends with business alignment.

The winners in this space won’t be the companies with the most advanced models or the biggest budgets. They’ll be the ones who:

  • Tie every AI initiative to a measurable business goal.
  • Score use cases for impact and feasibility.
  • Align cross-functional teams from day one.
  • Focus on speed, governance and strategic adoption.

At a time when AI headlines change faster than business results, the key to long-term value isn’t chasing trends — it’s building with purpose.


About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Yi Liu | Unsplash