
5 Questions Every Leader Should Ask Before Building AI Solutions

By Sarah Deane
AI isn’t the enemy — or the magic fix. Most failures come from leaders skipping the hard questions. Here are 5 that separate hype from real impact.

Despite the flashy — and at times, scary — headlines, AI isn't a distant, science-fiction threat to humanity, nor is it inherently a danger to the workforce. It is a powerful, general-purpose technology that is rapidly reshaping how people and organizations operate, make decisions and interact. As with previous technological shifts, AI's impact will depend on how we guide, govern and apply it.

The excitement and investment in AI haven't necessarily translated to meaningful business results. The challenge with AI initiatives isn't always the technology. More often than not, it’s a lack of clear strategy. Leaders who rush to adopt AI without clear goals risk accumulating tools that are technically impressive but irrelevant. AI engineers who focus solely on the thrill of creating new technologies may end up with tunnel vision: focusing more on what they can build than on what they should build to actually benefit people. And, in some cases, a visionary’s intentions may be grand and seem genuinely beneficial, yet without thoughtful consideration of potential fallout, even the best ideas can lead to unintended — and sometimes harmful — consequences.

Leaders must ask the right questions at every stage of an AI project to extract real value. Here are five critical questions every leader should consider, from a conversation I had with Daisy Grewal, Ph.D., social psychologist and director of Analytics Innovation and Automation, AI Strategy and Transformation at Korn Ferry.  

1. What Human Problem Are We Solving – and Is AI the Best Way to Solve It?

This first question may seem obvious, but it’s often overlooked. According to Grewal, the key is to identify a clear pain point and ensure it’s tied to tangible value creation or efficiency. “Plenty of AI projects are going on simply because leaders think they need to be doing ‘something with AI,’” she said.  

She encourages leaders to “stand above the crowd” by really thinking about what problem they are solving, reminding us that, “AI tools only matter if they help humans diagnose, decide, or act faster on real business problems.” 

Before adopting a technology, ask: Does this problem truly need AI, or could a simpler or different solution suffice, or even do the job better? By aligning AI with actual business needs rather than hype, organizations can avoid wasted effort and ensure resources are directed toward initiatives that matter.

2. How Are We Validating AI’s Output Is Accurate and Meaningful?

It’s widely known that AI models can hallucinate or produce inaccurate outputs. What’s less widely discussed is whether AI’s output is genuinely useful to humans. Grewal emphasizes the importance of creating, early in the process, a robust evaluation anchored to the problem you’re solving. “You need to determine whether your tool is generating factual, usable information that humans will find not just credible but valuable,” she said.

Validation goes beyond technical accuracy. A model might perform well on test data but fail to influence real-world decisions. Leaders must establish metrics that measure usefulness and relevance to the problem at hand, as well as correctness. Regular testing against these metrics ensures AI outputs drive actionable insights rather than misleading information.

3. What Is the Human Role, and Are We Equipped to Fulfill It?

AI is not a magic wand. Human oversight is crucial. Grewal warns that too often, users generate AI outputs without sufficient training or context, creating the risk of misuse or error. “Your human users need to deeply understand why and how AI should augment judgment, not replace it,” she explained, noting that although AI tools carry plenty of fine print urging users to check and validate results, we’re still being bombarded by AI slop.

Ensuring human-in-the-loop effectiveness means equipping users to review AI output, understand its limitations and apply it responsibly. This process reduces the risk of high-profile mistakes — like Deloitte’s challenges in Australia — and helps organizations leverage AI as a supportive tool. Leaders need to create a culture of informed AI use. "Users need to understand enough about how AI works to feel empowered and equipped to critique the output, and not just assume that AI is performing magic,” Grewal said.

As AI continues to accelerate, the demand for specialized skills — and for employees with strong uniquely human capabilities — grows alongside it. Even as organizations make workforce reductions in response to AI, IBM CEO Arvind Krishna highlights an important trend: these shifts are creating opportunities to reinvest human effort into areas where skills like critical thinking, creativity and interpersonal acumen are increasingly valuable.

The question, then, for leaders is: Is your workforce equipped to leverage its human edge? The behaviors and mental habits that drive Emotional Range, Ecosystem Thinking, Empathy and Energy Contagion are becoming central to sustained organizational success in an AI-driven world.

4. How Will We Measure Business Impact, Not Just Usage or Excitement?

It’s easy to get caught up in metrics like adoption rates, clicks or internal enthusiasm. But these vanity metrics don’t reflect whether AI is driving meaningful business outcomes. Grewal stresses the importance of impact measurement: “There are lots of impressive AI-driven tools being built these days. To justify investment, you should be able to prove that a tool actually changed a decision, freed capacity or drove revenue in a positive way.”

Leaders need to define KPIs linked to operational efficiency, employee experience, customer experience or financial performance. This ensures that AI projects are held accountable for tangible results, not just novelty. By tying metrics to real-world outcomes, organizations can prioritize initiatives that deliver measurable ROI and avoid being seduced by flashy but ineffective tools.

5. What’s Our Long-Term Care and Feeding Plan for AI?

AI isn’t a one-time deployment. It’s an evolving system that requires ongoing attention. Unlike typical SaaS applications, it demands maintenance and enhancements that many companies aren’t accustomed to. According to Grewal, many AI tools fail not because of flaws in design but due to neglect: “Most of the AI tools being built today will likely die of neglect, not malice.”

Sustainable AI requires fresh, high-quality data, periodic model retraining and integration into daily workflows, Grewal continued. The most successful AI systems are those that are “pushed” into daily operations rather than requiring users to seek them out. "Success will depend on making AI a living part of how work gets done, not a shiny detour,” she cautioned.

While embracing the promise and possibility, leaders must also plan for the maintenance, monitoring and adaptation of AI solutions to ensure they remain relevant and effective.

One powerful way to uncover upstream and downstream implications is to use an “And then what?” line of questioning. For example: your AI-driven solution does [X] — and then what? How does the impact matter? What happens to the people in the broader ecosystem? What policies or supporting structures are required? How does the impact and risk unfold over time: in 30 days, 90 days, a year, three years, five years? 

Make sure you have a representation of expertise in the room. Having the right perspectives present ensures that potential risks, opportunities and unintended consequences are thoroughly considered. Asking these questions — and involving the right experts — helps leaders anticipate ripple effects, design more resilient systems and ensure that innovation delivers sustainable value.

Another useful approach is role-playing. Even with the best intentions for your vision, it can be valuable to imagine how the same capability might be used by a “bad actor” within the ecosystem. What risks emerge if this technology or capability falls into the wrong hands? The exercise can reveal vulnerabilities and help you identify the mitigation strategies or action plans needed to address potential threats before they become real problems.

Conclusion

Building or deploying AI is not a technical exercise alone; it’s a strategic one. By asking these five questions, leaders can move beyond hype and focus on initiatives that solve human problems, create value and integrate into operations responsibly and sustainably:

  1. What human problem are we solving, and is AI the best way to solve it?
  2. How are we validating AI’s output is both accurate and meaningful?
  3. What is the human role, and are we equipped to fulfill it?
  4. How will we measure business impact, not just usage or excitement?
  5. What’s our long-term care and feeding plan for AI?

Answering these questions can transform AI from a buzzword into a tool that drives better decisions, enhances productivity and produces measurable business outcomes. Leaders who commit to this level of strategic thinking position themselves, and their organizations, to reap the real benefits of AI, not just the promise.


About the Author
Sarah Deane

Sarah Deane is the CEO and founder of MEvolution. As an expert in human energy and capacity, and an innovator working at the intersection of behavioral and cognitive science and AI, Sarah is focused on helping people and organizations relinquish their blockers, restore their energy, reclaim their mental capacity and redefine their potential.

Main image: Jon Tyson | unsplash