
Why Organizations Still Struggle With Deploying AI

September 26, 2022 Information Management
By Siobhan Fagan

A recent survey conducted by Fivetran and Vanson Bourne shows that enterprise AI capabilities are lagging expectations. The reason, the research found, is lack of trust in artificial intelligence among enterprise leaders.

According to the data, 87% of organizations consider AI vital to their business survival, but the same proportion also say they do not trust AI to make business decisions without input from humans. In fact, 90% of companies represented in the survey said they continue to rely on manual data processes despite AI capabilities.

“This study highlights significant gaps in efficient data movement and access across organizations,” Fivetran CEO George Fraser said in a statement, adding that the failure of AI deployments is likely more the result of technical issues than of a reluctance to turn to AI.

AI Remains a Work in Progress

Yan Yan, principal data scientist at Seattle-based Amperity, said evolving and emerging AI capabilities are likely to change this situation. AI has already revolutionized the way companies identify, understand and connect with their customers, and she sees these technologies rapidly maturing into even more powerful solutions in the near future.

“While early iterations of AI were slow to meet enterprise-scale ambitions, new advances in AI and machine learning have unlocked capabilities once thought impossible,” she said. 

Examples of that include new, more affordable tools that have emerged over the past couple of years to help companies take advantage of cloud computing and improve their integration capabilities. Thanks to AI, every organization now has the ability to manage massive datasets, ingesting raw data across every touch point.

But all this also requires the development or application of new techniques, Yan said. When generating AI-powered insights, it is advantageous to build the predictive modeling pipeline on top of a solid data foundation, where the data matching (or entity resolution) problem is solved.  
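To make the entity resolution problem Yan mentions concrete, the sketch below clusters customer records that refer to the same person despite surface differences. It is a minimal illustration using only the Python standard library; the field names, sample records and similarity threshold are all hypothetical, and production systems use far more sophisticated matching.

```python
from difflib import SequenceMatcher

def normalize(record):
    """Lowercase and strip whitespace so trivial variations don't block a match."""
    return {k: v.strip().lower() for k, v in record.items()}

def similarity(a, b):
    """Average string similarity across the fields the two records share."""
    fields = a.keys() & b.keys()
    return sum(SequenceMatcher(None, a[f], b[f]).ratio() for f in fields) / len(fields)

def resolve_entities(records, threshold=0.85):
    """Greedy entity resolution: put each record into the first existing
    cluster whose representative is similar enough, else start a new one."""
    clusters = []
    for rec in map(normalize, records):
        for cluster in clusters:
            if similarity(cluster[0], rec) >= threshold:
                cluster.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

customers = [
    {"name": "Jane Doe", "email": "jane@example.com"},
    {"name": "JANE DOE ", "email": "jane@example.com"},
    {"name": "John Smith", "email": "john@example.com"},
]
clusters = resolve_entities(customers)
print(len(clusters))  # 2: the two Jane Doe rows collapse into one cluster
```

A predictive pipeline built on top of clusters like these sees one customer per person, rather than one per raw touchpoint record.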

Related Article: AI at Work Still a Work in Progress

To Err Is Human

Interestingly, if humans don't trust artificial intelligence, it is often because of our own inherent tendency to introduce errors. When automating decision-making — or any process, for that matter — the machine learning model is only as good as the data powering it.

A survey carried out by Monte Carlo shows that the average organization experiences approximately 70 data incidents per year for every 1,000 tables in their environment. At that rate, it's perhaps not surprising that data teams can't catch every error that occurs, though those errors can have significant impact for organizations.

“How can an executive trust an AI black box when they see bad data in their daily reports, or that their key analytics dashboard has crashed for the second time this month?” said Lior Gavish, CTO and co-founder of San Francisco-based Monte Carlo.

Gavish says AI can generate valuable, counter-intuitive insights, but without trust, that strength turns into a weakness. In the machine learning field, there is a concept called "explainability," which is the degree to which humans can understand how the AI derived its decisions. 

“If a data science team creates more complex deep learning or neural network models, these can be 'black box' models with little transparency,” he said. 
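To illustrate the contrast Gavish describes, a deliberately transparent model can report not just its prediction but exactly how each input contributed to it, something a deep neural network cannot do directly. The feature names and weights below are hypothetical, chosen only to show the idea:

```python
# A transparent "model": a linear scorer whose weights are directly
# inspectable, unlike a black-box neural network. Features and weights
# here are hypothetical, for illustration only.
WEIGHTS = {"tenure_years": 0.4, "support_tickets": -0.7, "monthly_spend": 0.2}

def score(customer):
    """Return the prediction plus a per-feature breakdown of why."""
    contributions = {f: WEIGHTS[f] * customer[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score({"tenure_years": 3, "support_tickets": 2, "monthly_spend": 5})
print(total)                   # ≈ 0.8 (1.2 - 1.4 + 1.0)
print(why["support_tickets"])  # -1.4: the single biggest drag on the score
```

Techniques such as feature attribution aim to recover this kind of per-input breakdown for more complex models, which is what the "explainability" field is about.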

Related Article: Modeling Business With AI

Adapting AI

Eddy Chavarria, VP of sales engineering at Austin, Texas-based Striveworks, said that in order to develop AI solutions that organizations can trust to make automated decisions, those solutions have to be adaptable — quickly.  

If, for example, it takes a data science team weeks or months to get a solution built, tested and deployed, those timelines may not align with the real business needs, he said.

He also believes that enterprises still struggle transitioning their legacy data generation processes to modern frameworks on top of which AI solutions can live. Much of the data that would be used to help drive automated decision-making lives in places that are not immediately useful for AI deployments. 

“This means that data teams spend a lot of time on infrastructure and data engineering vs. building the AI tools themselves,” he said. “Many of the interim solutions to help move data from legacy systems to modern data stores are not scalable.” 

But adapting AI isn't that easy, because the process reaches beyond the purely technical: it is an organizational, operational and strategic issue as well, said Collin Mechler, practice lead at Domo. And underpinning all of that is data, which is siloed, buried, messy and full of holes. “It’s the unsexy stuff that has largely contributed to slow AI advancements,” Mechler said.

One of those challenges is that enterprises trying to deploy AI capabilities are unable to use traditional DevOps practices. Because the success or failure of an AI deployment is iterative — a process in which the design of a product or application is improved through repeated review and testing — it requires large quantities of good-quality data. This creates a dependency among the training data, the model and the inference data.

However, too often, traditional IT sees too much iteration as a problem and in conflict with its goals, Mechler said. So in order for AI to work, this conflict needs to be resolved.

Related Article: What AI Automation Can Bring to Your Organization

Gaining Alignment

Another challenge companies deploying AI capabilities often encounter is that while DevOps and AI processes both follow a series of steps that repeat throughout the development stages, AI development usually requires a different set of skills. It is also non-linear, which can be difficult for people working outside of AI to grasp.

Further, for an AI development process to work, Mechler advises teams to clearly identify the problem that AI is being deployed to solve. This means the goals of the AI effort and the business need to be aligned.

“Business, IT and analytic teams’ initiatives aren’t always the same,” Mechler said. “The individual systems and tools in place for each team can make it challenging to streamline processes. This disconnect between multiple team goals and initiatives can complicate collaboration and present a major bottleneck for AI capabilities.”

Companies that place value on deploying their AI capabilities need to invest in the required skill sets for the team, in addition to ensuring communication and collaboration across the enterprise so everyone is working toward the same objective. Conflicts need to be anticipated, and contingency plans laid out for both sides to work successfully.

