A glass door with the word "PUSH" visible on the far side; from our vantage point it reads "HSUP."
Feature

How Generative AI Is Pushing the Boundaries of User Interfaces

3 minute read
By Mark Feffer
Voice interfaces, avatars and more are changing the way we interact with our computers. This has multiple implications for the workplace.

User interfaces are becoming more sophisticated as technology evolves to handle more and different types of work. We've come a long way from the time when computers were glorified word processors or adding machines.

As part of that evolution, user interface (UI) design has moved on from its reliance on metaphors, like deleting files by dragging them to a trash bin (there are only so many ways to depict a trash can). This is in part because UIs have greater loads to bear as more advanced capabilities are introduced.

The introduction of generative AI and large language models (LLMs) has pushed users to interact with their technology in new ways, making the menu-driven approach of yesteryear feel clunky. UIs now support multimodal interactions, where the back-and-forth can move from text-based instructions to images, audio and video. The transition has implications for workplace training, productivity and even our definition of what an employee is.

‘How Can I Help You?’

Voice interfaces are potentially the most visible sign of this new human-computer interaction. Through 2029, the market for voice interfaces will grow at a CAGR of 22.6% to $69 billion, according to the Business Research Company. The firm expects advancements in natural language processing, multimodal interfaces, voice authentication and voice-enabled screens during those years.
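To put that forecast in perspective, the standard compound annual growth rate (CAGR) formula can be run backward to estimate the implied size of the market today. A minimal sketch, assuming a four-year horizon (2025 through 2029); the source does not state the base year, so the result is illustrative only:

```python
def implied_base(value_end: float, cagr: float, years: int) -> float:
    """Work backward from a forecast end value using the CAGR identity:
    value_end = value_base * (1 + cagr) ** years."""
    return value_end / (1 + cagr) ** years

# $69B endpoint at a 22.6% CAGR over an assumed 4-year horizon
base = implied_base(69.0, 0.226, 4)
print(f"Implied current market size: ${base:.1f}B")  # roughly $30.5B
```

Under that assumed horizon, the forecast implies a market of roughly $30 billion today, more than doubling by 2029.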

At first derided as mistake-prone curiosities, voice interfaces have been widely adopted in both the business and consumer markets. About 82% of the 400 companies surveyed by San Francisco-based voice AI platform provider Deepgram use some kind of voice solution. More than two-thirds (67%) use it to improve employee productivity, while 45% use it to increase operational efficiency. Incorporating voice capabilities into HR-related chores like payroll and leave, employee management, claims management and performance management has helped increase employee satisfaction for about 32% of businesses, according to Capgemini.

Analysts see companies across a range of industries adopting voice interfaces. “The key challenge isn’t technological capability — the AI can already handle many tasks effectively,” Nikola Mrkšić, CEO of London-based AI chatbot company PolyAI, told me. “Instead, it’s about companies building the right processes and technical infrastructure to support this transition.”

When Avatars Become Employees

While today’s AI user interfaces are weighted toward chatbots, a small number of companies believe avatars offer as much potential, or more, as they become faster, more secure and more knowledgeable.

In Austin, AI UI developer UneeQ launched a platform to improve employee training and service delivery through avatars that, according to the company, “feel alive.” The platform, called UneeQ 2.0, is designed to bridge the gap between traditional AI and what the company sees as a growing demand for emotionally intelligent, realistic interactions.

UneeQ combines contextual knowledge, customer data, private and public large language models, natural language processing and multilingual capabilities. Taken together, these capabilities help drive customer loyalty and operational efficiency, according to the company. In addition, its avatars can work across training, sales and customer service, and offer improved, natural-feeling interactions with technology that understands context, emotions and complex queries.

Gartner predicts that by 2028, 45% of organizations with a headcount greater than 500 employees will use “employee AI avatars” to expand the capacity of their workforce. (Those avatars are typically defined as AI-based applications that look and sound similar to real workers.)

That’s not meant to imply that people are being removed from all aspects of business operations. "Many organizations still rely on human judgment for complex scenarios, and the shift requires careful consideration when codifying these decision-making processes into automated systems," PolyAI’s Mrkšić continued.

As AI grows more complex and users expect it to take on more demanding tasks, the comprehension and flexibility of UIs become more critical. Ultimately, the solution may lie in an interface that minimizes the use of visuals in favor of plain spoken language. That, in turn, can ease training and speed adoption — because if people can speak, they’ll be able to interact that much more easily with AI.



About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about human resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Jean Bach | unsplash