The State of Digital Assistants in the Workplace
Digital assistants, also called virtual assistants (VAs) or virtual employee assistants (VEAs), are rapidly becoming a ubiquitous part of the workplace. From scheduling meetings to sending emails, these AI-powered tools help employees automate tasks and increase productivity.
Gartner predicts that by 2025, 50% of knowledge workers will use virtual assistants daily, up from 2% in 2019. And a recent Goldman Sachs report estimated that generative AI, the technology behind popular offerings like ChatGPT, Bard and Bing, could expose the equivalent of 300 million full-time jobs to automation.
The future of digital assistants in the workplace looks promising, but how do companies use the tech to their advantage today?
The State of the Digital Assistant
Few companies have pushed full steam ahead with digital assistant tools and applications, according to JD Dillon, chief learning architect at Axonify and author of “The Modern Learning Ecosystem.” What early adopters have done is test the technology with narrow tools that support targeted programs.
"For example," he said, "they may use a chatbot to support a management development program, offering nudges and feedback via text.”
This area, he said, is about to accelerate, thanks to the rapid integration of AI within everyday work tools.
“AI is already embedded within most systems," Dillon said. "It’s moving front and center, shifting the way employees interact with technology. Companies must get ready. This means dedicating resources to assessing the AI landscape and proactively preparing their teams to adjust their workflows, mitigate risks and seize new opportunities.”
Related Article: How Generative AI May Fit Into Your Organization
Digital Assistants for Content Creation & Communication
James Walsh, director of solutions at Customer.io, said the most common use cases he sees for bots and assistants involve businesses looking to generate written content and communications for digital marketing channels.
Think email subject lines, personalized email copy, text messages and in-app notifications.
“Right now," he said, "AI offers considerable value when it comes to spinning up ideas, starting a paragraph from a blank page or iterating on an existing concept. This aspect alone can offer a lot of value for marketers, developers and other business stakeholders.”
Generative AI specifically, according to that Goldman Sachs report, stands out in its ability to generate novel, human-like outputs rather than regurgitate information. This means the technology can potentially complete creative endeavors the same way humans do — and not just with the written word but also with audio and video.
Digital Assistants for Learning & Development
Dillon said AI-backed digital assistants allow companies to support individual employee needs at the speed and scale of a modern enterprise.
“AI can help restore equity in workplace learning and development,” he explained. “Generative AI accelerates the content development process, allowing L&D to deliver solutions more quickly and address a greater number of performance challenges.”
Then, leadership can apply AI to personalize the learning experience for every employee, focusing development topics on individual performance gaps and career goals, he said.
AI can also help leaders connect the dots between learning programs and performance outcomes, “validating their impact and delivering a clear ROI for the business.”
Related Article: Are Digital Assistants the New Face of Search?
What to Know Before You Invest in Digital Assistants
Organizations can’t allow AI to become a black box of decision-making, said Dillon. Instead, he recommends first answering questions on the informed, strategic use of AI-enabled technology, particularly across these four areas:
Data Security and Privacy
First, Dillon said, how do these tools align with company and regional data security and privacy rules?
“Companies must be careful with proprietary information and avoid sharing it via unapproved technology.”
He recommends stakeholders partner with legal and IT teams to understand the terms of use for any approved AI-enabled technology and to "scrutinize their data storage and sharing practices.”
Information Sourcing With AI
Next, employees must be able to verify the sourcing of AI-generated information in a high-risk environment like the workplace.
“If an employee queries an AI tool, they must be confident the information provided is accurate and compliant. Companies cannot allow AI to hallucinate (aka lie with confidence) or violate workplace rules.”
This idea of AI hallucinations is something digital assistant users might already be aware of. Tools like ChatGPT and Bard, for example, can confidently provide incorrect or misleading information. Right now, users must comb through all outputs to check for accuracy.
Understanding How AI Is Trained
“Organizations must also maintain awareness of how AI models are trained and consider how bias may influence related decisions,” said Dillon.
Back in the 1980s, for example, a British medical school was found guilty of discrimination after it used a computer program to screen applicants for interview. The program was designed to match human admissions decisions with 90% to 95% accuracy, and in doing so it inherited the screeners' bias against women and applicants with non-European names.
Unfortunately, more than 30 years later, many of the same problems with bias persist in technology today. Understanding those biases and how they factor into AI training can help companies better determine where use cases are most appropriate and where oversight might be necessary.
Employee AI Training
Finally, companies need to decide on the skills required to effectively apply AI-enabled tools to the job and prioritize related training and support. Dillon said training should also include the risks associated with AI and all security-related policies.
A recent survey found that only 14% of companies train their employees on AI tools and platforms — although 49% of them say they need that training.
Related Article: Ready to Roll out Generative AI at Work? Use These Tips to Reduce Risk
Understanding the Limitations of Workplace Digital Assistants
Where AI tends to fall short, according to Walsh, is in its ability to understand the larger context of business goals and marketing channel best practices.
For example, he said, you can ask an AI assistant to write 10 email subject lines on a specific topic, including details like the campaign goals and content.
“But from there,” he said, “the AI likely wouldn’t be able to take into account the breadth of additional considerations a human marketer would.”
He posed questions like:
- Does the order of the words in this subject line ensure that it still makes sense (and remains impactful) when it’s clipped in the inboxes of mobile devices?
- Does each option align with the voice and tone of the brand?
- Do the variations represent content that would be meaningful to test against each other?
AI assistants can automate powerful personalized communications, but typically human decisions come into play when developing successful content, he said.
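To make Walsh's example concrete, here is a minimal sketch of that kind of request, written against the OpenAI Python SDK purely for illustration; the SDK choice, model name, campaign fields and prompt are assumptions rather than anything Walsh or Customer.io prescribes, and the output is treated as draft material for a human marketer to review, not copy that ships automatically.

```python
# Illustrative sketch only: the SDK choice, model name and campaign fields
# are assumptions, not a recommendation from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical campaign brief a marketer might supply
campaign = {
    "goal": "drive re-engagement with lapsed trial users",
    "audience": "B2B SaaS trial sign-ups, inactive for 30+ days",
    "content": "new onboarding checklist and a limited-time discount",
}

prompt = (
    "Write 10 email subject lines for this campaign.\n"
    f"Goal: {campaign['goal']}\n"
    f"Audience: {campaign['audience']}\n"
    f"Content: {campaign['content']}\n"
    "Keep each under 50 characters."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your organization approves
    messages=[{"role": "user", "content": prompt}],
)

# Drafts only: a human still checks truncation on mobile, brand voice
# and whether the variants are meaningful to test against each other.
print(response.choices[0].message.content)
```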
That caveat holds true for most digital assistant use cases: AI-backed tools that can confidently lie, mislead and show bias cannot be left to operate without human intervention.
The Future of Digital Assistants in the Workplace
Digital assistants are becoming more sophisticated and affordable, and experts predict their use in the workplace will grow.
While some concerns exist around privacy, security and employee training, the boost to productivity is too significant to pass up, especially for companies struggling with limited budgets, talent and resources.
As the use of digital assistants continues to grow, organizations must find new ways to integrate them into operations in a way that maximizes benefits while minimizing drawbacks.