
It's Too Late to Stop Bring Your Own AI. So Get Your BYOAI Strategy In Place

By David Barry
With workers already bringing external generative AI solutions into the workplace, managers need to develop Bring Your Own AI (BYOAI) strategies.

It seems like only yesterday when organizations debated the pros and cons of allowing employees to use their own mobile devices at work. The biggest — and completely justifiable — concern was the security of allowing work information to flow through these unregulated devices.

A 2013 article on our sister publication CMSWire discussed the threat these devices posed to enterprise data and what it described as the fracturing of the enterprise brain. “Simply put,” the article reads, “the fracturing of the enterprise brain refers to the process where so many employees are using so many personal devices that they are fragmenting enterprise data and content and creating what are, in effect, personal content silos.”

Here we are, 11 years later, with all kinds of new technologies in the workplace, and we're still trying to solve the same problems.

This time, however, while the fundamental issue is still the same, the technologies in question are a lot more powerful and organizational leaders are understandably worried. The main problem now is generative AI.

Generative AI Challenges

A recent survey of 750 digital workplace leaders around the world found that every organization was experiencing challenges when implementing artificial intelligence, with data quality issues topping the list. It also found that 79% were considering investing in licensed AI such as Copilot for Microsoft 365, yet fewer than half of organizations are confident they can use AI safely today.

Fewer than half of respondents have an AI Acceptable Use Policy (AUP), a set of guidelines and principles for employees and the organization to follow when using generative AI capabilities. That's despite widespread use of publicly available generative AI tools: 65% of organizations use ChatGPT today and 40% use Google Gemini.

Forrester raised the issue of bring-your-own-AI in September 2023, when it found more than a quarter of global AI decision-makers indicating that anywhere from 51% to 75% of their employees would use generative AI technology by the end of 2024. “Whether it’s generative AI tools like ChatGPT, AI-infused software, or AI-creation tools, employees are already using consumer AI services that their businesses don’t own," the statement read.

Related Article: Generative AI in the Workplace Is Inevitable. Planning for It Should Be Too

Weighing the AI Tradeoffs

As we deploy AI systems more broadly and deeply within our organizational networks, organizations will need to recognize and navigate a series of 'tradeoffs' to convert tech capability to competitive advantage, said Paul McDonagh-Smith, senior lecturer of IT and executive education with MIT Sloan School of Management.

The tradeoffs include:

  • performance vs. transparency
  • data privacy vs. explainability
  • bias vs. fairness

The last of these occurs when AI systems are trained on datasets not representative of the broader population, leading to unfair decisions that favor certain groups of people over others. The practice of BYOAI in the workplace brings a further set of tradeoffs between potential advantages and disadvantages.

While bringing generative AI tools such as ChatGPT, DALL-E or Midjourney into an organization is an opportunity for employees to experiment early and fine-tune capabilities often first explored outside of the company, there is a flip side.

Once again it's a trade-off: organizations need to weigh any opportunities for enhanced productivity against risks such as data breaches and loss of control of commercially sensitive information. "These risks are compounded by difficulties related to ensuring all third-party AI tools comply with rigorous security standards and data protection laws for the domains and geographies businesses operate in,” McDonagh-Smith said.

Inconsistencies in output quality and style can also occur when a variety of AI tools are introduced, potentially undermining the uniformity of products, services or customer experience. This variability can impact a company's brand consistency and the interoperability of its work and operations across functions such as sales, marketing, finance and legal.

BYOAI also poses management challenges, as overseeing such AI tools complicates data governance and compliance monitoring.

“Decisions regarding BYOAI introduce trade-offs between fostering innovation and maintaining security and control,” he added. “Organizations considering this approach will benefit from carefully weighing up the potential for increased productivity and cost savings against the significant risks of security breaches and management challenges. Developing robust policies and frameworks for BYOAI is essential.”

Related Article: Are You Giving Employees Guidelines on AI Use? You Should Be

Another Potential Risk of BYOAI: Open Source AI

The trade-off math gets even more complicated when you introduce open source tools to the mix, according to Dana Simberkoff, chief risk, privacy and information security officer at AvePoint. "As more organizations authorize the use of AI tools, employees are also looking towards open-source tools to power productivity — and many are still unaware of the security risks this can pose to their organization," she said.

Stricter guardrails must be established to guide employees' use of AI tools, she said. These guardrails should clearly outline which tools pose higher risk, which are safest to use, and what kind of information can (and can’t) be shared with open-source AI. Data governance, she added, should be at the forefront for any organization using AI technology.
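The guardrails Simberkoff describes — a risk tier per tool and rules about what data each tier may receive — can be made concrete in code. The sketch below is purely illustrative: the tool names, risk tiers and data classifications are hypothetical examples, not anything from the article or a real product.

```python
# A minimal sketch of encoding BYOAI guardrails for automated checks.
# All tool names, tiers and data classes here are hypothetical.

# Risk tier per tool: "approved" (licensed, vetted), "restricted"
# (public consumer tools), "blocked" (unvetted open-source endpoints).
TOOL_RISK = {
    "copilot_m365": "approved",
    "chatgpt": "restricted",
    "unvetted_oss_model": "blocked",
}

# Data classifications each tier is permitted to receive.
ALLOWED_DATA = {
    "approved": {"public", "internal"},
    "restricted": {"public"},
    "blocked": set(),
}

def may_share(tool: str, data_class: str) -> bool:
    """Return True if policy permits sending this class of data to the tool."""
    tier = TOOL_RISK.get(tool, "blocked")  # unknown tools default to blocked
    return data_class in ALLOWED_DATA[tier]
```

The key design choice is the default: a tool not explicitly reviewed falls into the "blocked" tier, so new AI services that appear "seemingly daily" are denied until someone assesses them.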

“Organizations that do not provide use policies and comprehensive AI training — educating employees on developing threats and scams — are directly exposing themselves to a potential data breach,” she said. “Especially as new AI tools are created seemingly daily, keeping employees constantly informed about AI evolution and risk should be an organization’s first line of defense in preventing cyberattacks.”

Related Article: Public Cloud Security Questions Your Workplace Is Probably Ignoring

The AI Horse Is Out of the Stable

According to the Microsoft and LinkedIn 2024 Work Trend Index annual report, an estimated 75% of knowledge workers use AI today — with 78% of that cohort bringing their own AI tools to work. So it's no longer a question of if an organization should allow BYOAI, but a question of how leaders can help employees embrace AI while still protecting their data and networks, Russ Whitman, chief strategy officer for Launch Consulting, told Reworked.


He pointed out that much like other digital solutions that “exist” inside the enterprise, access to AI is easy with any mobile phone or browser, and the free price point of so many tools means they're being used at all levels of the organization.

“While security is critical, organizations must work diligently to build awareness around the risks of corporate data and confidential information being leaked,” he said. “At the same time, I think companies need to lean into the use of artificial intelligence and not try to prohibit its use as this effort is likely to fail.”

Whitman pointed to numerous studies and anecdotal evidence suggesting AI increases productivity for employees across all levels and generations, with the least skilled often benefiting the most. On that basis, he argued, AI can be a significant enabler.

The risk for organizations that fail to embrace AI is that they will face “shadow” AI use by employees who are hesitant to disclose their usage, he continued. As a result, these companies miss the exponential gains achievable through innovative AI applications.

While organizations grapple with AI tools, employee access, cost management, security risks and policies, they often overlook one critical factor: learning how to use AI.

“AI is not your normal digital tool, it is unlike any we have experienced before it. There is an unlimited number of potential use cases, so it is important to skill up on how to use it. The best way is to experiment,” Whitman said.

Most importantly, he added, leaders need to bring BYOAI to the table, too. Execs need to learn the toolset as much as, if not more than, their employees. AI is not a movement they should leave to the tech team to figure out — company leaders need to lean in and develop the capability of learning how to lead an AI-powered organization.

About the Author
David Barry

David is a Europe-based journalist with 35 years' experience who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Ani Adigyozalyan | unsplash