What managers should know about the secret threat of employees using ‘shadow AI’

CEOs and leaders are at risk of overlooking a form of the technology that’s been slowly creeping in through the back door.


By Steve Salvin

ChatGPT became the poster child for generative AI earlier this year. From writing up business plans to explaining complex topics in layman’s terms, ChatGPT has been drafted to help with just about anything and everything. And companies small and large have been scrambling to explore and reap the benefits of generative AI ever since.

But as this new chapter of AI innovation progresses at a dizzying pace, CEOs and leaders are at risk of overlooking a form of the technology that’s been slowly creeping in through the back door: shadow AI. 

Shadow AI is dangerously overlooked

Put simply, shadow AI is when staff bolt AI tools onto their work systems to make life easier, unbeknown to management. This quest for efficiency is, in most cases, well intentioned, but it’s opening companies up to a new realm of cybersecurity and data privacy issues. 

Staff typically embrace shadow AI to improve efficiency and productivity, particularly when navigating monotonous tasks or laborious processes. That might mean asking AI to scan hundreds of PowerPoint decks for key information, or to synthesize the key points from meeting minutes.

As a rule, employees aren’t purposefully making their organization vulnerable. Quite the opposite. They’re simply streamlining tasks so they can tick more off their to-do list. But with more than one million U.K. adults having already used generative AI at work, the danger is that more and more workers will use models that haven’t been authorized for safe use by their employer, jeopardizing data security in the process.

Two major risks

The risk of shadow AI is twofold. 

First, employees may feed such tools sensitive company information, or leave company information open to be scraped while the technology runs in the background. For example, an employee using ChatGPT or Google Bard to boost their productivity or clarify information could be inputting sensitive or confidential company data in the process. Sharing data is not always an issue in itself; companies often entrust their information to third-party tools and service providers. But issues can occur when the tool in question and its data-handling policies haven’t been assessed and approved by the business.

ABOUT THE AUTHOR

Steve Salvin is the founder and CEO of Aiimi.

