TIPS #1: Is ChatGPT a Productivity Booster or Data Security Risk?

03.30.23 | Shane Shook | Blog Post

Welcome to the first edition of #ForgepointTIPS (Threat Intelligence Portfolio Spotlight), where we examine the latest cybersecurity trends and threats and provide actionable insights for industry decision-makers. Today we explore #ChatGPT: what’s not so hot from a risk perspective and how to safeguard your data. Helpful reading? Subscribe and tell a friend…

Issue: the rise of ChatGPT in the workplace.

ChatGPT has gained popularity for its productivity-enhancing capabilities. According to data from Cyberhaven Labs, as of March 21, 2023, 8.2% of employees have used ChatGPT at work since its launch, and 6.5% have pasted company data into the platform. However, companies like JP Morgan and Verizon have blocked access to ChatGPT over data security concerns.

Impact: the data security dilemma with ChatGPT.

When employees input confidential data (e.g. medical records, company strategies, or proprietary information such as intellectual property) into ChatGPT, it could be used as training data for the AI model. Amazon’s legal team issued an official warning to employees against sharing confidential data with ChatGPT after the company reportedly observed responses that mimicked internal company data. Meanwhile, Samsung recorded at least three incidents of employees pasting meeting notes and source code into ChatGPT, unknowingly leaking sensitive and proprietary information “into the wild”.

So how can organizations protect themselves? Unfortunately, traditional security tools struggle to track ChatGPT usage. Identifying and protecting data shared with ChatGPT is challenging because:

  1. Employees copy and paste data into the browser, making it difficult to track compared to file uploads.
  2. Confidential data often lacks recognizable patterns, making it hard for security tools to identify sensitive information.
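To see why the second point matters, consider a minimal sketch of a pattern-based data loss prevention (DLP) check, the approach many traditional tools take. The pattern set and function names here are illustrative assumptions, not any vendor's actual implementation. Structured identifiers like Social Security numbers match a regex; confidential strategy notes do not, so they sail through unflagged:

```python
import re

# Hypothetical pattern-based DLP check: flags data with recognizable
# formats, as a traditional security tool might.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list:
    """Return the names of the patterns matched in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

# A structured identifier is caught...
print(flag_sensitive("Customer SSN: 123-45-6789"))  # ['ssn']

# ...but confidential meeting notes have no telltale format,
# so a regex-based tool sees nothing wrong with this paste.
print(flag_sensitive("Q3 plan: acquire Acme Corp before earnings"))  # []
```

This is exactly the gap generative AI widens: most of what employees paste into a chat window looks like ordinary prose.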


Action: Balance productivity and data security.

Companies must find ways to harness the benefits of ChatGPT while minimizing data security risks. Addressing this challenge requires a proactive approach:

  1. Monitor and control access to ChatGPT.
  2. Educate employees about the risks of sharing sensitive information with the platform.
  3. Invest in advanced security tools that can track data movement and identify potential data misuse.
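As a concrete illustration of step 1, here is a sketch of the kind of domain check an egress proxy or browser extension might apply to deny requests to generative AI endpoints. The domain list and function are assumptions for illustration, not a specific product's configuration:

```python
from urllib.parse import urlparse

# Illustrative blocklist of generative-AI domains an organization
# might deny at its egress proxy; extend per internal policy.
BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def is_allowed(url: str) -> bool:
    """Return False if the URL's host (or any parent domain) is blocked."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and every parent domain against the blocklist.
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & BLOCKED_DOMAINS)

print(is_allowed("https://chat.openai.com/chat"))  # False
print(is_allowed("https://example.com/docs"))      # True
```

Blocking alone is blunt, which is why steps 2 and 3 (education and data-movement visibility) matter just as much as access control.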


By striking the right balance, organizations can leverage ChatGPT to boost productivity without compromising data security.

Kudos to Howard Ting and the Cyberhaven team for surfacing this critical issue for organizations across industries, and for developing a solution for generative AI tools like ChatGPT to prevent users from exposing confidential data. Learn more about Cyberhaven for AI here and see how your organization can put a stop to data leaks.

Interested in learning more about AI and risk management? Check out Cyberhaven’s inaugural CISO Series session. You can also book a live demo with the Cyberhaven team.

Thanks for reading our first edition! Please subscribe and share with a peer. Have feedback or a cyber threat or trend you’d like us to address? Get in touch.


***This blog was originally featured on our Forgepoint TIPS LinkedIn newsletter. Read the original post on LinkedIn here.***
