
The Risks of Sharing Business Data with ChatGPT

Whether you’re a casual or committed user of ChatGPT, you’ve no doubt been impressed by the productivity-boosting capabilities of the AI chatbot. From generating ideas to writing up emails or creating content, it’s a massive time saver that more and more people are relying on in the workplace.

But… there’s always a but… you need to be extremely careful about the kind of information you share with ChatGPT, especially business data. There are potential security and privacy risks that could cause massive headaches for your business, as they already have for many others.

To understand these risks, you need to know how ChatGPT uses your data.

How ChatGPT Collects and Uses Data

When you input your prompts into OpenAI’s ChatGPT, your conversations are stored and retained. This means any information entered is sitting there on OpenAI’s servers. That’s a risk in itself – but more concerning is that OpenAI’s policies explain that it’s allowed to use interactions with users for training purposes and service improvements. Information you put into the interface is accessible by OpenAI systems and could be used to improve its model.

You don’t want your company information or your personal information being used that way. There is a small toggle in the settings of your ChatGPT account, called ‘Improve the model for everyone’. It gives you the option to:

“Allow your content to be used to train our models, which makes ChatGPT better for you and everyone who uses it.”

Turning that off should improve your privacy protection. The fact remains, though, that anything you input into ChatGPT is stored on OpenAI’s systems for a period of time. If it’s information you don’t want anybody else to access, keep it off ChatGPT.

What Are the Risks?

All businesses and individuals need to understand they’re sharing data with an external entity, and that data is subject to security vulnerabilities. Let’s look at some of the associated risks of having your data sitting on OpenAI servers.

Hackers Infiltrating the Site

No system is immune to cyber-attacks, and all that juicy data sitting on the OpenAI servers will draw hackers like moths to a flame. They love targeting databases with vast amounts of information, especially if there’s a chance sensitive data is stored there.

If a hacker does infiltrate OpenAI, any data you’ve inputted into the system will be at risk. It’s a massive concern if you’ve shared proprietary information, intellectual property, or other sensitive details on there.

Your Data Stored on Their Servers

As we said above, data from your interactions with ChatGPT is stored in OpenAI’s infrastructure and can be used to inform the training of the AI. In theory, that means fragments of your highly sensitive business data could surface in responses to other users, just because you entered it into the AI chatbot.

Third Party Linkages to the Data

Like any organisation, OpenAI is required to meet compliance standards and legal requirements when it comes to storing your data. But what about the third-party add-ons, bots and vendors they use in delivering their service? ChatGPT has an API that other platforms can use, and it’s possible these interfaces have security vulnerabilities of their own. Suddenly the boundaries around what happens to your data become murky, and this is something businesses should keep in mind when using this AI in the workplace.

Copyright Issues

Copyright has been a concerning topic with ChatGPT since AI came to prominence a couple of years ago. Where did it get all that data it responds to prompts with? It’s been scraped from countless sources off the internet, with no acknowledgement or agreement reached with the content owners or creators. That content is used to inform and shape ChatGPT’s answers, just as any information you input into the chatbot will be. If you’re sharing unique business ideas or proprietary content, it could weaken your copyright or intellectual property claims.

What Not to Share with ChatGPT

With those risks in mind, let’s look at the types of information you should be wary of inputting into ChatGPT.

Personally Identifiable Information

The vast majority of people are extremely careful about sharing any information that could be used to identify them. You should not be sharing names, addresses, phone numbers or ID numbers with ChatGPT. Inputting these is a breach of privacy that can lead to identity theft.

Finance and Banking Information

Keep your finances away from ChatGPT and other AI. We’re talking banking details, financial statements, account numbers, or transaction histories – they’re all especially sensitive and can be misused by unauthorised parties.

Passwords and Login Credentials

You can’t be too password savvy these days, with 81% of cyber breaches caused by weak, reused or stolen passwords. We need to have them on serious lockdown, and we shouldn’t be sharing them with anybody or any third-party tool, especially ChatGPT or other AI bots.

Private or Confidential Information

You don’t want to expose the information that gives your business a competitive advantage, nor do you want your proprietary strategies to be made public. Keep business-specific information such as contracts, processes and plans off ChatGPT if you want them to remain confidential.

Intellectual Property

Intellectual property is valuable to your business and should remain the property of your organisation. If you input trade secrets, inventions or original content into an external AI system, you’re compromising your control of that information.

Contact the Data Security Experts

Modern businesses must be invested in and engaged with data security – the risks and implications of a breach are too far-reaching not to be. You want protocols in place to help protect sensitive information and ensure data is managed responsibly. Your approach and use of AI chatbots like ChatGPT should be included in these protocols.

All interactions with external AI systems should be treated as inherently risky, and the business-related data shared with them should be limited. Strict data security practices involving data anonymisation and encryption should also be adopted.
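To make the anonymisation idea concrete, here’s a minimal sketch of how a business might strip likely PII from a prompt before it ever leaves internal systems. The patterns and placeholder labels are hypothetical examples for illustration; a real deployment would use a dedicated anonymisation tool with far more robust detection.

```python
import re

# Hypothetical regex patterns for common PII types. These are
# illustrative only and will not catch every format in real data.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Email jane.doe@example.com or call +61 400 123 456 about the invoice."
print(redact(prompt))
# → Email [REDACTED-EMAIL] or call [REDACTED-PHONE] about the invoice.
```

The key design point is that redaction happens on your side, before any network call: the external AI only ever sees the placeholders, so even if the conversation is stored or used for training, the sensitive values were never shared.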

At Smile IT, we work with businesses to implement data security strategies that keep their information safe. If you want to talk about the cyber risks you might already be exposed to, or need to fortify your cyber defences, we’re here to help. Get in touch and let’s secure your business data and give you peace of mind!

Peter Drummond

When he’s not writing tech articles or turning IT startups into established and consistent managed service providers, Peter Drummond can be found kitesurfing on the Gold Coast or hanging out with his family!
