Copilot and Privacy – How Secure Is This AI?

AI is a productivity booster, workflow streamliner and time saver – while at the same time being a conduit for robots to steal all our data and take over the world, our jobs and our lives!

Or something like that. It’s hard to think of another new technology that has caused this much angst around the world. We get it – it’s a fear of the unknown, an uncertainty about where it’s all heading. It’s good to be sceptical, and it’s also good to arm yourself with knowledge about any new tools you bring into your business, including AI.

One of the common worries businesses have with AI tools such as Microsoft Copilot and ChatGPT is privacy, with concerns raised over where prompt information goes and what happens to any company data that’s accessed.

We’ve been getting a lot of use out of Copilot in the Smile IT office lately, so we’ve put some research into how it uses company data, the privacy protections it has in place and what we should be doing to keep our data safe and secure.

Today, we’re going to share that with you to help you get your business AI-ready. Let’s start with how Copilot works.

How Does Copilot Work?

If you use the Microsoft 365 stack in your business, you have access to the AI chatbot Copilot. It becomes your assistant across those apps, there to answer your questions, guide you through processes and complete tasks for you. Copilot can write a document in Word, drum up emails in Outlook and create presentations in PowerPoint. It can summarise Teams meetings, bring you up to speed on a week of chats you missed, and make your morning cup of coffee for you. (Okay, maybe not the last one, but you get the picture!)

Copilot works by combining your available organisational data (surfaced through Microsoft Graph) with large language models (LLMs) to generate answers to your prompts. Microsoft is a major investor in OpenAI, the company behind ChatGPT, so the LLM used to build responses to your prompts is GPT-4 – although it’s provided through the Azure OpenAI Service rather than OpenAI’s publicly available service.
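To make that a little more concrete, here’s a rough Python sketch of the general ‘grounding’ pattern that Copilot-style assistants follow: fetch some organisational content, pass it along with your prompt to a GPT-4 model hosted in the Azure OpenAI Service, and return the answer. This is an illustrative simplification rather than Copilot’s actual implementation – the endpoint, deployment name and fetch_org_context helper are all placeholders.

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values -- substitute your own Azure OpenAI resource details.
client = AzureOpenAI(
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2024-02-01",
    azure_endpoint="https://your-resource.openai.azure.com",
)

def fetch_org_context(user_prompt: str) -> str:
    """Hypothetical grounding step. In Copilot this is handled for you,
    drawing only on content the signed-in user is already allowed to see.
    Here it simply returns a canned snippet of 'company data'."""
    return "Q3 sales summary: revenue up 12% on Q2, driven by managed services."

user_prompt = "Summarise our Q3 sales performance in two sentences."
grounding = fetch_org_context(user_prompt)

# The organisational context travels with the request at prompt time;
# it is not fed back into training the underlying model.
response = client.chat.completions.create(
    model="your-gpt4-deployment",  # the name you gave your Azure deployment
    messages=[
        {"role": "system", "content": f"Answer using this company data:\n{grounding}"},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

The point to notice is that your data is supplied as context alongside each prompt – it isn’t used to retrain the model, which is exactly the distinction the privacy questions below hinge on.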

Most of the privacy concerns people have around Copilot stem from this combination of company data with LLMs. Let’s go over some of the core security and privacy practices Microsoft has in place to safeguard your business data.

Data Security and Privacy in Copilot

Microsoft has always shown a strong commitment to security, privacy and compliance across its entire platform, and its approach to AI is no different. Here are answers to some of the common questions you may have when it comes to Copilot and security.

Q: Is My Data Used to Train Third-Party Products Like ChatGPT?

A: For a start, Microsoft is very clear that prompts and data accessed through Copilot aren’t used in any form of LLM training. Copilot needs to access company information to provide rich, relevant responses to your queries, but that information stays within your Microsoft 365 tenant boundary, provided you’ve set your permissions correctly (more on that below). It’s not used to train or improve Microsoft’s AI or any other products.

Q: What Happens to Data Shared in a Chat Session with Copilot?

A: Data about your interactions with Copilot and Microsoft 365 apps such as Word or PowerPoint is stored. This data can be used to inform further interactions with Copilot; it’s encrypted while stored and is never used to train LLMs. Users can manage this shared data, and retention policies can be set governing how it’s stored.

Q: How Does Copilot Protect My Data?

A: Microsoft Entra ID, which you may remember as Azure Active Directory, is used to authenticate a user and give them access to Copilot. User and organisational identifiers are then removed from any search queries sent out through Bing, preventing those queries from being linked back to your organisation. Any chat data going to and from Copilot is encrypted both in transit and at rest, and Microsoft doesn’t actually set eyes on it.
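For the technically curious, here’s a minimal sketch of what that identity step can look like, using Microsoft’s MSAL library for Python to sign a user in against Entra ID and request a delegated token. It’s a generic Entra ID sign-in example with placeholder app and tenant IDs, not Copilot’s internal flow.

```python
import msal  # pip install msal

# Placeholder IDs -- use your own Entra ID app registration and tenant.
app = msal.PublicClientApplication(
    client_id="YOUR_APP_CLIENT_ID",
    authority="https://login.microsoftonline.com/YOUR_TENANT_ID",
)

# Opens a browser window so the user signs in with their work account.
# The token that comes back is scoped to that individual user's permissions,
# which is the same identity boundary Copilot respects.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in; delegated token acquired.")
else:
    print("Sign-in failed:", result.get("error_description"))
```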

Q: What About the Accidental Sharing of Data Within the Organisation?

A: As an organisation, you have data specific to various departments or users that you don’t want leaked internally. The permissions model within Microsoft 365 helps prevent this from happening, honouring identity-based access boundaries so that the current user can only see data they’re authorised to see. Stringent encryption and access controls prevent data from moving through unauthorised channels within the organisation.

As well as all the above reassurances from Microsoft, it’s good to know that the company is also committed to complying with privacy regulations such as the GDPR, and with standards like ISO/IEC 27018, an internationally recognised code of practice for cloud privacy. Microsoft’s encryption technologies include BitLocker, per-file encryption, Transport Layer Security (TLS) and Internet Protocol Security (IPsec).

A Word About Your Microsoft 365 Permissions

If you’re looking to implement Copilot in your organisation, you need to ensure your business is AI-ready. A big step in achieving this is reviewing the permissions you’ve set around your data – that is, checking which users and applications can access which parts of your organisational data. Copilot will access and use whatever you allow it to, so it’s worth making sure all your permissions are set correctly for specific users and security groups.
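If you’d like a feel for what a permissions review can look like in practice, the sketch below uses the Microsoft Graph REST API to list the sharing permissions on files in one user’s OneDrive. The access token and user are placeholders, and a proper review would also take in SharePoint sites, Teams and security groups – but it shows the kind of visibility you’re aiming for.

```python
import requests  # pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "YOUR_GRAPH_ACCESS_TOKEN"   # placeholder: a token with Files.Read.All
USER = "someone@yourcompany.com"    # placeholder: the user whose files you're auditing

headers = {"Authorization": f"Bearer {TOKEN}"}

# List the files in the root of this user's OneDrive.
items = requests.get(
    f"{GRAPH}/users/{USER}/drive/root/children", headers=headers
).json().get("value", [])

# For each file, list who (or which sharing link) has access to it.
for item in items:
    perms = requests.get(
        f"{GRAPH}/users/{USER}/drive/items/{item['id']}/permissions",
        headers=headers,
    ).json().get("value", [])
    for perm in perms:
        person = perm.get("grantedToV2", {}).get("user", {}).get("displayName")
        link_scope = perm.get("link", {}).get("scope")
        print(f"{item['name']}: {perm.get('roles')} -> {person or link_scope}")
```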

This process should be undertaken regularly within a business but often falls by the wayside. If you’re feeling overwhelmed by what it entails, we’d be happy to assist you in reviewing your permissions and implementing Copilot. The same goes if you have any other questions about Copilot or Microsoft 365. Just get in touch and one of our team members will be glad to help!

Peter Drummond

When he’s not writing tech articles or turning IT startups into established and consistent managed service providers, Peter Drummond can be found kitesurfing on the Gold Coast or hanging out with his family!
