
Have You Got an AI Workplace Policy in Place?

Since generative AI gained mainstream traction with the public release of ChatGPT, it’s been hard to avoid talk of AI in the workplace. The conversation ranges from the positive (“this program could really improve our efficiency and free up our time!”) to fearmongering (“robots are going to take all our jobs!”).

Whatever your take on it is, AI is here to stay. Businesses need to adapt and evolve alongside AI as it grows, staying open to the myriad opportunities it brings. This is a transformative technology that’s going to revolutionise how accessible our data is, how we make decisions and the speed at which we operate. It’s becoming a case of adapt to AI, or get left behind.

Why an AI Workplace Policy is Important 

We’re still in a bit of a Wild West scenario when it comes to AI in the workplace. The tech has forged ahead in leaps and bounds, while in most countries regulation hasn’t managed to keep up. There isn’t much in the way of oversight when it comes to using generative AI in the workplace, which places businesses that use it at risk.

That’s what makes your own internal AI workplace policy so important. It’s your opportunity to protect your company against the risks that come with using an emerging technology. It’s your buffer in this grey area where we’re waiting for regulation to catch up – a protection against intentional or accidental misuse. 

Formalising the implementation of AI through a set of policies is also likely to put your employees’ minds at rest. AI has made many people feel expendable in their roles. A formal AI workplace policy helps them understand the role AI is to play in the organisation, including its functions and limitations, and makes them more comfortable with the adoption of the technology.

What to Include in Your AI Workplace Policy

The inclusions in your policy will vary from industry to industry, but there is a core of basic topics you should consider when putting it together. Let’s take a look at some of them.

Data Privacy and Security 

This is a big one. GenAI pulls in huge swathes of data in milliseconds, processes it and spits it out in response to a user’s query. Where does that data come from? With ChatGPT, data entered into the program in the form of queries can be absorbed as training data to improve the technology. If your employees paste private company data into ChatGPT, that information may end up outside your control. And that’s happening as we speak, with 4.7% of the data employees are plugging into ChatGPT being confidential. This is why it’s imperative your AI policy outlines how company data may be used in AI programs.

Copyright and Intellectual Property Concerns 

The data that generative AI programs like ChatGPT, or visual AI builders like Midjourney, draw on has to come from somewhere. What if it includes material from copyrighted sources? Your company could get in trouble down the track if it does, because AI-generated content can easily infringe intellectual property law. Your employees need to be aware of this, and either learn how to generate content that won’t infringe copyright or have their access to AI tools that could create legal risks restricted.

Inaccuracy 

Chatbots like ChatGPT certainly amaze us with their ability to churn out informed-sounding answers to our prompts. But are they always accurate? Unfortunately, they can be misleading or incorrect, and even the developers have acknowledged the systems can produce ‘hallucinations’ of completely fabricated information. Your workplace policy should include provisions for AI content to be double-checked against other sources, and where it can’t be double-checked, it should be treated with caution.

Transparency 

Transparency protocols should stipulate that all recipients, internal and external, are made aware when generative AI was used to create a piece of content they are presented with. Everybody, particularly supervisors, should know to what extent generative AI was used to complete a task.

Bias 

Your policy should require that all AI-generated content is audited for potential bias. Generative AI models can be trained on data that is historically biased, and that bias can surface in the answers they provide to your prompts. This may lead to content that runs contrary to the principles of your organisation: intentional or unintentional discrimination may occur, or the content could be offensive or defamatory towards members of your workforce or your customers.

Above all, your AI workplace policy needs to bring clarity to a space that is still finding its feet. The guardrails for permissible use need to be firmly outlined, including which AI products can be used and in what circumstances. Essentially, your policy is a framework for the ongoing governance of AI within your organisation, and it should be paired with staff training and perhaps even the creation of an AI governance committee.

Final Thoughts on an AI Workplace Policy 

The benefit of creating your workplace policy on AI is that it will really get you thinking about how to incorporate this exciting tech into your workplace. The more you familiarise your organisation with AI, the more you’ll realise how beneficial it can be. You’ll learn how to use it effectively, and which pitfalls to avoid as you navigate this exciting new world! Being proactive now will reap rewards down the track.

Peter Drummond

When he’s not writing tech articles or turning IT startups into established and consistent managed service providers, Peter Drummond can be found kitesurfing on the Gold Coast or hanging out with his family!
