Generative AI is technology that uses artificial intelligence to produce content when prompted by a human. The potential uses are far-ranging - given the right information, it can be used to create articles, images, reports, social media posts, conversational text, music and even poetry, as well as to automate tasks and predict trends. Whatever your business, employees could use AI to help improve their efficiency, boost creativity, and keep your company up to date. 
Does it affect my business? 
 
Don’t be tempted to bury your head in the sand - generative AI will have an impact whether you acknowledge it or not. The technology has been incredibly popular - in the first two months of ChatGPT being available, it had 100 million users (for comparison, TikTok took nine months to reach this many users, and Instagram took more than two years). 
 
Currently, leaders and employees have different attitudes towards using AI. 68% of business leaders think employees should only use AI with a manager's permission, and a further 19% think it depends on the circumstances. However, employees are already comfortable using AI for work such as administrative tasks (76%), analytical work (79%), and creative work (73%), and 70% said they would delegate as much work as possible to AI. 
 
Businesses are also divided about who to hold accountable when AI goes wrong. A third think the employee is solely to blame, even where a manager approved the use of AI, while another third think the blame should be shared between employee and manager. 
 
If you don’t set boundaries, you leave employees to make their own judgments and could put your business at risk. The more you ignore generative AI, the more of a problem it could be. 
 
 
The risks of unregulated AI use at work 
 
Given the potential uses for generative AI, this isn’t a concern just for technology companies, but for all businesses. In fact, the less technological awareness in your business, the greater the risk of mistakes. Without a policy, it is unclear who is responsible for quality control of AI creations, who is accountable for mistakes, or how your business will uphold standards. 
 
Accuracy 
Generative AI uses information from the internet to create content. Not all information online is correct, up-to-date or objective, and so the content generated could share these issues. Where information isn’t available at all, AI tools have a tendency to use the existing information to estimate what is missing and fill in the gaps. Sending out incorrect information will damage your business reputation. 
 
Tone 
Even if the content created is correct, it may be written in an inappropriate or impersonal tone. The best AI can’t be aware of previous interactions with a client, their preferences or your brand guidelines. The wrong tone could damage relationships with your clients and undermine your brand. 
 
Appropriateness 
Amnesty International was criticised for illustrating an article with AI-generated images. While Amnesty intended to protect the identity of real protesters, critics said that using AI images undermined confidence in the authenticity of other reports. It requires human input to weigh up when it is appropriate to use AI, and when you need to be transparent about it. It is sensible to give your employees guidance about making these decisions, for the sake of consistency. 
 
Confidential information 
For some purposes, such as creating company reports, employees may need to input sensitive or confidential information about your business or its clients. Somebody who understands your data protection obligations will need to check the terms and conditions to find out how the information will be used and protected. For example, if the programme is based outside of the UK, only somebody who understands your obligations around transferring information internationally can properly assess the risks. 
 
Use in employment decisions 
Under data protection regulations, employees have the right not to be subject to automated decision-making, and using technology in employment decisions can create unnecessary disputes. Uber recently had to reinstate drivers and pay fines after its app effectively dismissed drivers without any human oversight, while Estee Lauder had to pay compensation after selecting people for redundancy based on automated computer judgments. The program rated interview answers and used candidates’ expressions and data to make the decision, with no human involvement. 
 
There have also been concerns that facial recognition is less effective for people with darker skin. According to one researcher, some technology has a 99% accuracy rate in recognising white males, but an error rate of around 35% in recognising Black women. Using these tools could unintentionally create racial bias in your firm. Uber paid an undisclosed amount to settle race discrimination claims after suspending a Black driver because AI failed to recognise his photos.  
 
It may be tempting to use generative AI tools to create job adverts, recruitment materials or reports on employees’ work. However, if the materials could affect decisions around who gets hired, fired, promoted or made redundant, there’s a risk you could face claims based on unfair processes, or even discrimination. 
 
Copyright 
With AI progressing so quickly, some issues of copyright are still unclear. This is particularly likely to be an issue if you work internationally, as different countries take different approaches. The UK is “one of only a handful of countries” to recognise the person who prompts the creation as the author, while the US Copyright Office has said that images created by AI should not be granted copyright. 
 
The terms and conditions of each programme might specify their approach to copyright, and could vary depending on the programme used. Some programmes may even claim copyright of anything created, which could leave you without ownership of branded materials. It is important to check the terms and conditions before use, as using the programme will usually count as accepting the terms. 
 
There’s also the concern that an AI programme could use copyrighted material to create content, which might leave you responsible for the breach of copyright. 
 
 
 
How a Generative AI policy can help 
 
You can help tackle some of these risks with a policy that sets standards and boundaries for the use of AI. 
 
What programmes employees can use 
A policy allows you to set out which programmes employees can (and cannot) use for work. This means you can check the terms and conditions to ensure they don’t risk your data security, infringe on copyrights, or produce low-quality or inaccurate content. It is sensible to have the list as a separate document, so that you don’t need to update your policy every time new tools are released or old tools cause problems. 
 
When they can and can't use AI 
A policy allows you to set out which tasks employees can and cannot use generative AI for. This allows you to limit risky usage without prohibiting generative AI (and missing out on the benefits) altogether. You can choose to prevent its use in any recruitment tasks, or anything involving sensitive data, while still encouraging the use of AI for improved efficiency. 
 
Given the expansive range of uses for generative AI, you should consider all foreseeable uses in your business and the potential problems, then tailor your policy to your business needs. 
 
Set out what precautions to take when using AI 
The best results are likely to come from AI technology and human input supporting each other. Your policy can help achieve this by setting out what precautions you expect employees to take when using generative AI. This could include proofreading for mistakes, double-checking facts and figures, or adjusting the tone of the content. You might also require employees to keep records of content that has been created by AI, and which programme was used, in case of unforeseen problems. 
 
Consequences for damaging content 
Your policy should outline who is accountable for AI output and the consequences for failing to take precautions. This should impress the gravity of the risks upon employees and make them aware of potential problems. It could also help protect you from legal liability in disputes, by showing that you took reasonable precautions and that the responsible employee was acting outside of their role. 
 
Ideally your policy should integrate with other relevant policies, such as your Disciplinary, Data protection, and Computer, email and internet usage policies, to avoid any contradictions or gaps. 
 
 
Implementing a policy 
 
If you haven’t got a policy on the use of AI, we strongly recommend implementing one now before AI becomes habitual in your business. Our handbooks include a Generative AI policy, which we tailor to your business. Get in touch for advice. 
 
 
This article was originally published on 9th May 2023. It was updated on 27th July 2023 to add figures from a survey of business leaders by tech.co, and again in March 2024 when the Uber matter was settled.  
 