
5 Key Risks of Generative AI in the Workplace


    Generative AI is making headlines at the moment. Whilst AI as a technology has been around since the 1950s, new technological breakthroughs and the public launch of ChatGPT in November 2022 have put Generative AI at the forefront of public consciousness. It's clear that this technology has the power to transform the world…and the workplace.

    Generative AI describes a form of machine learning in which a computer system is trained on large datasets and, when prompted, draws on what it has learned from those datasets to produce novel content.
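    By way of illustration only, the short sketch below prompts a small, publicly available text-generation model to produce novel text from a prompt. The choice of model, library and parameters is an assumption made for the example, not a recommendation of any particular tool.

```python
# Illustrative sketch only: prompting a small, publicly available text-generation
# model (GPT-2 via the Hugging Face Transformers library). The model and
# parameters are assumptions for illustration, not drawn from this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Draft a short welcome message for a new employee:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with novel text it was never explicitly given.
print(result[0]["generated_text"])
```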

    It's not hard to think of the use cases for such technology in the workplace.

    Whilst automation of tasks previously undertaken by humans is hardly new to the workplace, Generative AI is capable of producing output that mimics human language and reasoning. This significantly expands the scope of tasks that it will ultimately be able to automate.

    But before we all go and hang up our tools, it's important to bear in mind that there are still limitations on the capability of Generative AI. Whilst that is likely to change over time as the technology advances, for the moment it will need to be integrated into businesses gradually, rather than adopted to the exclusion of all previous ways of working.

    In this article, we are taking a brief look at the 5 key workplace risks to keep in mind when adopting any form of Generative AI in your business.

    1. Redundancy

    An inevitable consequence of the development of Generative AI is that, as the technology matures and becomes able to undertake tasks and produce content previously handled by humans, some of the roles performed by those humans will no longer be necessary.

    In the UK, dismissal of an employee will potentially be fair where it is on the grounds of redundancy. It is likely that automation of roles due to Generative AI would meet the definition of redundancy, as the new technology will diminish the business's requirement for employees to carry out work of a particular kind.

    In considering making redundancies, employers should be conscious of:

    1. Their legal obligations during the redundancy process, including (in the case of redundancy exercises involving 20 or more employees) the obligation to inform and consult with affected employees;
    2. Succession planning and the risk involved with making junior employees redundant and therefore not adequately training anyone to replace more senior roles that can't be automated; and
    3. Push-back from trade unions, works councils and employee representative bodies. The automation of jobs is likely to be met with resistance by these bodies. Whilst some push-back is inevitable, careful handling of the business's relationship with its employees will be necessary to limit the damage that could be inflicted by employee action.

    2. Changes to Employee Terms & Conditions

    In the short-term, rather than replacing roles entirely, Generative AI is likely to be adopted by businesses as a tool to be used alongside an employee's normal work in order to improve productivity.

    Most good employment contracts will be drafted to allow for some flexibility in the types of duties that can be assigned to an employee. This will allow businesses to assign employees new tasks related to the use of Generative AI systems. Provided the change in employees' day-to-day work is only moderate, the business can likely make the change without requiring employee consent.

    However, if the Generative AI tool is going to entirely change the employee's day-to-day work (for example, where their role goes from content generation to quality control of the content generated by a Generative AI tool), then the business will likely need to consult with the employee and seek their consent to that change.

    Where the change is fundamental and consent is required, a failure to obtain consent could leave the business vulnerable to claims of breach of contract and constructive dismissal.

    3. Discrimination

    Businesses must ensure that there is no bias in the data being used to train their Generative AI systems which could result in discriminatory outcomes for employees.

    In particular, where Generative AI is used to support decision-making on HR topics such as performance and recruitment, there is a risk that if the AI produces a discriminatory outcome, the employee could bring a claim for discrimination against the business, liability for which is uncapped.

    In order to mitigate this risk, businesses should think carefully about what data is used to train the Generative AI system, and should ensure appropriate checks and balances are in place so that outcomes can be audited and any bias that does become evident can be identified and rectified.
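    As a rough illustration of the kind of check that might form part of those checks and balances, the sketch below compares selection rates across groups in an AI-assisted screening exercise using the commonly cited "four-fifths" heuristic. The data structure, group labels and threshold are assumptions made for the example.

```python
# Illustrative sketch only: a simple adverse-impact check over the outcomes of an
# AI-assisted screening step. Group labels, threshold and data are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the best
    group's rate (the commonly cited "four-fifths" heuristic)."""
    rates = selection_rates(decisions)
    benchmark = max(rates.values())
    return {group: rate / benchmark < threshold for group, rate in rates.items()}

# Example with made-up data: group B is selected at half the rate of group A.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(adverse_impact_flags(decisions))  # {'A': False, 'B': True}
```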

    4. Policies and Procedures

    Businesses should consider if their policies and procedures are adapted for the ways in which Generative AI tools will be used in the workplace. For example, if the ability to use and quality check the output of Generative AI systems is going to become a fundamental part of an employee's role, consider whether that should be reflected in their performance expectations.

    Businesses will also need to introduce new policies in relation to how employees should interact with Generative AI systems. In particular, if employees will be required to input data into the AI systems, the business should have policies in place governing appropriate use of data. Businesses should be particularly careful to prevent employees from inputting personal, copyrighted or biased data.
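    A minimal example of such a control is sketched below: a pre-submission check that flags prompts containing obvious personal data before they reach the Generative AI tool. The patterns shown are simplified assumptions and are no substitute for a proper data protection review.

```python
# Illustrative sketch only: a minimal pre-submission check that flags prompts
# containing obvious personal data. The patterns are simplified assumptions and
# will not catch every case; the submission step is a hypothetical placeholder.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK National Insurance number": re.compile(
        r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I),
}

def check_prompt(prompt: str) -> list:
    """Return the labels of any blocked patterns found in the prompt."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise the grievance raised by jane.doe@example.com last week."
issues = check_prompt(prompt)
if issues:
    print("Prompt blocked; contains:", ", ".join(issues))
else:
    print("Prompt cleared for submission.")  # e.g. pass to the approved AI tool here
```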

    5. Training

    The productivity gains that have been much lauded in relation to the adoption of Generative AI in the workplace are dependent on employees' ability to use the AI system correctly. Businesses should therefore take proactive steps to ensure that employees are trained on how to use the new technology. Failure to do so could make it difficult for businesses to performance manage employees down the line if they fail to fully adapt to the new systems.

    Where employees will be required to input data into the AI systems, training should also cover the importance of appropriate data input in mitigating legal liability. As well as enabling the business to realise an increase in productivity, sufficiently thorough training should mitigate the risk of various data protection, IP and employment claims.

    Author: India Coultas, Associate