How To Talk to Your Team About Using AI at Work

By Dr. Lily Jampol

Ermahgerd AI. Where do we start?

We have been in nonstop dialogue with our clients and colleagues for the last few weeks and have heard every anxiety imaginable. Recruiters are wondering if job applicants are real people. Leaders are anxious about their employees accidentally sharing confidential info, managers are exhausted thinking about how this will affect their employees' work, and employees are terrified that their jobs will be replaced.

Most pressingly, folks who work in HR or lead companies want to know how to minimize risk. We want to offer a few frameworks and policy ideas (including ones we have been implementing at ReadySet) that may help with current challenges.

But first, a few realities we need to contend with:

  • AI is not inherently good or bad. Like the internet, it is a tool. There will be some important uses of it, and also misuses of it, from breaches of IP confidentiality to plagiarism. Holding both truths and embracing the gray area will allow you to create better policy and react to changes as they come up.

  • AI will have different implications for different people. As with everything that involves humans, there is a lot of room for error, and many more consequences for those who are already disadvantaged or marginalized. AI, and the changes it will bring, will invariably have an unequal and potentially surprising impact on segments of our workforce, widening the gap for some and closing it for others.

  • AI and ChatGPT are not going away anytime soon. Anything that promises to make people feel smarter, more efficient, and more productive will creep into use no matter how hard we try to stop it. Your leadership will likely adopt efficiency-promoting technology too. And even if you can avoid it in your own work, your clients and customers will use it.

So where to start? The most important thing we can do now is promote safer use of AI in our workplaces. Here are some tips we can offer right now for setting up a foundation for harm reduction and responsible use in the long term. 

1. Have an org-wide or team-wide discussion about AI

Communicating openly that the org is thinking this through, and that leadership is aware of the risks, fears, and ongoing dialogue, broadcasts that you don't have your head in the sand. This is also a great time to understand the questions and challenges folks have by asking them for feedback.

2. Familiarize employees with AI and how to use it

We don't want to require folks to use it, but we do want to provide the resources and information for them to explore it safely. Folks are less likely to make mistakes when they are educated about and familiar with the tech. And when org leaders are the ones to help with understanding (by providing resources or education), this can promote trust in the organization and any policies that are subsequently implemented.

3. Clarify the biggest risks for the organization and employees, but balance them with the positives

If we broadcast an 'AI is bad' mentality, people will not trust anything we say or follow organizational safety guidance. We can and should mention the risks of using AI, including privacy and data leaks, inaccurate output, limitations, and biases, but we should also mention the likely benefits, from efficiency and productivity gains to the creative inspiration it can add to our innovation. At the same time, avoid being overly positive: noting the risks and challenges can help those who are worried feel seen.

4. Share behaviors and suggestions for safer use

Rules are important, but not everything is or should be enforceable; otherwise folks may feel micromanaged and anxious. Trusting employees to make good individual decisions, while giving them guidance that nudges them in the right direction, allows autonomy and empowers employees to take responsibility for organizational safety. Guiding principles might include: 'Don't cut and paste (avoid plagiarism)'; 'Double-check sources and citations'; 'Use AI as inspo, not as the source of "right" answers'; 'Don't rely on it for facts or figures.'

5. Set rules (policy) for use that are clear and doable

Some clear policies around use will ultimately help folks understand what is and is not tolerated. Policies generally cover things that affect organizational safety, not personal safety. One example: 'Employees are not allowed to input any sensitive, identifying, or IP information into AI software' (this is especially important with client or customer information). We can also ask employees who do sign up to use a personal email rather than a work email, to reduce risk (unless we are asking them to use AI for work, of course).

6. If mistakes happen, be ready to deal with them

Having an internal escalation plan that identifies the risk and the people who will deal with it will help reduce the impact of a mistake. We suggest clarifying which mistakes will be treated as employee fault, and which are tolerable from a performance perspective. But for the sake of our teams and cultures, we also need to remember to have some leniency: we won't be able to predict everything that can and will go wrong, and neither will employees. And mistakes will likely have a disparate impact on already marginalized folks, who are more likely to be seen as incompetent or guilty, more likely to be punished for mistakes, and may be less familiar with the technology in the first place (e.g., manufacturing employees). Being proactive about mitigating bias in this new context will help us maintain a healthy culture and any equity goals we may have.


Everyone, no matter their role, could use some advice on AI right now. The more information, data, and solutions (or mistakes) we share, the more we can all lift each other up to create best practices that employees and organizations alike benefit from. If something works, we would love to hear it. Please reach out to us if you want to collaborate or share successes. Also, stay tuned for more advice, information, and discussion on AI in the workplace. We are just getting started. And in case you were wondering, no, we didn't use ChatGPT to write this article, but the image is 100% AI generated.

