Safeguards For Using ChatGPT and Other Bots for HR
Author: Natasha K.A. Wiebusch
Date: April 24, 2023
Chatbots have become increasingly popular in recent years, revolutionizing the way businesses interact with customers and automate their operations.
That sentence was written by ChatGPT in response to the query, "Can you write an intro sentence to my article about chatbots?"
As shown above, AI chatbots like ChatGPT provide written responses to questions from users. Though these bots are incredibly powerful, users everywhere are noticing that they sometimes provide incomplete or incorrect answers. On occasion, they even fabricate information entirely. (In the AI industry, these fabrications are called hallucinations.)
Still, these tools have their strengths, and they can help HR pros in many ways. In the future of work, HR leaders will need to learn how to effectively use AI as a partner.
Chatbot Problems HR Should Know About
Inaccuracies and hallucinations aren't the only problems with AI. Warnings about this technology range from job eliminations to a Terminator-esque AI takeover. Though that scenario is probably far-fetched, even Google's CEO, Sundar Pichai, has admitted that "some AI systems are teaching themselves skills that they weren't expected to have," and that it's not well understood how this happens.
AI takeover aside, not every concern carries the same weight. Here are four key concerns that HR should know about:
1. Bias
ChatGPT has "shortcomings around bias," according to OpenAI CEO Sam Altman. Without the oversight of a trained user, ChatGPT and other chatbots may provide responses that perpetuate racism, sexism and other forms of discrimination.
There are several potential reasons for this. First, AI algorithms are built by humans who have biases of their own, and they are trained on sources that may themselves be biased. Second, AI relies on heuristics, which are shortcuts used to solve problems quickly. We (humans) also use heuristics, and they happen to be one of the primary drivers of unconscious bias.
Safeguards Against Chatbot Bias
- Ensure that all employees understand what bias is and how to identify it;
- Independently audit chatbot responses for bias; and
- Implement anti-bias standards for tasks that are known pain points for bias, like job descriptions, performance reviews, and pay decisions.
2. Inaccuracy
Chatbots sometimes provide incorrect answers to user questions. For example, while researching chatbots for this article, HR & Compliance Center tested ChatGPT. We asked, "Am I required to provide employees with employee leave in Arkansas?" ChatGPT incorrectly responded that Arkansas has no leave requirements beyond what the FMLA requires.
The problem with inaccuracies isn't just the errors themselves; there is also often no way of knowing what is correct and what isn't. Unfortunately, chatbots don't explain how they arrive at their responses. In the AI industry, this is called the "black box" of the AI system: the decision-making processes inside the model that we don't quite understand.
For now, the only way to know when a chatbot has made an error is if the user already knows the answer to the question they asked.
Safeguards Against Inaccuracies
- Thoroughly research the chatbot's capabilities and best uses;
- Set clear parameters for what types of tasks chatbots can be used for;
- Ensure that chatbots are monitored and maintained by employees;
- Require that chatbot outputs be independently verified; and
- Prohibit the use of chatbots for advanced research and compliance questions.
3. Cybersecurity and Privacy
Chatbots have coding capabilities that may attract hackers looking to enhance their phishing attempts and malware. If a chatbot on an employer's website is hacked, it can lead to large security breaches and liability for the employer.
Also, be aware that chatbots are not privacy friendly. For example, ChatGPT's privacy policy specifically states that ChatGPT collects personal information, including your name, contact information, content included in queries, your location, and your IP address, among other information. That information may be provided to third parties.
Safeguards for Cybersecurity and Employee Privacy
- Consult with the company IT team to ensure leading practices are followed;
- Thoroughly research chatbots before choosing one to ensure it is reputable and uses high-quality data;
- Do not provide chatbots with personally identifiable information (PII) or protected health information (PHI); and
- Implement encryption, authentication, and other security systems to prevent the chatbot from being misused.
4. User Error
Finally, many employees may not know how to use a chatbot. Employers must recognize that chatbots are unlike any tool that came before them, and using them well requires upskilling. Employees will need to understand how these new tools work, what their limitations are, and how to audit and maintain them.
Safeguards Against User Error
- Train employees in how chatbots work, AI ethics, and relevant policies; and
- Establish a gradual adoption plan that allows employees time to understand their new partner.
Legal Safeguards
As AI continues to evolve, so does the regulatory landscape. In addition to implementing internal safeguards, employers will need to remain vigilant of new legal and political developments related to AI.
For example, Illinois' Artificial Intelligence Video Interview Act, which regulates the use of AI software to assess video interviews of job candidates, took effect in 2020. This July, New York City will begin enforcing Local Law 144, which prohibits employers from using AI in recruitment and promotion decisions without first auditing the AI for bias. And in 2022, the White House issued a Blueprint for an AI Bill of Rights.
Chatbots Are Here to Stay
According to a recent survey by Eightfold AI, 92% of HR leaders plan to increase AI use in at least one HR area, showing that AI really is HR's newest partner.
What's clear is that despite its growing pains, ChatGPT can add efficiency to work by completing certain tedious and repetitive tasks faster. In doing so, it can free up humans for the complex thinking required for more important tasks. In the spirit of embracing the future of work, let's finish this the way we started:
While this technology presents exciting opportunities, it also raises important challenges that must be addressed to ensure its safe and responsible development and use. By doing so, we can harness the full potential of AI to create a better future for all. - ChatGPT.