For many HR leaders, there is a sense of both urgency and confusion about how best to move forward with artificial intelligence (AI).
They understand the speed at which AI is changing the talent landscape and future roles. They also understand the value it offers in freeing up time, allowing them to focus on more strategic, ‘value-add’ discussions. And yet there is some trepidation (quite rightly) about how best to harness it.
We've been exploring for many months how AI and Large Language Models (LLMs) could be ethically, carefully and sensibly deployed to augment our 360-degree feedback software.
In doing so, two fundamental – and challenging – areas came up time and again.
Firstly, how can we embrace the power of AI in our 360 platform for our customers while avoiding the ‘AI hallucinations’ we read about? These hallucinations are instances in which AI makes up information or – more likely – makes an educated guess at what a credible answer might be, drawing on the wealth of information it has access to.
Secondly, with 360 reviews typically containing an individual’s first name, their role, examples of situations they’ve dealt with and both negative and positive feedback, how can our customers use this data with AI while keeping data protection and data security in mind? What happens to the data submitted for analysis? How are issues of consent and confidentiality managed?
We know that 360 is a very personalised experience. We’ve been working in this field for nearly 30 years and we know that it is very, very important to retain the human touch in the process of gathering, understanding, exploring, discussing and deciding how to use feedback.
With these two thorny issues in mind, we set challenges for two of our teams.
We set our business psychologists – with their many years of qualitative and quantitative analysis experience – against OpenAI’s ChatGPT to see how the outputs of AI would differ from those of actual human beings.
They looked at where ‘hallucinations’ occurred and tried to minimise the opportunities for AI to confidently create credible – but inaccurate – responses. We tested different ways of asking the same questions and compared the AI outputs to human analysis. We can now suggest the question formats that seem to give answers most closely reflecting the original data, and offer five top tips for writing questions that should yield more accurate, human-like responses.
While our 360 platform is already robust with regard to data security, data protection and confidentiality, we asked our developers to look at what was needed for those customers who want to use ChatGPT in a conscious and informed way when looking at the 360 review data across a group or cohort of people.
As an organisation, we have been awarded (and have proudly retained) ISO 27001 certification for information security – this is core to how we work.
The result of the development team’s work is included in our latest platform release. It enables customers with high-level Admin rights to extract the open-ended comments from a 360 review without the full names of the individuals attached, review this extract, and then consent to its submission to ChatGPT. The system clarifies what that means for data privacy, thereby providing a check and balance on the sharing of data with OpenAI. Users can then be confident in how their data is being used.
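For illustration only, here is a minimal sketch of what an anonymise, review and consent flow of this kind could look like outside our platform. The sample comments, the redact_names and summarise_with_consent helpers and the model choice are all assumptions made for this example rather than our platform’s implementation; the only real dependency is the official OpenAI Python client.

```python
# Hypothetical sketch of an anonymise -> review -> consent -> submit flow.
# Not our platform's implementation; helper names and data are illustrative only.
import re
from openai import OpenAI  # official OpenAI Python client (openai>=1.0)

def redact_names(comments: list[str], names: list[str]) -> list[str]:
    """Replace known participant names with a neutral placeholder before export."""
    redacted = []
    for text in comments:
        for name in names:
            text = re.sub(rf"\b{re.escape(name)}\b", "[participant]", text)
        redacted.append(text)
    return redacted

def summarise_with_consent(comments: list[str], question: str) -> str | None:
    """Show the extract, ask for explicit consent, then submit it to ChatGPT."""
    extract = "\n".join(comments)
    print("The following anonymised extract will be sent to OpenAI:\n" + extract)
    if input("Type YES to consent: ").strip() != "YES":
        return None  # nothing leaves the system without explicit consent
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for this sketch
        messages=[
            {"role": "system", "content": "Summarise 360 feedback comments objectively."},
            {"role": "user", "content": f"{question}\n\nComments:\n{extract}"},
        ],
    )
    return response.choices[0].message.content
```

In practice the platform performs the extraction and consent steps for you; the sketch simply shows why the consent check sits before, not after, the call to OpenAI.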
An enormous advantage of using AI to answer questions and summarise information in this way is the efficiency and time saved.
It takes a human two days to perform the qualitative analysis on a single question and one data set – ChatGPT gets a result in seconds.
However, is it right? Is it better?
Effectively, neither is free from bias. ChatGPT will use its underlying LLM to make links and predictions, spot patterns and extract meaning. The human will use their experience, recent projects, familiar frameworks and a whole host of preferences and biases – or they will have to work very hard to avoid doing so.
Our view is that expert, human analysis can be more responsive to nuances, can more accurately interpret and reflect the language and feelings represented in the feedback and can double-check itself and question its own accuracy.
ChatGPT is inordinately quicker and, quite often, right. However, to avoid sweeping generalisations and AI hallucinations – and to stop it from making (mostly sensible) predictions based on its broader data set – it needs the right input. The right questions need to be asked, and that takes skill and adopting the guidelines that we know improve the questions – and therefore the responses that come back.
Instead, ask simple, short but specific open questions related to the purpose of the 360. Keep firmly in mind the questions you would like to ask of that data later. When you interrogate ChatGPT, know what questions people were asked in the review and think about follow-up questions that would sensibly build on these.
Remember that ChatGPT will not see the questions that are asked (only the responses) and will also not know which comments relate to which individual (unless there is only one person in the review). So, asking “Who shows the most potential for a senior leadership role?” will lead to a skewed answer, as ChatGPT will only be able to comment on those whose names have been included within people’s responses. It is better to ask “How do people in this team show potential for more senior leadership roles?”
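As a simple, hypothetical illustration of the difference, the sample comments and prompt strings below are invented for this sketch; the point is only that the cohort-level phrasing asks about patterns across all of the responses rather than relying on names that ChatGPT may never see.

```python
# Hypothetical example of framing a cohort-level question; the sample comments are invented.
extract = "\n".join([
    "Handles pressure calmly and coaches newer colleagues well.",
    "Could delegate more; tends to take on too much personally.",
])

# Skewed: depends on names appearing inside the comments themselves,
# which ChatGPT may never see in an anonymised extract.
skewed_prompt = "Who shows the most potential for a senior leadership role?"

# Better: asks about patterns across the whole set of responses.
better_prompt = "How do people in this team show potential for more senior leadership roles?"

messages = [
    {"role": "system", "content": "You are analysing anonymised 360 feedback comments."},
    {"role": "user", "content": f"{better_prompt}\n\nResponses:\n{extract}"},
]
```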
Inevitably, AI will take its place in supporting HR leaders by providing focus, analysis and information. The challenge now is how best to use the power AI has – and recognise its flaws – so that it can become a supportive, additional resource for the HR leader.
To find out more about how we are enabling our 360 customers to take the first steps – while protecting data privacy – drop us an email and let’s start a conversation.