
Asking AI the right questions to get powerful insights into 360 themes

Written by Head Light | 21-Aug-2024

For many HR leaders, there is a sense of both urgency and confusion about how best to move forward with artificial intelligence (AI).

They understand the speed at which AI is changing the talent landscape and future roles. They also understand the value it offers in freeing up time and allowing them to focus on more strategic, ‘value-add’ discussions. And yet there is some trepidation (quite rightly) about how best to harness its use.

We've been exploring for many months how AI and Large Language Models (LLMs) could be ethically, carefully and sensibly deployed to augment our 360-degree feedback software.

In doing so, two fundamental – and challenging – areas came up time and again.

Firstly, how can we embrace the power of AI in our 360 platform for our customers, and yet ensure we avoid the ‘AI hallucinations’ that we read about? These hallucinations are instances in which AI makes up information or – more likely – makes an educated guess as to what a credible answer to a question might be, drawing on the wealth of information it has access to.

Secondly, with 360 reviews typically containing an individual’s first name, their role, examples of situations they’ve dealt with, and both negative and positive feedback, how can our customers put this data into AI tools with data protection and data security in mind? What happens to the data submitted for analysis? How are issues of consent and confidentiality managed?

We know that 360 is a very personalised experience. We’ve been working in this field for nearly 30 years and we know that it is very, very important to retain the human touch in the process of gathering, understanding, exploring, discussing and deciding how to use feedback. 

The challenge to Head Light psychologists and software developers

With these two thorny issues in mind, we set two of our teams challenges.

We set our business psychologists – with their many years of qualitative and quantitative analysis experience – against OpenAI’s ChatGPT to see how the outputs of AI would differ from those of actual human beings.

They looked at where ‘hallucinations’ occurred and tried to minimise the opportunities for AI to confidently create credible – but inaccurate – responses. We tested different ways of asking the same questions and compared AI outputs to human analysis. We can now suggest the questions that give answers most closely reflecting the original data, and we offer five top tips for writing questions that should yield more accurate, more human responses.

While our 360 platform is already robust with regard to data security, data protection and confidentiality, we asked our developers to establish what was needed for those customers who want to use ChatGPT in a conscious and informed way when analysing 360 review data across a group or cohort of people.

As an organisation, we have been awarded (and have proudly retained) ISO 27001 certification for data security – this is core to how we work.

The result of the development team’s work is included in our latest platform release. It enables customers with high-level Admin rights to extract open-ended comments from a 360 review without the full names of the individuals attached, review this extract, and then consent to its submission to ChatGPT. The system clarifies what this means for data privacy, thereby providing a check and balance on the sharing of data with OpenAI. Users can then be confident in the way in which it is being used.
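To give a feel for the shape of this workflow – and this is purely an illustrative sketch in Python, not our production code, with hypothetical function and variable names – it looks something like this:

```python
# Illustrative sketch only -- not the actual platform implementation.
# Shows the general shape of the workflow: redact names, show the
# extract to the Admin, and submit to OpenAI only after explicit consent.

def redact_names(comments, participant_names):
    """Replace known participant names so comments are not attributable."""
    redacted = []
    for text in comments:
        for name in participant_names:
            text = text.replace(name, "[name removed]")
        redacted.append(text)
    return redacted

def submit_with_consent(comments, participant_names):
    """Show the Admin exactly what would be shared, and require a yes."""
    extract = redact_names(comments, participant_names)
    print("The following extract would be sent to OpenAI for analysis:\n")
    print("\n".join(f"- {c}" for c in extract))
    answer = input("\nSubmit this extract to OpenAI? (yes/no): ")
    if answer.strip().lower() != "yes":
        print("Submission cancelled; no data left the platform.")
        return None
    # ... call the OpenAI API here (see the sketch under tip #2 below) ...
    return extract
```

The point of the design is that nothing leaves the platform until a named Admin has seen exactly what will be shared and has actively agreed to it.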

How AI supports HR when looking at 360 data

An enormous advantage of using AI to answer questions and summarise information in this way is efficiency: the sheer time it saves.

It takes a human two days to perform the qualitative analysis on a single question and one data set – ChatGPT gets a result in seconds.

However, is it right? Is it better? 

Effectively, neither is free from bias. ChatGPT will use its underlying language model to make links, see patterns, form predictions and extract meaning. The human will use their experience, recent projects, familiar frameworks and a whole host of preferences and biases – or they will have to work very hard to avoid doing so.

Our view is that expert, human analysis can be more responsive to nuances, can more accurately interpret and reflect the language and feelings represented in the feedback and can double-check itself and question its own accuracy. 

ChatGPT is inordinately quicker and, quite often, right. However, to avoid sweeping generalisations and AI hallucinations – and to stop it from making (mostly sensible) predictions based on its broader data set – it needs the right input. The right questions need to be asked, and that takes skill, along with adopting the guidelines that we know improve the questions – and therefore the responses.

Top tips when writing questions to ask ChatGPT about 360 feedback results

#1 From the outset, look carefully at the open-ended questions you include as you design your 360.
Revisit the questions you build into your 360 questionnaire and consider what the outputs might be. For example, “How well does this individual demonstrate our values?” will lead to responses such as “quite well” or “not at all”. If only the responses go into ChatGPT, some of their meaning will be lost.


Instead, ask simple, short but specific open questions related to the purpose of the 360. Keep firmly in mind the questions you would like to ask of that data later. When you come to interrogate ChatGPT, know what questions people were asked in the review and think about the follow-up questions that would sensibly build on these.

Remember that ChatGPT will not see the questions that are asked (only the responses) and will also not know which comments relate to which individual (unless there is only one person in the review). So, asking “Who shows the most potential for a senior leadership role?” will lead to a skewed answer, as ChatGPT will only be able to comment on those whose names have been included within people’s responses. It is better to ask “How do people in this team show potential for more senior leadership roles?”

#2 Provide focus and direction for ChatGPT.
Tell ChatGPT to “look only at the feedback here” or preface questions with “from this information only …”. This encourages it to focus on what you’ve fed in rather than delving more deeply into its wider training data to predict what a common, or likely, answer to your question would be.
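For those working with the API directly rather than the chat interface, the same principle might look something like this. This is a minimal sketch: the SDK usage is standard, but the model name, prompt wording and sample comments are illustrative assumptions rather than a prescription.

```python
# Minimal sketch of constraining ChatGPT to the feedback supplied.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name, prompt wording and sample comments are illustrative.
from openai import OpenAI

client = OpenAI()

feedback_extract = """\
- Always makes time to coach newer colleagues.
- Can be slow to share project updates with the wider team.
- Chairs meetings well and keeps discussion on track."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message directs the model to stay within the
        # supplied feedback rather than drawing on wider training data.
        {"role": "system",
         "content": "Look only at the feedback provided. Do not draw on "
                    "any outside knowledge or make predictions beyond it."},
        {"role": "user",
         "content": "From this information only, what are the main "
                    f"strengths mentioned?\n\nFeedback:\n{feedback_extract}"},
    ],
)
print(response.choices[0].message.content)
```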

#3 Be clear in how you phrase your question and the language you use.
Use language that is as clear and simple as possible, and say exactly what you mean. Be aware of whether your meaning could be misconstrued or misinterpreted.

Don’t make your questions too narrow, though. ChatGPT seems to struggle with specific behavioural themes or concepts (such as collaboration or teamwork) more than it does with broader questions (such as those relating to strengths or weaknesses). For some reason, ‘team’ seems to work better than ‘group’ if you are asking about collective themes.

It may be obvious, but check your spelling. It may be our imagination, but responses seem to take longer to generate when there is a spelling or grammatical error in the question!

#4 Encourage an answer that reflects the actual language, tone and essence of the feedback and comments.
This helps you to get the ‘voice’ of the feedback providers reflected in any summary. Ask questions such as “Which positive words and phrases appear most frequently in this feedback?” or “What do people say here about the strengths of this team?”

Sometimes, though, AI can mistake a positive statement for a negative one, and vice versa. While these sorts of questions can provide an interesting lens, it is worth reading the examples it produces carefully and checking any that look misunderstood against the original feedback comments.

#5 Identify the best response group to answer the question.
Rather than asking a question of the entire feedback group, consider who is best placed to give the most useful comment. For example, if you’re asking about how inclusive a team of leaders are, you might look to their direct reports.

If you want to ask about who shows most potential for a more senior role, line managers would be a good place to start (but direct reports and peers may have a different view of this, so it’s worth checking across groups).

Caution – ChatGPT does seem to struggle with ‘self’ responses, possibly because the comments will all be in the first person and ChatGPT tends to assume they all refer to one individual.


Inevitably, AI will take its place in supporting HR leaders by providing focus, analysis and information. The challenge now is how best to use the power AI has – and recognise its flaws – so that it can become a supportive, additional resource for the HR leader.

To find out more about how we are enabling our 360 customers to take the first steps – while protecting data privacy – drop us an email and let’s start a conversation.
