AI is under the microscope for many HR leaders. They recognise that they need to explore and understand how AI can be used across their workforce in general, and within their own work specifically.
Yet their resources are already stretched when it comes to getting to grips with where to start and what they need to know.
There is also some confusion about how AI applies to talent, and intrigue as to how it can support, rather than replace, the work of the HR leader.
One of the fundamental concerns about using AI with talent data is data privacy.
How can the privacy of your talent’s data or 360 review information be protected if it is uploaded into a system such as ChatGPT? After all, isn’t Generative AI fuelled by learning from content shared and uploaded from all sources? And what about GDPR and other data legislation?
These are all thorny issues, and ones we have researched.
We have now put in place the checks needed to protect your data should you wish to harness the benefits of AI: we ringfence how talent data is used and make sure that users understand, and remain in control of, the data they submit.
The new release of our Talent 360 platform has this AI integration built in for optional use, alongside the full range of analytic tools within the portal. It can take responses to the open-ended questions that might be included within the 360 review and analyse these to give the user a starting point for further examination.
We have chosen to integrate with OpenAI, although any of the large language models could be used. Rather than being general-purpose, our integration is purpose-specific: it analyses text responses from 360 reviews for groups of people. The questions asked of the AI are all focused on extracting the key insights from the data.
We’ve worked out how to make the analysis passed back more accurate and reliable by finessing the questions asked of the model, and you can read about that here [link to Debbie’s blog]. We also suggest that some users develop their own questions (such as looking at the strengths and weaknesses of the team).
At the click of a button, the text is extracted and brought together, and the user can review it before submitting it to OpenAI along with clear direction on the question to be answered. For example, you can direct it to consider only the data you are submitting, rather than looking elsewhere and supplementing its answer with insights gained from other sources.
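For readers who want a peek under the bonnet, here is a minimal sketch of what a request constrained in that way might look like. The prompt wording, model name and helper function are illustrative assumptions made for this post, not our platform’s actual implementation; the point is simply that the instruction sent alongside the text keeps the model focused on the review comments it is given.

```python
# Illustrative sketch only: the prompt wording, model choice and function name
# are assumptions made for this post, not the Talent 360 implementation.
from openai import OpenAI

client = OpenAI()  # expects the API key in the OPENAI_API_KEY environment variable


def analyse_360_comments(comments: list[str], question: str) -> str:
    """Send user-approved 360 review comments to OpenAI with one focused question,
    instructing the model to base its answer only on the text supplied."""
    review_text = "\n".join(f"- {comment}" for comment in comments)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model, for illustration
        messages=[
            {
                "role": "system",
                "content": (
                    "You analyse 360 review comments. Answer using only the "
                    "comments provided below; do not draw on outside knowledge "
                    "or invent details that are not in the text."
                ),
            },
            {
                "role": "user",
                "content": f"{question}\n\nReview comments:\n{review_text}",
            },
        ],
    )
    return response.choices[0].message.content


# Hypothetical usage with made-up comments:
# print(analyse_360_comments(
#     ["Communicates clearly in meetings", "Could delegate more to the team"],
#     "What are the main strengths and development areas for this group?",
# ))
```

Whatever the exact wording, the design choice that matters is the one described above: the user sees and approves the combined text first, and the question sent with it directs the model to use that text alone.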
At this point, users need to explicitly consent to the text being sent outside of the EU. We explain how the role of Data Processor changes at different stages, so you can be confident that what you are sending meets legal requirements. Your IT team can ask us more about this.
Once ChatGPT has reviewed the text and produced the answer to the question posed, how can you then use this information?
Ethically, when using AI, you need to make clear that the analysis has been generated by OpenAI.
More than this, though, the user needs to review and check that the analysis makes sense. AI hallucinations (where the model makes suggestions that are not based on the data provided) can occur, so users do need to be cognisant of this and double-check what is being presented. This is a governance issue.
For us, AI is a productivity improver.
However, it does not replace the skill the HR or OD leader needs to review what comes back. It is a supplementary tool for the high-level Admin user of the platform, alongside the other analytics that Talent 360 offers, such as heatmaps.
The benefit of AI is more than speed.
It can offer additional analysis that might be missed or overlooked. It supports – rather than replaces – the HR practitioner.
Many organisations are already using AI in their HR practice: self-serve HR bots that improve the employee experience, or searches through candidate databases for possible hires.
If you would like to find out more about how we are enabling our 360 customers to take their first steps with AI while protecting data privacy, drop us an email and let’s start a conversation.