Why do you work in HR? Most probably because you are interested in people and their welfare. You’re a people person. AI (artificial intelligence) will probably become your new best friend in the next couple of years, but it will never be human. Its limitations need to be understood and taken into account.
HR needs to put itself in a prime position to keep track of what tops employees’ technology wish list. It is also vital that HR professionals use the latest technology like everybody else, so they can provide insights that help business leaders understand the interdependencies of business and people.
One company doing this is insurance firm AXA, which is using Oracle’s AI-powered HCM (human capital management) application, offering intelligent features such as smart candidate lists.
As a large global company with many decentralised businesses, AXA’s HR team are tasked with managing 157,000 people across 56 countries. By making it quicker to integrate newly acquired businesses into the AXA HR system, and supporting data sharing and analysis locally, regionally or globally – all with the highest level of data security – Oracle HCM is allowing the team to enhance its service to the business.
With access to this kind of smart technology, HR staff or line managers can simply ask a chatbot to source specific data points to gain insight into employees’ performance history.
A pregnant worker who wants more details about the company maternity leave policy could just grab their mobile and chat to a bot – the AI might even suggest additional actions or activities based on the experiences of others.
However, it is important to treat AI as an addition to, rather than a replacement for, HR staff. While AI can be a great part of the recruitment process – attracting talent from a broader range of backgrounds than traditional recruiting, for example – interviews are still vital to getting a feel for the right candidate.
More critically, it is imperative to keep bias out of HR systems. There have been several cases where organisations have relied on AI and been called out for racist or sexist outcomes.
According to a study by the Massachusetts Institute of Technology, facial recognition software misidentified darker-skinned women up to thirty-five percent of the time, compared with an error rate of only one percent for lighter-skinned men.
Google, meanwhile, has decided to omit gender-based pronouns from its Smart Compose Gmail technology, as it cannot find a way to guarantee the software correctly predicts someone’s sex or gender identity and so avoids causing offence.
Take the case of the pregnant worker asking the chatbot questions about their upcoming leave – the AI system might be programmed to automatically refer to the father of the baby, even though the other parent could just as easily be a same-sex partner, or the worker could be a single parent.
The key for HR staff is being able to trust the data in front of them, so they can ensure the advice and information passed to staff and used to inform business decisions is accurate. AI can only see the data, not the people behind it; human interpretation is necessary to avoid built-in bias and offence.
Find out more about how business leaders are looking at data security by checking our report here.