
Automation raises ethical questions for HR leaders

That’s exactly why they need to nurture their specialists

We spotted a thought-provoking article in Personnel Today, the UK’s leading free-access HR website. It considers the implications of automation and AI for the workplace and highlights the risk that equality and gender pay principles could be undermined when bias becomes systematized.

HR administration processes often lend themselves well to RPA because they involve data entry, transferring information between systems, and receiving input from applicants and employees that needs to be checked for completeness and used to populate records.
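To make that concrete, here is a minimal sketch in Python of the kind of completeness check a bot might run before populating a record. The field names, record format and functions are purely illustrative assumptions, not taken from any particular RPA product or HR system.

# Illustrative sketch only: the field names and record format are hypothetical,
# not drawn from any specific RPA product or HR system.
REQUIRED_FIELDS = ["name", "email", "national_insurance_number", "start_date"]

def check_completeness(record: dict) -> list:
    """Return the list of required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

def transfer_to_hr_system(record: dict) -> None:
    """Placeholder for the step that would populate the downstream HR record."""
    print(f"Populating HR record for {record['name']}")

applicant = {
    "name": "A. Candidate",
    "email": "a.candidate@example.com",
    "national_insurance_number": "",
    "start_date": "2024-09-01",
}

missing = check_completeness(applicant)
if missing:
    # Incomplete input is routed back for correction rather than silently dropped.
    print(f"Record incomplete, missing: {missing}")
else:
    transfer_to_hr_system(applicant)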

Personnel Today raises an issue voiced by many of its readers. Because some of these processes relate to recruitment, there’s a risk that RPA will build in any bias or inequality already inherent in them. At the moment, human oversight picks up and corrects such issues. Once the process is in the hands of robots, who will notice the anomalies or injustices in yes/no decisions, and who will take an informed, contextual view of whether certain omitted or red-flagged information should be overlooked or treated as a special case?
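One common safeguard is to keep a person in the loop: rather than letting the bot make final yes/no decisions, anything red-flagged is escalated for human review. The sketch below is a hypothetical illustration of that routing idea; the rules, flags and queue are our own assumptions, not a real recruitment system.

# Hypothetical human-in-the-loop screening step; rules, flags and the queue
# are illustrative assumptions, not a real recruitment system.
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    candidate_id: str
    passed_rules: bool
    flags: list = field(default_factory=list)

human_review_queue = []

def route(result: ScreeningResult) -> str:
    """Auto-advance clean cases; escalate anything flagged instead of auto-rejecting."""
    if result.passed_rules and not result.flags:
        return "advance"
    # A yes/no bot would reject here; routing to a person keeps contextual judgement in the loop.
    human_review_queue.append(result)
    return "escalate_to_human"

print(route(ScreeningResult("cand-001", passed_rules=True)))
print(route(ScreeningResult("cand-002", passed_rules=False, flags=["employment_gap"])))
print(f"Cases awaiting human review: {len(human_review_queue)}")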

With RPA seen as a gateway technology towards AI, these fears are magnified. If AI controls and arbitrates over bot processes and learns from experience, how dangerous could it be if a small unconscious bias were rapidly repeated and embedded in a company’s HR activities?

The key point of reassurance, according to a recent report by the Confederation of British Industry (CBI), is that AI and RPA systems require human supervision. That sounds obvious to us. Handing over repetitive, time-consuming processing to automation frees up skilled HR staff to do something really meaningful: continually assessing and improving all HR processes. They can learn from their automation projects what works best, and constantly benchmark processes and outcomes against responsible and ethical best practice and values.

The CBI recommends three ethical priorities for businesses deploying RPA and AI: embed governance, engage staff so they’re empowered to challenge any anomalies, and explain AI decision-making to consumers. These are sound principles that we support as part of automation programme best practice.

Deployed in a controlled and expert way, RPA and AI need not threaten an organisation’s ethical conduct. Automation should never be applied to a flawed or high-risk process. First, understand the process thoroughly and make it fit for purpose, or replace it with other processes. Then automate, applying the usual IT project principles of user testing, a limited trial and a roll-out with frequent, regular healthchecks.
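As a hypothetical illustration of what a regular healthcheck might look like, the sketch below compares a bot’s weekly rejection rate against a baseline agreed during the trial and raises an alert when it drifts. The metric, threshold and data are assumptions for illustration only.

# Illustrative healthcheck sketch: the metric, threshold and sample data are
# assumptions, meant only to show the idea of regular post-rollout monitoring.
def rejection_rate(decisions: list) -> float:
    """Share of automated decisions that ended in rejection."""
    return decisions.count("reject") / len(decisions) if decisions else 0.0

def health_check(this_week: list, baseline_rate: float, tolerance: float = 0.05) -> str:
    """Compare this week's rejection rate with the baseline agreed at trial stage."""
    rate = rejection_rate(this_week)
    if abs(rate - baseline_rate) > tolerance:
        return f"ALERT: rejection rate {rate:.0%} drifted from baseline {baseline_rate:.0%}; pause and review"
    return f"OK: rejection rate {rate:.0%} within tolerance"

print(health_check(["accept", "reject", "accept", "accept"], baseline_rate=0.20))
print(health_check(["reject", "reject", "reject", "accept"], baseline_rate=0.20))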

Testing process automation in a lab environment can be particularly useful where there’s unease about the impact of RPA on compliance and reputation. Our Robotic Operations Center of Excellence (The RoC) provides a safe testbed for our clients where there are particular sensitivities or complex challenges, or where we’re breaking new ground by automating a process for the first time.

Talk to a Lanshore expert in HR process automation if you have a challenge or question relating to automation in your organization’s HR or Personnel function.