Research increasingly says no, especially for women in the workplace. Recent reports from Unesco and UN Women reveal that AI tools frequently use male-coded language to describe leadership, rank women lower in hiring and penalise career breaks more heavily when taken by women.
In one LSE study, ChatGPT was asked to generate performance reviews for two identical employees: ‘John’ and ‘Jane’. John was described as a ‘strategic thinker’ and ‘valuable team player’. Jane? She ‘needs additional training’. Nothing about them differed but their names.
So, what can HR leaders do to keep AI fair? Here are five practical actions to help tackle AI’s gender bias:
1. Train your teams
AI literacy is no longer optional. Equip your teams to understand how algorithms work, what kinds of biases can arise, and how inclusive data practices can mitigate risk. This is especially critical when working with historical datasets that may reflect outdated or discriminatory patterns.
2. Create clear feedback channels
Give employees a way to challenge AI-generated decisions. Whether it’s a hiring score, a performance review or a flagged risk factor, people need recourse when they believe an outcome is unfair. This is particularly important for women returning from maternity leave, who can be unfairly penalised for taking career breaks.
3. Audit your algorithms
Don’t assume fairness by default. Regular audits are essential to identify hidden biases in outcomes, especially when tools are involved in recruitment, promotions or performance reviews. Look for patterns: are women receiving lower scores or less actionable feedback, or being overlooked for promotion? Even a simple disparity check, like the sketch below, can show you where to dig deeper.
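For teams wanting a concrete starting point, here is a minimal sketch of such a check in Python. Everything in it is illustrative: the file name hr_decisions.csv and the columns gender, review_score and promoted stand in for whatever data your own HR system can export, and the 0.8 threshold is the informal ‘four-fifths rule’ heuristic, not a legal test.

```python
# A minimal audit sketch: compare outcomes by gender in exported HR data.
# The file name and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("hr_decisions.csv")

# Compare the distribution of review scores by gender.
print(df.groupby("gender")["review_score"].describe())

# Adverse impact ratio on promotions: the 'four-fifths rule' heuristic
# treats a ratio below 0.8 as a red flag worth investigating.
rates = df.groupby("gender")["promoted"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Adverse impact ratio: {ratio:.2f}" + (" - investigate" if ratio < 0.8 else ""))
```

A low ratio doesn’t prove discrimination on its own, but it tells you exactly where to ask harder questions.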
4. Demand transparency from vendors
Many HR systems now include AI components, often without clear explanations of how decisions are made. Choose tools with explainable algorithms and clear documentation. If your team can’t interrogate how an outcome was reached, it’s hard to spot bias and harder to challenge it; the sketch below shows one form that interrogation can take.
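As an illustration, here is a hedged sketch of that kind of interrogation using scikit-learn’s permutation importance, run on a stand-in model trained on synthetic data. The feature names are hypothetical, and a real vendor tool would need to expose a comparable interface; the point is the question it answers: which inputs actually drive the model’s outputs?

```python
# A sketch of interrogating a scoring model: which inputs drive its outputs?
# All data here is synthetic and all feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "performance_score": rng.normal(3.5, 0.8, n),
    "gender": rng.integers(0, 2, n),  # 0/1 encoding, for illustration only
})
# In this synthetic setup the outcome depends only on performance, so a
# sound model should assign 'gender' near-zero importance.
y = (X["performance_score"] + rng.normal(0, 0.5, n) > 3.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# If 'gender' (or a proxy for it) shows material importance, the model is
# using it, whatever the vendor's documentation claims.
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```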
5. Involve diverse voices in procurement and review
Currently, only 22% of AI researchers are women, and just 8% of chief technology officers are female, according to PwC. With men still dominating the development of algorithms, gendered blind spots go unnoticed and unchallenged. As a result, AI systems often fail to reflect the realities of half the workforce. Make sure your procurement and implementation teams reflect the diversity of your workforce.
At WCorp, we’ve seen how unchecked AI bias can quietly shape culture, reinforce stereotypes and limit the progression of talented women. But it doesn’t have to be this way. With intentional design and active oversight, AI can become a powerful tool for equity.
HR leaders are uniquely placed to drive this shift. By embedding fairness into the systems we use, we create fairer workplaces, and fairer workplaces deliver better business outcomes. When technology works for women, it works better for everyone.
Geeta Sidhu-Robb is CEO of WCorp