[Video] How AI Tools Can Mitigate Bias
Part of the Generative AI Insights series
As momentum builds around artificial intelligence (AI)-driven tools at work, employers are eager to seize the opportunities AI provides while protecting their organizations from risk.
In the Workforce News Minute below, Helena Almeida, managing counsel at ADP, shares tips on how AI-enhanced technology can be used without introducing bias.
Learn more
[On-demand webcast] How AI is Reshaping the World of Work: Insights and Strategies to Consider
While powering positive advancements, artificial intelligence applications can also have negative implications, such as producing incorrect recommendations or amplifying bias. Organizations using these technologies also need to adopt sound privacy practices. Keeping critical principles in mind can help organizations navigate these challenges, especially as regulators in the U.S. and elsewhere consider new rules to address the societal impact of AI.
Launch this webcast anytime for critical insights and best practices on what organizations need to consider.
ADP has adopted rigorous principles and processes to govern its use of AI, including generative AI. Find out what we're doing.
Transcript
You know, bias is illegal when you have a human making a decision about hiring, firing or promoting somebody. If that's a biased decision, it's just as inappropriate as if a machine were making it.
And the interesting thing is, as humans, we all have biases, right?
We're all working, you know, every day to sort of control the unconscious bias that seeps into our decision making. The advantage we have when we're talking about an AI algorithm is that we have data scientists who are focused on what data should be going into the algorithm that's going to evaluate candidates. How do I make sure that the algorithm isn't picking up inappropriate things about race, gender, or other protected categories when it's helping make recommendations?
And the way I look at it is that with any new technology, like generative AI, people aren't going to use it if they don't trust it.
That means making sure that the product is compliant, that it's following the existing rules, frameworks, and best practices, as Jack mentioned: making sure that the data has integrity, that privacy concerns are monitored, and that we're keeping track of things like bias and accuracy.
Part of that trust will come from educating people on all the steps that companies are taking to make sure that their data is protected and that the AI is working as it should.
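The monitoring Almeida describes, keeping track of whether an algorithm's recommendations skew by protected category, is often operationalized with simple statistical checks. Below is a minimal Python sketch, not ADP's actual tooling, of one widely used test: the "four-fifths rule" from U.S. EEOC guidance, which flags a group whose selection rate falls below 80% of the highest group's rate. The function names, group labels, and outcome data here are hypothetical and for illustration only.

```python
# Minimal sketch (illustrative only, not ADP's tooling) of an adverse-impact
# check on screening outcomes, using the four-fifths rule.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; returns rate per group."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's rate divided by the highest rate; < 0.8 suggests review."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes for two candidate groups.
records = ([("A", True)] * 40 + [("A", False)] * 60 +
           [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(records)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

In this made-up example, group B's selection rate (0.25) is only about 63% of group A's (0.40), so the check flags it for review. A flag like this is a prompt for human investigation, not proof of illegal bias on its own.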