AI and Data Ethics: Ethical Use of Data

Continuing our series on AI and Data Ethics, this conversation is about one of ADP's ethics principles: Ethical Use of Data.

We talked with Joe Nuzzo, ADP's VP Counsel, Global Compliance, and a member of ADP's AI and Data Ethics Advisory Board, about ADP's approach to the ethical use of data.

What does ethical use of data mean in the context of artificial intelligence (AI)?

Nuzzo: Over the last few years, as tech capabilities really exploded, people realized the question to ask about AI is no longer "what can we do?" but instead "what should we do?" As is often the case, technological advancement has outpaced the ethical and regulatory tentpoles that will eventually become commonplace but are really still in their infancy today.

Whether AI is involved or not, the ethical use of data means looking carefully at what the goal is and the right way to achieve it. Data holds so much power to transform our lives – in both positive and negative ways. We always want to make sure that whatever we do helps enrich the lives of those it impacts.

When it comes to AI, there are heightened concerns because of the absence of direct human involvement in producing results and the sheer volume of tasks and outputs machines can handle today. But a computer considers only the information you give it and answers only the questions you ask. So it becomes important not just to make sure the upfront programming addresses any ethical concerns, but also that the output is consistent with expectations and free from bias.

How does ADP incorporate the ethical use of data into its practices and products?

Nuzzo: At ADP, integrity is everything. Doing the right thing, every time, will always be a foundational element of the way we design our products and services. We have an AI & Data Ethics Board that works with teams across the company to constantly evaluate the way we use data, provide guidance to teams developing new uses, and follow up to ensure the output is as intended.

As part of the Board's mission, human accountability is always part of the process for new AI uses. We evaluate how the use of data affects privacy and security, whether it could introduce bias, and how people might use it. We make sure the processes involved are transparent and explainable to people who might not be data scientists.

All users should be able to understand the information they are getting and what data the computer relied on. People have a natural tendency to rely on the outputs of many of these technologies; we're used to computers giving us answers. With AI, you get richer information that should inform human decisions rather than supply the answers themselves. The person making decisions should be able to understand the limitations and what the output is based on.

What does the process of evaluating the use of data look like?

Nuzzo: We have controls built in from the beginning with our privacy and compliance by design framework. Privacy, compliance, and legal professionals are involved throughout the development process to help our developers incorporate the principles I mentioned earlier and deliver a product that addresses not only compliance requirements but also our own ethical requirements.

The AI & Data Ethics Board is the first stop for product developers contemplating new machine learning or AI uses. The Board brings broad expertise in tech, privacy, law, and auditing, as well as outside, independent perspectives. It reviews each idea and its potential uses and provides meaningful direction and feedback to make sure data is being used fairly and in compliance with both legal requirements and our own standards. The Board is a resource – we want to make sure the right issues are being considered and help design teams avoid potential pitfalls.

Then we test the tool to make sure it does what we expect it to do. We test it for bias to make sure that it's not perpetuating or even amplifying past bias in the underlying data. Many times, that involves bringing in independent third parties to help us with that evaluation and validate our own conclusions. And then we follow up to make sure these tools continue to deliver the same level of quality output over time.
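To make the bias-testing step concrete, here is a minimal sketch of one common type of check: a disparate impact (adverse impact) ratio that compares favorable-outcome rates across groups. The function names, sample data, and the 0.8 threshold below are illustrative assumptions for this example only; they are not a description of ADP's actual testing methodology or tooling.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the share of favorable (positive) outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratios(predictions, groups):
    """Compare each group's selection rate to the highest-rate group's."""
    rates = selection_rates(predictions, groups)
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Illustrative data: 1 = favorable outcome, 0 = not; "A"/"B" are demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

# Flag any group whose ratio falls below the widely cited four-fifths (0.8) rule of thumb.
for group, ratio in disparate_impact_ratios(preds, groups).items():
    status = "needs review" if ratio < 0.8 else "ok"
    print(f"group {group}: selection-rate ratio {ratio:.2f} ({status})")
```

In practice, a check like this would run on much larger samples, cover multiple attributes and fairness metrics, and be repeated over time, with independent third parties validating the conclusions, as Nuzzo describes.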

There is such a strong culture of compliance and integrity at ADP that these conversations are pretty easy. Our associates want to do the right thing and want to deliver products and experiences that make the world of work a better place for our clients.

Learn more about ADP's privacy commitment and read our position paper, "ADP: Ethics in Artificial Intelligence," linked from the AI, Data & Ethics section of the Privacy at ADP page.

Related articles:

Algorithms and Ethics: What's in your A.I.?

AI and Data Ethics: Privacy by Design

AI and Data Ethics: 5 Principles to Consider