Insights and Best Practices for the Current and Future State of Data Privacy and AI
Part of the Insights in Action series
Dive into the complex entanglements of data privacy, cybersecurity and AI ethics with insights from leaders in the field.
During the recent second annual ADP virtual summit, Insights in Action: Unlocking the Data-Driven Potential of People at Work, experts and leaders put their heads together once again to share observations, predictions and advice for the future of work.
The event's final session focused on the complex entanglements of data privacy, cybersecurity and artificial intelligence (AI) ethics. Topics once reserved for government agencies, technology developers and health-care providers have reached a point where nearly every organization needs policies and best practices to navigate them effectively. Panelists Jason Albert, global chief privacy officer at ADP; Amie Stepanovich, vice president of U.S. policy at the Future of Privacy Forum; and Danny Weitzner, 3Com Founders senior research scientist at MIT Computer Science and Artificial Intelligence Laboratory, explored the challenges and opportunities facing business leaders today, tomorrow and beyond.
Current privacy challenges in AI
"Data is the lifeblood for AI," says Stepanovich. "And when that data is personal data, I think it creates a pretty significant risk to privacy and a potential for harm to individuals, to communities and potentially to society more broadly. That risk needs to be considered."
Weitzner notes that the evolution of these technologies builds on long-standing data practices. "The technologies we're talking about are extensions of the data analytics, data search, data classification and recommendation systems that we've been working with for decades," he says. "They're more different in scale than different in kind. So that means organizations that have good privacy practices to begin with are going to find the transition to applying AI analytic techniques much easier." Weitzner urges organizations to prioritize client relationships and to communicate transparently about the goals and intent behind AI usage.
Albert points out that trust plays a major role in people's relationships with these technologies. "At the end of the day, people will only use technology they trust," he says. One benefit of modern AI is that it can be designed to work with far less personal data, or with privacy-protecting techniques such as masking and tokenization. Even so, the panelists agree that people deserve clear information and confidence about how their data is used.
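Masking and tokenization, the two privacy-protecting techniques mentioned above, can be illustrated with a minimal sketch. This is a hypothetical example for clarity only, not a description of ADP's systems: masking irreversibly hides part of a value, while tokenization swaps the real value for a random token and keeps the original in a separate, access-controlled store.

```python
import secrets

def mask_email(email: str) -> str:
    """Masking: irreversibly hide most of the local part of an email."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain

class TokenVault:
    """Tokenization: replace a sensitive value with a random token.

    The real value lives only in this vault; downstream systems (for
    example, an AI pipeline) see just the meaningless token.
    """
    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
masked = mask_email("jane.doe@example.com")  # "j***@example.com"
token = vault.tokenize("123-45-6789")        # e.g. "tok_3f9a1c..."
```

The key design difference: a masked value can never be recovered, while a token can be exchanged for the original, but only by whoever controls the vault.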
Developments in policy and legislation
While comprehensive rules governing privacy and AI are still somewhat lacking, the panelists note that many privacy laws that predate the AI boom, such as the General Data Protection Regulation (GDPR) in Europe, are informing the usage of newer technologies. As new legislation emerges in various U.S. states, Stepanovich points out that there have been "groundbreaking attempts at sectoral guidance on how to use AI." These attempts include best practices that go "a little bit beyond pure privacy and involve thinking through transparency and accountability, how humans should or need to be brought into AI systems and how to make sure you're preventing 'junk science' and discrimination."
Ultimately, existing privacy laws can help set a precedent and offer direction on how to approach future regulations. That precedent may drive further rules aimed at context-specific uses of AI. Some state legislatures are already passing laws that govern specific areas, such as elections, health care and education, while stopping short of any broad or universal legislation that would govern AI tools independent of their use cases.
Navigating AI adoption
Centering the client relationship continues to play a key part in effective adoption and can help when organizations adopt their own ethical principles to guide their actions. "First, respect your customers and users," says Weitzner. "Focus on the context of the customer relationship you already have. What did the customer come to you for? Are you fulfilling that need? Or are you doing something entirely different with the data and perhaps hoping they won't notice?"
Rushing to deploy AI solutions to keep up with external pressures is another common pitfall. Racing to the finish line can create unpleasant surprises for clients, regulators and partners, which undercuts respect. "[ADP's ethical guidelines for AI include] communicating with transparency about when we use AI, explaining how it works, making sure it's human-centered," says Albert. "[This means] it's not machine decisions, but there is always a human in the loop, and we're making sure we have diverse teams that develop it so we can take account of different perspectives and making sure we have good data quality to mitigate bias. We're also making sure we have a culture of responsible AI [use] and training throughout the organization."
Register to watch the full recording of this session and others from the Insights in Action summit to learn more about how technology and innovation are driving changes in the workplace.