Algorithms and Ethics: What's in your A.I.?

Three key takeaways to consider when it comes to AI, data and humans.

People often talk about artificial intelligence (AI) and its potential impact on business, but what do they really mean? And is there a clear foundation for us to start unpacking its true value?

The collection of technologies falling roughly under the general heading of "AI" includes robotic process automation, machine learning (ML), natural language processing (NLP), deep learning, predictive analytics, neural networks, quantum computing, robotics and autonomous mobility, among others. The premise of AI generates both great fear and great optimism, depending on applications and audience.

The EPIC2018 Evidence conference tackled those opposing sentiments. It provided a forum for social scientists, computer scientists and technologists of many stripes to openly debate how we might collaboratively approach the collection, use, analysis and dissemination of data that "feed" products, services and brands today. Central to these cross-disciplinary conversations is the shared belief that human experiences — mundane and profound — should be improved when creating and acting on evidence or data (quantitative and qualitative). Ethics factored heavily in these conversations.

The very definition of evidence raised questions and, in the process, confounded the often taken-for-granted notion that "data" (the machine-generated variety, in particular) are somehow more "truthful" and therefore more "valid" than other forms of insight. "Hard data" can tell us how much and how many, but they don't get at the messy "why" behind what are ultimately very human decisions.

The conversation led to three key takeaways to consider when it comes to AI, data and humans:

  1. Ethics shouldn't be additive to data. Ethical considerations aren't something you can overlay on top of a set of data and imagine you have somehow neutralized bias in the process. How, where, what and whose data is collected, sampled and interpreted has profound material impacts on the "evidence" produced. According to Open Democracy, "A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal. Moreover, the weight of those amplified effects will fall on the most data visible, i.e., the poor and marginalized."
  2. Responsible AI practices must include courses that teach ML practitioners about fairness. For example, practitioners should learn how different demographic groups will be affected by a model's predictions (a minimal illustration follows this list). Google offers a crash course on fairness to help ML engineers "become aware of common human biases that can inadvertently be reproduced by ML algorithms." It's not enough to say "we are deeply committed" to data fairness and privacy. As discussions at EPIC2018 demonstrated, a genuine and urgent commitment to making things better for people through AI requires confronting the root social and political causes of bias, which today are amplified and replicated in data.
  3. Encourage discussion about bias in data within your development teams. Following an update on EPIC2018, my Roseland, New Jersey-based Innovation Labs collaborators joined me in thinking differently about data collection, data modeling and data outputs. And while we didn't arrive at definitive answers to what is ultimately a very big set of problems, the discussion did give us a forum to ask better questions, while also acknowledging the urgency of the task.
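
As a concrete illustration of the second takeaway, here is a minimal sketch, in Python with hypothetical field names, of one simple check covered in introductory fairness material: comparing a model's positive-prediction rates across demographic groups (sometimes called a demographic parity check). It is an indicative example, not code from Google's course.

    # Compare a model's positive-prediction rates across demographic groups.
    # Field names ("group", "prediction") are hypothetical.
    from collections import defaultdict

    def positive_rate_by_group(records):
        """Return the share of positive predictions for each group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for rec in records:
            stats = counts[rec["group"]]
            stats[0] += rec["prediction"]  # prediction is 1 (positive) or 0
            stats[1] += 1
        return {group: pos / total for group, (pos, total) in counts.items()}

    # Toy data: a model flagging applicants for approval.
    records = [
        {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0},
        {"group": "B", "prediction": 1},
        {"group": "B", "prediction": 0},
        {"group": "B", "prediction": 0},
    ]

    rates = positive_rate_by_group(records)
    print({g: round(r, 2) for g, r in rates.items()})  # {'A': 0.67, 'B': 0.33}

A gap like this doesn't by itself prove the model is unfair, but it is exactly the kind of signal fairness training teaches engineers to notice, question and trace back to the underlying data.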

Machines and people are increasingly coextensive, and this relationship is only going to get more intimate. So what are we going to do? How can we contribute to more humane dialogues around how machines learn?

As members of a tech community, how can we begin to ask more informed questions about how data is collected, and about the ways flows of data get transformed by algorithms, both intended and unintended, benign or otherwise? Awareness that data isn't value-free is a great place to start.

