How We Got Responsible AI All Wrong

Written by Jennifer Prendki

July 14, 2020

If you want to distill the idea of technology into a single sentence, a good place to start is with this: “Fire can cook your food or it can burn your house to the ground.” That is to say: technology isn’t good or bad in and of itself. The devil’s in the details. Or the application, as it were.

Artificial intelligence is a great example of this dichotomy. On the one hand, it can forecast climate change more accurately, help us find information instantaneously, optimize complex business practices, drive our cars for us, diagnose diseases like cancer, and more. On the other, it can be abused and misapplied, doing genuine harm to real people. The issue we should be focusing on is responsibility. Instead, many experts have devoted all their attention to fairness. And that’s simply not an ambitious enough goal.

Take the scandal Google found itself in a few years back. A software engineer named Jacky Alciné noticed that the company’s image recognition algorithm was classifying photos of his Black friends as “gorillas.” To Google’s credit, they acknowledged and remedied the problem (though their fix was simply to block the algorithm from identifying gorillas altogether).

Now, to actually fix the problem in a more fundamental way, you’d want to retrain these models, presumably by showing them additional pictures of people with darker complexions. But that’s beside the point here. What’s important is this: does it matter whether Google makes their algorithm more “fair” if they don’t use the model responsibly? If they sell their image recognition algorithms to a totalitarian state, it doesn’t matter whether the model was trained “fairly” or how well it can identify anyone or anything. What matters is that they’re actively making the world a worse place.

Recently, you’ve seen big companies like IBM and Microsoft take the opposite tack in response to the Black Lives Matter protests in America. Namely, they’ve pledged to no longer sell facial recognition to police departments. But note the “no longer” in that sentence. This is a new stance, taken because of a popular movement. It’s responsibility, not fairness, born out of public pressure.

Contrast that with a company like Clearview AI, the secretive facial recognition company that only recently stopped selling its tech to private enterprises. Regardless of how you feel about law enforcement having far-reaching surveillance powers, chances are you don’t feel great about Walmart or Macy’s having that same ability.

The point here is that there’s a marked difference between the Google example and the law enforcement examples. One is a technical problem. The other is an ethical one. And it’s going to take more than a few researchers in a lab to figure this out.

When we talk about responsibility versus fairness, this is precisely the distinction we’re worried about. But it goes a lot deeper than that. Responsible AI isn’t just about removing harmful bias. It’s about access to AI. It’s about who gets to benefit from AI. It’s about who gets to work on and work with AI. It’s about sustainability in AI. It’s about us in the tech industry partnering with ethical companies, not exploitative ones. It’s all of it, not some of it.

In this series, we’ll discuss each of these in detail. But we’re not just going to talk about the problems we face as an industry. We’re going to talk about their solutions as well. We’d love to hear what you agree with, and what you don’t.

Up next? The impact of AI. We hope you’ll join us tomorrow.

