Later On

A blog written for those whose interests more or less match mine.

How to Force Machines to Play Fair


In Quanta Kevin Hartnett has an interesting article about Cynthia Dwork and her work:

Theoretical computer science can be as remote and abstract as pure mathematics, but new research often begins in response to concrete, real-world problems. Such is the case with the work of Cynthia Dwork.

Over the course of a distinguished career, Dwork has crafted rigorous solutions to dilemmas that crop up at the messy interface between computing power and human activity. She is most famous for her invention in the early to mid-2000s of “differential privacy,” a set of techniques that safeguard the privacy of individuals in a large database. Differential privacy ensures, for example, that a person can contribute their genetic information to a medical database without fear that anyone analyzing the database will be able to figure out which genetic information is hers — or even whether she has participated in the database at all. And it achieves this security guarantee in a way that allows researchers to use the database to make new discoveries.
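The most common way to realize the kind of guarantee described above is to add calibrated noise to query results before releasing them. As a minimal illustrative sketch (not Dwork's own code), the classic Laplace mechanism answers a counting query with noise scaled to 1/epsilon; the function names and numbers here are invented for illustration:

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two
    # independent exponential samples with mean `scale`.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the true count by at most 1), so Laplace noise
    with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The released value is close enough to the true count to be useful to researchers, yet the noise masks any single individual's presence or absence in the database.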

Dwork’s latest work has a similar flavor to it. In 2011 she became interested in the question of fairness in algorithm design. As she observes, algorithms increasingly control the kinds of experiences we have: They determine the advertisements we see online, the loans we qualify for, the colleges that students get into. Given this influence, it’s important that algorithms classify people in ways that are consistent with commonsense notions of fairness. We wouldn’t think it’s ethical for a bank to offer one set of lending terms to minority applicants and another to white applicants. But as recent work has shown — most notably in the book “Weapons of Math Destruction,” by the mathematician Cathy O’Neil — discrimination that we reject in normal life can creep into algorithms.

Privacy and ethics are two questions with their roots in philosophy. These days, they require a solution in computer science. Over the past five years, Dwork, who is currently at Microsoft Research but will be joining the faculty at Harvard University in January, has been working to create a new field of research on algorithmic fairness. Earlier this month she helped organize a workshop at Harvard that brought together computer scientists, law professors and philosophers.

Quanta Magazine spoke with Dwork about algorithmic fairness, her interest in working on problems with big social implications, and how a childhood experience with music shaped the way she thinks about algorithm design today. An edited and condensed version of the interview follows.

QUANTA MAGAZINE: When did it become obvious to you that computer science was where you wanted to spend your time thinking?

CYNTHIA DWORK: I always enjoyed all of my subjects, including science and math. I also really loved English and foreign languages and, well, just about everything. I think that I applied to the engineering school at Princeton a little on a lark. My recollection is that my mother said, you know, this might be a nice combination of interests for you, and I thought, she’s right.

It was a little bit of a lark, but on the other hand it seemed as good a place to start as any. It was only in my junior year of college when I first encountered automata theory that I realized that I might be headed not for a programming job in industry but instead toward a Ph.D. There was a definite exposure I had to certain material that I thought was beautiful. I just really enjoyed the theory.

You’re best known for your work on differential privacy. What drew you to your present work on “fairness” in algorithms?

I wanted to find another problem. I just wanted something else to think about, for variety. And I had enjoyed the sort of social mission of the privacy work — the idea that we were addressing or attempting to address a very real problem. So I wanted to find a new problem and I wanted one that would have some social implications.

So why fairness?

I could see that it was going to be a major concern in real life.

How so?

I think it was pretty clear that algorithms were going to be used in a way that could affect individuals’ options in life. We knew they were being used to determine what kind of advertisements to show people. We may not be used to thinking of ads as great determiners of our options in life. But what people get exposed to has an impact on them. I also expected that algorithms would be used for at least some kind of screening in college admissions, as well as in determining who would be given loans.

I didn’t foresee the extent to which they’d be used to screen candidates for jobs and other important roles. So these things — what kinds of credit options are available to you, what sort of job you might get, what sort of schools you might get into, what things are shown to you in your everyday life as you wander around on the internet — these aren’t trivial concerns.

Your 2012 paper that launched this line of your research hinges on the concept of “awareness.” Why is this important?

One of the examples in the paper is: Suppose you had a minority group in which the smart students were steered toward math and science, and a dominant group in which the smart students were steered toward finance. Now if someone wanted to write a quick-and-dirty classifier to find smart students, maybe they should just look for students who study finance because, after all, the majority is much bigger than the minority, and so the classifier will be pretty accurate overall. The problem is that not only is this unfair to the minority, but it also has reduced utility compared to a classifier that understands that if you’re a member of the minority and you study math, you should be viewed as similar to a member of the majority who studies finance. That gave rise to the title of the paper, “Fairness Through Awareness,” meaning cross-cultural awareness.
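The example above can be sketched as a tiny simulation. All the population sizes and subject labels below are invented for illustration; the point is only that the "study finance means smart" shortcut is both unfair to the minority group and less accurate overall than a group-aware classifier:

```python
# Each student is a tuple: (group, subject, is_smart).
# Invented numbers: a majority of 100 and a minority of 10,
# where smart students in each group study different subjects.
majority = [("majority", "finance" if smart else "arts", smart)
            for smart in [True] * 40 + [False] * 60]
minority = [("minority", "math" if smart else "arts", smart)
            for smart in [True] * 4 + [False] * 6]
students = majority + minority

def naive(student):
    # Quick-and-dirty rule: "smart" means "studies finance".
    _, subject, _ = student
    return subject == "finance"

def aware(student):
    # Group-aware rule: the subject that signals "smart"
    # differs between the two groups.
    group, subject, _ = student
    smart_subject = "math" if group == "minority" else "finance"
    return subject == smart_subject

def accuracy(classifier, population):
    return sum(classifier(s) == s[2] for s in population) / len(population)
```

The naive rule scores perfectly on the majority but labels every smart minority student as not smart, while the aware rule is perfect on both groups, so awareness improves utility as well as fairness.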

In that same paper you also draw a distinction between treating individuals fairly and treating groups fairly. You conclude that sometimes it’s not enough just to treat individuals fairly — there’s also a need to be aware of group differences and to make sure groups of people with similar characteristics are treated fairly.

What we do in the paper is, we start with individual fairness and we discuss what the connection is between individual fairness and group fairness, and we mathematically investigate the question of when individual fairness ensures group fairness and what you can do to ensure group fairness if individual fairness doesn’t do the trick.
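In the paper, individual fairness is formalized as a Lipschitz condition: similar individuals (under some task-specific similarity metric) must receive similar outcomes. A minimal sketch of that check, with an invented metric and score function, might look like this:

```python
from itertools import combinations

def is_individually_fair(score, metric, individuals):
    """Check the Lipschitz condition over all pairs:
    |score(x) - score(y)| <= metric(x, y),
    i.e. people the metric deems similar get similar scores."""
    return all(abs(score(x) - score(y)) <= metric(x, y)
               for x, y in combinations(individuals, 2))
```

A score function that varies more steeply than the similarity metric allows will fail this check, which is exactly the kind of violation the paper's framework is designed to detect before one even asks about group-level statistics.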

What’s a situation where individual fairness wouldn’t be enough to ensure group fairness? . . .

Continue reading.

Written by LeisureGuy

23 November 2016 at 6:37 pm
