How algorithms rule our working lives
Cathy O’Neil reports in the Guardian:
A few years ago, a young man named Kyle Behm took a leave from his studies at Vanderbilt University in Nashville, Tennessee. He was suffering from bipolar disorder and needed time to get treatment. A year and a half later, Kyle was healthy enough to return to his studies at a different university. Around that time, he learned from a friend about a part-time job. It was just a minimum-wage job at a Kroger supermarket, but it seemed like a sure thing. His friend, who was leaving the job, could vouch for him. For a high-achieving student like Kyle, the application looked like a formality.
But Kyle didn’t get called in for an interview. When he inquired, his friend explained to him that he had been “red-lighted” by the personality test he’d taken when he applied for the job. The test was part of an employee selection program developed by Kronos, a workforce management company based outside Boston. When Kyle told his father, Roland, an attorney, what had happened, his father asked him what kind of questions had appeared on the test. Kyle said that they were very much like the “five factor model” test, which he’d been given at the hospital. That test grades people for extraversion, agreeableness, conscientiousness, neuroticism, and openness to ideas.
At first, losing one minimum-wage job because of a questionable test didn’t seem like such a big deal. Roland Behm urged his son to apply elsewhere. But Kyle came back each time with the same news. The companies he was applying to were all using the same test, and he wasn’t getting offers.
Roland Behm was bewildered. Questions about mental health appeared to be blackballing his son from the job market. He decided to look into it and soon learned that the use of personality tests for hiring was indeed widespread among large corporations. And yet he found very few legal challenges to this practice. As he explained to me, people who apply for a job and are red-lighted rarely learn that they were rejected because of their test results. Even when they do, they’re not likely to contact a lawyer.
Behm went on to send notices to seven companies, including Home Depot and Walgreens, informing them of his intent to file a class-action suit alleging that the use of the exam during the job application process was unlawful. The suit, as I write this, is still pending. Arguments are likely to focus on whether the Kronos test can be considered a medical exam, the use of which in hiring is illegal under the Americans with Disabilities Act of 1990. If this turns out to be the case, the court will have to determine whether the hiring companies themselves are responsible for running afoul of the ADA, or if Kronos is.
But the questions raised by this case go far beyond which particular company may or may not be responsible. Automatic systems based on complicated mathematical formulas, such as the one used to sift through Kyle Behm’s job application, are becoming more common across the developed world. And given their scale and importance, combined with their secrecy, these algorithms have the potential to create an underclass of people who will find themselves increasingly and inexplicably shut out from normal life.
It didn’t have to be this way. After the financial crash, it became clear that the housing crisis and the collapse of major financial institutions had been aided and abetted by mathematicians wielding magic formulas. If we had been clear-headed, we would have taken a step back at this point to figure out how we could prevent a similar catastrophe in the future. But instead, in the wake of the crisis, new mathematical techniques were hotter than ever, and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending patterns. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.
This was the big data economy, and it promised spectacular gains. A computer program could speed through thousands of résumés or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top. This not only saved time but also was marketed as fair and objective. After all, it didn’t involve prejudiced humans digging through reams of paper, just machines processing cold numbers. By 2010 or so, mathematics was asserting itself as never before in human affairs, and the public largely welcomed it.
Most of these algorithmic applications were created with good intentions. The goal was to replace subjective judgments with objective measurements in any number of fields – whether it was a way to locate the worst-performing teachers in a school or to estimate the chances that a prisoner would return to jail.
These algorithmic “solutions” are targeted at genuine problems. School principals cannot be relied upon to consistently flag problematic teachers, because those teachers are often also their friends. And judges are only human; they have prejudices that prevent them from being entirely fair – their rulings have been shown to be harsher right before lunch, for example, when they’re hungry – so it’s a worthy goal to increase consistency, especially if you can rest assured that the newer system is also scientifically sound.
The difficulty is that last part…