Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

The hi-tech war on science fraud—and some dangers

leave a comment »

In the Guardian Stephen Buranyi has a fascinating long article on how to fight fraud in science:

One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”

Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
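To make the “spellchecker” analogy concrete: a reported result such as “t(28) = 2.20, p = .048” contains enough information to recompute the p-value from the test statistic and degrees of freedom, and a mismatch points to a typo or rounding slip. Below is a minimal sketch of that kind of consistency check; the regex, tolerance, and function are illustrative assumptions for this sketch, not Statcheck's actual rules (Statcheck itself is an R package).

```python
import re
from scipy import stats

# Illustrative pattern for APA-style t-test reports, e.g. "t(28) = 2.20, p = .036".
# Statcheck's real extraction rules cover many more test types; this regex is
# an assumption made for the sketch.
T_REPORT = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*=\s*(\.\d+)")

def check_reported_p(text, tolerance=0.005):
    """Recompute two-tailed p-values from reported t and df; flag mismatches."""
    flags = []
    for df, t, p in T_REPORT.findall(text):
        recomputed = 2 * stats.t.sf(abs(float(t)), int(df))
        if abs(recomputed - float(p)) > tolerance:
            flags.append((f"t({df}) = {t}", float(p), round(recomputed, 4)))
    return flags

# A rounding slip of the kind Statcheck reported to Kauff:
print(check_reported_p("The effect was significant, t(28) = 2.20, p = .048."))
# flags the result: the recomputed p is roughly .036, not .048
```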

Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.

The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.

Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.

“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.

The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.

But three decades later, scientists still have only the crudest estimates of how much fraud actually exists. The currently accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.

If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.
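The gap described here is stark enough to spell out with the quoted figures. Note that the comparison mixes a per-paper rate (retractions) with a per-scientist rate (Fanelli's survey), so treat this as an order-of-magnitude illustration only:

```python
# Rough arithmetic using the figures quoted above. The comparison mixes a
# per-paper rate (retractions) with a per-scientist rate (Fanelli's survey),
# so this is an order-of-magnitude illustration, not a precise estimate.
papers_published_2015 = 800_000      # "more than 800,000 new papers were published"
papers_retracted_2015 = 684          # RetractionWatch count for 2015
self_reported_fraud = 0.02           # Fanelli (2009): ~2% admit falsifying data

retraction_rate = papers_retracted_2015 / papers_published_2015
print(f"Retracted: {retraction_rate:.3%} of papers")              # ~0.086%
print(f"Admitted falsification: {self_reported_fraud:.0%} of scientists")
# The gap of more than an order of magnitude is what "the vast majority are
# going totally undetected" means in practice.
```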

But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.
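The article doesn't say how such a program would work, but one screen from the fabrication-detection literature gives the flavor: in genuinely measured data, terminal digits tend to be close to uniformly distributed, whereas invented numbers often over-represent round values. A minimal sketch of that idea follows; it is not Hartgerink's method, and a flag from a screen like this would be a prompt for human scrutiny, not evidence of misconduct.

```python
from collections import Counter
from scipy import stats

def terminal_digit_screen(values, alpha=0.01):
    """Chi-squared test for uniformity of terminal digits. A skewed
    distribution is a red flag, not proof, of manufactured numbers."""
    digits = [str(v).replace(".", "").replace("-", "")[-1] for v in values]
    counts = Counter(digits)
    observed = [counts.get(str(d), 0) for d in range(10)]
    expected = [len(values) / 10] * 10
    chi2, p = stats.chisquare(observed, expected)
    return {"chi2": round(chi2, 2), "p": round(p, 4), "flag": p < alpha}

# A fabricator who favors round numbers leaves a telltale excess of 0s and 5s.
fabricated = [4.20, 3.50, 5.00, 4.75, 3.25, 4.50, 5.25, 3.00, 4.00, 5.50] * 20
print(terminal_digit_screen(fabricated))   # tiny p-value -> flagged
```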

Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try to close this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.


In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.

As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.

Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.

His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.

But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, whom I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”

A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.

In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.

Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.

On 7 September 2011, the university revealed that . . .

Continue reading. There’s a lot more.

Written by LeisureGuy

7 February 2017 at 11:53 am

Posted in Science, Technology

Skydiving as an indoor sport (using a vertical wind tunnel)

leave a comment »

The above video is from this Jason Kottke post, which contains links to other routines from the same meet, including Kyra Poh’s first-place routine. The routine above, by Maja Kuczynska, placed third.

Written by LeisureGuy

6 February 2017 at 1:06 pm

The FCC is stopping 9 companies from providing federally subsidized Internet to the poor

leave a comment »

The government seems to be moving toward serving only the wealthy by smoothing the path to their getting even more wealth and transferring money as rapidly as possible from individuals into the coffers of corporations. Brian Fung reports in the Washington Post:

Regulators are telling nine companies they won’t be allowed to participate in a federal program meant to help them provide affordable Internet access to low-income consumers — weeks after those companies had been given the green light.

The move, announced Friday by FCC Chairman Ajit Pai, reverses a decision by his Democratic predecessor, Tom Wheeler, and undercuts the companies’ ability to provide low-cost Internet access to poorer Americans. In a statement, Pai called the initial decisions a form of “midnight regulation.”

“These last-minute actions, which did not enjoy the support of the majority of commissioners at the time they were taken, should not bind us going forward,” he said.

The program, known as Lifeline, provides registered households with a $9.25-a-month credit, which can then be used to buy home Internet service. As many as 13 million Americans who do not have broadband service at home may be eligible for Lifeline, the FCC has found. Roughly 900 service providers participate in the Lifeline program.

For Kajeet Inc., one of the companies that was initially granted permission to provide service through Lifeline, the news comes as a blow.

“I’m most concerned about the children we serve,” said Kajeet founder Daniel Neal. “We partner with school districts — 41 states and the District of Columbia — to provide educational broadband so that poor kids can do their homework.”

Since becoming chairman last month, Pai has made closing the digital divide a central axis of his policy agenda. Although the vast majority of Americans have access to Internet service, there remain distinct gaps in U.S. broadband penetration, particularly among seniors, minorities and the poor. In his first address to FCC staff, Pai singled out the digital divide as one of the signature issues he hoped to address.

But Friday’s move cuts against those remarks, according to some consumer advocates who argue the decision will make it harder for low-income Americans to access the Web.

“The most obvious fact in our society is that high-speed Internet is astronomically expensive for the middle-class and down,” said Gene Kimmelman, president of the consumer advocacy group Public Knowledge. “So in any way limiting the Lifeline program, at this moment in time, exacerbates the digital divide. It doesn’t address it in any positive way.”

As many as eight other companies are also subject to the FCC’s reversal, according to the text of the decision. The agency can freely reconsider decisions it’s made on the matter within 30 days of making them. Four of the nine approvals were revoked in response to a complaint, while the remaining five were revoked within the 30-day window.

Until last year, Lifeline recipients could only apply their federal benefit toward landline and mobile voice service. Significant changes to the program under Wheeler let beneficiaries, for the first time, use their credits to purchase broadband. The expansion was opposed by Pai and other Republican officials, who argued that the measure did not do enough to rein in potential costs or to control waste, fraud and abuse. (Democrats claimed that recent reforms to the program had helped cut down on the latter.) . . .

Continue reading.

Written by LeisureGuy

4 February 2017 at 7:22 pm

It may only take 3.5% of the population to topple a dictator – with civil resistance

leave a comment »

Very interesting article by Erica Chenoweth in the Guardian:

Many people across the United States are despondent about the new president – and the threat to democracy his rise could represent. But they shouldn’t be. At no time in recorded history have people been more equipped to effectively resist injustice using civil resistance.

Today, those seeking knowledge about the theory and practice of civil resistance can find a wealth of information at their fingertips. In virtually any language, one can find training manuals, strategy-building tools, facilitation guides and documentation about successes and mistakes of past nonviolent campaigns.

Material is available in many formats, including graphic novels, e-classes, films and documentaries, scholarly books, novels, websites, research monographs, research inventories, and children’s books. And of course, the world is full of experienced activists with wisdom to share.

The United States has its own rich history – past and present – of effective uses of nonviolent resistance. The technique established alternative institutions like economic cooperatives, alternative courts and an underground constitutional convention in the American colonies, resulting in the Declaration of Independence. In the 20th century, strategic nonviolent resistance won voting rights for women and for African Americans living in the Jim Crow south.

Nonviolent resistance has empowered the labor movement, closed down or cancelled dozens of nuclear plants, protected farm workers from abuse in California, motivated the recognition of Aids patients as worthy of access to life-saving treatment, protected free speech, put climate reform on the agenda, given reprieve to Dreamers, raised awareness about economic inequality, changed the conversation about systemic racism and black lives and stalled construction of an oil pipeline on indigenous lands in Standing Rock.

In fact, it is hard to identify a progressive cause in the United States that has advanced without a civil resistance movement behind it.

This does not mean nonviolent resistance always works. Of course it does not, and short-term setbacks are common too. But long-term change never comes with submission, resignation, or despair about the inevitability and intractability of the status quo. . .

Continue reading. There’s a lot more.

Written by LeisureGuy

2 February 2017 at 7:36 pm

A starting point for tech developers to help victims of abuse

leave a comment »

Lily Hay Newman writes in Wired:

For adults dealing with abuse from an intimate partner, technology can be both an empowering tool and a potential source of risk. But a team of Google researchers hopes to spur the tech community to make their products easier to use for survivors—and build more potentially life-saving tools into their products.

An intimate partner abuser is in a unique position to turn someone’s technological world against them. They may know passwords, or be able to get them, and may even be in a position to place spyware on mobile devices to monitor web browsing, or get real-time location data. For those suffering abuse, a home computer may help connect with outside resources for help, but it can also reveal an escape plan.

The challenge of using technology in a secure and private way can make an already traumatic situation even more stressful for abuse survivors. To study this dynamic, a team at Google collaborated with organizations like Community Overcoming Relationship Abuse (CORA) to gather data about survivors’ technology use, create a framework to describe this digital engagement, and draw some conclusions about how developers can build or improve their technologies with survivors of intimate partner abuse in mind. One team member, Sunny Consolvo, is presenting the findings today at the Enigma security conference in Oakland. (The group will release its paper at the Association for Computing Machinery CHI conference in May.)

“We couldn’t find much in the literature that was focused on helping technology designers address this population, so that’s a big reason we’re focused on it,” says Consolvo, whose user experience research at Google focuses on usable privacy and security. “The better we can design for folks in these situations, the better [technology] becomes for everyone.”

It’s also a large population to address. The Centers for Disease Control and Prevention estimates that in the US one in three women, and one in four men, have been victims of intimate partner abuse at some point in their lives. “It’s not a mild social issue, it’s an epidemic,” says Cindy Southworth, founder of the Safety Net Technology Project at the National Network to End Domestic Violence, who was not involved in the Google research. “It’s a staggering number of people.”

A Range of Solutions

The Google team’s study looks specifically at adult intimate partner abuse (domestic violence more broadly also includes things like child abuse, and teens can also experience intimate partner abuse), and identifies three phases of technology use by survivors: Physical Control, Escape, and Life Apart. In the first phase, the survivor uses technology while still living with, or regularly interacting with, their abuser. This means the abuser probably has regular physical access to a victim’s devices. The second phase addresses the transition period away from an abuser—full separation often takes multiple attempts—which combines periods of physical proximity with periods of being apart. In the third phase, the survivor has separated from their abuser, and wants to avoid being found while strengthening their support network.

Even if you haven’t been in an abusive relationship, it’s not hard to imagine the ways in which using technology can quickly become thorny. Text messages, social networks, and email can be indispensable for preventing an abuse survivor’s isolation, but can also become oppressive if and when an abuser finds a way to log into those accounts or do other types of surveillance. A browser can be key for planning an escape, but a search history can also betray those plans.

Though the study found, and experts agree, that there are many generalizable aspects of technology use for survivors, it’s also clear that technology needs to be flexible enough to account for the nuance of people’s lives. “We found lots of ways that technology is already working for survivors,” Consolvo says. “But these situations are often unique, it’s really complicated. And this is a population that’s motivated to look into those more advanced privacy and security features even though they’re available for anyone.”

In its recommendations, the study emphasizes the importance of designing privacy and security tools that are easy to use. Survivors are often already under intense strain when they use digital tools, which can contribute to errors when attempting to browse privately, clear a search history, or delete data. The study found that privacy and security tutorials that walk users through available options are valuable for helping survivors understand what’s possible. Offering granular controls in the first place is another crucial element, especially in the Life Apart phase when a survivor wants to use technology relatively normally, but still has potential digital overlap with their abuser.

Awareness First

Google already incorporates some of these conclusions in features like two-step verification, Chrome’s Incognito Mode, the Google Account Privacy Checkup, and the Security Checkup. Other companies like Facebook offer privacy and security walkthroughs, too. And services that provide sign-in logs, like Google and Twitter, can help survivors know if someone else logs into their accounts, what the IP address was, and where the sign-in took place. But not all companies offer such a full suite of tools, and few technology products on the market today were built with specific user-interface consideration for abuse survivors. Even the valuable tools that already exist might be tweaked or designed differently if intimate partner abuse were being taken into consideration. These takeaways may seem somewhat self-evident, but the important step is the work to compile and codify them in a way that is specifically accessible to the tech community. . .
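The sign-in histories mentioned above live behind each service's own security pages rather than in a raw file, but the check they enable is simple to state: compare each new sign-in against the devices and addresses the account owner recognizes. A toy sketch of that check, with an invented log format and documentation-range IPs (real services expose this data through their account-security interfaces, not like this):

```python
# Sketch of the kind of check a sign-in log enables. The log format and IP
# addresses are invented for illustration; real services (Google, Twitter)
# surface this data through their own account-security pages.
KNOWN_DEVICES = {"203.0.113.7", "198.51.100.22"}   # addresses the user recognizes

signins = [
    {"time": "2017-01-30T08:12", "ip": "203.0.113.7", "location": "Oakland, CA"},
    {"time": "2017-01-30T23:41", "ip": "192.0.2.55",  "location": "unknown"},
]

for entry in signins:
    if entry["ip"] not in KNOWN_DEVICES:
        print(f"Unrecognized sign-in at {entry['time']} "
              f"from {entry['ip']} ({entry['location']})")
```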

Continue reading.

Written by LeisureGuy

1 February 2017 at 12:49 pm

Posted in Daily life, Technology

A calming interlude

with one comment

Despite all the bad things humanity has done, it also has its moments:

Written by LeisureGuy

31 January 2017 at 1:40 pm

Posted in Music, Technology, Video

The Data That Turned the World Upside Down

leave a comment »

Hannes Grassegger and Mikael Krogerus report at Motherboard:

Psychologist Michal Kosinski developed a method to analyze people in minute detail based on their Facebook activity. Did a similar tool help propel Donald Trump to victory? Two reporters from Zurich-based Das Magazin (where an earlier version of this story appeared in December in German) went data-gathering.

On November 9 at around 8:30 AM, Michal Kosinski woke up in the Hotel Sunnehus in Zurich. The 34-year-old researcher had come to give a lecture at the Swiss Federal Institute of Technology (ETH) about the dangers of Big Data and the digital revolution. Kosinski gives regular lectures on this topic all over the world. He is a leading expert in psychometrics, a data-driven sub-branch of psychology. When he turned on the TV that morning, he saw that the bombshell had exploded: contrary to forecasts by all leading statisticians, Donald J. Trump had been elected president of the United States.

For a long time, Kosinski watched the Trump victory celebrations and the results coming in from each state. He had a hunch that the outcome of the election might have something to do with his research. Finally, he took a deep breath and turned off the TV.

On the same day, a then little-known British company based in London sent out a press release: “We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump’s extraordinary win,” Alexander James Ashburner Nix was quoted as saying. Nix is British, 41 years old, and CEO of Cambridge Analytica. He is always immaculately turned out in tailor-made suits and designer glasses, with his wavy blonde hair combed back from his forehead. His company wasn’t just integral to Trump’s online campaign, but to the UK’s Brexit campaign as well.

Of these three players—reflective Kosinski, carefully groomed Nix and grinning Trump—one of them enabled the digital revolution, one of them executed it and one of them benefited from it.

How dangerous is big data?

Anyone who has not spent the last five years living on another planet will be familiar with the term Big Data. Big Data means, in essence, that everything we do, both on and offline, leaves digital traces. Every purchase we make with our cards, every search we type into Google, every movement we make when our mobile phone is in our pocket, every “like” is stored. Especially every “like.” For a long time, it was not entirely clear what use this data could have—except, perhaps, that we might find ads for high blood pressure remedies just after we’ve Googled “reduce blood pressure.”

On November 9, it became clear that maybe much more is possible. The company behind Trump’s online campaign—the same company that had worked for Leave.EU in the very early stages of its “Brexit” campaign—was a Big Data company: Cambridge Analytica.

To understand the outcome of the election—and how political communication might work in the future—we need to begin with a strange incident at Cambridge University in 2014, at Kosinski’s Psychometrics Center.

Psychometrics, sometimes also called psychographics, focuses on measuring psychological traits, such as personality. In the 1980s, two teams of psychologists developed a model that sought to assess human beings based on five personality traits, known as the “Big Five.” These are: openness (how open are you to new experiences?), conscientiousness (how much of a perfectionist are you?), extroversion (how sociable are you?), agreeableness (how considerate and cooperative are you?) and neuroticism (are you easily upset?). Based on these dimensions—they are also known as OCEAN, an acronym for openness, conscientiousness, extroversion, agreeableness, neuroticism—we can make a relatively accurate assessment of the kind of person in front of us. This includes their needs and fears, and how they are likely to behave. The “Big Five” has become the standard technique of psychometrics. But for a long time, the problem with this approach was data collection, because it involved filling out a complicated, highly personal questionnaire. Then came the Internet. And Facebook. And Kosinski.
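To make the questionnaire step concrete, here is a minimal sketch of Big Five scoring: rate a handful of statements from 1 to 5, flip the reverse-keyed items, and average within each trait. The first two items echo the examples quoted below in the article; the remaining items and all keying are invented for illustration, and real instruments use many validated items per trait.

```python
# Minimal sketch of Big Five questionnaire scoring. The items and their keying
# are invented for this illustration; they are not the MyPersonality instrument.
# Respondents rate each statement from 1 (disagree) to 5 (agree).
ITEMS = [
    ("I panic easily",             "neuroticism",       False),
    ("I contradict others",        "agreeableness",     True),   # reverse-keyed
    ("I love trying new things",   "openness",          False),
    ("I am the life of the party", "extroversion",      False),
    ("I leave my things around",   "conscientiousness", True),   # reverse-keyed
]

def score_big_five(ratings):
    """ratings: answers (1-5) aligned with ITEMS; returns the mean per trait."""
    totals, counts = {}, {}
    for (_, trait, reverse), r in zip(ITEMS, ratings):
        value = 6 - r if reverse else r          # flip reverse-keyed items
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {trait: totals[trait] / counts[trait] for trait in totals}

print(score_big_five([4, 2, 5, 3, 1]))
# -> {'neuroticism': 4.0, 'agreeableness': 4.0, 'openness': 5.0,
#     'extroversion': 3.0, 'conscientiousness': 5.0}
```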

Michal Kosinski was a student in Warsaw when his life took a new direction in 2008. He was accepted by Cambridge University to do his PhD at the Psychometrics Centre, one of the oldest institutions of this kind worldwide. Kosinski joined fellow student David Stillwell (now a lecturer at Judge Business School at the University of Cambridge) about a year after Stillwell had launched a little Facebook application in the days when the platform had not yet become the behemoth it is today. Their MyPersonality app enabled users to fill out different psychometric questionnaires, including a handful of psychological questions from the Big Five personality questionnaire (“I panic easily,” “I contradict others”). Based on the evaluation, users received a “personality profile”—individual Big Five values—and could opt-in to share their Facebook profile data with the researchers.

Kosinski had expected a few dozen college friends to fill in the questionnaire, but before long, hundreds, thousands, then millions of people had revealed their innermost convictions. Suddenly, the two doctoral candidates owned the largest dataset combining psychometric scores with Facebook profiles ever to be collected.

The approach that Kosinski and his colleagues developed over the next few years was actually quite simple. First, they provided test subjects with a questionnaire in the form of an online quiz. From their responses, the psychologists calculated the personal Big Five values of respondents. Kosinski’s team then compared the results with all sorts of other online data from the subjects: what they “liked,” shared or posted on Facebook, or what gender, age, place of residence they specified, for example. This enabled the researchers to connect the dots and make correlations.

Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
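In machine-learning terms, each page becomes one binary feature (liked or not), and a linear model learns a small weight per page, so individually weak signals accumulate into a confident prediction. Here is a sketch of that pooling effect on synthetic data; the plain logistic regression is a stand-in assumption, not the models Kosinski's team actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for like data: 1,000 users x 200 pages, where
# X[i, j] = 1 if user i "liked" page j. Each page carries only a tiny signal.
n_users, n_pages = 1000, 200
X = rng.binomial(1, 0.1, size=(n_users, n_pages))
page_effects = rng.normal(0, 0.3, size=n_pages)          # weak per-like effects
signal = X @ page_effects
y = (signal + rng.normal(0, 1, size=n_users) > np.median(signal)).astype(int)

# Pooling hundreds of weak features yields a strong held-out prediction.
model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(f"Held-out accuracy: {model.score(X[800:], y[800:]):.2f}")
```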

Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.

The strength of their modeling was illustrated by how well it could predict a subject’s answers. Kosinski continued to work on the models incessantly: before long, he was able to evaluate a person better than the average work colleague, merely on the basis of ten Facebook “likes.” Seventy “likes” were enough to outdo what a person’s friends knew, 150 what their parents knew, and 300 “likes” what their partner knew. More “likes” could even surpass what a person thought they knew about themselves. On the day that Kosinski published these findings, he received two phone calls. The threat of a lawsuit and a job offer. Both from Facebook. . .

Continue reading.

Written by LeisureGuy

29 January 2017 at 6:00 pm
