Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Can we please stop gerrymandering?

Christopher Ingraham has a great article in the Washington Post on the practice of gerrymandering, clearly explained in this short video:

Ingraham’s article is well worth reading because the problem is so prevalent. States that have solved it have taken the drawing of district lines away from the legislature and given it to an independent commission (see end of post). Ingraham’s article begins:

In his State of the Union speech, President Obama called on lawmakers and the public to take a number of steps “to change the system to reflect our better selves” for “a better politics.” The top item on that list was to end partisan gerrymandering: “we have to end the practice of drawing our congressional districts so that politicians can pick their voters, and not the other way around,” Obama said.

In most states, state legislatures draw the district boundaries that determine how many delegates the state sends to the U.S. Congress, as well as the general partisan make-up of that delegation. State legislatures are partisan beasts, and if one party is in control of the process they can draw boundaries to give themselves a numeric advantage over their opponents in Congress. This process is called gerrymandering.

Some state legislatures are more brazen about the process than others. Maryland’s districts, drawn by Democrats, are one particularly egregious example. North Carolina’s, drawn by Republicans, are another. Advocates of reform have proposed various solutions to the problem over the years. In some states, redistricting is put in the hands of an independent commission. In others, lengthy court battles are playing out to draw the districts more fairly.

But a fundamental problem with district-drawing still remains: as long as humans are drawing the lines, there’s a danger of bias and self-interest creeping into the process. There is another way, however: we could simply let computers do the drawing for us.

From a technological standpoint it’s fairly straightforward — a software engineer in Massachusetts named Brian Olson wrote an algorithm to do it in his spare time. As I described it in 2014, Olson’s algorithm creates “optimally compact” equal-population congressional districts in each state, based on 2010 census data. It draws districts that respect the boundaries of census blocks, which are the smallest geographic units used by the Census Bureau. This ensures that the district boundaries reflect actual neighborhoods and don’t, say, cut an arbitrary line through somebody’s house.

To see what this looks like in practice, compare this map of our current congressional districts (top) with one we stitched together from Olson’s output (bottom).

[Image: current U.S. congressional districts (top) compared with the districts generated by Olson’s algorithm (bottom)]

Big difference, isn’t it? You can check out a larger version of the compacted map here. Rather than a confusing snarl of interlocked districts, you have neat, trim boundaries that make intuitive sense. Here are some individual state comparisons I made back in 2014 that let you see some more of the detail: . . .

Continue reading.
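Olson’s actual program works on real 2010 census-block data, but the core idea of drawing “optimally compact,” equal-population districts is easy to sketch. What follows is a toy illustration of that idea, not Olson’s code: a population-weighted, capacity-constrained clustering of block centroids, with every name and parameter invented for the example.

# Toy sketch of compactness-driven districting (not Brian Olson's actual code).
# Idea: treat census blocks as weighted points and assign them to k districts
# so each district is roughly equal in population and geographically compact.
import numpy as np

def draw_districts(centroids, populations, k, iterations=50, seed=0):
    """centroids: (n, 2) array of block coordinates; populations: (n,) head counts."""
    rng = np.random.default_rng(seed)
    n = len(centroids)
    centers = centroids[rng.choice(n, size=k, replace=False)].astype(float)
    target = populations.sum() / k            # ideal population per district

    for _ in range(iterations):
        # Assign blocks nearest-center-first so compactness drives the shape,
        # but close a district off once it reaches its population target.
        dists = np.linalg.norm(centroids[:, None, :] - centers[None, :, :], axis=2)
        order = np.argsort(dists.min(axis=1))
        assignment = np.empty(n, dtype=int)
        filled = np.zeros(k)
        for i in order:
            for d in np.argsort(dists[i]):    # try districts from nearest to farthest
                if filled[d] < target:
                    assignment[i] = d
                    filled[d] += populations[i]
                    break
        # Move each center to the population-weighted mean of its blocks.
        for d in range(k):
            members = assignment == d
            if members.any():
                centers[d] = np.average(centroids[members], axis=0, weights=populations[members])
    return assignment, centers

Olson’s algorithm reportedly minimizes the average distance between each person and the center of their district over a state’s actual census blocks; the sketch above only conveys the flavor of that optimization on synthetic points.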

Some states have solved the problem, as Wikipedia notes:

Rather than allowing more political influence, some states have shifted redistricting authority from politicians and given it to non-partisan redistricting commissions. The states of Washington,[28] Arizona,[29] and California have created standing committees for redistricting following the 2010 census. Rhode Island[30] and the New Jersey Redistricting Commission have developed ad hoc committees, but developed the past two decennial reapportionments tied to new census data.

The Arizona State Legislature challenged the constitutionality of the use of a non-partisan commission, rather than the legislature, for redistricting. In Arizona State Legislature v. Arizona Independent Redistricting Commission, the US Supreme Court in 2015 upheld the constitutionality of non-partisan commissions.[31]

Note that both red states and blue states have been able to clean up their act on redistricting (though it seems that Arizona did not go quietly).

Wikipedia has an excellent short article devoted to Redistricting Commissions. It is well worth reading and pondering, and then getting your state moving toward one.

Written by LeisureGuy

19 February 2017 at 12:19 pm

Intelligent Machines are Teaching Themselves Quantum Physics

It had to happen. Once AI achieved really good pattern recognition, it could be put to work on all sorts of problems. Daniel Oberhaus reports in Motherboard:

Last year, Google’s DeepMind AI beat Lee Sedol at Go, a strategy game like chess, but orders of magnitude more complicated. The win was a remarkable step forward for the field of artificial intelligence, but it got Roger Melko, a physicist at the Perimeter Institute for Theoretical Physics, thinking about how neural networks—a type of AI modeled after the human brain—might be used to solve some of the toughest problems in quantum physics. Indeed, intelligent machines may be necessary to solve these problems.

“The thing about quantum physics is it’s highly complex in a very precise mathematical sense. A big problem we face when we study these quantum systems [without machine learning] is how to deal with this complexity,” Melko told me.

“DeepMind winning this game of Go kind of crystallized some of our thinking. Go is a very complex game, but there was a solution that came from machine learning,” he continued. “So we thought, why can’t we employ similar solutions to tackle quantum physics complexity problems?”

As an example, Melko cites his own work, which focuses on condensed matter physics—basically the science of interactions between many quantum particles in various solids or liquids. As Melko wrote in a recent article for Quartz, condensed matter physics “deals with the most complex concept in nature: the quantum wave function of a many-particle system.” The quantum wave function of a particle mathematically describes all of its possible states, or as Melko describes it to me, it is the “infinitely complex…reality of the particle.”

While “infinitely complex” might seem like a bit of an overstatement, according to Melko, just modeling the wave function of a nanometer-scale mote of dust would require a computer whose hard drive contained more magnetic bits than there are atoms in the universe. As for trying to compute the wave functions of several of these dust particles at once with a classical computer? Forget about it.

The question posed by Melko and other pioneers of the field of quantum machine learning was whether neural nets could perform tasks that are beyond the capacity of algorithms which don’t incorporate machine learning, like modeling the wave function of a multi-particle system—and they didn’t have to wait long for an answer.

According to a study published last week in Science, two physicists that were not affiliated with Melko created a relatively simple neural network that was able to reconstruct the wave function of a multi-particle system, and it did so better than any previous technique that did not use machine learning. As Giuseppe Carleo, a physicist at ETH Zurich and co-author of the study, told New Scientist, “It’s like having a machine learning how to crack quantum mechanics, all by itself.”

Now that Carleo and his colleague have their proof of concept, they hope to develop a more robust neural network that can handle more complex problems. This is also a good sign for Melko, who . . .

Continue reading. There’s more.
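For a sense of why a neural network helps here, consider the scaling Melko describes: a system of N spin-1/2 particles has 2^N complex amplitudes in its wave function, so even N around 300 outruns any conceivable hard drive. The Science result reportedly used a restricted Boltzmann machine as a compact stand-in for that exponentially large object. The toy sketch below is my own illustration of such an ansatz for a tiny spin chain, not the authors’ code, and the names and sizes in it are arbitrary.

# Toy restricted-Boltzmann-machine wave function for a tiny spin chain.
# Illustrative only: the full wave function of N spin-1/2 particles has 2**N
# amplitudes, while the network ansatz compresses it into a few small arrays.
# Real implementations use complex parameters; real weights keep this sketch short.
import numpy as np
from itertools import product

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude: exp(a.s) * prod_j 2*cosh(b_j + W[j].s)."""
    return np.exp(a @ spins) * np.prod(2 * np.cosh(b + W @ spins))

N, M = 8, 4                                   # 8 visible spins, 4 hidden units
rng = np.random.default_rng(1)
a = 0.1 * rng.standard_normal(N)              # visible biases
b = 0.1 * rng.standard_normal(M)              # hidden biases
W = 0.1 * rng.standard_normal((M, N))         # couplings

# Enumerating all 2**N configurations is only feasible because N is tiny.
configs = np.array(list(product([-1, 1], repeat=N)))
amps = np.array([rbm_amplitude(s, a, b, W) for s in configs])
probs = amps**2 / np.sum(amps**2)             # Born-rule probabilities

print(f"{2**N} amplitudes described by only {a.size + b.size + W.size} parameters")
print("most probable configuration:", configs[np.argmax(probs)])

Training such a network, that is, adjusting a, b, and W so the ansatz minimizes the energy of a given Hamiltonian, is the hard part and is what the paper’s variational method reportedly addresses; the point of the sketch is only the compression: 256 amplitudes captured by 44 parameters.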

Written by LeisureGuy

13 February 2017 at 3:38 pm

The internet and its services are enabling effective protest: Betsy DeVos edition

When I say the internet is “enabling” effective protest, I mean only that it provides a way for groups to communicate and work together. Their work is still required, and it has to be more than marching in protest, which is fun and also an opportunity to meet people, make contacts, plan, identify resources and next steps, and organize and start to work. The Internet greatly facilitated the quick and efficient organization of the Women’s March on Washington: endless details quickly resolved, a great visual symbol, terrific impact nationally and even globally. Not bad for a bunch of amateurs, and it reveals the power of a distributed approach, one that draws on and pools the expertise of many individuals: crowd-sourcing, in effect. And it works efficiently: good project-management software abounds nowadays, and many people have experience using it: expertise from the crowd.

Now look at this story in Salon by Taylor Link:

On her first day as the head of the education department, Betsy DeVos learned that American teachers have to buy their own pencils.

In a tweet that quickly went viral, DeVos shared her first moment in her new office.

What was supposed to be a harmless message has now been turned against her. Hundreds — from retired teachers to concerned citizens — responded to the tweet by informing the new education secretary that many school districts lack the resources to provide enough pencils for their class.

The billionaire Republican donor . . .

Continue reading.

What Twitter does:

  • It is an efficient way to voice opinions.
  • It is public: you can see how many others are voicing opinions.
  • Your contact information is included, so if the government goes very authoritarian, they have a handy list of dissidents.
  • Your contact information is included, so dissidents can identify each other and communicate directly with each other to organize.

That last thing is a bit tricky. Trolls are clever, and some will want to play a long con, joining and going along to see what’s happening and who the nexus points are.

Avoiding that would, I think, require a good procedure of personal vetting, with a multistep entry.

Written by LeisureGuy

9 February 2017 at 4:01 pm

This Teen Hacked 150,000 Printers to Show How the Internet of Things Is Shit

Christopher Moyer reports in Motherboard:

On Saturday, February 4, 2017, a self-described “pissed off high school student” in the United Kingdom sat in front of his computer, listening to Bones and Yung Lean, coding a rootkit, a set of software tools that allows an unauthorized user to control a computer system. He got to thinking about recent news reports about printer hacking and shifted gears, instead building a short program in C.

Within hours, roughly 150,000 internet-connected printers across the world began spitting out ASCII art and messages informing their owners that their machines were “part of a flaming botnet.” The hacker signed his work as “Stackoverflowin.”

Throughout the evening and into Sunday, people across the web reported finding the mysterious printouts. Many of the affected printers were connected to restaurant POS systems, leaving confused employees to find ASCII robots pouring out of their receipt printers. . .

Continue reading.
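The underlying weakness is mundane: many network printers accept raw print jobs on TCP port 9100 with no authentication at all, so anything that can reach that port can make the printer print. As a hedged illustration (the address below is a placeholder, and this should only ever be pointed at a printer you own, on your own network), the whole stunt amounts to little more than this:

# Minimal sketch: send a plain-text job to a network printer over raw TCP port 9100.
# Many printers accept such jobs with no authentication, which is why exposing
# them to the internet is a bad idea. The IP below is a placeholder; use this
# only against a printer you own, on your own network.
import socket

PRINTER_ADDR = ("192.168.1.50", 9100)    # hypothetical LAN printer
MESSAGE = "Test page: this printer accepts unauthenticated raw-port print jobs.\n"

with socket.create_connection(PRINTER_ADDR, timeout=5) as conn:
    conn.sendall(MESSAGE.encode("ascii"))
    conn.sendall(b"\x0c")                # form feed so the page actually ejects

A firewall rule blocking inbound connections to the common printing ports (9100, and also 515 and 631) from outside the local network would have kept every one of those 150,000 printers out of reach.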

Sounds as though the Federal government should set security standards that all manufacturers would be required to meet. But somehow I don’t think the Federal government’s working all that well anymore.

Written by LeisureGuy

9 February 2017 at 3:22 pm

Serial Killers Should Fear This Algorithm

Robert Kolker writes in Bloomberg Businessweek:

On Aug. 18, 2010, a police lieutenant in Gary, Ind., received an e-mail, the subject line of which would be right at home in the first few scenes of a David Fincher movie:

“Could there be a serial killer active in the Gary area?”

It isn’t clear what the lieutenant did with that e-mail; it would be understandable if he waved it off as a prank. But the author could not have been more serious. He’d attached source material—spreadsheets created from FBI files showing that over several years the city of Gary had recorded 14 unsolved murders of women between the ages of 20 and 50. The cause of each death was the same: strangulation. Compared with statistics from around the country, he wrote, the number of similar killings in Gary was far greater than the norm. So many people dying the same way in the same city—wouldn’t that suggest that at least a few of them, maybe more, might be connected? And that the killer might still be at large?

The police lieutenant never replied. Twelve days later, the police chief, Gary Carter, received a similar e-mail from the same person. This message added a few details. Several of the women were strangled in their homes. In at least two cases, a fire was set after the murder. In more recent cases, several women were found strangled in or around abandoned buildings. Wasn’t all of this, the writer asked, at least worth a look?

The Gary police never responded to that e-mail, either, or to two follow-up letters sent via registered mail. No one from the department has commented publicly about what was sent to them—nor would anyone comment for this story. “It was the most frustrating experience of my professional life,” says the author of those messages, a 61-year-old retired news reporter from Virginia named Thomas Hargrove.

Hargrove spent his career as a data guy. He analyzed his first set of polling data as a journalism major at the University of Missouri, where he became a student director of the university’s polling organization. He joined an E.W. Scripps newspaper right out of college and expanded his repertoire from political polling data to practically any subject that required statistical analysis. “In the newsroom,” he remembers, “they would say, ‘Give that to Hargrove. That’s a numbers problem.’ ”

In 2004, Hargrove’s editors asked him to look into statistics surrounding prostitution. The only way to study that was to get a copy of the nation’s most comprehensive repository of criminal statistics: the FBI’s Uniform Crime Report, or UCR. When Hargrove called up a copy of the report from the database library at the University of Missouri, attached to it was something he didn’t expect: the Supplementary Homicide Report. “I opened it up, and it was a record I’d never seen before,” he says. “Line by line, every murder that was reported to the FBI.”

This report, covering the year 2002, contained about 16,000 murders, broken down by the victims’ age, race, and sex, as well as the method of killing, the police department that made the report, the circumstances known about the case, and information about the offender, if the offender was known. “I don’t know where these thoughts come from,” Hargrove says, “but the second I saw that thing, I asked myself, ‘Do you suppose it’s possible to teach a computer how to spot serial killers?’ ”

Like a lot of people, Hargrove was aware of criticisms of police being afflicted by tunnel vision when investigating difficult cases. He’d heard the term “linkage blindness,” used to describe the tendency of law-enforcement jurisdictions to fail to connect the dots between similar cases occurring right across the county or state line from one another. Somewhere in this report, Hargrove thought, could be the antidote to linkage blindness. The right person, looking at the information in the right way, might be able to identify any number of at-large serial killers.

Every year he downloaded and crunched the most recent data set. What really shocked him was the number of murder cases that had never been cleared. (In law enforcement, a case is cleared when a suspect is arrested, whatever the eventual outcome.) Hargrove counted 211,487, more than a third of the homicides recorded from 1980 to 2010. Why, he wondered, wasn’t the public up in arms about such a large number of unsolved murders?

To make matters worse, Hargrove saw that despite a generation’s worth of innovation in the science of crime fighting, including DNA analysis, the rate of cleared cases wasn’t increasing but decreasing—plummeting, even. The average homicide clearance rate in the 1960s was close to 90 percent; by 2010 it was solidly in the mid-’60s. It has fallen further since.

These troubling trends were what moved Hargrove to write to the Gary police. He failed to get any traction there. Sure enough, four years later, in October 2014, in Hammond, Ind.—the town next door to Gary—police found the body of 19-year-old Afrikka Hardy in a room at a Motel 6. Using her phone records, they tracked down a suspect, 43-year-old Darren Deon Vann. Once arrested, Vann took police to the abandoned buildings where he’d stowed six more bodies, all of them in and around Gary. Anith Jones had last been seen alive on Oct. 8; Tracy Martin went missing in June; Kristine Williams and Sonya Billingsley disappeared in February; and Teaira Batey and Tanya Gatlin had vanished in January.

Before invoking his right to remain silent, Vann offhandedly mentioned that he’d been killing people for years—since the 1990s. Hargrove went to Gary, reporting for Scripps, to investigate whether any of the cases he’d identified back in 2010 might possibly be attributed to Vann. He remembers getting just one helpful response, from an assistant coroner in Lake County who promised to follow up, but that too went nowhere. Now, as the Vann prosecution slogs its way through the courts, everyone involved in the case is under a gag order, prevented from speculating publicly about whether any of the victims Hargrove noted in 2010 might also have been killed by Vann. “There are at least seven women who died after I tried to convince the Gary police that they had a serial killer,” Hargrove says. “He was a pretty bad one.”

Hargrove has his eye on other possible killers, too. “I think there are a great many uncaught serial killers out there,” he declares. “I think most cities have at least a few.”

We’re in a moment when, after decades of decreases nationally in the overall crime rate, the murder rate has begun creeping upward in many major U.S. cities. For two years running, homicides in major cities jumped on average more than 10 percent. (Those increases aren’t uniform, of course: Chicago leapt from 485 reported killings in 2015 to 762 in 2016, while the number of murders dipped in New York and Baltimore.) President Trump, in the campaign and since, has vowed to usher in a new era of law and order, hammering away on Twitter at Chicago’s “carnage” in particular.

Threats of federal intervention aside, it will be difficult to fix the problem of high murder rates without first addressing clearance rates. So it’s fortuitous, perhaps, that we are living in an age in which the analysis of data is supposed to help us decipher, detect, and predict everything from the results of presidential elections to the performance of baseball players. The data-focused approach to problem-solving was brought to life for a lot of people by Michael Lewis’s Moneyball, which introduced the non-baseball-nerd public to the statistical evaluation of Major Leaguers and made a hero of Billy Beane, an executive with the Oakland A’s. Law enforcement would seem to be a fertile area for data to be helpful: In the 1990s the New York Police Department famously used data to more shrewdly deploy its officers to where the crimes were, and its CompStat system became the standard for other departments around the country.

What Hargrove has managed to do goes a few orders of magnitude beyond that. His innovation was . . .

Continue reading.

Later in the article:

. . . Police in large cities with stubbornly high murder rates point the finger at gang- and drug-related killings, and the reluctance of witnesses to come forward to identify the murderers. “The biggest problem is that everyone knows everyone,” Chester, Pa., Police Commissioner Darren Alston told the Philadelphia Daily News in September. (Chester’s homicide rate outstrips all other U.S. cities’—and is more than double that of nearby Philadelphia.) City residents, in turn, point to a lack of trust in the police. But one other obvious problem is resources. “We fund homicide investigations like we fund education—it comes down to a local tax,” Hargrove says. “When an economy fails enough and we just have to start firing cops, we see everything going to hell.”

MAP [Murder Accountability Project] tracks staffing trends on its website, too. Hargrove notes that Flint, Mich., and Dayton, Ohio, have seen their clearance rates fall more than 30 percentage points since the 1990s, coinciding with huge reductions in police manpower (330 to 185 officers in Flint; 500 to 394 in Dayton). When Hargrove’s group filed a FOIA request to get homicide data about a suspected serial killer in Detroit, the response was that the police lacked the budget to fulfill the request. “What do you do when a city says, ‘We’re too broke to even try to pull the records?’ ” Hargrove says. “I joke that what we’ve done is to create what amounts to a failed government detector.”

There is a case to be made, though, that clearance rates aren’t just a function of a police department’s staffing. Priorities and management also figure heavily. In 2000, Charles Wellford, a criminologist at the University of Maryland, published a seminal paper in which he identified the commonalities for departments that do effective murder clearance. No. 1 on that list was ensuring that cops are able to chase leads in the critical early hours after a murder, even if that means earning overtime pay. Wellford’s current research looks closely at the amount of money spent per officer, the amount spent per case, and the percentage of detectives on the force. Clearance rates, Wellford says, “are very much determined by priorities and resources. I’m beyond thinking that’s an open question. The question now for me is: How can we use the resources departments have to improve what they’re doing in clearing serious crimes?”

The most discouraging thing Hargrove has learned since starting his organization is how many police departments around the country not only ignore the FBI’s data but also don’t bother sharing their data with the FBI at all. Among the offenders: the state of Illinois, which MAP has sued for the information. Hargrove recently reported that homicides were more likely to go unsolved in Illinois in 2015 than in any other state: Only 37.3 percent of the 756 homicides were cleared. That dreadful clearance rate would seem to go a long way toward explaining Chicago’s notoriously climbing homicide rate, just as the president and others start searching for solutions. . .
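The excerpt cuts off before Bloomberg describes Hargrove’s algorithm in detail, but the general shape of a linkage-blindness screen over the FBI’s Supplementary Homicide Report data is easy to sketch: group cases by place, victim sex, and method of killing, then flag clusters with many victims and an unusually low clearance rate. The sketch below is a hypothetical illustration of that idea, not Hargrove’s code, and the column names are invented rather than the SHR’s real field names.

# Hypothetical sketch of a "linkage blindness" screen over homicide records.
# Not Hargrove's actual algorithm; column names are invented, not the FBI
# Supplementary Homicide Report's real fields.
import pandas as pd

def flag_clusters(cases, min_victims=10, max_clearance=0.33):
    """cases columns assumed: county, victim_sex, weapon, solved (bool)."""
    groups = (
        cases.groupby(["county", "victim_sex", "weapon"])
             .agg(victims=("solved", "size"), cleared=("solved", "sum"))
             .reset_index()
    )
    groups["clearance_rate"] = groups["cleared"] / groups["victims"]
    suspicious = groups[(groups["victims"] >= min_victims)
                        & (groups["clearance_rate"] <= max_clearance)]
    return suspicious.sort_values("clearance_rate")

# Usage, with a hypothetical extract of the homicide data:
# cases = pd.read_csv("shr_extract.csv")
# print(flag_clusters(cases[cases["victim_sex"] == "F"]))

The Gary pattern Hargrove spotted in 2010 (14 unsolved strangulations of women aged 20 to 50 in a single city) is exactly the kind of cluster a screen like this is meant to surface for a human investigator to examine.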

Written by LeisureGuy

8 February 2017 at 2:00 pm

The hi-tech war on science fraud—and some dangers

In the Guardian Stephen Buranyi has a fascinating long article on how to fight fraud in science:

One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”

Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.

Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.

The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.

Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.

“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.

The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.

But three decades later, scientists still have only the most crude estimates of how much fraud actually exists. The current accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.

If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.

But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.

Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try and correct this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.


In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.

As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.

Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.

His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.

But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, who I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”

A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made Dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.

In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.

Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.

On 7 September 2011, the university revealed that . . .

Continue reading. There’s a lot more.
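Statcheck’s core check, described earlier in the piece as the mathematical equivalent of a spellchecker, is easy to illustrate: parse a reported result such as “t(28) = 2.20, p = .01” and recompute the two-tailed p-value from the test statistic and the degrees of freedom. The sketch below is my own minimal version of that idea in Python (the real Statcheck is an R package), and the sample sentence is invented.

# Minimal sketch of a Statcheck-style consistency check (not the real Statcheck,
# which is an R package): parse reported t-tests and recompute the two-tailed
# p-value from the test statistic and degrees of freedom.
import re
from scipy import stats

T_TEST = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*=\s*(\.\d+|\d+\.\d+)")

def check_t_tests(text, tolerance=0.01):
    findings = []
    for df, t_value, p_reported in T_TEST.findall(text):
        df, t_value, p_reported = int(df), float(t_value), float(p_reported)
        p_computed = 2 * stats.t.sf(abs(t_value), df)   # two-tailed p from the t distribution
        if abs(p_computed - p_reported) > tolerance:
            findings.append((df, t_value, p_reported, round(p_computed, 4)))
    return findings

sample = "The effect was significant, t(28) = 2.20, p = .01, replicating earlier work."
print(check_t_tests(sample))    # reported .01 vs. recomputed ~.036, so it gets flagged

As the article notes, a check like this only catches reported numbers that don’t add up; on its own it says nothing about whether an inconsistency is a rounding slip, a typo, or something worse.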

Written by LeisureGuy

7 February 2017 at 11:53 am

Posted in Science, Technology

Skydiving as an indoor sport (using a vertical wind tunnel)

The above video is from this Jason Kottke post, which contains links to other routines from the same meet, including Kyra Poh’s first-place routine. (The routine above, by Maja Kuczynska, placed third.)

Written by LeisureGuy

6 February 2017 at 1:06 pm
