Archive for the ‘Technology’ Category
Christian Miller reports in ProPublica:
More than 30 years ago, the Federal Bureau of Investigation launched a revolutionary computer system in a bomb shelter two floors beneath the cafeteria of its national academy. Dubbed the Violent Criminal Apprehension Program, or ViCAP, it was a database designed to help catch the nation’s most violent offenders by linking together unsolved crimes. A serial rapist wielding a favorite knife in one attack might be identified when he used the same knife elsewhere. The system was rooted in the belief that some criminals’ methods were unique enough to serve as a kind of behavioral DNA — allowing identification based on how a person acted, rather than their genetic make-up.
Equally as important was the idea that local law enforcement agencies needed a way to better communicate with each other. Savvy killers had attacked in different jurisdictions to exploit gaping holes in police cooperation. ViCAP’s “implementation could mean the prevention of countless murders and the prompt apprehension of violent criminals,” the late Sen. Arlen Specter wrote in a letter to the Justice Department endorsing the program’s creation.
In the years since ViCAP was first conceived, data-mining has grown vastly more sophisticated, and computing power has become cheaper and more readily available. Corporations can link the food you purchase, the clothes you buy, and the websites you browse. The FBI can parse your emails, cellphone records and airline itineraries. In a world where everything is measured, data is ubiquitous — from the number of pieces of candy that a Marine hands out on patrol in Kandahar, to your heart rate as you walk up the stairs at work.
That’s what’s striking about ViCAP today: the paucity of information it contains. Only about 1,400 police agencies in the U.S., out of roughly 18,000, participate in the system. The database receives reports from far less than 1 percent of the violent crimes committed annually. It’s not even clear how many crimes the database has helped solve. The FBI does not release any figures. A review in the 1990s found it had linked only 33 crimes in 12 years.
Canadian authorities built on the original ViCAP framework to develop a modern and sophisticated system capable of identifying patterns and linking crimes. It has proven particularly successful at analyzing sexual-assault cases. But three decades and an estimated $30 million later, the FBI’s system remains stuck in the past, the John Henry of data mining. ViCAP was supposed to revolutionize American law enforcement. That revolution never came.
Few law enforcement officials dispute the potential of a system like ViCAP to help solve crimes. But the FBI has never delivered on its promise. In an agency with an $8.2 billion yearly budget, ViCAP receives around $800,000 a year to keep the system going. The ViCAP program has a staff of 12. Travel and training have been cut back in recent years. Last year, the program provided analytical assistance to local cops just 220 times. As a result, the program has done little to close the gap that prompted Congress to create it. Police agencies still don’t talk to each other on many occasions. Killers and rapists continue to escape arrest by exploiting that weakness. “The need is vital,” said Ritchie Martinez, the former president of the International Association of Law Enforcement Intelligence Analysts. “But ViCAP is not filling it.”
Local cops say the system is confusing and cumbersome. Entering a single case into the database can take an hour, and hits — where an unsolved crime is connected to a prior incident — are rare. False positives are common. Many also said the FBI does little to teach cops how to use the system. Training has dropped from a high of about 5,500 officers in 2012 to 1,200 last year.
“We don’t really use ViCAP,” said Jeff Jensen, a criminal analyst for the Phoenix Police Department with 15 years of experience. “It really is quite a chore.”
The FBI has contributed to the confusion by misrepresenting the system. On its website, the FBI says cases in its database are “continually compared” for matches as new cases are entered. But in an interview, program officials said that does not happen. “We have plans for that in the future,” said Nathan Graham, a crime analyst for the program. The agency said it would update the information on its website.
The agency’s indifference to the database is particularly noteworthy at a time when emerging research suggests that such a tool could be especially useful in rape investigations.
For years, politicians and women’s advocates have focused on testing the DNA evidence in rape kits, which are administered to sexual assault victims after an attack. Such evidence can be compared against a nationwide database of DNA samples to find possible suspects. Backlogs at police departments across the country have left tens of thousands of kits untested.
But DNA is collected in only about half of rape cases, according to recent studies. A nationwide clearinghouse of the unique behaviors, methods, or marks of rapists could help solve those cases lacking genetic evidence, criminal experts said. Other research has shown that rapists are far more likely than killers to be serial offenders. Different studies have found that between one-fourth and two-thirds of rapists have committed multiple sexual assaults. Only about 1 percent of murderers are considered serial killers.
Studies have questioned the assumptions behind behavioral analysis tools like ViCAP. Violent criminals don’t always commit attacks the same way, and different analysts can have remarkably different interpretations of whether crimes are linked. And a system that looks for criminal suspects on the basis of how a person acts is bound to raise alarms about Orwellian overreach. But many cops say any help is welcome in the difficult task of solving crimes like rape. A recent investigation by ProPublica and The New Orleans Advocate found that police in four states repeatedly missed chances to arrest the former NFL star and convicted serial rapist Darren Sharper after failing to contact each other. “We’re always looking for tools,” said Joanne Archambault, the director of End Violence Against Women International, one of the leading police training organizations for the investigation of sexual assaults. “I just don’t think ViCAP was ever promoted enough as being one of them.”
The U.S. need only look north for an example of how such a system can play an important role in solving crimes. Not long after ViCAP was developed in the United States, Canadian law enforcement officials used it as a model to build their own tool, known as the Violent Criminal Linkage Analysis System, or ViCLAS. Today, the Royal Canadian Mounted Police maintains a database containing more than 500,000 criminal case profiles. The agency credits it with linking together some 7,000 unsolved crimes since 1995 – though not all of those linkages resulted in an arrest. If the FBI collected information as consistently as the Mounties, its database would contain more than 4.4 million cases, based on the greater U.S. population.
Instead, the FBI has about 89,000 cases on file.
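The population-based extrapolation above is easy to check; a quick sketch using round mid-2010s population figures (assumed here for illustration, not taken from the article):

```python
# Back-of-the-envelope check of the article's extrapolation: scale
# Canada's ~500,000 ViCLAS case profiles by the ratio of the U.S.
# population to Canada's (approximate figures assumed below).
viclas_cases = 500_000
us_population = 319_000_000      # approximate U.S. population
canada_population = 36_000_000   # approximate Canadian population

projected_us_cases = viclas_cases * us_population / canada_population
print(f"Projected U.S. caseload: {projected_us_cases / 1e6:.1f} million")
```

The ratio works out to roughly 8.9, which yields a projected caseload just over 4.4 million, consistent with the article's figure, and about fifty times the 89,000 cases the FBI actually has on file.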
Over the years, Canada has poured funding and staff into its program, resulting in a powerful analytical tool, said Sgt. Tony Lawlor, a senior ViCLAS analyst. One critical difference: . . .
Continue reading. It’s a lengthy article.
More and more it seems as though the FBI is simply not very good at its job. It’s unclear whether the cause is incompetence or indifference.
The Obama administration does not look good in this report by Lee Fang in The Intercept:
The Obama administration’s central strategy against strong encryption seems to be waging war on the companies that are providing and popularizing it: most notably Apple and Google.
The intimidation campaign got a boost Thursday when a blog that frequently promotes the interests of the national security establishment raised the prospect of Apple being found liable for providing material support to a terrorist.
Benjamin Wittes, editor-in-chief of the Lawfare blog, suggested that Apple could in fact face that liability if it continued to provide encryption services to a suspected terrorist. The post, he noted, was a response to an idea raised by Sen. Sheldon Whitehouse, D-R.I., in a hearing earlier this month.
“In the facts we considered,” wrote Wittes and his co-author, Harvard law student Zoe Bedell, “a court might — believe it or not — consider Apple as having violated the criminal prohibition against material support for terrorism.”
FBI Director James Comey and others have said that end-to-end encryption makes law enforcement harder because service providers don’t have access to the actual communications, and therefore cannot turn them over when served with a warrant.
Wittes and Bedell argue that Apple’s decision to “move aggressively to implement end-to-end encrypted systems, and indeed to boast about them” after being “publicly and repeatedly warned by law enforcement at the very highest levels that ISIS is recruiting Americans” — in part through the use of encrypted messaging apps — could make the company liable if “an ISIS recruit uses exactly this pattern to kill some Americans.”
The blog compares Apple’s actions to a bank sending money to a charity supporting Hamas — knowing that it was a listed foreign terrorist organization.
“The question ultimately turns on whether Apple’s conduct in providing encryption services could, under any circumstances, be construed as material support,” Wittes and Bedell write. The answer, they say, “may be unnerving to executives at Apple.”
One way to avoid such liability, Wittes and Bedell argue, would be to end encrypted services to suspected terrorists. But, they acknowledge, “Cutting off service may be the last thing investigators want, as it would tip off the suspect that his activity has been noticed.”
In a hearing on July 8 before the Senate Judiciary Committee, Justice Department officials insisted that companies need to be able to provide them with unencrypted, clear access to people’s communications if presented with a warrant.
The problem is that eliminating end-to-end encryption or providing law enforcement with some sort of special key would also create opportunities for hackers.
Within minutes of the Lawfare post going up, privacy advocates and technologists expressed outrage: Chris Soghoian, principal technologist for the American Civil Liberties Union, called it a continuation in Wittes’ “brain-dead jihad against encryption,” while Jake Laperruque, a fellow at the Center for Democracy and Technology, wrote that Wittes’ post “equates selling a phone that’s secure from hackers with giving money to terrorists.”
If Apple and Google were to cave under the pressure of being likened to terrorist-helpers, and stop making end-to-end encryption, that could be the start of a “slippery slope” that ends the mainstream availability of strong encryption, said Amie Stepanovich, U.S. policy manager for Access.
But even so, strong encryption will always exist, whether produced by small companies or foreign outlets. Terrorists can take their business elsewhere, while normal Americans will be left without a user-friendly, easily accessible way of protecting their communications. “These tools are available and the government can’t get to all of them,” says Stepanovich. . .
Security apparently was not a consideration when many modern systems were designed and developed, and that now promises to be a serious problem. Lorenzo Franceschi-Bicchierai reports in Motherboard:
When we think about cyberattacks against infrastructure, thanks to hyperbolic and unrealistic Hollywood flicks, we think of exploding nuclear plants or blacked out cities. But in reality, hackers could cause much more damage with subtler attacks, even without targeting critical infrastructure.
For example, a hacker might change the chemical composition of a popular medication during its preparation stage at a pharmaceutical plant—without anyone noticing—and kill thousands of people, according to Robert Lee, a security researcher and PhD candidate in cyber security at King’s College London.
While that’s an unlikely, worst-case scenario, it’s theoretically possible because the backbone networks supporting thousands of industrial control systems around the world (water and electricity distribution facilities, automated bridges, oil rigs, and many types of factories) all share a common weak link: their network switches.
These switches, which tunnel data around in several industrial processes, are often overlooked when thinking about potential cyberattacks against infrastructure. But they are a critical point of failure.
“If you own the network switch you don’t have to even go after other devices,” Lee told Motherboard in a phone interview. “An adversary that can get on the switch and own the switch, can own everything on that network and do anything they’d like with it.”
As it turns out, popular network switches made by Siemens, GE, Garrettcom and Opengear, have flaws that make them easy to hack, according to new research by Colin Cassidy, Eireann Leverett, and Lee himself. The three plan to show their findings at the security and hacking conferences Black Hat and Def Con in Las Vegas next week.
If malicious hackers can break into a switch, for example by phishing someone who’s on the same network, then the hackers can steal data, manipulate it, or just study the industrial process to learn how to sabotage it at a later stage, Cassidy, a security consultant for IOActive, told Motherboard.
At that point, pretty much anything is possible, depending on which system the switches help control, Cassidy said.
In an electrical substation, for example, a hacker could . . .
And do watch the video. Amazing.
Justin Lynch writes in The Weekly Wonk:
In the Chicago area, there’s a nearly exact replica of a 10-year-old boy’s head. It’s only nearly exact because last year the boy had a cranial defect: doctors needed to perform craniofacial surgery on his skull to protect his brain.
Operating on the brain or skull leaves little room for error. “If something goes wrong I can destroy that person’s character…forever,” said noted neurosurgeon Henry Marsh in the 2009 documentary The English Surgeon.
It helps to make a model. A team of doctors at the Loyola University Medical Center wanted to do just that to assist the doctors performing the operation, but ordering a replica of the boy’s skull would have taken 2 to 3 weeks and cost about $4,000. Instead, as part of a trial study, they went to the Chicago Public Library and printed a replica of the boy’s skull on a 3D printer. The model was sanitized and took just 12 hours to make. It cost $20, and the surgery was successful.
The surgery is an example of how people are using public libraries in new and important ways. Public libraries are becoming a one-stop shop for manufacturing in the digital age. Because libraries are investing in machines like 3D printers, someday soon everyone with access to a public library could become an inventor or create something.
Did a car part break? 3D scanners can digitize the part and create an exact replica of it. Need to make a cheap prototype of your invention? You can work with a library specialist to design it. Want to make your own custom jewelry? Use a 3D printer and sell it on Etsy.
“It is about making knowledge available and initiating the public to make knowledge themselves,” Jeroen de Boer, co-author of the upcoming book Makerspaces in Libraries, told me. “Makerspaces are the places where knowledge exchange happens in new ways.” Libraries are increasingly inviting places for these areas, which are essentially DIY spaces where people can go to access resources and exchange ideas in order to create and invent things.
With new technology, libraries are not necessarily doing a different job—they are doing the same job, just better. Public libraries in the U.S., dating back to the 19th century, have always given people greater access to ideas. Before the Internet, those ideas could be found back in the stacks among the books. They were a collection of past knowledge, available free of charge to anyone with a library card, as Matt Damon’s character in Good Will Hunting explained succinctly to an over-privileged grad student:
“You dropped 150 grand on a [expletive] education you could have gotten for $1.50 in late charges at the public library.”
What’s different about this new trend is how it incorporates a focus on inspiring the future. Libraries that invest in commercial manufacturing technology give patrons access to past and future ideas. An inventor doesn’t have to spend thousands of dollars and wait weeks for a prototype–they can go to a public library and make a prototype for a few cents. Many inventors who work in libraries use a website called Thingiverse.com, a repository of 3D designs for anyone to upload or download. Just like book collections, it is free to access, and the only cost to library users is for materials.
In 2013, around 109 libraries in the United States offered makerspaces—also called maker labs or hackspaces—but that number is on the rise and likely much higher today. Some makerspaces consist only of a single 3D printer, as the Chattanooga Public Library in Tennessee did when it started, and operate on a shoestring (3D printers cost around $2,500, not including materials). Other makerspaces can cost $250,000 or more, like the one at the Chicago Public Library. The maker lab there has digital design software, 3D printers, laser cutters, milling machines, and vinyl cutters.
Michael Nielsen has an interesting article in Quanta:
Imagine it’s the 1950s and you’re in charge of one of the world’s first electronic computers. A company approaches you and says: “We have 10 million words of French text that we’d like to translate into English. We could hire translators, but is there some way your computer could do the translation automatically?”
At this time, computers are still a novelty, and no one has ever done automated translation. But you decide to attempt it. You write a program that examines each sentence and tries to understand the grammatical structure. It looks for verbs, the nouns that go with the verbs, the adjectives modifying nouns, and so on. With the grammatical structure understood, your program converts the sentence structure into English and uses a French-English dictionary to translate individual words.
For several decades, most computer translation systems used ideas along these lines — long lists of rules expressing linguistic structure. But in the late 1980s, a team from IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y., tried a radically different approach. They threw out almost everything we know about language — all the rules about verb tenses and noun placement — and instead created a statistical model.
They did this in a clever way. They got hold of a copy of the transcripts of the Canadian parliament from a collection known as Hansard. By Canadian law, Hansard is available in both English and French. They then used a computer to compare corresponding English and French text and spot relationships.
For instance, the computer might notice that sentences containing the French word bonjour tend to contain the English word hello in about the same position in the sentence. The computer didn’t know anything about either word — it started without a conventional grammar or dictionary. But it didn’t need those. Instead, it could use pure brute force to spot the correspondence between bonjour and hello.
By making such comparisons, the program built up a statistical model of how French and English sentences correspond. That model matched words and phrases in French to words and phrases in English. More precisely, the computer used Hansard to estimate the probability that an English word or phrase will be in a sentence, given that a particular French word or phrase is in the corresponding translation. It also used Hansard to estimate probabilities for the way words and phrases are shuffled around within translated sentences.
Using this statistical model, the computer could take a new French sentence — one it had never seen before — and figure out the most likely corresponding English sentence. And that would be the program’s translation.
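The counting idea behind the IBM approach can be sketched in a few lines. This toy example (the three-sentence parallel corpus is invented for illustration; real systems such as IBM Model 1 use iterative estimation over millions of sentence pairs) derives word-translation probabilities purely from how often words appear in aligned sentences, with no grammar or dictionary involved:

```python
from collections import defaultdict

# A tiny invented parallel corpus: (French sentence, English sentence) pairs.
corpus = [
    ("bonjour monsieur", "hello sir"),
    ("bonjour madame", "hello madam"),
    ("merci monsieur", "thank you sir"),
]

# Count how often each (French word, English word) pair co-occurs
# in aligned sentence pairs -- the brute-force correspondence
# spotting described above.
cooccur = defaultdict(int)
fr_counts = defaultdict(int)
for fr_sent, en_sent in corpus:
    for f in fr_sent.split():
        fr_counts[f] += 1
        for e in en_sent.split():
            cooccur[(f, e)] += 1

def translation_prob(f, e):
    """Estimate P(English word e appears | French word f appears)."""
    return cooccur[(f, e)] / fr_counts[f]

# "bonjour" co-occurs with "hello" in every aligned pair it appears in,
# so the model learns the correspondence without knowing either word.
print(translation_prob("bonjour", "hello"))
```

Scaled up to the Hansard corpus, counts like these become the probability estimates the article describes, and decoding a new sentence amounts to searching for the English output those probabilities rate most likely.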
When I first heard about this approach, it sounded ludicrous. This statistical model throws away nearly everything we know about language. There’s no concept of subjects, predicates or objects, none of what we usually think of as the structure of language. And the models don’t try to figure out anything about the meaning (whatever that is) of the sentence either.
Despite all this, the IBM team found this approach worked much better than systems based on sophisticated linguistic concepts. Indeed, their system was so successful that the best modern systems for language translation — systems like Google Translate — are based on similar ideas.
Statistical models are helpful for more than just computer translation. There are many problems involving language for which statistical models work better than those based on traditional linguistic ideas. For example, the best modern computer speech-recognition systems are based on statistical models of human language. And online search engines use statistical models to understand search queries and find the best responses.
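The statistical language models used in speech recognition and search rest on the same counting principle. A minimal bigram model (the training text here is invented) predicts the most likely next word purely from observed frequencies:

```python
from collections import defaultdict

# Toy training text (invented); real systems train on huge corpora.
text = "the cat sat on the mat the cat ran on the grass".split()

# Count bigram frequencies: how often word b follows word a.
bigrams = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    bigrams[a][b] += 1

def most_likely_next(word):
    """Return the word most frequently observed after `word`."""
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("the"))  # "cat" follows "the" most often above
```

A speech recognizer uses exactly this kind of estimate to break ties between acoustically similar candidates: whichever word sequence the language model rates more probable wins.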
Not everyone finds this trend encouraging. The linguist Noam Chomsky has been sharply critical of the statistical approach: “There’s a lot of work which tries to do sophisticated statistical analysis … without any concern for the actual structure of language. As far as I’m aware, that only achieves success in a very odd sense of success. … It interprets success as approximating unanalyzed data. … Well, that’s a notion of success which is, I think, novel; I don’t know of anything like it in the history of science.”
Chomsky compares the approach to a statistical model of insect behavior. Given enough video of swarming bees, for example, researchers might devise a statistical model that allows them to predict what the bees might do next. But in Chomsky’s opinion it doesn’t impart any true understanding of why the bees dance in the way that they do.
Related stories are playing out across science, not just in linguistics. In mathematics, for example, it is becoming more and more common for problems to be settled using computer-generated proofs. An early example occurred in 1976, when Kenneth Appel and Wolfgang Haken proved the four-color theorem, the conjecture that every map can be colored using four colors in such a way that no two adjacent regions have the same color. Their computer proof was greeted with controversy. It was too long for a human being to check, much less understand in detail. Some mathematicians objected that the theorem couldn’t be considered truly proved until there was a proof that human beings could understand.
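The flavor of such machine checking can be shown at toy scale. The sketch below (the five-region map is invented; it is not part of Appel and Haken's actual proof, which reduced the problem to checking hundreds of special configurations) brute-forces a proper 4-coloring of a small map's adjacency graph, the kind of exhaustive case analysis that made the original proof too long for a human to verify:

```python
from itertools import product

# Adjacency graph of a small invented "map": regions and which
# pairs of regions share a border.
regions = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"),
           ("C", "D"), ("C", "E"), ("D", "E")]

def four_coloring(regions, borders):
    """Exhaustively try every assignment of 4 colors to the regions,
    returning the first one in which no bordering pair shares a color."""
    for assignment in product(range(4), repeat=len(regions)):
        colors = dict(zip(regions, assignment))
        if all(colors[a] != colors[b] for a, b in borders):
            return colors
    return None

coloring = four_coloring(regions, borders)
print(coloring is not None)  # True: a valid 4-coloring exists
```

For five regions the search space is only 4^5 = 1,024 assignments; the point is that a computer can grind through case analyses of this kind at a scale no human reader could ever audit line by line.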
Today, the proofs of many important theorems have no known human-accessible form. Sometimes the computer is merely doing grunt work — calculations, for example. But as time goes on, computers are making more conceptually significant contributions to proofs. One well-known mathematician, Doron Zeilberger of Rutgers University in New Jersey, has gone so far as to include his computer (which he has named Shalosh B. Ekhad) as a co-author of his research work.
Not all mathematicians are happy about this. In an echo of Chomsky’s doubts, the Fields Medal-winning mathematician Pierre Deligne said: “I don’t believe in a proof done by a computer. In a way, I am very egocentric. I believe in a proof if I understand it, if it’s clear.”
On the surface, statistical translation and computer-assisted proofs seem different. But the two have something important in common. In mathematics, a proof isn’t just a justification for a result. It’s actually a kind of explanation of why a result is true. So computer-assisted proofs are, arguably, computer-generated explanations of mathematical theorems. Similarly, in computer translation the statistical models provide circumstantial explanations of translations. In the simplest case, they tell us that bonjour should be translated as hello because the computer has observed that it has nearly always been translated that way in the past.
Thus, we can view both statistical translation and computer-assisted proofs as instances of a much more general phenomenon: the rise of computer-assisted explanation. Such explanations are becoming increasingly important, not just in linguistics and mathematics, but in nearly all areas of human knowledge.
But as smart skeptics like Chomsky and Deligne (and critics in other fields) have pointed out, these explanations can be unsatisfying. . .
Everything is permanently recorded, and wide-ranging discussions are a rich source of damning comments taken out of context. More here.