Archive for the ‘Software’ Category
Software is an algorithm and thus, according to the Supreme Court’s own past decisions, should never have been patentable. Tim Lee reviews the current case coming up for a decision. His conclusion:
Now the Supreme Court will have an opportunity to weigh in on the case. And while the high court could issue a narrow ruling based on the details of the patents in this case, it could also take the opportunity to fix the software patent mess more broadly. All it would need to do is to reiterate its earlier position that patents claiming mathematical processes — a.k.a. computer software — aren’t eligible for patent protection unless they’re tied to a specific machine or physical process.
The high court will be reluctant to do this because it would be disruptive. Reiterating that mathematical algorithms can’t be patented would call into question thousands of patents held by major software companies. And these companies could complain, with some justification, that the Supreme Court’s failure to rule on the issue for more than 30 years was a tacit acceptance of rulings by the Federal Circuit.
Still, the Federal Circuit cannot overrule Supreme Court precedents. And the Federal Circuit’s experiment with software patents has been a disaster. As the patent scholar James Bessen has argued, the patent troll crisis is really a software patent crisis. Software patents are far more likely to be involved in litigation than other types of patents. The result: According to Bessen’s calculations, troll-related litigation cost the U.S. economy $29 billion in 2011 alone. Reiterating that “pure” software can’t be patented wouldn’t just be good law — it would also save the nation billions of dollars in litigation costs.
People are aghast at the development disaster of Healthcare.gov. Even though that development was managed by the Centers for Medicare and Medicaid Services (CMS) within the government, the bulk of the development work has been done by CGI Federal, a private company.
If only, people say, the government had let a private company—so much more efficient than government employees—do all the development work. Like, say, Oracle, which was given the task of implementing Oregon’s healthcare-insurance exchange.
Or maybe not: zero people have signed up to date. Bupkis. Nada.
Brian Fung writes in the Washington Post:
Ten years ago, the word “smartphone” didn’t exist. By necessity, neither did the word “dumbphone.”
In a decade, we might talk about all of our appliances in similar ways. From ovens to garage doors to insulin pumps to vehicles, many of our devices are going to be connected to the Internet in the same sense that our phones are now. Certain such products are already on the market; one company, SmartThings, sells devices that help consumers control their lights and locks while they’re not at home, for example. Eventually, these items will be able to respond to signals from one another independent of human input. Your bathroom scale might tell your refrigerator that you’re overweight, and your fridge might start recommending healthier recipes.
That could be great, but it also vastly expands the universe of things that could go wrong, particularly when it comes to privacy. This might seem obvious, until you consider that many of the businesses that make these devices have never really needed to worry about securing their products before. Take dishwashers. At heart, they’re very simple machines. But a hacked dishwasher might start running on overdrive, going through multiple cycles, wasting gallons of water and costing you extra and possibly flooding your house. Although the folks who make dishwashers may be fantastic engineers, or even great computer programmers, it doesn’t necessarily imply they’re equipped to protect Internet users from the outset.
“It’s not just that the consumers don’t understand the technology,” said Jeff Hagins, co-founder of SmartThings, at a Federal Trade Commission workshop Tuesday. “It’s also that the people building it don’t understand it.” Hagins added, hypothetically: “Just because I know how to write PHP doesn’t mean I understand these vulnerabilities at all.”
The same holds true for the auto industry, where many companies have begun to experiment with new technologies that let cars communicate with one another. Tadayoshi Kohno is a researcher at the University of Washington who’s spent a lot of time deliberately hacking into cars to test their vulnerabilities.
“Very often we see sectors of the broader industry that are not computer science experts starting to integrate computers into their systems and then start to integrate networks into those systems,” said Kohno. “Because they don’t have experience being attacked by real attackers, like Microsoft and so on, their level of security awareness … appears to be dated.”
Hacking is just an extreme case. Short of that, . . .
Very interesting after-action report, looking at contributing factors to HealthCare.gov’s botched launch. Amy Goldstein and Juliet Eilperin report in the Washington Post:
In May 2010, two months after the Affordable Care Act squeaked through Congress, President Obama’s top economic aides were getting worried. Larry Summers, director of the White House’s National Economic Council, and Peter Orszag, head of the Office of Management and Budget, had just received a pointed four-page memo from a trusted outside health adviser. It warned that no one in the administration was “up to the task” of overseeing the construction of an insurance exchange and other intricacies of translating the 2,000-page statute into reality.
Summers, Orszag and their staffs agreed. For weeks that spring, a tug of war played out inside the White House, according to five people familiar with the episode. On one side, members of the economic team and Obama health-care adviser Zeke Emanuel lobbied for the president to appoint an outside health reform “czar” with expertise in business, insurance and technology. On the other, the president’s top health aides — who had shepherded the legislation through its tortuous path on Capitol Hill and knew its every detail — argued that they could handle the job.
In the end, the economic team never had a chance: The president had already made up his mind, according to a White House official who spoke on the condition of anonymity in order to be candid. Obama wanted his health policy team — led by Nancy-Ann DeParle, director of the White House Office of Health Reform — to be in charge of the law’s arduous implementation. Since the day the bill became law, the official said, the president believed that “if you were to design a person in the lab to implement health care, it would be Nancy-Ann.”
Three and a half years later, such insularity — in that decision and others that would follow — has emerged as a central factor in the disastrous rollout of the new federal health insurance marketplace, casting doubt on the administration’s capacity to carry out such a complex undertaking.
“They were running the biggest start-up in the world, and they didn’t have anyone who had run a start-up, or even run a business,” said David Cutler, a Harvard professor and health adviser to Obama’s 2008 campaign, who was not the individual who provided the memo to The Washington Post but confirmed he was the author. “It’s very hard to think of a situation where the people best at getting legislation passed are best at implementing it. They are a different set of skills.”
The White House’s leadership of the immense project — building new health insurance marketplaces for an estimated 24 million Americans without coverage — is one of several key reasons that the president’s signature domestic policy achievement has become a self-inflicted injury for the administration.
Based on interviews with more than two dozen current and former administration officials and outsiders who worked alongside them, the project was hampered by the White House’s political sensitivity to Republican hatred of the law — sensitivity so intense that the president’s aides ordered that some work be slowed down or remain secret for fear of feeding the opposition. Inside the Department of Health and Human Services’ Centers for Medicare and Medicaid, the main agency responsible for the exchanges, there was no single administrator whose full-time job was to manage the project. Republicans also made clear they would block funding, while some outside IT companies that were hired to build the Web site, HealthCare.gov, performed poorly. . .
This topic interests me because I’ve worked on quite a few software projects, some of which did have launch problems, though none so severe as HealthCare.gov’s (which is probably more complex than the systems on which I worked).
It’s actually operational—not sign-up, I think, but health-plan comparisons. Read about it here.
Twenty-five years ago — Nov 3, 1988 — the Internet suddenly discovered why security is important. Tim Lee has an excellent article looking back at that first big crash and what happened afterwards. A good read.
On November 3, 1988, 25 years ago this Sunday, people woke up to find the Internet had changed forever. The night before, someone had released a malevolent computer program on the fledgling computer network. By morning, thousands of computers had become clogged with numerous copies of a computer “worm,” a program that spread from computer to computer much like a biological infection.
It took days of effort by hundreds of systems administrators to clean up the mess, and the Internet community spent weeks analyzing what had happened and how to make sure it didn’t happen again. A graduate student named Robert Morris was unmasked as the culprit behind the worm. A brilliant loner, he seemed to be motivated more by intellectual curiosity than malice. That didn’t save him from becoming one of the first people prosecuted and convicted under an anti-hacking statute that Congress had passed a few years earlier.
But the most significant effect of the worm was how it permanently changed the culture of the Internet. Before Morris unleashed his worm, the Internet was like a small town where people thought little of leaving their doors unlocked. Internet security was seen as a mostly theoretical problem, and software vendors treated security flaws as a low priority.
The Morris Worm destroyed that complacency. It forced software vendors to take security flaws in their products seriously. It invigorated the field of computer security, creating a demand for such experts in both academia and industry. Today, the Internet is infested with malware that works a lot like the software Morris set out to build a quarter-century ago. And the community of Internet security professionals who fight these infections can trace the roots of their profession back to the events of November 1988.
Morris has gone on to a brilliant career as an entrepreneur, computer scientist, and investor. And the man who prosecuted him, Mark Rasch, now says that he would support pardoning him.
Wednesday: A late night phone call
Andrew Sudduth was best known as a world-class rower. In 1984, he was part of an American team that won a silver medal in that summer’s Olympic games. But he was also a talented computer hacker. In the fall of 1988, he worked on the technical staff of Harvard University’s Aiken Computational Laboratory.
Sudduth had gotten to know Robert Morris while Morris was an undergraduate at Harvard. Morris had graduated from Harvard and began graduate studies at Cornell University in fall 1988. Around 11 p.m. on Wednesday, Nov. 2, Sudduth was talking with Paul Graham, another Aiken Lab staffer and a friend of Morris, when Morris called. (The account that follows is drawn from Sudduth’s testimony to a Cornell commission. Sudduth died in 2006, and Graham declined to be interviewed for this story.)
Graham answered the phone. After the call, . . .
Interesting op-ed by Clay Johnson and Harper Reed in the NY Times, taking another look at the reasons the government has a poor track record of developing software: 94 percent of large federal information technology projects over the past 10 years were unsuccessful. That’s not a very good record. Perhaps Congress should provide some guidance. (Just kidding.)
A really fine article by Lydia DePillis lays out the fundamental, systemic problems that led to the Healthcare.gov debacle—and they’re substantially deeper and more serious than deciding how the project would be coordinated.
Really, read it.
The problem with keeping things secret from everyone is that you are deprived of useful input. President Obama, despite his talk of being “transparent” and “open,” has in practice greatly increased the secrecy of our government, as we daily learn. And now, I think, that approach has backfired badly: the incredibly poor implementation of Healthcare.gov would, in times past, have been a breaking story months ago. Whistleblowers would have been talking to the press about their frustrations, the lack of closure, the impossibility of implementing the system in the time remaining with the specifications still incomplete, and so on. Those stories would have hit the press, and Obama would have realized from reading them that the reports he was undoubtedly getting up the chain of command were misleadingly optimistic—and not through any desire to deceive.

At every level from coder to Kathleen Sebelius, including all the project team leaders and their managers and their managers, there are filters: managers don’t report to their bosses the problems that they believe they can fix. No one wants to report that he is unable to solve a problem, and moreover bosses push back: “You solve that problem, and fast—think outside the box, and do more with less. Work smarter, not harder,” and so on. So the simmering problem never makes its way to the top, because at every level people are determined to solve the problem and not bother their boss.
But with a whistleblower, the early problems are leaked, everyone up the entire chain reads about the problems in the Washington Post, and very quickly the problems are addressed.
That didn’t happen with Healthcare.gov. Secrecy was maintained, no one knew what was happening, and now it’s a mess.
From a report by Amy Goldstein in the Washington Post:
“Unfortunately, the experience on HealthCare.gov has been frustrating for many Americans,” HHS officials said in a blog post Sunday afternoon, acknowledging what has been obvious to millions of insurance seekers who live in the three dozen states relying on the federal exchange. For the first time, the administration appealed to people to report their interactions, good or bad, with the exchange, a core element of the 2010 health-care law.
Emphasis added. I can’t believe they didn’t build feedback into the system.
Also from that article:
The remarks Sunday, and Obama’s expected comments Monday, represent a slight strategic shift for an administration that has repeatedly refused to say publicly exactly what is wrong with the site or what is being done to fix it. The new tack offers a bit more information while allowing officials to strike a sympathetic tone toward consumers exasperated by their experiences.
Even now, administration officials are declining to disclose many details about the debugging effort. They will not say how many experts — whom they describe as “the best and the brightest” — are on the team, when the team began its work or how soon the site’s flaws might be corrected. Still, in talking about the repairs, administration officials for the first time conceded that the site’s problems extend beyond well-publicized front-end obstacles, such as with setting up a personal account.
Since the exchange opened, officials at the White House and HHS had until now insisted that the site’s problems were caused primarily by its popularity — that more people were trying to get on than could be accommodated at once. Even Sunday, the HHS spokesman said the “main driver of the problems is volume.”
Yet insurance companies, consumers and health policy experts have noticed problems that occur further along in the process of using the exchange. The Web site sometimes gives inaccurate information about the federal tax credits that will help most people pay for a health plan, they say. And it sometimes erroneously tells low-income people that they are not eligible for Medicaid.
Note that the Obama Administration is still trying for secrecy — a modified limited hangout is all they can bring themselves to do. I think it would have been smarter and more efficient to go the open-source route, so that more eyes could look at the problems and more people could contribute to the success.
Secrecy can screw things up badly, as we see. When an organization is determined to keep everything secret from “outsiders,” often those in charge of the organization are similarly kept in the dark, just not so much. But generally some disaster will reveal what has been going on.
UPDATE: Ezra Klein has a brief report in the Washington Post, which repeats the obvious question: If they’re putting the “best and brightest” on the project to fix it, who was on it before?
One thing we know:
The key coordinator — which not only oversaw CGI Federal but all the other contractors building the site — was the Centers for Medicare and Medicaid Services, and in particular, their IT department. But CMS didn’t have the technological expertise to carry out this role — and they still don’t.
Someone decided that CMS would play the role of coordinator (instead of hiring a company with relevant experience in coordinating massive software projects—e.g., IBM). Who was that person? Who made that decision? That would be interesting to know, but the secrecy of the Administration protects the person, who will doubtless go on to make other ill-founded decisions.
Richard Pollock has a good rundown in the Washington Examiner:
Federal officials did not permit testing of the Obamacare healthcare.gov website or issue final system requirements until four to six days before its Oct. 1 launch, according to an individual with direct knowledge of the project.
The individual, who spoke on condition of anonymity, described the troubled Obamacare website project as suffering from top-level management disarray, changing systems requirements and recurring delays.
The root cause of the problems was a pivotal decision by Centers for Medicare and Medicaid Services officials to act as systems integrator, the central coordinator for the entire program. Usually this role is reserved for the prime information technology contractor.
As a result, full testing of the site was delayed until four to six days before the fateful Oct. 1 launch of the health care exchanges, the individual said.
Federal officials did not freeze requirements “in time to permit full testing at all levels of the site — integration testing, user testing, performance testing and tuning,” the individual said.
“Normally a system this size would need 4-6 months of testing and performance tuning, not 4-6 days,” the individual said.
The source said there were “ever-changing, conflicting and exceedingly late project directions. The actual system requirements for Oct. 1 were changing up until the week before.”
The individual described the project as suffering from a “lack of an end-to-end business and technology vision for the project,” adding that “the hardest part of any technology project is not the technology — it is the business process decisions, what is the system supposed to do and how will it do it.”
In addition, “The challenge with this project was that the decisions were made very, very late in the project, and no one organization … seemed to know how this complex ecosystem of applications, interfaces, user processes and hardware should all work together.”
Another person, a former employee of CGI Federal — the private-sector contractor hired to build healthcare.gov — said the government’s insistence on being the systems integrator resulted in disastrous consequences for the website.
The former employee said that “requirements came late, CMS dictated the design, especially the sign-up-before-viewing-plans, and there was absolutely not enough time for testing.” . . .
From this article in the Washington Post:
Aetna CEO Mark Bertolini appeared on CNBC’s “Squawk Box” on Monday to deliver a brutal review of the Affordable Care Act’s launch.
“When you implement a project of this size, the first thing is unit testing, then application testing, and then integrated testing, and then scaleability testing and user testing,” Bertolini said. “That plan is usually a lot longer than some of the application development itself. That’s happening on the fly.”
I hope this guy is not involved in product development. The idea that usability testing is done only after development is complete is … words fail me. It’s completely wrongheaded and leads to disasters at various levels, up to and including having to redesign the system internals from scratch. I think he must never have worked in software development, and I hope he has the good sense to leave it to other, more knowledgeable people.
Usability testing as the final step of development. Wow.
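For readers who don’t live in this jargon, here’s a minimal sketch of what two of the phases Bertolini names look like in code. It is illustrative only: the language is Python, the functions and the subsidy rule are made up, and nothing here reflects how HealthCare.gov actually works. The point is simply that unit testing (one function checked in isolation) and integration testing (components exercised together) are distinct activities, each needing its own calendar time.

import unittest

def subsidy_fraction(income: float, poverty_line: float) -> float:
    # Hypothetical rule: full subsidy at or below the poverty line,
    # phasing out linearly and ending at four times the poverty line.
    if income <= poverty_line:
        return 1.0
    if income >= 4 * poverty_line:
        return 0.0
    return 1.0 - (income - poverty_line) / (3 * poverty_line)

def monthly_premium(base_premium: float, income: float, poverty_line: float) -> float:
    # A second component that depends on the first -- integration territory.
    return base_premium * (1.0 - subsidy_fraction(income, poverty_line))

class UnitTests(unittest.TestCase):
    # Unit testing: one function, checked in isolation.
    def test_full_subsidy_at_poverty_line(self):
        self.assertEqual(subsidy_fraction(12_000, 12_000), 1.0)

    def test_no_subsidy_at_four_times_poverty_line(self):
        self.assertEqual(subsidy_fraction(48_000, 12_000), 0.0)

class IntegrationTests(unittest.TestCase):
    # Integration testing: components combined, as a user-visible quote would be.
    def test_premium_reflects_subsidy(self):
        # Income at twice the poverty line -> subsidy of 2/3 -> pay 1/3 of base premium.
        self.assertAlmostEqual(monthly_premium(400.0, 24_000, 12_000), 400.0 / 3)

if __name__ == "__main__":
    unittest.main()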
The title (from the Wonkblog article described below) is a little odd: the NSA introduced security flaws into our communications and encryption systems. The NSA is all about making systems less secure. It would be nice if the agency were interested in making systems secure, but that apparently runs contrary to its view of its mission.
Andrea Peterson reports in Wonkblog:
In a frank discussion about the government’s approach to vulnerabilities in cyber-infrastructure during a Washington Post Live summit Thursday, former NSA chief Michael Hayden said the agency is not always “ethically or legally compelled” to help fix flaws it knows about. If the agency thinks that no one else will be able to exploit a vulnerability, it leaves the problem unfixed to aid in its own spying efforts. That approach might be convenient for the NSA, but it needlessly endangers the security of Americans’ computers.
The statement came after an audience member asked if backdoors reported in the NSA leaks introduced vulnerabilities that could be exploited by hackers. Craig Mundie, a Senior Adviser to the CEO at Microsoft, took a first crack at the question. He asserted that Microsoft does not engineer in any backdoors nor has there ever been any effort to “facilitate” those kind of things. However, he also noted he could not speak to government capabilities and added “any [backdoor] mechanism that anybody would put into something obviously creates another class of vulnerabilities.”
“Nobody but us”
Hayden argued the concept of vulnerabilities was not unique to the Internet and had been an issue the NSA has dealt with since its founding. “There’s a reason that America’s offensive and defensive squads are up at Fort Meade,” Hayden said, explaining “because both offense and defense in this world hinge on a question of vulnerability.” Hayden then laid out the concept of NOBUS, which stands for “nobody but us,” which he termed “very useful” for making macro-judgments about how to react to vulnerabilities, regardless of whether those flaws are “preexistent, not designed, mistake, intended, implanted, [or] whatever”:
You look at a vulnerability through a different lens if, even with the vulnerability, it requires substantial computational power or substantial other attributes, and you have to make the judgment: who else can do this? If there’s a vulnerability here that weakens encryption but you still need four acres of Cray computers in the basement in order to work it, you kind of think “NOBUS,” and that’s a vulnerability we are not ethically or legally compelled to try to patch — it’s one that ethically and legally we could try to exploit in order to keep Americans safe from others.
You can watch the full exchange in the video embedded below. [see article at link for the video - LG]
To a certain extent, this NOBUS idea reflects the weighing of the dual defensive and offensive missions of the NSA. Sure, patching vulnerabilities might effectively make infrastructure safer on a broad scale. But we’re talking about the same agency that reportedly has an elite offensive hacking squad of some 600 people, Tailored Access Operations, or TAO, working out of its headquarters. And NOBUS also raises a lot of questions about how the intelligence agency determines whether something is likely to be exploited by adversaries.
Take the NSA’s connection to the zero-day market. Earlier this year a Freedom of Information Act (FOIA) request revealed that the agency had a significant contract with Vupen, a French company that deals in zero-day vulnerabilities — security flaws not yet discovered or patched by vendors. Sometimes these zero-days are used to exploit systems by the hackers who discover them, sometimes vendors are told about them as part of bug bounty programs, and sometimes they end up in these digital gray markets.
The United States is a major player in these gray markets, although other nations are reportedly also in on the game. A Reuters special report from May claimed the United States was the biggest . . .
Very cute post by Kevin Drum: Our Score So Far: Kids 1, Adults 0
What’s interesting is how the incentive to crack the security of the devices was built into the program: with the security in place, about the only thing the students could do at home with the iPad was try to breach that security. It was, in effect, the only game on the machine.
That is the very epitome of “perverse incentive.” (Other examples: paying quality-control inspectors a piece rate based on how many pieces pass inspection per day; or having hospitals run for profit (which leads to cost cutting, staff overload, increased rates, and so on); or having a bond-rating agency paid by the banks issuing the bonds…. oh, wait.)
James Fallows explains why (and also offers a fix to the Gmail interface).
Down with software patents! Copyright, yes. Patents, no. Trade secrets, yes. Patents, no. James Bessen writes at Wonkblog:
In 1903, a patent attorney named George Selden sued Henry Ford and four other car manufacturers, demanding a royalty on every car sold. Although he had done nothing to advance automotive technology, Selden held a patent claiming to cover the automobile.
Patent trolls are nothing new. In the nineteenth century, they were called “patent sharks” and they went after farmers in large numbers, armed with patents on barbed wire and sliding gates. But lately, patent trolls have become a hot-button issue in a way they haven’t been for decades.
That’s because trolls are filing an unprecedented number of expensive lawsuits. Over 5,000 firms were named as defendants in patent troll lawsuits in 2011, costing them over $29 billion out-of-pocket. Today’s patent trolls are wreaking damage on a scale not seen in the past. And there’s a specific reason for this: The last two decades saw a dramatic increase in the number of patents on software, and these patents are particularly prone to abuse, both by trolls and by other types of patent holders.
Policymakers are increasingly focusing on the problem of frivolous patent litigation. But so far, policymakers haven’t given enough attention to the fact that the patent crisis is mostly about patents on software. Understanding that fact is essential to fixing what’s wrong with the patent system.
Why software patents lead to litigation
Last week, Tim Lee wrote about a recent study from the Government Accountability Office, which found that a disproportionate share of patent litigation concerns software patents. The non-partisan government agency found that the number of defendants in patent lawsuits more than doubled from 2007 through 2011. Notably, this increase is specifically related to software patents — software patents account for 89% of the increase, according to the GAO’s calculations.
So why are there so many lawsuits over software patents? The report states that “many recent patent infringement lawsuits are related to the prevalence of low quality patents; that is, patents with unclear property rights, overly broad claims, or both. Although there is some inherent uncertainty associated with all patent claims, several of the stakeholders with this opinion noted that claims in software-related patents are often overly broad, unclear or both.”
It is not hard to find examples of software patents in lawsuits that are unclear and overly broad. A company called Lodsys has been threatening to sue hundreds of smartphone app developers for patent infringement. Its patent covers “Methods and systems for gathering information from units of a commodity across a network.” App developers have little idea what that means, and the specific claims within the patent are written in similarly abstract language. Needless to say, the patent’s inventor did not actually develop a practical technology based on this patent, and the original patent filing was made in 1992, long before there even were smartphone apps.
In other cases, the patents relate to actual inventions, but are specified in such broad language that they can be interpreted to cover other technologies that are developed later. For example, one inventor developed a kiosk to be placed in retail stores for producing music tapes from digital downloads. During the early 1980s, he filed a patent claiming to cover an “information manufacturing machine” at “a point of sale location.” E-data, a patent troll, later acquired the patent and aggressively interpreted this language to cover digital e-commerce, collecting millions of dollars from all sorts of companies.
Both overly broad patents and unclear patents foster unnecessary litigation. An overly broad patent allows . . .
Thanks to Jack in Amsterdam for the link. Toivo Tänavsuu writes in Ars Technica:
“I don’t care about Skype!” millionaire Jaan Tallinn tells me, taking off his blue sunglasses and finding a seat at a cozy open-air restaurant in the old town of Tallinn, Estonia. “The technology is 10 years old—that’s an eternity when it comes to the Internet Age. Besides, I have more important things going on now.”
Tallinn has five children, and he calls Skype his sixth. So why does he no longer care about his creation?
On August 29, 2003, Skype went live for the first time. By 2012, according to Telegeography, Skype accounted for a whopping 167 billion minutes of cross-border voice and video calling in a year—which itself was a stunning 44 percent growth over 2011. That increase in minutes was “more than twice that achieved by all international carriers in the world, combined.” That is to say, Skype today poses a serious threat to the largest telcos on the planet. It also made Jaan Tallinn and other early Skypers rich.
But something changed along the way. Skype is no longer the upstart that refused to put signs on its offices, that dodged international lawyers, and that kept a kiddie pool in the boardroom. This is the real story of how a global brand truly began, told in more detail than ever before by those who launched it. . .
As you know, the Obama Administration is pushing hard to enable police to seize your cellphone, demand passwords, and paw through the data. Given the amount of data that can be stored on a smartphone, this is highly invasive and seems like a prima facie violation of the Fourth Amendment. However, the Fourth Amendment is increasingly ignored as police assume (and are given) more and more power and move in the direction of military occupiers rather than community members.
So I think it’s safe to assume that in the near future the police (or TSA agent or Border Patrol or FBI or whatever) can take your cellphone and require you to provide the password.
So I’m thinking a little password app would be useful. The idea is to have your phone data always backed up—in the cloud, on a hard drive at home, or both. The password app you would install on your phone accepts two passwords: Password A allows access to the apps and data on the phone. Until that password is entered, the phone is (in effect) inoperative.
Password B triggers the app to delete everything from the phone’s memory—all files gone, all memory locations reading DEADBEEF or some such.
If the police stop you, take your phone, and demand that you provide the password, you provide Password B. At some later date you can then restore your phone data.
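To make the idea concrete, here’s a minimal sketch in Python of the two-password logic. The class and names are hypothetical, and a real phone app would need OS-level hooks to gate the lock screen and to do a proper secure erase—merely deleting files doesn’t overwrite the underlying flash storage the way writing DEADBEEF everywhere would.

import hashlib
import os
import shutil

def _digest(password: str, salt: bytes) -> bytes:
    # Hash passwords so neither one is stored in the clear.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class DuressLock:
    def __init__(self, unlock_password: str, wipe_password: str, data_dir: str):
        self.salt = os.urandom(16)
        self.unlock_hash = _digest(unlock_password, self.salt)   # Password A
        self.wipe_hash = _digest(wipe_password, self.salt)       # Password B
        self.data_dir = data_dir

    def try_password(self, candidate: str) -> str:
        # Returns 'unlocked', 'wiped', or 'denied'.
        h = _digest(candidate, self.salt)
        if h == self.unlock_hash:
            return "unlocked"            # normal access to apps and data
        if h == self.wipe_hash:
            self._wipe()
            return "wiped"               # local data erased; backups untouched
        return "denied"

    def _wipe(self):
        # Delete everything under the phone's data directory. (A real app would
        # also have to discard or overwrite the underlying storage, not just
        # unlink files.)
        shutil.rmtree(self.data_dir, ignore_errors=True)
        os.makedirs(self.data_dir, exist_ok=True)

if __name__ == "__main__":
    demo_dir = "/tmp/phone_data_demo"
    os.makedirs(demo_dir, exist_ok=True)
    lock = DuressLock("my-real-pin", "panic-pin", demo_dir)
    print(lock.try_password("guess"))      # denied
    print(lock.try_password("panic-pin"))  # wiped

The design point is that the wipe is purely local: because the data is already backed up in the cloud or at home, entering Password B costs you nothing but the time to restore.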
Is this already out there?
Take a look. Seems like a good idea. Unintended consequences and all that, but on first thought the benefits would exceed the costs.