Archive for October 29th, 2019
The Politics of Images in Machine Learning Training Sets
Kate Crawford and Trevor Paglen write at Excavating AI:
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning.
Something is wrong with this picture.
Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?
In short, how did we get here?
There’s an urban legend about the early days of machine vision, the subfield of artificial intelligence (AI) concerned with teaching machines to detect and interpret images. In 1966, Marvin Minsky was a young professor at MIT, making a name for himself in the emerging field of artificial intelligence.[1] Deciding that the ability to interpret images was a core feature of intelligence, Minsky turned to an undergraduate student, Gerald Sussman, and asked him to “spend the summer linking a camera to a computer and getting the computer to describe what it saw.”[2] This became the Summer Vision Project.[3] Needless to say, the project of getting computers to “see” was much harder than anyone expected, and would take a lot longer than a single summer.
The story we’ve been told goes like this: brilliant men worked for decades on the problem of computer vision, proceeding in fits and starts, until the turn to probabilistic modeling and learning techniques in the 1990s accelerated progress. This led to the current moment, in which challenges such as object detection and facial recognition have been largely solved.[4] This arc of inevitability recurs in many AI narratives, where it is assumed that ongoing technical improvements will resolve all problems and limitations.
But what if the opposite is true? What if the challenge of getting computers to “describe what they see” will always be a problem? In this essay, we will explore why the automated interpretation of images is an inherently social and political project, rather than a purely technical one. Understanding the politics within AI systems matters more than ever, as they are quickly moving into the architecture of social institutions: deciding whom to interview for a job, which students are paying attention in class, which suspects to arrest, and much else.
For the last two years, we have been studying the underlying logic of how images are used to train AI systems to “see” the world. We have looked at hundreds of collections of images used in artificial intelligence, from the first experiments with facial recognition in the early 1960s to contemporary training sets containing millions of images. Methodologically, we could call this project an archeology of datasets: we have been digging through the material layers, cataloguing the principles and values by which something was constructed, and analyzing what normative patterns of life were assumed, supported, and reproduced. By excavating the construction of these training sets and their underlying structures, many unquestioned assumptions are revealed. These assumptions inform the way AI systems work—and fail—to this day.
This essay begins with a deceptively simple question: What work do images do in AI systems? What are computers meant to recognize in an image and what is misrecognized or even completely invisible? Next, we look at the method for introducing images into computer systems and look at how taxonomies order the foundational concepts that will become intelligible to a computer system. Then we turn to the question of labeling: how do humans tell computers which words will relate to a given image? And what is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality? Finally, we turn to the purposes that computer vision is meant to serve in our society—the judgments, choices, and consequences of providing computers with these capacities.
Training AI
Building AI systems requires data. Supervised machine-learning systems designed for object or facial recognition are trained on vast amounts of data contained within datasets made up of many discrete images. To build a computer vision system that can, for example, recognize the difference between pictures of apples and oranges, a developer has to collect, label, and train a neural network on thousands of labeled images of apples and oranges. On the software side, the algorithms conduct a statistical survey of the images and develop a model to recognize the difference between the two “classes.” If all goes according to plan, the trained model will be able to distinguish between images of apples and oranges that it has never encountered before.
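The two-class pipeline described above can be sketched in a few lines. This is a toy stand-in, not any particular vision library: each “image” is reduced to two made-up color features, and a plain logistic regression plays the role of the neural network. All names, features, and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, center):
    """Fake 'images' reduced to 2 color features (illustrative stand-in for pixels)."""
    return rng.normal(loc=center, scale=0.1, size=(n, 2))

# Class 0 = "apple" (red-heavy features), class 1 = "orange" (orange-heavy).
X = np.vstack([make_samples(200, [0.8, 0.2]), make_samples(200, [0.2, 0.8])])
y = np.array([0] * 200 + [1] * 200)

# Logistic regression fit by gradient descent -- the "statistical survey"
# that learns a boundary between the two labeled classes.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(class 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# The trained model classifies feature vectors it has never seen before.
unseen = np.array([[0.9, 0.1], [0.1, 0.9]])
pred = (1 / (1 + np.exp(-(unseen @ w + b))) > 0.5).astype(int)
print(pred)  # expected: "apple" -> 0, "orange" -> 1
```

The sketch also shows where labels enter the system: the model never sees “apple” or “orange,” only the class indices a human assigned, which is precisely the labeling step the essay goes on to interrogate.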
Training sets, then, are the foundation on which contemporary machine-learning systems are built.[5] They are central to how AI systems recognize and interpret the world. These datasets shape the epistemic boundaries governing how AI systems operate, and thus are an essential part of understanding socially significant questions about AI.
But when we look at the training images widely used in computer-vision systems, we find a bedrock composed of shaky and skewed assumptions. For reasons that are rarely discussed within the field of computer vision, and despite all that institutions like MIT and companies like Google and Facebook have done, the project of interpreting images is a profoundly complex and relational endeavor. Images are remarkably slippery things, laden with multiple potential meanings, irresolvable questions, and contradictions. Entire subfields of philosophy, art history, and media theory are dedicated to teasing out all the nuances of the unstable relationship between images and meanings.[6] . . .
What John Rawls Missed
Jedidiah Britton-Purdy writes in the New Republic:
John Rawls, who died in 2002, was the most influential American philosopher of the twentieth century. His great work, A Theory of Justice, appeared in 1971 and defined the field of political philosophy for generations. It set out standards for a just society in the form of two principles. First, a just society would protect the strongest set of civil liberties and personal rights compatible with everyone else having the same rights. Second, it would tolerate economic inequalities only if they improved the situation of the poorest and most marginalized (for example, by paying doctors well to encourage people to enter a socially necessary profession).
Taken seriously, Rawls’s principles would require a radical transformation: no hedge funds unless allowing them to operate will benefit the homeless? No Silicon Valley IPOs unless they make life better for farmworkers in the Central Valley? A just society would be very different from anything the United States has ever been. Rawls argued that justice would be compatible with either democratic socialism or a “property-owning democracy” of roughly equal smallholders. One thing was clear: America could not remain as it was, on pain of injustice.
It did not remain as it was, but Rawls’s vision did not triumph either. A Theory of Justice was published in 1971, just before economic inequality began its long ascent from its lowest level in history to today’s Second Gilded Age. Richard Nixon’s “Southern strategy” was reorganizing American politics around resistance to equal rights. Within a decade, Ronald Reagan and Margaret Thatcher would lead the English-speaking world sharply away from anything resembling Rawls’s egalitarianism. Yet his philosophical stature only increased. Even his critics—the libertarian Robert Nozick, the feminist Susan Moller Okin, the communitarian Michael Sandel—ended up confirming the central and inescapable place of his thought. By the end of his life, philosophical thinking about equality, war, political authority, dissent and obedience, and global order took place on a terrain that Rawls towered over—in the shadow of justice.
That shadow provides the title of Katrina Forrester’s extraordinary study of Rawls’s thought and its legacy. Over the last 50 years, she argues, Rawls’s centrality has shaped the very idea of what philosophy is. Working in his aftermath, political philosophers have tended to emphasize ideals of consensus-seeking deliberation, legalistic formulations of political problems, and the dilemmas of individual choice in terrible situations such as war. Certain other questions have been quietly kept out: notably, the central place of conflict and collective action in politics, the tendency of capitalist democracy to fall into plutocracy, and the deep role of racism and colonialism in shaping American society and world order.
Yet as Forrester’s book demonstrates, Rawls’s approach to philosophizing about politics was never the only one, however much his influence has made it seem so. Instead, his theory of justice emerged from his distinctive experience of the exceptional decades after World War II. By tracing those historical circumstances—the political and economic assumptions of the postwar years, as well as the ways philosophy was done then—Forrester shows how Rawls’s thinking, with its strengths and blind spots, came to seem natural. Her aim is to open space for problems that Rawls neglected. What would it mean to pursue a just society while grappling with how deeply unjust and divided ours is, with how it got and stays that way?
Although Rawls’s principles of justice were in many ways radical, they were not novel. He is often thought of as the philosopher of 1960s Great Society reformism, because his principles seemed to elaborate on the goals of the civil rights movement and the war on poverty. What was new was Rawls’s mode of argument. He asked a question fundamental in political philosophy: Can any society be justified to all its members, in light of the inequalities it contains, the burdens it imposes (who empties the bedpans and gets up at midnight to make sure the subways keep running?), and the violence it deals out through police, prisons, and wars? If it cannot be just, then some of us are living in a kind of prison, and others are the wardens. If, however, justification is possible, then we might be able to create a world in which we actually approach one another as free and equal persons.
To imagine such a world, we have to shake off the habits of this one and picture ourselves as able to reset all our institutions and social relations, keeping only those that are just—that is, are justifiable to everyone who has to live within them. Rawls proposed a technique for doing this, a thought experiment that he called the “original position.” It invites us to imagine a situation in which people are to choose the world in which they will live. The key is that they choose behind a “veil of ignorance,” that is, they do not know where they would fall in the distribution of privilege and vulnerability in the societies they imagine. Would you choose to live in the United States today if you didn’t know whether you would be Elon Musk or an undocumented immigrant?
Rawls argued that, faced with this uncertainty, people would choose the world that provided the best position for the least advantaged, worst-off class of people. If you don’t know where you will fall, you will want the worst possibility to be as acceptable as possible. Economics-minded critics argued that this was too risk-averse, that one might gamble for the Silicon Valley jackpot at the risk of picking lettuce instead. But this criticism misconstrued the project: Rawls’s argument was a way of setting out exactly what it meant to justify a social world even to the people picking lettuce. If the question is, “Can this world be justified to me as a free and equal person?” Rawls was not prepared to accept, “Yes, because you might have been Elon Musk!” as an answer.
Conservative critics such as the Straussian Allan Bloom (later famous for his polemic The Closing of the American Mind) accused Rawls of cherry-picking principles to suit the liberal prejudices of the moment. In Rawls’s hands, the original position gave philosophy’s imprimatur to the democratic welfare state as well as to the civil disobedience of the civil rights movement and resistance to the Vietnam War. Friendlier readers interpreted Rawls in light of the conflicts of the early 1970s too. Philosopher Marshall Cohen’s New York Times review of A Theory of Justice welcomed a defense of American liberalism “at a time when these principles are persistently being obscured and betrayed”—presumably in Vietnam and at home by the Nixon administration.
Both of these responses, Forrester argues, miss key features of Rawls’s project. Her story begins in the decade after World War II, when Rawls undertook the work that became A Theory of Justice. A watershed event for Rawls was the 1953 publication of Ludwig Wittgenstein’s Philosophical Investigations, which along with Wittgenstein’s other late work helped to inspire a broader philosophical turn to “ordinary language.” When Rawls visited Oxford in the academic year of 1952 to ’53, this approach was richly elaborated there. It was the new philosophical frontier of the age, full of untried possibility.
Ordinary-language philosophers turned away from highly technical questions about the fundamental nature of language (What makes a sentence true? Does every word in a true statement refer to some definite object in the world?). Instead they asked how language works from the point of view of a clear-minded speaker and listener. Everyone lives inside a language, they reasoned, knows how to use its grammar, and recognizes misuse and confusion. We have to get over the philosophical impulse to seize sentences and sweat them, inquisition-style, until they confirm their truth or confess their falsehood. Philosophy is less about achieving a new kind of knowledge, more about making clear what we already know. Philosophers began to think about language and social practices such as law the way we think about games. There is no such thing as hitting a triple outside of baseball: Try as you might, you cannot do it alone, or in a group of people who have never heard of baseball and want you, please, to take your stick off the soccer pitch. But once you are playing baseball, it is clear whether or not a triple has been hit. Even close cases, such as a photo-finish race to beat a throw from an outfielder, just confirm that we know what a triple is.
The legal philosopher H.L.A. Hart argued that law, too, is a game in this way. There is no “natural law” that tells you whether you “really” must obey a law you dislike, as both dissenters and defenders of existing law had long hoped to show. But once you are involved in legal argument, you tacitly accept that certain things count as law. If you start to insist that Leviticus trumps the San Francisco municipal code, then you have become the person waving a baseball bat on the soccer pitch, hoping to get to third base in a game where third base does not exist. Forrester argues that Rawls wanted to elucidate society itself as a “game” of this sort. Social morality, which is the topic of justice, had its own tacit rules, and drawing those out could help to make clear what people already knew when not distracted by self-interest or prejudice. Like the rule book for a well-established sport, the original position and the principles that Rawls drew from it did not dictate some new morality. They helped to spell out the terms of a social practice.
If Rawls’s approach to justice emerged from the philosophical currents of the 1950s, it also formed in response to political concerns. Born in Baltimore in 1921, Rawls saw the rise of the administrative state through the 1930s and ’40s, as New Deal programs led to the establishment of an alphabet soup of government agencies to implement them: the SEC, the FHA, the PWA, the NLRB, and many more. Although Rawls was not an anti-New Deal reactionary, he shared the worries of some liberals and centrists that the expanded American state would end up interfering with personal autonomy through perennial supervision of the economy. He preferred to think that if the state established the right set of operating principles and guardrails, people would be able to get along on their own, with no more than modest political intrusion or contest.
It was bold, if not implausible, to posit a neutral and abiding set of principles in American society, which was torn by bloody labor conflict in the ’30s and ’40s, and sent its pacifists and revolutionaries to prison or worse. But Rawls wasn’t alone in doing so: The decades in which he developed his theory formed the high-water mark of the “consensus” schools of American political science and history. It became conventional to say that Americans had mostly agreed on the essential principles of liberty, equality, and democracy—and, less abstractly, private property, regulated markets, and courts of law. Conflict was the exception. Radical dissenters were outliers. The idea of consensus was essential to Rawls’s project: If Americans deeply agreed on justice, then the hidden logic of that agreement, drawn out through the original position, could both guide and limit the state.
A Theory of Justice was both radical and conservative. Yes, it proposed a sweeping . . .
Tempeh Batch 6 at 24 hours
It’s interesting that the mold seems to favor beans over grain—and substantially so thus far. I was wishing I had cooked 1.5 cups beans and 1.5 cups kamut instead of 2 cups beans and 1 cup kamut, but now I’m not so sure. We’ll see how it fares. I certainly hope kamut is not an antifungal.
But it’s early hours. We’ll see what it looks like tomorrow.
Boeing Is MAXed Out on Smoking Guns
Maureen Tkacik writes in the New Republic:
The name Mark Forkner is by now familiar to even relatively casual followers of the Boeing 737 MAX saga. Forkner is the former chief technical pilot who conducted a series of Grey Goose–addled Skype chats with a colleague about the new 737’s now infamous self-hijacking software MCAS. The supposed piloting-correction device was, Forkner wrote, “running rampant” in the simulator; for good measure, he also noted that he had “lied to the regulators (unknowingly)” about it. These revelations sent Boeing’s stock plunging over news the feds had finally unearthed a “smoking gun” proving the company knew in advance that its badly flawed piece of software could go “crazy” and cause a crash.
Naturally, Boeing pushed back against the headlines, arguing that the chats did not constitute a “smoking gun” because they concerned a pilot operating in a simulator and not in a real-life cockpit. For once, many of the best chroniclers of the MAX catastrophe agreed with the company.
Still, we should not be led by this fleeting moment of consensus: Most everyone weighing in on the Forkner chats was wrong, or missing the point. At the end of the day, we don’t need a smoking gun to determine exactly what Boeing knew before the first crash, for the simple reason that we saw in real time how company officials responded after the first crash—i.e., with a veritable arsenal of smoking guns in the form of obvious lies and easily contradicted misinformation. This is also the most crucial lesson to fix in mind as Congress renews a round of hearings on the MAX fiasco on Tuesday.
Here’s one extremely revealing diptych from last fall, for example: When Boeing reps were discussing the crash in conference calls and town hall meetings with airlines and commercial pilots in November, they vehemently blamed incompetent pilots, according to a lawsuit from the Southwest Airlines pilots’ union. In talks with the Federal Aviation Administration, however, Boeing advised issuing an emergency directive “reminding” those same pilots of an obscure protocol they might need to employ “in the event of uncommanded nose down trim”—i.e., in case the plane randomly started to nosedive in unprompted fashion.
The original text of the directive contained a damning reference to the now-notorious MCAS system, but at the last minute someone removed all references to the software from the directive. This disappearing act was, in turn, an eerie repeat of the program’s mysterious near-total omission from the plane’s official 1,600-page manual.
So when prosecutors started digging into the mystery of why the murderous software had been omitted from so much of the official MAX literature and test requirements in early 2019, Boeing handed over the inboxes of Mark Forkner, who had spearheaded certain aspects of the plane’s FAA certification and left the company in rather abrupt fashion a few months before the crashes.
Seen in this context, Forkner’s improbable advancement to center stage in the MCAS narrative is another smoking gun of sorts—that is, once you realize that David Gerger, the attorney he has retained to represent him in the probe, is world famous in white-collar criminal defense for being, inter alia, the lawyer who got his client Andy Fastow, Enron’s chief financial officer/debt concealment mastermind, off with a six-year sentence the same year Enron CEO Jeff Skilling got 24. The FBI has interviewed scores of former and current Boeing employees in connection with the various investigations into the MAX, but Forkner is the only one known to have retained a criminal attorney. In any event, once you read the chat transcript, it’s abundantly clear that the guy is just about the opposite of Fastow. The chat between Forkner and his fellow technical pilot Patrik Gustavsson took place on November 15, 2016, and happened to coincide with the very moment both pilots learned, via someone named “Vince,” that the MCAS software had been drastically changed from its previous iteration, wherein it operated only at extremely high speeds and existed primarily to enable the plane to properly execute military maneuvers required by the FAA testing regime:
Forkner: Oh shocker alerT!
MCAS is now active down to M .2 It’s running rampant in the sim on me at least that’s what Vince thinks is happening
Gustavsson: Oh great, that means we have to update the speed trim descritption [sic] in vol 2
Forkner: so I basically lied to the regulators (unknowingly)
Gustavsson: it wasnt [sic] a lie, no one told us that was the case
Forkner: Vince is going to get me some spreadsheet table that shows when it’s supposed to kick in. why are we just now hearing about this?
Gustavsson: I don’t know, the test pilots have kept us out of the loop
It’s really only christine that is trying to work with us, but she has been too busy
Forkner: they’re all so damn busy, and getting pressure from the program
“Out of the loop” was an understatement: The MCAS changes, which had the software operating at speeds as low as Mach 0.2 (or about 150 miles per hour), were just the tip of the iceberg. The change in MCAS’s speed capability involved, in fact, a whole host of cascading implications that were hardly self-evident; they’d all been approved nearly a year earlier, and finalized in March—the same month, in other words, that Forkner convinced the FAA to remove mention of the program from the flight manual. Forkner had spent the summer meeting with airlines and regulators, easing anxieties about the new plane without any notion that any of this was going on. His one crime appears to have been repeating his unknowing lie to the FAA in early 2017, in the process of confirming the omission of MCAS from the official flight manual. His logic had by then been completely overtaken by events: MCAS didn’t need a manual entry, he argued, because it only ran “well outside the normal operating envelope” of the plane. Indeed, it seems highly doubtful, given Forkner’s self-professed and demonstrated inability to extract information from his colleagues, that he had any grasp of how MCAS actually worked. That’s especially the case given that his colleagues seemed relatively committed to keeping him in the dark about the software and he was busy trying to get the simulator to calm down. Say what you will about previous corporate-scandal fall guys; at least Goldman Sachs’s notorious derivatives trader Fabrice Tourre basically understood how a synthetic credit default swap worked. I strongly doubt the same can be said for Mark Forkner and MCAS.
Moreover, lying to aviation regulators (especially this brand of regulatory white-lying-by-omission-of-actionable-knowledge) is simply part of staying employed at Boeing. Consider in this regard a recently publicized whistleblower complaint filed internally by current Boeing engineer Curtis Ewbank. The complaint mostly discusses his attempts to equip the MAX with an additional safeguard that arguably could have prevented the crashes, but it also describes a separate episode in which he was assigned to research in-flight 737 auto-throttle malfunctions following an inquiry from the European Union’s aviation regulator. Ewbank says he was at this point directed to withhold information about the five additional malfunctions he uncovered on grounds that Boeing would “fix it” internally. And yet, for a sin of omission that surely seemed far more benign to someone deliberately unacquainted with the plane’s engineering, Forkner appears to have been Boeing’s designated in-house scapegoat—at least among those who don’t happen to be dead pilots.
More recently, Boeing chairman David Calhoun has also . . .
The Market for Voting Machines Is Broken. This Company Has Thrived in It.
Jessica Huseman reports in ProPublica:
In the glare of the hotly contested 2018 elections, things did not go ideally for ES&S, the nation’s largest manufacturer of voting technology.
In Georgia, where the race for governor had drawn national interest amid concerns about election integrity, ES&S-owned technology was in use when more than 150,000 voters inexplicably did not cast a vote for lieutenant governor. In part because the aged ES&S-managed machines did not produce paper backups, it wasn’t clear whether mechanical or human errors were to blame. Litigation surrounding the vote endures to this day.
In Indiana, ES&S’ systems were plagued by mishaps at the local level. In Johnson County, for instance, the company’s brand-new machines faltered in ways that made it difficult to know whether some people had voted more than once.
“ES&S misjudged the need for appropriate resources to serve Johnson County on Election Day 2018,” a report issued by state election officials later concluded. Johnson County subsequently terminated its contract with ES&S and, this September, paid more than $1.5 million to purchase an entirely new set of equipment.
The uneven performance by ES&S in 2018, however, did little to dent its position as one of the most popular and powerful voting technology companies in the U.S. Any number of prior controversies hadn’t either.
The vote in 2006 in Sarasota, Florida, was just one. There, ES&S machines lost around 18,000 votes; it is still unclear why. The loss was far more than the margin of victory, and a lawsuit followed that ultimately resolved little. The company said in a statement that a variety of testing done on its machines supports its claim that the devices were not at fault, but the county wound up canceling its dealings with the firm shortly afterward.
Despite such stumbles, ES&S — based in Omaha, Nebraska, and employing roughly 500 people — controls around 50% of the country’s election system market, the company says, meaning that some 70 million Americans vote using the company’s equipment.
The question of the nation’s election integrity has rarely been more urgent. President Donald Trump has repeatedly made baseless claims of voter fraud. The special counsel investigation of Russian meddling in the 2016 vote produced indictments of more than a dozen foreign nationals. How and what federal authorities are doing to prevent future scandals — incidents of outside interference or basic breakdowns in accurately counting votes — has become yet one more partisan issue in a bitterly divided Congress.
A ProPublica examination of ES&S shows it has fought hard to keep its dominance in the face of repeated controversies. The company has a reputation among both its competitors and election officials for routinely going to court when it fails to win contracts or has them taken away, suing voting jurisdictions, rivals, advocates for greater election security and others.
In September 2018, ES&S filed a federal lawsuit against Cook County, Illinois, after the county awarded a $30 million voting machine contract to another company. ES&S later dropped the lawsuit, but the dispute delayed the implementation of Cook County’s new machines, and the Chicago mayoral election this spring ultimately was conducted using the same machines that were meant to be replaced.
ES&S’ lawsuits and threats of lawsuits have helped delay or thwart progress toward better voting technology even when the litigation is unsuccessful, more than two dozen election officials and voting technology experts said in interviews.
“I’d love to see a vibrant marketplace of voting technologies, with companies competing to make better products at lower prices. We have that in mobile phones, personal computers and automobiles, but we don’t have it at all with voting technologies,” said Dan Wallach, a computer science professor at Rice University who studies election systems and has testified in many such lawsuits. “These companies’ litigiousness creates a barrier to competition that becomes a barrier to improving our elections.”
ES&S is owned by the McCarthy Group, a private equity firm, and thus its financial records — revenue, profits, salaries — are not public. A spokesperson declined to make ES&S officials available for an interview for this article, and the company instead issued written responses to questions. It said its machinery and technology were ultimately found blameless in some of the election controversies over the years. In Georgia, it said that while the company was responsible for the maintenance of the machines, it was unable to make updates to the software. The state of Georgia, it said, was ultimately responsible for the mishaps.
“ES&S believes strongly in the security, accuracy and reliability of our machines,” the company said. “We’re committed to our role in democracy and protecting the right to vote.”
Asked about its history of litigation, the company said it sued only as a way to keep competitors honest: “We believe American voters deserve accountability in this industry.”
ES&S’ chief competitors — Hart InterCivic and Dominion — are not strangers to mishaps and aggressive tactics. Dominion’s most recent voting technology failed certification in Texas, and the state also saw hundreds of aging Hart machines confusing voters and leading to accusations of vote flipping. Both companies have sued customers and each other. But industry experts insist ES&S, in part because it has been around and dominant for so long, is in a category of its own.
Whatever the merits of ES&S’ performance, election officials and experts interviewed by ProPublica say the problems with the country’s voting systems go well beyond one company. They say the very nature of the industry and the way it’s regulated work against innovation and reward the tiny handful of often trouble-plagued companies that have been around for decades.
Along with going to court, ES&S has held onto the lion’s share of the nation’s election technology business by using a variety of controversial tactics, its critics say. For years, ES&S has required states and counties that buy its machines to sign long-term deals that often obligate them to purchase a vast array of other equipment and supplies from the company. ES&S also has made it a practice to hire former election officials as lobbyists in statehouses around the country. And it has donated to individual campaigns and spent money to lobby local and federal politicians at levels far higher than its competitors.
ES&S defended its practices, saying that lobbyists play an important role in monitoring new legislation, and that ES&S is in compliance with all state and federal laws related to lobbying. “Lobbyists help ensure legislators have accurate information for use in serving the best interest of the citizens they represent,” it said. “Lobbying is a normal practice employed by all election vendors and is common across all industries.”
In August 2018, Louisiana announced it would replace its old voting machines and awarded a $95 million contract to a rival of ES&S that had submitted the lowest bid. ES&S filed a complaint accusing the state of writing its request for proposals so that only the other company’s machines would satisfy the terms. Shortly after, Gov. John Bel Edwards canceled the deal, effectively siding with ES&S and forcing the state to start the process over again.
“The governor’s administration just sided with a company that was $40 million more expensive,” Louisiana Secretary of State Kyle Ardoin said in a statement after the cancelation.
In a statement, the governor’s office said that the cancelation was justified. The office also laid the blame at the feet of the secretary of state’s office, which it said added “additional requirements” to the bid “just days before the responses were due.”
Louisiana campaign finance records show that an ES&S lobbyist in Baton Rouge has donated $13,250 to Edwards’ campaigns since 2014.
Louisiana still has not obtained new voting machines, and the state is expected to cast ballots in 2020 on the same machines it’s had since 2005.
Sen. Ron Wyden, D-Ore., is among a number of legislators pushing the federal government to better oversee the quality of local election technology, from voting machines to voter registration databases.
To begin the process of assessing the security and reliability of voting machines across the country, Wyden this year sent a detailed . . .
Continue reading. There’s much more.
At Least 20 Billionaires Behind ‘Dark Money’ Group That Opposed Obama
Michela Tindera reports in Forbes:
A nonprofit group with a bland name, Americans for Job Security, spent $5 million supporting Republicans in the 2010 midterms and $15 million denouncing former President Obama in the 2012 election, but until this week, the group never had to file disclosures showing where its money was coming from.
“This is the first time in the Citizens United era that a dark money group was forced to disclose their donors in a certain time period,” said Jordan Libowitz, a spokesperson for nonprofit watchdog group Citizens for Responsibility and Ethics in Washington, referring to the 2010 Supreme Court case that wiped away previous restrictions on political spending.
Americans for Job Security had previously claimed it was not a political organization and therefore did not have to disclose its donors. But after years of litigation, CREW convinced the Federal Election Commission otherwise. AJS filed a list of its backers and beneficiaries from 2010 to 2012 on Thursday.
The biggest individual donor to the group appears to be Charles Schwab, the brokerage titan worth an estimated $7.8 billion. Over the span of three months in 2012, he donated nearly $9 million. Gap cofounder Doris Fisher—along with her sons Robert, John and William—gave another $9 million.
Some of the donors’ names have seeped out over the years. In 2013 a California state watchdog agency released a redacted donor list following an investigation conducted by the agency. That redacted list showed donors with mostly California addresses.
The new FEC filing shows billionaire donors from around the country. The late Republican megadonors Richard and Helen DeVos gave $1 million to the group in October 2012. Texas oil-and-gas billionaires Richard Kinder, Dan Wilks, Farris Wilks and late Houston Texans owner Robert McNair were also among the donors that year. Others include Dennis Washington of Montana, who got rich in mining and railroads, and Leon Black of New York, who cofounded private equity giant Apollo. WWE chief executive Vince McMahon donated, along with his wife Linda McMahon, who Donald Trump later appointed to his cabinet.
The companies of several billionaires are also on the list, including . . .
How to Increase Gut Bacterial Richness
Michael Greger has an interesting blog post, particularly since we are increasingly aware of the important role played by the gut microbiome:
We live in an “obesogenic environment,” with cheap junk food everywhere, thanks in part to subsidies going to the “‘food industrial complex,’ which manufactures obesogenic foods that foster addiction…The root causes…[may] make obesity difficult to escape,” but a lot of people do. If it were simply the external environment, why isn’t everyone obese?
“Some individuals seem to be more susceptible to the obesogenic environment…than others,” which suggests a genetic component, supported by studies of twins and adopted kids, but the genes that have been identified so far account for only 6 to 11 percent of the genetic variation in body mass index between individuals. Perhaps variation in our “other genome”—that is, all the different microbes that inhabit our body, known as the microbiome—may be playing a role. We have a hundred times more bacterial genes inside us than human genes.
As I discuss in my video Gut Microbiome: Strike It Rich with Whole Grains, a study found that people tend to fall into one of two groups: those who have lots of different types of bacteria in their gut (high “gut bacterial richness”) and those with relatively few types. Those with low bacterial richness had more overall body fat, insulin resistance, which is the cause of type 2 diabetes, high triglycerides, and higher levels of inflammatory markers, like C-reactive protein, compared to those with high bacterial richness. Not only did people with lower bacterial richness start out heavier, but the obese individuals with lower bacterial richness also gained more weight over time.
The question then becomes: Can a dietary intervention have any impact? “A number of studies have associated increased microbial richness…with diets higher in fruits, vegetables, and fiber.”
Just giving fiber-type supplements doesn’t seem to boost richness, however; but the “compositional complexity” of a whole food, like whole grains, “could potentially support a wider scope of bacterial taxa,” types of bacteria, “thereby leading to an increase in diversity.” Human studies to investigate the effects of whole grains had been neglected, though…until now.
Subjects were given whole-grain barley, brown rice, or a mixture of both for a month, and all three caused an increase in bacterial community diversity. Therefore, it may take a broad range of substrates to increase bacterial diversity, and this can be achieved by eating whole plant foods.
Moreover, the alterations of gut bacteria in the study coincided with a drop in systemic inflammation in the body. We used to think that the way fiber in whole grains helped us was by gelling in our small intestine right off of our stomach, slowing the rate at which sugars were absorbed and blunting the spike in blood sugars one might get from refined carbs. We now know, however, that fiber is broken down in our colon by our friendly flora, which release all sorts of beneficial substances into our bloodstream that can have anti-inflammatory effects, as well. So, perhaps what’s happening in our large intestine helps explain the protective effects of whole grain foods against type 2 diabetes. [emphasis added – my eating 3 servings of intact whole grain a day perhaps explains the significant improvement in control of my own type 2 diabetes – LG]
Interestingly, the combination of both barley and brown rice worked better than either grain alone, suggesting a synergistic effect. This may help explain “the discrepancy of the health effects of whole grains obtained in epidemiological [population-based] and interventional studies.” Observational studies “strongly suggest” that those who consume three or more servings of whole grains a day tend to have a lower body mass index, less belly fat, and less tendency to gain weight, but recent clinical trials, . . .
The Wee Omega, Stubble Trubble, and the lovely Fendrihan Mk II
This little Omega badger is somewhat coarser than the Wee Scot and not so soft, but it does a fine job and holds plenty of lather for a 3-pass shave. The handle is also well designed and offers a secure and comfortable grip for all its diminutive size.
Stubble Trubble, alas no more, made some fine shaving soaps, and I secured this one from Above the Tie after ST had gone to business heaven. A very nice lather and the magnificent Fendrihan Mk II, a stainless steel razor on my short list of favorite razors, did a superb job, leaving my face undamaged and totally smooth.
A good splash (i.e., several sprays into the palm of my hand) of Eau de Lavande, product of a local perfumery, and the day has gotten off to a very pleasurable start. Always start the day doing something you really enjoy: it sets the tone for the day.