Archive for the ‘Government’ Category
And, unfortunately for her, Missouri is one of the 24 states that refused to expand Medicaid (though it would have cost them nothing), so many people in those states are out of luck:
I think it’s worth noting that the legislators who refused to accept the expansion of Medicaid do not fall into that gap themselves, and they couldn’t care less about those who do. Obviously.
Here’s the story, by Charles Ornstein in Pacific Standard:
For Missouri public radio reporter Harum Helmy, the Affordable Care Act is more than just a story she covers. It is also a story she’s living.
“I know — an uninsured health reporter,” she wrote to me last month. “The joke’s not lost on me.”
Helmy, 23, a part-time reporter/producer for KBIA in Columbia, Missouri, recently completed her coursework at the University of Missouri. She’s on her first professional job. At the station, she covers Obamacare, among other things. But she doesn’t make much money, and if the law worked as it was intended, she would be covered by Missouri’s Medicaid program beginning January 1.
That wasn’t meant to be.
As signed by President Obama, the Affordable Care Act (ACA) would have required every state to expand its Medicaid program for the poor to include adults earning less than 138 percent of the federal poverty level. Those earning more than that, up to four times the poverty level, would qualify for subsidies to purchase health insurance in marketplaces.
But the Supreme Court ruled last year that states could opt out of the Medicaid expansion without consequence, and Missouri, along with 24 other states, has done just that. The problem is that the law didn’t include subsidies for people in those states who earn less than the federal poverty level to buy coverage through the exchanges—they were supposed to be covered under Medicaid.
That’s the gap in which Helmy sits.
She earns less than the poverty level ($10 an hour for 20 hours per week) and qualifies neither for Medicaid nor a subsidy. Helmy was born in Texas and is a U.S. citizen, though her parents live in Indonesia. While she attended classes at the university, her parents paid for her health coverage.
According to the Kaiser Family Foundation, a non-partisan think tank, “In states that do not expand Medicaid, nearly 5 million poor uninsured adults have incomes above Medicaid eligibility levels but below poverty and may fall into a ‘coverage gap’ of earning too much to qualify for Medicaid but not enough to qualify for Marketplace premium tax credits.” In Missouri, 193,000 people, including Helmy, fall into the gap, Kaiser estimates.
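To make the coverage gap concrete, here is a rough sketch, in Python, of the eligibility rules as the article describes them. The poverty-level figure and the cutoffs are approximations for a single adult, and the little function is purely illustrative, not an official eligibility calculator.

```python
# Rough sketch of the ACA coverage rules described above (illustrative only).
# FPL_SINGLE is an approximate federal poverty level for one adult; real
# eligibility depends on household size, the year, and each state's rules.

FPL_SINGLE = 11_490  # dollars per year, approximate figure for a single person

def coverage_path(annual_income: float, state_expanded_medicaid: bool) -> str:
    """Classify a single adult under the rules sketched in the article."""
    pct_of_fpl = annual_income / FPL_SINGLE
    if state_expanded_medicaid and pct_of_fpl < 1.38:
        return "Medicaid (expansion state, under 138% of poverty)"
    if 1.0 <= pct_of_fpl <= 4.0:
        return "Marketplace subsidy (100%-400% of poverty)"
    if pct_of_fpl < 1.0:
        return "Coverage gap: too poor for a subsidy, and no Medicaid expansion"
    return "No subsidy (income above 400% of poverty)"

# Helmy's reported situation: $10/hour for 20 hours a week, in Missouri,
# which did not expand Medicaid.
print(coverage_path(10 * 20 * 52, state_expanded_medicaid=False))
# -> Coverage gap: too poor for a subsidy, and no Medicaid expansion
```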
On paper, the Medicaid expansion seems like a great deal. The federal government has agreed to pick up 100 percent of the cost of the expansion for the first three years, phasing down to 90 percent in 2020. But officials in states that have declined to take part view Medicaid as a broken program. They don’t trust the federal government to keep its funding pledge and do not believe they have adequate state funds to cover their portion.
Missouri Governor Jay Nixon, a Democrat, wants to expand Medicaid in his state, but the Republican-controlled Legislature won’t go along with it.
Helmy discussed her situation in a podcast in October (around the 12-minute mark). “I would just get a little bit personal here and say I’m one of those people,” she said. “I’m in this weird gap where I need insurance, my employer doesn’t give me insurance, but I don’t make enough to get a subsidy.”
I asked her what it felt like to be affected by the act. . . .
I think as people in the 24 states become aware that their legislators turned down free money that would have enabled them to have healthcare insurance, and they see that people making less than they do get coverage, and people making more than they do are covered, the people left out in the cold are going to get amazingly angry. We may see some interesting surprises in the November 2014 elections in those states.
Ezra Klein makes a very good point: in the old days, people were terrified of losing their healthcare insurance (employer provided or not). If they left the company, they would lose their medical benefits. If they were covered on an individual plan that was cancelled for any reason, there was no guarantee that they could get another. If they had a chronic illness, they were unable to get any individual healthcare insurance at all. Klein writes:
The furor over “if you like your plan, you can keep it” touches on a deep fear in American life: That your health-care insurance can be taken from you. That fear is so powerful because it happens so often: Almost everyone in the country can lose their health insurance at any time, for all kinds of reasons — and every year, millions do.
If you’re one of the 149 million people who get health insurance through your employer, you can lose your plan if you get fired, or if the H.R. department decides to change plans, or if you have to move to a branch in another state.

If you’re one of the 51 million people who get Medicaid, you could lose your plan because your income rises and you’re no longer eligible or because your state cut its Medicaid budget and made you ineligible. You could lose it because you moved from Minnesota, where childless adults making less than 75 percent of the poverty line are eligible, to Texas, where there’s no coverage for childless adults.
If you’re one of the 15 million Americans who buys insurance on the individual market, you could lose your plan because your insurer decides to stop offering it or decides to jack up the price by 35 percent. And that’s assuming you’re one of the lucky people who weren’t denied coverage based on preexisting conditions in the first place.
Then, of course, there are the 50 million people who don’t have a plan in the first place. The vast majority of them desperately want health-care coverage. But it turns out that just because you want a plan doesn’t mean you can get one.
Virtually the only people whose health coverage is reasonably safe are those on fee-for-service Medicare and some forms of veterans insurance. And even there, enrollees are only safe until the day policymakers decide to change premiums or benefit packages.
President Obama’s critics are right: Obamacare doesn’t guarantee that everyone who likes their health insurance can keep it. In some cases, Obamacare is the reason people will lose health insurance they liked.
What Obamacare comes pretty close to guaranteeing, though, is that everyone who needs health insurance, or who wants health insurance, can get it.
It guarantees that if you lose the plan you liked — perhaps because you were fired from your job, or because you left your job to start a new business, or because your income made you ineligible for Medicaid — you’ll have a choice of new plans you can purchase, you’ll know that no insurer can turn you away, and you’ll be able to get financial help if you need it. In states that accept the Medicaid expansion, it guarantees that anyone who makes less than 133 percent of poverty can get fully subsidized insurance.
Health insurance isn’t such a fraught topic in countries such as Canada and France because people don’t live in constant fear of losing their ability to get routine medical care. . .
Al Shaw, Theodoric Meyer, and Christie Thompson report for ProPublica:
When Patrice and Philip Morgan bought a house near the ocean in Brooklyn, they were not particularly worried about the threat of flooding.
Federal maps showed their home was outside the area at a high risk of flood damage. For that reason, the government did not require them to buy flood insurance, a cost imposed on neighbors on more vulnerable blocks.
Even so, the couple decided to raise their house four feet to protect their basement from the effects of heavy rain storms.
“We thought we might have a foot or two of water,” Patrice said, “so we put a sump pump in to avoid any small issues.”
But the maps drawn up by the Federal Emergency Management Agency were wrong. And government officials knew it. [This is the point at which I think criminal charges become appropriate. - LG]
According to documents and interviews, state, local and federal officials had been aware for years that the crucial maps of flood risks were inaccurate; some feared they understated the dangers in New York City’s low-lying areas.
The flaws in the maps had significant impact. Developers relied on FEMA’s assessment of risks when they built new homes near the water. And homeowners and businesses made crucial decisions about where to buy or lease property on the assurance that they were outside of the high-risk zones.
Thousands of the buildings incorrectly identified as outside the flood zone were damaged when seawater surged ashore as Hurricane Sandy made landfall on Oct. 29, 2012.
State and city officials had been asking FEMA for years to revise the maps with technology and modeling methods that didn’t exist when they were first drawn in the 1980s. William S. Nechamen, New York State’s floodplain chief, warned FEMA in a 2005 letter that the failure to do so “will lead to higher than necessary flood damages and more expenses placed on individuals and on FEMA.”
Yet, despite Nechamen’s warning, FEMA missed chances to make changes that could have protected city dwellers from some of the worst of Sandy’s destruction.
During a push to modernize flood maps in the mid-2000s, FEMA decided to save money in New York City and much of the rest of the country by digitizing old flood maps without updating the underlying information, rather than using new technology to create more accurate maps.
The agency changed course in 2006, but didn’t release maps with better elevation data and more accurate storm-surge models until months after Sandy – too late to help New Yorkers like the Morgans.
When FEMA finally released a preliminary version of those maps this January, they showed that the number of city structures considered at high risk of flooding had doubled. More than 35,000 additional homes and businesses were added to the map’s riskiest zones, according to a study by New York City’s Office of Long-Term Planning and Sustainability. Some 9,503 of those buildings suffered damage during Sandy, a ProPublica analysis of flood maps shows.
FEMA did not respond to specific questions about the adequacy of its flood maps or glitches in the modernization process. Bill McDonnell, the deputy director for mitigation for FEMA’s Region II, acknowledged that no new data had been collected to update maps for New York or New Jersey in the mid-2000s. In a statement, the agency said it began giving priority to map updates for “high-risk, coastal areas” in 2009. These included 14 counties in New Jersey and New York City. The agency said it continues to work with state and local officials to “incorporate the best available data into maps.”
That didn’t help the Morgans. Their home, a 1920s bungalow to which they added a second floor, was hit hard by Hurricane Sandy. . .
Interesting column by Pratap Chatterjee at TomDispatch.com. Tom provides a good introduction:
Someone should launch a feature somewhere on American foreign and war policy under the rubric: How could anything possibly go wrong? Here are just two recent examples.
The Obama administration intervenes militarily in Libya, plays a significant role in overthrowing the autocrat who runs the country as a police state, and helps unleash chaos in its wake. The streets of Libyan cities fill with militias as the new government’s control of the situation fades to next to nil. Which brings us to our present moment, when a panicky Washington decides that what’s needed is yet another, different kind of intervention. The plan seems to be to compete with various local and Islamic militias by creating a government militia as the core of a new “national army.” Its members are to be drawn from already existing militias and they’ll be trained somewhere outside of Libya. What an idea! Honestly, what could possibly go wrong?
Or consider this: Washington begins to panic about heightening tensions between Japan and China over uninhabited islands in the East China Sea. The problem, reports David Sanger of the New York Times, based on what Obama administration officials have told him, is that the conflict could escalate and so “derail their complex plan to manage China’s rise without overtly trying to contain it.” Now, let’s get this straight: before things began to run off the rails in the East China Sea, the Obama administration was confidently planning to “manage” the rise of the next superpower on a planet already in such tumult that what being a new great power might even mean is open to question. And keep in mind that we’re talking about an administration that couldn’t manage the rollout of a website. What could possibly go wrong?
Both examples highlight the strange combination of hubris and panic that, as TomDispatch regular Pratap Chatterjee points out today, seems to be the essence of so many of Washington’s plans and actions at the moment. The urge to “manage” is invariably followed by shock at the unmanageability of this roiling globe of ours, followed by panic over plans gone desperately awry when things begin, utterly predictably, to happen unpredictably, followed of course by the next set of managerial plans. Is there no learning curve in Washington? Tom
And here is Chatterjee’s column:
Call it the Jason Bourne strategy.
Think of it as the CIA’s plunge into Hollywood — or into the absurd. As recent revelations have made clear, that Agency’s moves couldn’t have been more far-fetched or more real. In its post-9/11 global shadow war, it has employed both private contractors and some of the world’s most notorious prisoners in ways that leave the latest episode of the Bourne films in the dust: hired gunmen trained to kill as well as former inmates who cashed in on the notoriety of having worn an orange jumpsuit in the world’s most infamous jail.
The first group of undercover agents were recruited by private companies from the Army Special Forces and the Navy SEALs and then repurposed to the CIA at handsome salaries averaging around $140,000 a year; the second crew was recruited from the prison cells at Guantanamo Bay and paid out of a secret multimillion dollar slush fund called “the Pledge.”
Last month, the Associated Press revealed that the CIA had selected a few dozen men from among the hundreds of terror suspects being held at Guantanamo and trained them to be double agents at a cluster of eight cottages in a program dubbed “Penny Lane.” (Yes, indeed, the name was taken from the Beatles song, as was “Strawberry Fields,” a Guantanamo program that involved torturing “high-value” detainees.) These men were then returned to what the Bush administration liked to call the “global battlefield,” where their mission was to befriend members of al-Qaeda and supply targeting information for the Agency’s drone assassination program.
Such a secret double-agent program, while colorful and remarkably unsuccessful, should have surprised no one. After all, plea bargaining or persuading criminals to snitch on their associates — a tactic frowned upon by international legal experts — is widely used in the U.S. police and legal system. Over the last year or so, however, a trickle of information about the other secret program has come to light and it opens an astonishing new window into the privatization of U.S. intelligence.
Hollywood in Langley
In July 2010, at his confirmation hearings for the post of the Director of National Intelligence, James Clapper explained the use of private contractors in the intelligence community: “In the immediate aftermath of the Cold War… we were under a congressional mandate to reduce the community by on the order of 20%… Then 9/11 occurred… With the gusher… of funding that has accrued particularly from supplemental or overseas contingency operations funding, which, of course, is one year at a time, it is very difficult to hire government employees one year at a time. So the obvious outlet for that has been the growth of contractors.”
Thousands of “Green Badges” were hired via companies like Booz Allen Hamilton and Qinetiq to work at CIA and National Security Agency (NSA) offices around the world, among the regular staff who wore blue badges. Many of them — like Edward Snowden — performed specialist tasks in information technology meant to augment the effectiveness of government employees.
Then the CIA decided that there was no aspect of secret war which couldn’t be corporatized. So they set up a unit of private contractors as covert agents, green-lighting them to carry guns and be sent into U.S. war zones at a moment’s notice. This elite James Bond-like unit of armed bodyguards and super-fixers was given the anodyne name Global Response Staff (GRS).
Among the 125 employees of this unit, Raymond Davis and Dane Paresi came from the Army Special Forces via private contractors; Glen Doherty, Jeremy Wise, and Tyrone Woods came from the Navy SEALs. All five would soon be in the anything-but-covert headlines of newspapers across the world. These men — no women have yet been named — were deployed on three- to four-month missions accompanying CIA analysts into the field.
Davis was assigned to Lahore, Pakistan; Doherty and Woods to Benghazi, Libya; Paresi and Wise to Khost, Afghanistan. As GRS expanded, other contractors went to Djibouti, Lebanon, and Yemen, among other countries, according to a Washington Post profile of the unit.
From early on, its work wasn’t exactly a paragon of secrecy. By 2005, for instance, former Special Forces personnel had already begun openly discussing jobs in the unit at online forums. Their descriptions sounded like something directly out of a Hollywood thriller. The Post portrayed the focus of GRS personnel more mundanely as “designed to stay in the shadows, training teams to work undercover and provide an unobtrusive layer of security for CIA officers in high-risk outposts.”
“They don’t learn languages, they’re not meeting foreign nationals, and they’re not writing up intelligence reports,” a former U.S. intelligence official told that paper. “Their main tasks are to map escape routes from meeting places, pat down informants, and provide an ‘envelope’ of security… if push comes to shove, you’re going to have to shoot.”
In the ensuing years, GRS embedded itself in the Agency, becoming essential to its work. Today, new CIA agents and analysts going into danger zones are trained to work with such bodyguards. In addition, GRS teams are now loaned out to other outfits like the NSA for tasks like installing spy equipment in war zones.
The CIA’s Private Contractors (Don’t) Save the Day
Recently these men, the spearhead of the CIA’s post-9/11 contractor war, have been making it into the news with startling regularity. Unlike their Hollywood cousins, however, the news they have made has all been bad. Those weapons they’re packing and the derring-do that is supposed to go with them have repeatedly led not to breathtaking getaways and shootouts, but to disaster. Jason Bourne, of course, wins the day; they don’t.
Juan Cole has a very good post: Nelson Mandela’s declaration to the court as his trial began in Pretoria at the Supreme Court of South Africa on April 20, 1964. His declaration begins:
I am the first accused. I hold a bachelor’s degree in arts and practised as an attorney in Johannesburg for a number of years in partnership with Oliver Tambo. I am a convicted prisoner serving five years for leaving the country without a permit and for inciting people to go on strike at the end of May 1961.
At the outset, I want to say that the suggestion that the struggle in South Africa is under the influence of foreigners or communists is wholly incorrect. I have done whatever I did because of my experience in South Africa and my own proudly felt African background, and not because of what any outsider might have said. In my youth in the Transkei I listened to the elders of my tribe telling stories of the old days. Amongst the tales they related to me were those of wars fought by our ancestors in defence of the fatherland. The names of Dingane and Bambata, Hintsa and Makana, Squngthi and Dalasile, Moshoeshoe and Sekhukhuni, were praised as the glory of the entire African nation. I hoped then that life might offer me the opportunity to serve my people and make my own humble contribution to their freedom struggle.
Some of the things so far told to the court are true and some are untrue. I do not, however, deny that I planned sabotage. I did not plan it in a spirit of recklessness, nor because I have any love of violence. I planned it as a result of a calm and sober assessment of the political situation that had arisen after many years of tyranny, exploitation, and oppression of my people by the whites.
I admit immediately that I was one of the persons who helped to form Umkhonto we Sizwe. I deny that Umkhonto was responsible for a number of acts which clearly fell outside the policy of the organisation, and which have been charged in the indictment against us. I, and the others who started the organisation, felt that without violence there would be no way open to the African people to succeed in their struggle against the principle of white supremacy. All lawful modes of expressing opposition to this principle had been closed by legislation, and we were placed in a position in which we had either to accept a permanent state of inferiority, or to defy the government. We chose to defy the law.
We first broke the law in a way which avoided any recourse to violence; when this form was legislated against, and then the government resorted to a show of force to crush opposition to its policies, only then did we decide to answer violence with violence.
The African National Congress was formed in 1912 to defend the rights of the African people, which had been seriously curtailed. For 37 years – that is, until 1949 – it adhered strictly to a constitutional struggle. But white governments remained unmoved, and the rights of Africans became less instead of becoming greater. Even after 1949, the ANC remained determined to avoid violence. At this time, however, the decision was taken to protest against apartheid by peaceful, but unlawful, demonstrations. More than 8,500 people went to jail. Yet there was not a single instance of violence. I and 19 colleagues were convicted for organising the campaign, but our sentences were suspended mainly because the judge found that discipline and non-violence had been stressed throughout.
During the defiance campaign, the Public Safety Act and the Criminal Law Amendment Act were passed. These provided harsher penalties for protests against [the] laws. Despite this, the protests continued and the ANC adhered to its policy of non-violence. In 1956, 156 leading members of the Congress Alliance, including myself, were arrested. The non-violent policy of the ANC was put in issue by the state, but when the court gave judgment some five years later, it found that the ANC did not have a policy of violence. . .
Everyone seems to love Nelson Mandela now, though at the time the US, South Africa, and others considered him little more than a terrorist. Juan Cole points out that the US and Israel strongly supported Apartheid (possibly because both nations have an underclass of their own, African-Americans in the US and Palestinians in Israel, and thus aligned themselves with the white overlords of South Africa):
The attempt to make Nelson Mandela respectable is an ongoing effort of Western government spokesmen and the Western media.
He wasn’t respectable in the business circles of twentieth-century New York or Atlanta, or inside the Beltway of Washington, D.C. He wasn’t respectable for many of the allies of the United States in the Cold War, including Britain and Israel.
I visited Soweto in 2012 and went to Mandela’s old house. It was a moving experience. I don’t want him to be reduced to a commercialized icon on this day of all days.
We should remember that for much of the West in the Cold War, South Africa’s thriving capitalist economy was what was important. Its resources were important. Its government, solely staffed by Afrikaners and solely for Afrikaners, was seen as a counter-weight to Soviet and Communist influence in Africa. Washington in the 1980s obsessed about Cuba’s relationship to Angola (yes).
That the Afrikaners treated black Africans like dirt and discriminated against them viciously, denying them the franchise or any hint of equality, was considered in Western capitals at most an unfortunate idiosyncrasy that could not be allowed to interfere with the West’s dependence on Pretoria in fighting the international Left.
The African National Congress had attempted nonviolent protest in the 1950s, but the white Afrikaner government outlawed all those techniques and replied with deadly force. In the early 1960s, when Nelson Mandela turned to sabotage, the United States was a nakedly capitalist country engaged in an attempt to ensure that peasants and workers did not come to power. It was a deeply racist society that practiced Apartheid, a.k.a. Jim Crow, in its own South.
The US considered the African National Congress to be a form of Communism, and sided with the racist Prime Ministers Hendrik Verwoerd and P.W. Botha against Mandela.
Decades later, in the 1980s, the United States was still supporting the white Apartheid government of South Africa, where a tiny minority of Afrikaners dominated the economy and refused to allow black Africans to shop in their shops or fraternize with them, though they were happy to employ them in the mines. Ronald Reagan declared Nelson Mandela, then still in jail, a terrorist, and the US did not get around to removing him from the list until 2008! Reagan, while delivering pro forma denunciations of Apartheid or enforced black separation and subjugation, nevertheless opposed sanctions with teeth on Pretoria. Reagan let the racist authoritarian P.W. Botha come to Washington and met with him.
Likewise British PM Margaret Thatcher befriended Botha and castigated Mandela’s ANC as terrorists. As if the Afrikaners weren’t terrorizing the black majority! She may have suggested to Botha that he release Mandela for PR purposes, but there is not any doubt on whose side she stood.
The Israeli government had extremely warm relations with Apartheid South Africa, to the point where Tel Aviv offered the Afrikaners a nuclear weapon (presumably for brandishing at the leftist states of black Africa). That the Israelis accuse Iran of being a nuclear proliferator is actually hilarious if you know the history. Iran doesn’t appear ever to have attempted to construct a nuclear weapon, whereas Israel has hundreds and seems entirely willing to share.
In the US, the vehemently anti-Palestinian Anti-Defamation League in San Francisco spied on American anti-Apartheid activists on behalf of the Apartheid state. If the ADL ever calls you a racist, you can revel in the irony.
Ronald Reagan imagined that there were “moderates” in the Botha government. There weren’t. He wanted “constructive engagement” with them. It failed. The Afrikaners imposed martial law. Reagan tried to veto Congressional sanctions on Pretoria in 1986, but Congress overrode him.
Nelson Mandela was a socialist who believed in the ideal of economic equality or at least of a decent life for everyone in society. He was also a believer in parliamentary government. So, he was a democratic socialist.
The current Republican Party . . .
In Salon Joan Walsh also points to the weirdness of the current GOP’s praise for Mandela, when the GOP basically opposes everything Mandela stood for:
I tried to honor Nelson Mandela on the day of his death, and love my political enemies. But the white-washing of Mandela’s legacy, as well as the role of the United States in supporting both apartheid and Mandela’s long imprisonment, has to be rebutted.
It began on Mandela’s 95th birthday in July, when House Speaker John Boehner had the audacity to declare in a tribute, “At times it can almost feel like we are talking about an old friend.”
It got much worse when Sen. Ted Cruz announced Thursday night: “Nelson Mandela will live in history as an inspiration for defenders of liberty around the globe.”
But Cruz’s political heroes opposed Mandela as a terrorist and a communist, and there’s little doubt the red-baiting Texas senator would have done the same had he been in Congress back then. (The Daily Beast’s Peter Beinart and Foreign Policy’s Sam Kleiner (from July) have the two best pieces about “apartheid amnesia” I’ve read.)
It’s shocking how little American leaders of both parties did to oppose the rise and consolidation of the brutal apartheid regime in the ‘50s and ’60s, but it was Richard Nixon who developed closer ties. The anti-apartheid movement of the 1970s and ’80s – where Barack Obama got his political start; I covered the University of Wisconsin’s successful divestment movement with the Daily Cardinal in 1978 — was demonized as the far left at the time. Moderates proposed alternatives like the Sullivan Principles, named after Rev. Leon Sullivan, a General Motors board member, which tried (and failed) to impose a code of conduct on companies doing business in South Africa (Sullivan eventually agreed they weren’t enough).
Ronald Reagan made it a priority to fight domestic and international divestment efforts — efforts that, in the end, helped pressure the South African government to enter negotiations and free Nelson Mandela. Reagan vetoed an amazingly (if belatedly) bipartisan bill to impose tough sanctions on the apartheid regime. Of course, then-Congressman Dick Cheney had voted against the sanctions in 1986, and he defended his position while running for vice president in 2000, telling ABC: “The ANC was then viewed as a terrorist organization. … I don’t have any problems at all with the vote I cast 20 years ago.”
The Heritage Foundation was a clubhouse for apartheid backers; as late as 1990, when Mandela had been freed from prison and traveled to the U.S., Heritage suggested he was a terrorist, “not a freedom fighter.” Grover Norquist advised pro-apartheid South African student groups and declared that the issue “is the one foreign policy debate that the Left can get involved in and feel that they have the moral high ground,” while insisting that it was a “complicated situation.” It was not.
As late as 2003, the National Review attacked Mandela for opposing the Iraq war. His “vicious anti-Americanism and support for Saddam Hussein should come as no surprise,” NR wrote, “given his longstanding dedication to communism and praise for terrorists.”
It’s also disrespecting Mandela to leave his radicalism out of his tributes. For a time he believed ending apartheid would require armed resistance, and although he eventually renounced violence, he refused to do so as a condition of being released from prison. He was a revolutionary who believed in a radical redistribution of wealth, and a global warrior against poverty, to the end. Yes, it’s important to remember his legacy of reconciliation, and love, toward white South Africans who had brutalized him. But it’s equally important to remember the commitment to equality that let him endure prison, and adopt reconciliation as the best strategy to achieve freedom and justice.
So I’m not dwelling on the hypocrisy of the right at this point, but I can’t ignore it either. . .
The system of law works best when people trust it. Marc Herman reports in Pacific Standard on a study of judicial conflict of interest rules:
The Center for Public Integrity just released what it’s calling an investigation of conflict of interest rules for judges in the 50 states and the District of Columbia. On an A-F scale, the outfit graded state rules that keep judges from presiding over cases where a personal interest might intrude. Forty-two states and D.C. scored an F, meaning CPI thinks the disclosure rules were too lax, or didn’t exist in the first place. No state’s legal system got an A or a B, according to CPI’s standard. California pulled a gentleman’s C.
If those depressing-sounding rankings are fair, what do they mean? The gist is that there isn’t enough legislation out there to prevent judicial corruption. So some corruption occurred:
After reviewing three years of personal financial disclosures, the Center found judges who authored opinions favoring companies in which they owned stock. The Center found judges who ruled on cases even when family members were receiving income from one of the parties. And it found judges who accepted lavish gifts — like a $50,000 trip from a lawyer.
Sounds bad. The rest of the research, however, doesn’t offer too many cases that sound particularly grave. In the most alarming incident highlighted, a California supreme court judge participated in a case involving Wells Fargo Bank, despite . . .
The idea that the US should essentially strip the Federal government bare is short-sighted in the extreme, and the NIH is another victim. The chart is from this post by Kevin Drum, worth the click.
I recently blogged about the movie TWA Flight 800, which I saw on Netflix streaming. I don’t think the movie trots out a conspiracy theory, though it would be easy to weave one from the findings presented. But rather than theory, the movie focuses mainly on physical evidence and the process used in the investigation—both of which are quite damning to the final report.
Conspiracy theories are in the news again because of various speculations about the JFK assassination, whose 50th anniversary was on 22 November. But conspiracies do in fact happen, so simply saying “conspiracy theory” is insufficient to dismiss a theory (unless you believe that no conspiracy ever exists). At ConsortiumNews.com Robert Parry writes about an earlier conspiracy that at the time was dismissed as a “conspiracy theory”:
In the insular world of Manhattan media, there’s much handwringing over the latest blow to print publications as New York Magazine scales back from a weekly to a biweekly. But the real lesson might be the commercial failure of snarky writing, the kind that New York demonstrated in its recent hit piece on “conspiracy theories.”
What was most stunning to me about the article, pegged to the 50th anniversary of John F. Kennedy’s assassination, was that it began by ridiculing what is actually one of the best-documented real conspiracies of recent decades, the CIA’s tolerance and even protection of cocaine trafficking by the Nicaraguan Contra rebels in the 1980s.
According to New York Magazine, the Contra-cocaine story – smugly dubbed “the last great conspiracy theory of the twentieth century” – started with the claim by “crack kingpin” Ricky Ross that he was working with a Nicaraguan cocaine supplier, Oscar Danilo Blandon, who had ties to the Contras who, in turn, had ties to the CIA.

Author Benjamin Wallace-Wells writes: “The wider the aperture around this theory, the harder its proponents work to implicate Washington, the shakier it seems: After several trials and a great deal of inquiry, no one has been able to show that anyone in the CIA condoned what Blandon was doing, and it has never been clear exactly how strong Blandon’s ties to the Contra leadership really were, anyway.”
So, it was all a goofy “conspiracy theory.” Move along, move along, nothing to see here. But neither Wallace-Wells nor his New York Magazine editors seem to have any idea about the actual history of the Contra-cocaine scandal. It did not begin with the 1996 emergence of Ricky Ross in a series of articles by San Jose Mercury-News investigative reporter Gary Webb, as Wallace-Wells suggests.
The Contra-cocaine scandal began more than a decade earlier with a 1985 article that Brian Barger and I wrote for the Associated Press. Our article cited documentary evidence and witnesses – both inside the Contra movement and inside the U.S. government – implicating nearly all the Contra groups fighting in Nicaragua under the umbrella of Ronald Reagan’s CIA.
Our Contra-cocaine article was followed up by a courageous Senate investigation led by Sen. John Kerry of Massachusetts, who further documented the connections between cocaine traffickers, the Contras, and the Reagan administration in a report issued in 1989.
Yet, part of the scandal always was how the Reagan administration worked diligently to undercut investigations of the President’s favorite “freedom fighters” whether the inquiries were undertaken by the press, Congress, the Drug Enforcement Administration or federal prosecutors. Indeed, a big part of this cover-up strategy was to mock the evidence as “a conspiracy theory,” when it was anything but.
Big Media’s Complicity
Most of the mainstream news media played along with the Reagan administration’s mocking strategy, although occasionally major outlets, like the Washington Post, had to concede the reality of the scandal.
For instance, during the drug-trafficking trial of Panamanian dictator Manuel Noriega in 1991, U.S. prosecutors found themselves with no alternative but to call as a witness Colombian Medellín cartel kingpin Carlos Lehder, who — along with implicating Noriega — testified that the cartel had given $10 million to the Contras, an allegation first unearthed by Sen. Kerry.
“The Kerry hearings didn’t get the attention they deserved at the time,” a Washington Post editorial on Nov. 27, 1991, acknowledged. “The Noriega trial brings this sordid aspect of the Nicaraguan engagement to fresh public attention.”
Yet, despite the Washington Post’s belated concern about the mainstream news media’s neglect of the Contra-cocaine scandal, there was no serious follow-up anywhere in Big Media – until 1996 when Gary Webb disclosed the connection between one Contra cocaine smuggler, Danilo Blandon, and the emergence of crack cocaine via Ricky Ross.
But the premier news outlets – the likes of the Washington Post, the New York Times and the Los Angeles Times – didn’t take this new opportunity to examine what was a serious crime of state. That would have required them to engage in some embarrassing self-criticism for their misguided dismissal of the scandal. Instead, the big newspapers went on the attack against Gary Webb.
Their attack line involved narrowing their focus to Blandon – ignoring the reality that he was just one of many Contras involved in cocaine smuggling to the United States – and to Ross – arguing that Ross’s operation could not be blamed for the entire crack epidemic that ravaged U.S. cities in the 1980s. And the newspapers insisted that the CIA couldn’t be blamed for this cocaine smuggling because the agency had supposedly examined the issue in the 1980s and found that it had done nothing wrong.
Because of this unified assault from the major newspapers – and the corporate timidity of the San Jose Mercury-News editors – Webb and his continuing investigation were soon abandoned. Webb was pushed out of the Mercury-News in disgrace.
That let the mainstream U.S. media celebrate how it had supposedly crushed a nasty “conspiracy theory” that had stirred up unjustified anger in the black community, which had been hit hardest by the crack epidemic. The newspapers also could get some brownie points from Republicans and the Right by sparing President Reagan’s legacy a big black eye.
But Webb’s disclosure prompted the CIA’s Inspector General Frederick Hitz to undertake the first real internal investigation of the ties between the Contra-cocaine smugglers and the CIA officers overseeing the Contra war in Nicaragua.
The CIA’s Confession
When Hitz’s final investigative report was published in fall 1998, the CIA’s defense against Webb’s series had shrunk to a fig leaf: that the CIA did not conspire with the Contras to raise money through cocaine trafficking. But Hitz made clear that the Contra war had taken precedence over law enforcement and that the CIA withheld evidence of Contra drug-smuggling crimes from the Justice Department, Congress, and even the CIA’s own analytical division.
Besides tracing the extensive evidence of Contra trafficking through the entire decade-long Contra war, the inspector general interviewed senior CIA officers who acknowledged that they were aware of Contra-drug smuggling but didn’t want its exposure to undermine the struggle to overthrow Nicaragua’s leftist Sandinista government.
According to Hitz, the CIA had “one overriding priority: . . .
The big GOP fear seems to be that helping people will hurt them by making them become “dependent.” (So far as I know, they cite no studies but simply base their conclusions on how they would behave if they themselves received help.) Paul Rosenberg in Salon writes about a study that shows a reduction in dependency:
Hilary Hoynes is a University of California at Berkeley economist who wrote a particularly notable paper last year. Instead of increasing dependency, as conservative critics have repeatedly claimed, Hoynes’ paper showed that, for women at least, food stamp use during pregnancy and early childhood has exactly the opposite impact of what conservatives allege: It actually increases economic self-sufficiency when children grow up, in the next generation.
That was just one of two main results reported in “Long Run Impacts of Childhood Access to the Safety Net,” which Hoynes co-authored with Diane Whitmore Schanzenbach and Douglas Almond. As stated in the paper’s abstract, access to food stamps for women leads to “increases in economic self-sufficiency (increases in educational attainment, earnings, and income, and decreases in welfare participation).” Hoynes and her colleagues took advantage of the fact that food stamp programs were established county-by-county over a period of years, creating a sort of “natural experiment” beginning half a century in the past.
“Hoynes’ work has been timely, innovative and revealing,” said Arloc Sherman, a senior researcher at the Center on Budget and Policy Priorities, which has highlighted Hoynes’ work this year as food stamps and the SNAP program have become a major subject of controversy. “Hoynes and her collaborators have really broadened our understanding of how programs like food stamps not only relieve hardship in the moment but can trigger long-lasting gains in participating children’s later health and education. The implications of the research are considerable. In this long view, such assistance is not only helping struggling families to scrape by, it’s a good investment in the next generation of citizens and workers.”
Hoynes herself said, “This work indicates that there are important benefits of the safety net that to date have been ignored. They predict that a more generous safety net can reduce health disparities. More generally, the emerging evidence points to an important role for investments in early life — and those investments generate important returns in terms of better health and economic outcomes in adulthood.”
It’s a startling result in light of the onslaught of conservative claims to the contrary, but it’s somewhat less startling — though still quite illuminating — in light of what’s actually known about the impacts of hunger on childhood development back in the “reality based community,” where population-based studies of hunger impacts date back to the 1970s, when researchers first began reporting on the long-term, adult impacts on children born during and shortly after the so-called Dutch “Hunger Winter,” a period from November 1944 through May 1945, when a large part of the Netherlands was subjected to drastically reduced rations under Nazi occupation.
But to really appreciate the significance of this research, one must also appreciate two other aspects of Hoynes’ recent research, which combine to provide a three-pronged counterattack on the right’s “culture of dependency” narrative. First, she has done previous research establishing short-term benefits — not just for food stamps, but also for the earned income tax credit — specifically, a reduction in low-birthweight babies, a significant indicator of well-being. This research alone is sufficient to show that safety net programs are achieving the goals of bettering people’s lives, adding more weight to the already well-established statistics on poverty reduction. Second, she has done research into safety net program utilization over the course of economic recession and recovery, research that shows that the current levels of food stamp and other program use are in line with past history, and not a sign of any alleged “explosion” in a “culture of dependency” under Obama, as the right-wing noise machine would have it.
Thus, Hoynes’ work provides powerful evidence for a three-pronged counterattack against this conservative narrative, which has come to play a dominant role in Republican politics in the post-Bush/Obama/Tea Party era: 1) The safety net works in the short term, producing measurable improvements in newborn health; 2) it works in the long term, improving health for both men and women, and reducing dependency among women in the next generation; and 3) it works currently in much the same manner as it has worked in the past. The long-term effects findings are clearly the most remarkable, which is why they’re worth looking into more closely. But it’s the overall combination of evidence — along with the work of others working on other aspects of the safety net — that provides a robust picture of what the real-world safety net actually does to build better lives, pushing back against the onslaught of right-wing lies.
In July, for example, when House Republicans were first threatening massive food stamp cuts, the CBPP released a report, “SNAP Enrollment Remains High Because the Job Market Remains Weak.” It’s common sense, of course. As the report stated in its very first sentence, . . .
In Salon Josh Eidelson reports an encouraging development:
With accusations of abuse directed at private companies providing public services, a package of privatization safeguards is expected to be introduced in nearly half the nation’s state legislatures in the next session, according to a group pushing the measures.
“If you contract to fix your car or paint your house” and “don’t figure out exactly in advance what you want, and get that done precisely, and you don’t watch real closely, then you get screwed,” said Donald Cohen, who directs In the Public Interest — a project of the Partnership for Working Families, which produces research critical of the subcontracting of government work — and chairs the ITPI Action Fund, a 501(c)(4) that pushes legislation. “That’s what happens with contracting.” ITPI’s funders include foundations and unions.
In a report being released Wednesday afternoon, ITPI urges adoption of a battery of measures to confront alleged abuses by such companies providing public services. According to ITPI Action Fund, legislators in nearly half the states – including California, Maryland and Pennsylvania – plan to introduce versions of that legislation.
To increase transparency, the ITPI package would require governments to disclose online how much contractors cost and how many people they employ, and require each company with a contract to “open its books and its meetings to the public.” To promote accountability, ITPI urges states to establish minimum staffing devoted to oversight, bar scofflaw companies from receiving contracts, require clauses letting governments cancel contracts based on broken promises, and limit contracts to companies that guarantee a cost savings of at least 10 percent. To advance “shared prosperity,” ITPI would require wage and benefit standards for subcontracted employees, and impact analyses taking into account potential outsourcing’s effect on the environment, local business and social services. And in the name of competition, ITPI’s program would forbid language promising profits for contractors or allowing automatic contract renewals, and would require that direct public employees have the chance to submit competing proposals for work the government is considering contracting out.
“There’s a myth around private sector doing things cheaper, better, faster, which turns out not to be accurate,” Cohen told Salon. He contended that recent outsourcing abuses make the case for reform.
The forthcoming ITPI report, “Out of Control: The Coast-to-Coast Failures of Outsourcing Public Services to For-Profit Corporations,” offers a parade of such alleged abuses. Among them: In New Mexico, a district judge sided with the city of Truth or Consequences (that’s the actual name) when it refused to share video recordings of city commission meetings on the grounds that an open records law didn’t cover the private contractor that did the recording (that ruling was overturned by an appeals court). In Indiana, a private toll road operator refused to let state troopers shut down a toll road due to a snowstorm. In Florida, the Palm Beach Post found that three out of six private prisons saved taxpayers no money. In Chicago, a 75-year contract signed in 2009 restricts the government from adding bicycle lanes or sidewalk space because of the potential impact on private parking meter profits. In New York City, a 1998 contract with the private company CityTime to oversee time records of public employees was still unfinished a dozen years later, after costing taxpayers over 10 times the expected $63 million price tag.
“The rationale has long been that work or things might be done better by bringing in outside experts,” said New York City Comptroller John Liu, whose office investigated CityTime. “In New York City, that line of thinking has largely failed … Part of it is the sheer lack of oversight over these outside consultants, who are given too much free rein and too much flexibility in the contractual agreements with the city.” Liu told Salon that the CityTime example had also shown that “the work can always be done in-house, if the will is there.”
“Desperate governments will do desperate things,” said Cohen, “and … there is a huge industry whose purpose and mission is now to walk the halls … saying, ‘We’ve got a deal for you.’” Still, he argued that public attention to the role of contracting in the Healthcare.gov and Edward Snowden stories and the trans-ideological appeal of “fiscally prudent government” offered an opportunity to effect reforms. Signs of progress cited by ITPI include the suspension of the process of Chicago airport privatization, the cancellation of a Cincinnati parking lease, and media scrutiny on a Nashville private prison contract under which taxpayers owe Corrections Corporation of America extra cash when prison occupancy falls below a contractual minimum. . .
Very interesting story in Salon by Vinnie Rotondaro:
In under a year, Pope Francis has managed to rouse and inspire Catholics across the world with his calls for a “church for the poor.” He has done this without making any changes to Church doctrine.
Last week, Francis continued his populist charge, releasing a powerful papal exhortation titled Evangelii Gaudium. The document decries economic inequality as “the result of ideologies which defend the absolute autonomy of the marketplace and financial speculation,” ideologies, like trickle-down economics, which “reject the right of states, charged with vigilance for the common good, to exercise any form of control.”
“A new tyranny is thus born,” the pope wrote, “invisible and often virtual, which unilaterally and relentlessly imposes its own laws and rules.”
Again and again, by virtue of his tone and contextual aim, Francis wins over many (including much of the mainstream press). Even non-believers and the disaffected have taken notice. But while much of his popularity can be attributed to his populist charm, there also seems to be an element of surprise in the public’s reaction to his papacy, as if the Pope’s simple, Christ-like message of love and inclusion has come as a shock to the system – as something new, unexpected.
Why? Take a look at the agenda items addressed earlier last month by the U.S. Conference of Catholic Bishops at their annual meeting in Baltimore. The bishops of the richest, most powerful, and increasingly unequal nation in the world, convening in a city wracked by generational poverty, talked about pornography, discussed contraception and gay marriage, and addressed questions of minor liturgical importance. Poverty was not on the agenda.
The image offered up was that of a place where the old-guard rules, where reactionary tsk-tskers inveigh on what people can and cannot do in their personal lives, where “liberal” political concerns are mentioned while “conservative” causes are crusaded over.
And if the whispers that some bishops “appear willing to wait out this pope,” or the election of the conference’s new chair, Archbishop Joseph Kurtz, a “smiling conservative” who signed the Manhattan Declaration and cannot seriously be seen as a reformer, are any indication, it doesn’t look likely this image will change anytime soon.
Why does Pope Francis surprise us? He surprises us because he seems unlike so much of the hierarchy he represents.
But let’s not jump the gun. A quick spin through history shows it’s not so much Francis who is unlike his Church, but his Church which is unlike its past, and in attempting to bring Catholicism back-to-the-future, as it were, it’s conceivable that the pope could trigger a significant political shift here in the U.S.A.
Throughout most of the 20th century, . . .
Amy Goodman has a good program at Democracy Now!, video and transcript. The blurb:
With hundreds of thousands of people now on the government’s terrorist watch lists, a closely watched trial begins today in San Francisco. Stanford University Ph.D. student Rahinah Ibrahim is suing the U.S. government after she was barred from flying from Malaysia back to the United States in 2005 to complete her studies at Stanford after her name was placed on the list. The New York Times reports that the federal government’s terrorist watch list, officially called the “Terrorist Screening Database,” has grown to at least 700,000 people, and those on the list are often subjected to extra scrutiny, prohibited from flying, and interrogated while attempting to cross borders. The government refuses to divulge who is on the list, how one can get off the list, and what criteria are used to place someone on the list in the first place. Oftentimes, people have no idea their name is in the database until they attempt to board a flight. We speak with Anya Bernstein, associate professor at the SUNY Buffalo Law School and author of the article, “The Hidden Costs of Terrorist Watch Lists.”
Those who (hope to) fly will find this of great interest—there are many routes to the No-Fly List.
Interesting list. From the post, by Juan Cole:
In many key ways America’s political and financial practices make it in absolute terms far more corrupt than the usual global South suspects. After all, the US economy is worth over $16 trillion a year, so in our corruption a lot more money changes hands.
1. Instead of having short, publicly-funded political campaigns with limited and/or free advertising (as a number of Western European countries do), the US has long political campaigns in which candidates are dunned big bucks for advertising. They are therefore forced to spend much of their time fundraising, which is to say, seeking bribes. All American politicians are basically on the take, though many are honorable people. They are forced into it by the system. House Majority Leader John Boehner has actually just handed out cash on the floor of the House from the tobacco industry to other representatives.
Soon after French President Nicolas Sarkozy was defeated in 2012, French police actually went into his private residence searching for an alleged $50,000 in illicit campaign contributions from the L’Oréal heiress. I thought to myself, seriously? $50,000 in a presidential campaign? Our presidential campaigns cost a billion dollars each! $50,000 is a rounding error, not a basis for police action. Why, George W. Bush took millions from arms manufacturers and then ginned up a war for them, and the police haven’t been anywhere near his house.
American politicians don’t represent “the people.” With a few honorable exceptions, they represent the 1%. American democracy is being corrupted out of existence.
2. That politicians can be bribed to reduce regulation of industries like banking (what is called “regulatory capture”) means that they will be so bribed. Billions were spent and 3,000 lobbyists employed by bankers to remove cumbersome rules in the zeroes. Thus, political corruption enabled financial corruption (in some cases legalizing it!). Without regulations and government auditing, the finance sector went wild and engaged in corrupt practices that caused the 2008 crash. Too bad the poor Afghans can’t just legislate their corruption out of existence by regularizing it, the way Wall Street did.
3. That the chief villains of the 2008 meltdown (from which 90% of Americans have not recovered) have not been prosecuted is itself a form of corruption.
4. The US military budget is bloated and enormous, bigger than the military budgets of the next twelve major states. What isn’t usually realized is that perhaps half of it is spent on outsourced services, not on the military. It is corporate welfare on a cosmic scale. I’ve seen with my own eyes how officers in the military get out and then form companies to sell things to their former colleagues still on the inside.
5. . . .
Interesting article by R.J. Eskow in Salon:
You don’t have to be an unqualified fan of the Affordable Care Act to recognize the lunacy of most Republican objections to it. From “death panels” to “a loss of liberty,” there’s only one consistent through-line to most of their objections: They come from Republicans, they’re directed at a Democratic president, and they’re irrational.
The president’s self-imposed deadline for fixing the website has arrived and, while it’s still far from perfect, the complaints are likely to become broader once again. The Republicans may not realize it, but that way lies danger.
More than once, Democrats have made the mistake of taking victory laps for a plan with very real problems. But the Republicans are setting traps for themselves – traps they may find it difficult to escape, especially if Democrats are shrewd enough to take advantage of them.
This shortsightedness already wounded them once, in the 2012 election, when candidate Mitt Romney was forced to attack a program that was nearly identical to the one that Gov. Mitt Romney implemented in Massachusetts. It looked absurd – because it was. Romney’s campaign was probably always a lost cause, but that didn’t help.
For the Republicans, there’s more where that came from.
The trouble starts with their gleeful rubbing of hands over the Healthcare.gov rollout. Gloating about the website is unwise for a couple of reasons. First, the website’s design and implementation were conducted by a private government contractor, CGI Global, not by government employees. There are many lessons to be learned from the website’s problems, but one of them clearly seems to be this: The privatization of government services, a key goal for the Republican Party, can work very poorly.
Accounts of the Obamacare implementation read like a how-to manual in inept contracting with outside corporations, and the administration deserves to take a hit for that. But the problem isn’t that government created the website. A larger part of the problem lies in the fact that it used a private contractor to do the job.
Worse, the administration chose to use a company whose specialty was not healthcare administration but “government contracting.” The fact that this is now an industry of its own, and one with enormous growth, shows just how far the privatization trend has come on the federal level.
That’s a problem. Professional government contractors know how to game the government procurement system for maximum profits, and those profit margins are added to the cost for taxpayers.
CGI Global, the all-purpose government contractor that handled the website, is a case in point. Even though the Obama administration has made a point of saying government should end no-bid contracts, this project – the most important of Obama’s presidency – was offered on a no-bid contract.
As someone who once led a company that contracted with government agencies, I can tell you that somebody “worked the system” extremely well on this one. Unfortunately, the “system” works much better for the contractors than it does for the public. Every time Republicans crow about the website’s problems, another thought should be implanting itself in the public’s mind: privatizing government services is a very bad idea.
The challenge for Republicans runs even deeper than that. They’ve been mocking the very concept behind the Obamacare exchanges. It’s a concept that made the rollout extremely difficult. The idea was that government would create an electronic “marketplace” where people could comparison-shop for health insurance. This, we were told, would keep costs down by employing market forces and competition.
This also happens to be an excellent way to describe the Republicans’ plan for Medicare. The description is still up at Rep. Paul Ryan’s website:
Beginning in 2024, for those workers born in 1959 or later, Medicare would offer them a choice of private plans competing alongside traditional fee-for-service option (sic) on a newly created Medicare Exchange (emphasis ours) … The Medicare Exchange would provide all seniors with a competitive marketplace where they could choose a plan the same way members of Congress do.
Every time the Republicans tell horror stories or make fun of the ACA’s exchange, they’re telling people that their own plan for Medicare is going to turn the most popular, cost-effective and successful health plan in the country into a tragedy – or a joke.
They’re also sabotaging their own arguments for privatizing Social Security. The plan that George W. Bush proposed in 2005 called upon the government to administer a portfolio of private investment plans on behalf of retirees. There’s still talk of reviving this GOP proposal. Rep. Paul Ryan, leading House Republican and 2012 vice-presidential candidate, continues to push a privatization scheme that even the Bush administration described as “irresponsible.”
As Obamacare goes, so goes the Social Security privatization plan.
There’s a reason why their negative characterizations of Obamacare match their own proposals so closely. As many people know, the Affordable Care Act began its life as a right-wing proposal meant to blunt the drive toward healthcare reform during the Clinton administration. Republicans loved the idea back then. They loved it when Gov. Arnold Schwarzenegger proposed something similar in California. And they loved it when future presidential candidate Mitt Romney implemented it in Massachusetts.
That’s why they’re in such a trap now. Their attacks have already trashed the credibility of the Heritage Foundation, which was a principal architect of the plan back in the Clinton years. The Heritage Foundation’s Robert Moffit brought ridicule on himself and his organization when he wrote that the ACA’s individual mandate was “an Unconstitutional Violation of Personal Liberty” that “Strikes at the Heart of American Federalism,” adding that “It is an assertion of federal power that is inherently at odds with the original vision of the Framers.”
In fact, most experts agree that the idea of the individual mandate originated with the Heritage Foundation itself in a 1989 paper that proposed that the government “mandate all households to obtain adequate insurance.” The paper by Stuart M. Butler argues that:
Many states now require passengers to wear seatbelts for their own protection. Many others require anybody driving a car to have liability insurance. But neither the federal government nor any state requires all households to protect themselves from the potentially catastrophic costs of a serious injury or illness.
The paper continues, “Under the Heritage plan, there would be such a requirement.” And in case there is still any doubt about whose plan contains this individual mandate proposal, the section of the document containing these words is titled “The Heritage Plan.”
Then there’s the issue on which Republicans have scored most heavily . . .
Michael Lind writes at Salon:
When it comes to debating conservatives about the future of the American economy, progressives are at a disadvantage. Conservatives are united behind a single vision of the ideal economy, while progressives are divided among three rival schools of economic strategy.
On economics, unlike on social issues or foreign policy, there is no distinction between conservatism and libertarianism. The mainstream right’s economic vision is libertarian, pure and simple: smaller government, lower taxes, free trade and deregulation. Add to this the goal of replacing universal, tax-financed social insurance programs such as Social Security and Medicare with means-tested vouchers to subsidize for-profit providers of retirement savings and medical insurance and medicine, and you have pretty much the whole right-wing economic program.
Nothing like that consensus exists to the left of center. Instead, three distinct and conflicting traditions of political economy coexist under the rubric of liberalism or progressivism. The three are small producerism, unionism and economic rights.
The three schools of center-left economic thought have coexisted, now as rivals and now as allies, since the early 1900s. Each has had its own constituency or social base. . .
“Doesn’t Eat, Doesn’t Pray, Doesn’t Love” begins:
The question of whether for-profit companies can claim a religious identity, one that exempts them from obeying a generally applicable law, is fully worthy of the attention the Supreme Court is about to give it. But to the extent that much of the commentary about the challenges to the Affordable Care Act’s contraception-coverage insurance mandate frames the issue as a debate about the rights of corporations – as a next step beyond Citizens United’s expansion of corporate free speech – I think it misses the point. What really makes these cases so rich, and the reason the court’s intervention will dramatically raise the temperature of the current term, lies elsewhere.
The religious-based challenges that have flooded the federal courts from coast to coast – more than 70 of them, of which the Supreme Court agreed on Tuesday to hear two – aren’t about the day-in, day-out stuff of jurisprudence under the First Amendment’s Free Exercise Clause: Sabbath observance, employment rights, tax exemptions. They are about sex.
As such, the cases open a new front in an old war. I don’t mean the overblown “war on religion” that some Catholic leaders have accused the Obama administration of waging. Nor do I mean the “war on women” that was such an effective charge last year against a bevy of egregiously foot-in-mouth Republican politicians.
I mean that this is the culture war redux – a war not on religion or on women but on modernity.
All culture wars are that, of course: the old culture in a goal-line stance against a new way of organizing society, a new culture struggling to be born. Usually, that’s pretty obvious. This time, somehow, it seems less so, maybe because the battle is being fought in the complex language of law, namely a 20-year-old law called the Religious Freedom Restoration Act.
This tendentiously named statute, aimed at overturning a 1990 Supreme Court decision that cast a skeptical eye on claims to religious exemptions from ordinary laws, provides that the government “shall not substantially burden a person’s exercise of religion” unless the burden serves a “compelling” government interest and is the “least restrictive means” of doing so.
What’s a substantial burden? What governmental interest is sufficiently compelling? And with particular respect to the two new Supreme Court cases, is a for-profit corporation a “person” that can engage in religious exercise? . . .
Very important column by Susan Stellin in the NY Times:
GOVERNMENTS wade into treacherous waters when they compile lists of people who might cause their countries harm. As fears about Japanese-Americans and Communists have demonstrated in the past, predictions about individual behavior are often inaccurate, the motivations for list-making aren’t always noble and concerns about threats are frequently overblown.
So it might seem that current efforts to identify and track potential terrorists would be approached with caution. Yet the federal government’s main terrorist watch list has grown to at least 700,000 people, with little scrutiny over how the determinations are made or the impact on those marked with the terrorist label.
“If you’ve done the paperwork correctly, then you can effectively enter someone onto the watch list,” said Anya Bernstein, an associate professor at the SUNY Buffalo Law School and author of “The Hidden Costs of Terrorist Watch Lists,” published by the Buffalo Law Review in May. “There’s no indication that agencies undertake any kind of regular retrospective review to assess how good they are at predicting the conduct they’re targeting.”
What’s more, the government refuses to confirm or deny whether someone is on the list, officially called the Terrorist Screening Database, or divulge the criteria used to make the decisions — other than to say the database includes “individuals known or suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism and terrorist activities.”
Even less is known about the secondary watch lists that are derived from the main one, including the no-fly list (used to prevent people from boarding aircraft), the selectee and expanded selectee lists (used to flag travelers for extra screening at airport checkpoints), the TECS database (used to vet people entering or leaving the United States), the Consular Lookout and Support System (used to screen visa applications) and the known or suspected terrorists list (used by law enforcement in routine police encounters).
For people who have landed on these lists, the terrorist designation has been difficult to challenge legally — although that may be about to change. On Monday, a lawsuit brought by a traveler seeking removal of her name from the no-fly list, or at least due process to challenge that list, is going to trial in Federal District Court in San Francisco, after almost eight years of legal wrangling.
In that case, a Stanford University Ph.D. student named Rahinah Ibrahim was prevented from boarding a flight at San Francisco International Airport in 2005, and was handcuffed and detained by the police. Ultimately, she was allowed to fly to Malaysia, her home country, but she has been unable to return to the United States because the State Department revoked her student visa.
According to court filings, two agents from the Federal Bureau of Investigation visited Ms. Ibrahim a week before her trip and asked about her religious activities (she is Muslim), her husband and what she might know of a Southeast Asian terrorist organization. A summary of that interview obtained by Ms. Ibrahim’s lawyer includes a code indicating that the visit was related to an international terrorism investigation, but it is not clear what other evidence — like email or phone records — was part of that inquiry.
“We’ve tried to get discovery into whether our client has been surveilled and have been shut down on that,” said Elizabeth Pipkin, a lawyer with McManis Faulkner, the firm representing Ms. Ibrahim pro bono. “They won’t answer that question for us.”
The government says that revealing this type of information would jeopardize national security. In April, Attorney General Eric H. Holder Jr. asserted to the court “a formal claim of the state secrets privilege” in the case.
In another case, Latif v. Holder, 13 American citizens who have been denied boarding on flights are seeking removal of their names from any watch list, as well as the reasons they have been banned and an opportunity to rebut any derogatory information.
“People who are accused of being enemy combatants at Guantánamo have the ability to challenge their detention, however imperfect that now is,” said Hina Shamsi, a lawyer with the American Civil Liberties Union, which is representing the plaintiffs. “It makes no sense that people who have not actually been accused of any wrongdoing can’t challenge” their inclusion on a watch list.
The Terrorist Screening Center, which administers the main terrorist watch list, declined to discuss its procedures, or to release current data about the number of people on various watch lists, and how many of them are American citizens. A T.S.C. official did say that fewer than 1 percent of the people in the main terrorist database are United States citizens or legal permanent residents, but there is no way to confirm that number.
Reports by the Government Accountability Office and other oversight agencies have raised concerns about how people are nominated to be on the terrorist watch list, the accuracy of information in the database and the effectiveness of procedures to remove or correct inaccurate records. . .
Doesn’t this keeping of lists remind one of the worst excesses of totalitarian governments: the secrecy of the lists, the secrecy of the procedures, the lack of any legal route to challenge the lists, and so on? The administration is showing more and more signs that it believes it does not have to answer to the public.
UPDATE: Also worth reading is this article on watchlists by the Airline Pilots Security Alliance. From that article:
Clearly, there are people who are reasonably suspected of terrorist ties based on reliable intelligence and who should be prohibited from boarding a commercial airliner; and, in principle, the concept of “no-fly lists” is a good one. But, in practice, and by any standard, the TSA’s management and oversight of its terror watch list program has been abysmal, missing obvious terrorists, targeting innocent Americans both deliberately and due to incompetence, and making passengers bristle at the thought of being required to provide private information to an agency with a poor record of controlling or using it in good faith, and that gives passengers little recourse if their personal information results in an erroneous determination they are a security threat.
- The TSA failed to put Osama Bin Laden on its “no-fly list” but it did add Senator Edward Kennedy to it.
- In 2004, TSA diverted a commercial airliner to forcibly remove the singer, Cat Stevens, as a potential terrorist. Cat Stevens wrote the song, “Peace Train.”
- Not long after, TSA removed a known terrorist from an airline flight about to depart the U.S. Unfortunately, the individual was in the custody of federal agents and being deported from the United States. TSA still refused to allow him to leave.
Such incompetence would be comical if it did not have such serious ramifications for innocent citizens. There are cases where professional airline pilots find themselves on terror watch lists for reasons they can’t fathom and are thus unable to earn a living because they are no longer permitted to fly commercial airliners. Pilots report their telephone calls to the TSA are, in many cases, ignored. Two pilots on terror watch lists are deputized federal officers in the Federal Flight Deck Officer Program, so they are authorized to protect their cockpits with firearms when on duty but are not permitted to fly as passengers when off duty. It is surreal.
The TSA has also announced it does not limit names placed on its “no-fly lists” to suspected terrorists. The agency also adds the names of passengers judged unruly in flight or in preparation for flight. A passenger who, in a moment of anger, snaps at a flight attendant, jokes about a bomb, or ignores the Fasten Seat Belt sign, could, ultimately, find him or herself prohibited from flying on any commercial airliner forever — with no charges filed, no guilty finding and little recourse.
Most chilling, the TSA classifies as “Sensitive to National Security” the criteria it uses to place passengers’ names on its lists in general, and the specific reasons a given passenger was put on a “no-fly” list. It is thus impossible for the passenger to refute or correct the TSA’s arbitrary decision.
- In the spring of 2005, a female passenger suggested to TSA screeners at an airport checkpoint that they wouldn’t be able to find any contraband in her bag even if she were carrying any. Her assertion is confirmed by numerous independent reports showing TSA screeners miss as much as 95% of contraband when it is deliberately hidden. In response, the TSA prohibited her from flying as a security threat, even though she had not made any threat – but it failed to remove her bag from the flight. When the flight arrived at its destination, her bag was removed, taken to a remote area, and deliberately blown apart by a demolitions team, despite the fact that the woman had never made a threat and the bag, having arrived, was obviously no longer a threat itself, if it ever had been. The woman was never charged with any violation. Whether the TSA’s action was due to vengeance against a passenger critical of the agency or unbelievable incompetence, it is clear its “no-fly” decisions are not constrained by reason or logic.
This same agency is now accessing millions of passengers’ private information to decide whether they should be permitted to fly, as part of its new “Secure Flight Program.” A Government Accountability Office report published only four months before the program would go into effect found that 9 out of the 10 measures mandated by Congress to protect innocent passengers’ information, privacy and freedom to fly had not been implemented by the TSA.
Even if the terror watch list program is someday operated by a responsible agency, its limitations are obvious: . . .
The chart above is from a very interesting piece by Arindrajit Dube in the NY Times:
During most of the 20th century, wages in the United States were set not just by employers but by a mix of market and institutional mechanisms. Supply and demand were important factors; collective bargaining and minimum wage laws also played a key role. Under Presidents Franklin D. Roosevelt and Richard M. Nixon, we even implemented more direct forms of wage controls.
These direct interventions, however, were temporary, and unions have become rare in most parts of the United States — virtually disappearing from the private sector. This leaves minimum wage policies as one of the few institutional levers for setting a wage standard. But while we can set a wage floor using policy, should we? Or should we leave it to the market and deal with any adverse consequences, like poverty and inequality, using other policies, like tax credits and transfers? These longstanding questions take on a particular urgency as wage inequality continues to grow, and as we consider specific proposals to raise the federal minimum wage — currently near a record low — and to index future increases to the cost of living.
The idea of fairness has been at the heart of wage standards since their inception. This is evident in the very name of the legislation that established the minimum wage in 1938, the Fair Labor Standards Act. When Roosevelt sent the bill to Congress, he sent along a message declaring that America should be able to provide its working men and women “a fair day’s pay for a fair day’s work.” And he tapped into a popular sentiment years earlier when he declared, “No business which depends for existence on paying less than living wages to its workers has any right to continue in this country.”
This type of concern for fairness actually runs deep in the human psyche. There is a widespread sense that it is unfair of employers to take advantage of workers who may have little recourse but to work at very low wages. For example, the economists Colin F. Camerer and Ernst Fehr have documented in numerous experimental studies that the preference for fairness in transactions is strong: individuals are often willing to sacrifice their own payoffs to punish those who are seen as acting unfairly, and such punishments activate reward-related neural circuits. People also strongly support banning transactions they see as exploitative of others — even if they think such a ban would entail some economic costs.
Of course, if most minimum wage workers were middle-class teenagers, many of us might shrug off concerns about their wages, since they are taken care of in other ways. But in reality, the low-wage work force has become older and more educated over time. In 1979, among low-wage workers earning no more than $10 an hour (adjusted for inflation), 26 percent were teenagers between 16 and 19, and 25 percent had at least some college experience. By 2011, the teenage composition had fallen to 12 percent, while over 43 percent of low-wage workers had spent at least some time in college. Even among those earning no more than the federal minimum wage of $7.25 in 2011, less than a quarter were teenagers.
Support for increasing the minimum wage stretches across the political spectrum. As Larry M. Bartels, a political scientist at Vanderbilt, shows in his book “Unequal Democracy,” support in surveys for increasing the minimum wage averaged between 60 and 70 percent between 1965 and 1975. As the minimum wage eroded relative to other wages and the cost of living, and inequality soared, Mr. Bartels found that the level of support rose to about 80 percent. He also demonstrates that reminding the respondents about possible negative consequences like job losses or price increases does not substantially diminish their support.
These patterns show up in recent survey data as well, as over three-quarters of Americans, including a solid majority of Republicans, say they support raising the minimum wage to either $9 or $10.10 an hour. It is therefore not a surprise that when they have been given a choice, voters in red and blue states alike have consistently supported, by wide margins, initiatives to raise the minimum wage. In 2004, 71 percent of Florida voters opted to raise and inflation-index the minimum wage, which today stands at $7.79 per hour. That same year, 68 percent of Nevadans voted to raise and index their minimum wage, which is now $8.25 for employees without health benefits. Since 1998, 10 states have put minimum wage increases on the ballot; voters have approved them every time.
But the popularity of minimum wages has not translated into legislative success on the federal level. Interest group pressure — especially from the restaurant lobby — has been one factor. Ironically, the very popularity of minimum wages may also have contributed to the failure to automatically index the minimum wage to inflation: Democratic legislators often prefer to increase the wage themselves since it allows them to win more political points. While 11 states currently index the minimum wage, only one, Vermont, did so legislatively; the rest were through ballot measures.
As a result of legislative inaction, inflation-adjusted minimum wages in the United States have declined in both absolute and relative terms for most of the past four decades. The high-water mark for the minimum wage was 1968, when it stood at $10.60 an hour in today’s dollars, or 55 percent of the median full-time wage. In contrast, the current federal minimum wage is $7.25 an hour, constituting 37 percent of the median full-time wage. In other words, if we want to get the minimum wage back to 55 percent of the median full-time wage, we would need to raise it to $10.78 an hour. . .
Using what he sees as conservative principles to advocate a policy long championed by the left, Mr. Unz argues that significantly raising the minimum wage would help curb government spending on social services, strengthen the economy and make more jobs attractive to American-born workers.
“There are so many very low-wage workers, and we pay for huge social welfare programs for them,” he said in an interview. “This would save something on the order of tens of billions of dollars. Doesn’t it make more sense for employers to pay their workers than the government?”
The comments to Dube’s article are quite interesting. Here’s one from “Greg” in Massachusetts:
Another useful datapoint is to compare a full-time job at minimum wage to the per-capita GDP. In the period 1938-1978, a full-time minimum wage job yielded, on the average, 60% of per-capita GDP. That fraction is now less than 30%. Thus, doubling the minimum wage to about $15/hr brings us back to a sustainable historic norm.
Of course, we will be told that unemployment will skyrocket. Given that the minimum wage has been increased 20 times over its history, including an increase of 87% in 1950, there should be plenty of empirical evidence to support this hypothesis if it is true–but we are never given any such evidence. Instead, we are told “Econ 101!” and “It stands to reason!” When the empirical evidence fails to support the theory, rethink the theory.
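Greg’s arithmetic, and Dube’s, is easy to sanity-check. Below is a quick back-of-the-envelope sketch in Python; the roughly $53,000 per-capita GDP figure and the 2,000-hour full-time work year are my own rough assumptions, while the wage figures and percentages come from the article and the comment above.

# Back-of-the-envelope check of the minimum-wage figures quoted above.
# The per-capita GDP (~$53,000 for 2013) and 2,000 hours/year are assumptions;
# the other numbers come from Dube's article and Greg's comment.

FEDERAL_MIN_WAGE = 7.25          # dollars per hour (2013)
MIN_WAGE_SHARE_OF_MEDIAN = 0.37  # article: 37% of the median full-time wage
TARGET_SHARE_OF_MEDIAN = 0.55    # the 1968 high-water mark cited in the article

# Implied median full-time wage, and the minimum wage that restores 55% of it.
median_wage = FEDERAL_MIN_WAGE / MIN_WAGE_SHARE_OF_MEDIAN   # ~ $19.59/hour
restored_min = TARGET_SHARE_OF_MEDIAN * median_wage         # ~ $10.78/hour

# Greg's per-capita-GDP comparison, assuming a 2,000-hour full-time year.
HOURS_PER_YEAR = 2000
PER_CAPITA_GDP = 53_000          # assumed rough 2013 figure
full_time_min_income = FEDERAL_MIN_WAGE * HOURS_PER_YEAR    # ~ $14,500/year
share_of_gdp = full_time_min_income / PER_CAPITA_GDP        # ~ 27%
share_at_15 = 15 * HOURS_PER_YEAR / PER_CAPITA_GDP          # ~ 57%

print(f"Implied median full-time wage: ${median_wage:.2f}/hour")
print(f"Minimum wage at 55% of the median: ${restored_min:.2f}/hour")
print(f"Full-time minimum-wage income as a share of per-capita GDP: {share_of_gdp:.0%}")
print(f"Same share at $15/hour: {share_at_15:.0%}")

Run as-is, this reproduces the $10.78 figure from Dube’s article and puts a full-time minimum-wage income at a bit over a quarter of per-capita GDP, consistent with Greg’s “less than 30%”; at $15 an hour it comes back up toward the roughly 60% historical norm he cites.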