Archive for April 2nd, 2012
Interesting report in the NY Times by Gina Kolata:
If every aspect of a person’s DNA is known, would it be possible to predict the diseases in that person’s future? And could that knowledge be used to forestall the otherwise inevitable?
The answer, according to a new study of twins, is, for the most part, “no.”
While sequencing the entire DNA of individuals is proving fantastically useful in understanding diseases and finding new treatments, it is not a method that will, for the most part, predict a person’s medical future.
So, the new study concludes, it is not going to be possible to say that, for example, Type 2 diabetes will occur with absolute certainty unless a person keeps a normal weight, or that colon cancer is a foregone conclusion without frequent screening and removal of polyps. Conversely, it will not be possible to tell some people that they can ignore all the advice about, for example, preventing a heart attack because they will never get one.
“The punch line is that this sort of personalized medicine will not in any way be the most important determinant of patient care,” said Dr. Bert Vogelstein of Johns Hopkins, who, with his colleagues and his son Joshua, analyzed the power of sequencing all of a person’s DNA to determine an individual’s risk of disease. The study, published online Monday in the journal Science Translational Medicine, involved data from 53,666 identical twins in registries from the United States, Sweden, Finland, Denmark and Norway. The registries included data on 24 diseases, telling how often one twin, both or neither got a disease.
Since identical twins share all of their genes, the investigators could ask to what extent genes predict an increased chance of getting a disease. Using a mathematical model, they reached an answer: not much. Most people will be at average risk for most of the 24 diseases.
They asked: Would those who ultimately got one of the 24 diseases have been forewarned by DNA sequencing? “Unfortunately, it tells them they are at roughly the same risk as the general population,” said Dr. Vogelstein.
The researchers also asked . . .
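The registry summary the study relies on (how often one twin, both, or neither got a disease) boils down to a simple concordance calculation. A rough sketch with invented counts, not the study's data:

```python
# Invented counts for one disease in a hypothetical twin registry:
# pairs in which both twins were affected, only one was, or neither was.
both, one, neither = 120, 480, 9400
total_pairs = both + one + neither

# Probandwise concordance: given that one twin is affected,
# the probability that the co-twin is affected as well.
concordance = 2 * both / (2 * both + one)

# Population risk: the fraction of all individuals who are affected.
population_risk = (2 * both + one) / (2 * total_pairs)

print(f"concordance = {concordance:.3f}, population risk = {population_risk:.3f}")
```

In this toy example an affected person's identical twin faces roughly nine times the population risk, yet the absolute risk is still only about one in three — which illustrates how genes can matter in aggregate while predicting little for any given individual.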
This seems quite consistent with the US’s slide into an authoritarian (if not totalitarian) government. Reported by Adam Liptak in the NY Times:
The Supreme Court on Monday ruled by a 5-to-4 vote that officials may strip-search people arrested for any offense, however minor, before admitting them to jails even if the officials have no reason to suspect the presence of contraband.
Justice Anthony M. Kennedy, joined by the court’s conservative wing, wrote that courts are in no position to second-guess the judgments of correctional officials who must consider not only the possibility of smuggled weapons and drugs but also public health and information about gang affiliations.
About 13 million people are admitted each year to the nation’s jails, Justice Kennedy wrote.
Under Monday’s ruling, he wrote, “every detainee who will be admitted to the general population may be required to undergo a close visual inspection while undressed.”
Justice Stephen G. Breyer, writing for the four dissenters, said strip-searches were “a serious affront to human dignity and to individual privacy” and should be used only when there was good reason to do so.
The decision endorses a more recent trend, from appeals courts in Atlanta, San Francisco and Philadelphia, allowing searches no matter how minor the charge. Some potential examples cited by dissenting judges in the lower courts and by Justice Breyer on Monday included violating a leash law, driving without a license and failing to pay child support.
The Supreme Court case arose from the arrest of Albert W. Florence in New Jersey in 2005. Mr. Florence was in the passenger seat of his BMW when a state trooper pulled his wife, April, over for speeding. A records search revealed an outstanding warrant based on an unpaid fine. (The information was wrong; the fine had been paid.)
Mr. Florence was held for a week in jails in two counties, and he was strip-searched twice. There is some dispute about the details but general agreement that he was made to stand naked in front of a guard who required him to move intimate parts of his body. The guards did not touch him. . .
Continue reading. Obviously, it could be you. And note that actually paying your fines is not necessarily going to protect you.
It’s so strange to see the country moving so rapidly in the direction it’s going and so many who not only don’t mind but actually support increasingly aggressive actions by police and government against … well, against us: the citizens.
The current ruling continues the dismantling of the 4th Amendment. Already the government has granted itself the power to seize your laptop and cellphone, without cause or suspicion, and keep them as long as it wants and require you to provide passwords. (This is currently done primarily at borders, but I’m sure it will move into common practice.)
Text of the 4th Amendment, which is no longer observed:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
It took a big hit, of course, when George W. Bush authorized the NSA to track and record all electronic communications (internet and telephone) without a search warrant or indeed any cause: just listen in on all American citizens, and of course Barack Obama has continued the practice.
UPDATE: Interesting: The British public apparently will not stand for the constant government surveillance of their communications that is now practiced in the US. Alan Cowell reports in the NY Times:
British lawmakers and rights activists joined a chorus of protest Monday against plans by the government to give the intelligence and security services the ability to monitor the phone calls, e-mails, text messages and Internet use of every person in the country.
In a land where tens of thousands of surveillance cameras attest to claims by privacy advocates that Britain is the Western world’s most closely monitored society, the proposal has touched raw nerves, compounding arguments that its citizens live under what critics call an increasingly intrusive “nanny state.”
The debate in recent years has pitted those who justify greater scrutiny by reference to threats of terrorism and organized crime against those who cleave to more traditional notions of individual privacy.
But the current proposal would go a step further, raising the question of how security agencies can themselves keep track of a proliferation of newer technologies such as Skype, instant messaging and social networking sites that permit instant communication outside more traditional channels.
“What we do need to make sure is that, as technology changes, we are able to maintain our current capability in this area,” a spokesman for Prime Minister David Cameron said, speaking in return for anonymity under departmental rules.
The Home Office said the new measures were vital to provide police and security services with “communications data to investigate serious crime and terrorism and to protect the public.”
Under the proposal, reported in The Sunday Times of London, a law to be introduced this year would empower the authorities to order Internet companies to install hardware enabling the government’s monitoring agency, known by its initials, GCHQ, to examine individual communications without a warrant.
A similar effort to enhance the authorities’ powers was made by the previous Labour government in 2006, but it was abandoned after ferocious opposition, including from the two parties that now form the coalition government — the dominant Conservatives and the smaller Liberal Democrats — which are now reintroducing the same legislation.
Currently, government eavesdroppers and police need a warrant to monitor specific communications. But the new system would permit the authorities to track communications data like “time, duration and dialing numbers of a phone call or an e-mail address,” the Home Office said in a statement. . . .
Continue reading. Not quite at the point the US has achieved, where the content is also stored and kept (so that the NSA can look for suspicious words and phrases), but getting there.
Living alone seems to be an increasing trend. Eric Klinenberg has an interesting article on the phenomenon in The Guardian. From the article:
. . . Contemporary solo dwellers in the US are primarily women: about 18 million, compared with 14 million men. The majority, more than 16 million, are middle-aged adults between the ages of 35 and 64. The elderly account for about 11 million of the total. Young adults between 18 and 34 number more than 5 million, compared with 500,000 in 1950, making them the fastest-growing segment of the solo-dwelling population. Unlike their predecessors, people who live alone today cluster together in metropolitan areas.
Sweden has more solo dwellers than anywhere else in the world, with 47% of households having one resident, followed by Norway at 40%. In the Scandinavian countries, the welfare state protects most citizens from the more difficult aspects of living alone. In Japan, where social life has historically been organised around the family, about 30% of all households have a single dweller, and the rate is far higher in urban areas. The Netherlands and Germany have a greater proportion of one-person households than the UK. And the nations with the fastest growth in one-person households? China, India and Brazil.
But despite the worldwide prevalence, living alone isn’t really discussed, or understood. We aspire to get our own places as young adults, but fret about whether it’s all right to stay that way, even if we enjoy it. We worry about friends and family members who haven’t found the right match, even if they insist that they’re OK on their own. We struggle to support elderly parents and grandparents who find themselves living alone after losing a spouse, but we are puzzled if they tell us they prefer to remain alone.
In all of these situations, living alone is something that each person, or family, experiences as the most private of matters, when in fact it is an increasingly common condition.
When there is a public debate about the rise of living alone, commentators present it as a sign of fragmentation. In fact, the reality of this great social experiment is far more interesting – and far less isolating – than these conversations would have us believe. The rise of living alone has been a transformative social experience. It changes the way we understand ourselves and our most intimate relationships. It shapes the way we build our cities and develop our economies.
So what is driving it? . . .
AI researchers are beginning to move away from trying to embed human-style intelligence in machines and are now working to see what sorts of machine-style intelligences they can bring about. Marcus du Sautoy writes in The Guardian:
‘I propose to consider the question “Can machines think?”’ Not my question but the opening of Alan Turing’s seminal 1950 paper, which is generally regarded as the catalyst for the modern quest to create artificial intelligence. His question was inspired by a book he had been given at the age of 10: Natural Wonders Every Child Should Know by Edwin Tenney Brewster. The book was packed with nuggets that fired the young Turing’s imagination, including the following provocative statement:
“Of course the body is a machine. It is vastly complex, many times more complicated than any machine ever made with hands; but still after all a machine. It has been likened to a steam machine. But that was before we knew as much about the way it works as we know now. It really is a gas engine; like the engine of an automobile, a motor boat or a flying machine.”
If the body were a machine, Turing wondered: is it possible to artificially create such a contraption that could think like he did? This year is Turing’s centenary so would he be impressed or disappointed at the state of artificial intelligence? Do the extraordinary machines we’ve built since Turing’s paper get close to human intelligence? Can we bypass millions of years of evolution to create something to rival the power of the 1.5kg of grey matter contained between our ears? How do we actually quantify human intelligence to be able to say that we have succeeded in Turing’s dream? Or is the search to recreate “us” a red herring? Should we instead be looking to create a new sort of machine intelligence different from our own?
Last year saw one of the major landmarks on the way to creating artificial intelligence. Scientists at IBM programmed a computer called Watson to compete against the best the human race has to offer in one of America’s most successful game shows: Jeopardy! It might at first seem a trivial target to create a machine to compete in a general knowledge quiz. But answering questions such as: “William Wilkinson’s An account of the principalities of Wallachia and Moldavia inspired this author’s most famous novel” requires a very sophisticated piece of programming that can return the answer quickly enough to beat your rival to the buzzer. This was in fact the final question in the face-off with the two all-time champions of the game show. With the answer “Who is Bram Stoker?” Watson claimed the Jeopardy! crown.
Watson is not IBM’s first winner. . .
Continue reading. Some interesting links, and toward the end, this:
. . . For me one of the most striking experiments in AI is the brainchild of the director of the Sony lab in Paris, Luc Steels. He has created machines that can evolve their own language. A population of 20 robots is first placed one by one in front of a mirror, and each begins to explore the shapes it can make with its body. Each time a robot makes a shape, it creates a new word to denote that shape: for example, a robot might coin a word for the action of putting the left arm in a horizontal position. Each robot thus creates its own unique language for its own actions.
The really exciting part is when these robots begin to interact with each other. One robot chooses a word from its lexicon and asks another robot to perform the action corresponding to that word. Of course the likelihood is that the second robot hasn’t a clue. So it chooses one of its positions as a guess. If they’ve guessed correctly the first robot confirms this and if not shows the second robot the intended position.
The second robot might have given the action its own name, so it won’t yet abandon its choice, but it will update its dictionary to include the first robot’s word. As the interactions progress the robots weight their words according to how successful their communication has been, downgrading those words where the interaction failed. The extraordinary thing is that after a week of the robot group interacting with each other a common language tends to emerge. By continually updating and learning, the robots have evolved their own language. It is a language that turns out to be sophisticated enough to include words that represent the concept of “left” and “right”. These words evolve on top of the direct correspondence between word and body position. The fact that there is any convergence at all is exciting but the really striking fact for me is that these robots have a new language that they understand yet the researchers at the end of the week do not comprehend until they too have interacted and decoded the meaning of these new words. . .
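The protocol described above can be sketched as a simple "naming game" simulation. This is my own minimal reconstruction, not Steels's actual code; the action names, population size, and weighting scheme (+1 on success, -0.5 on failure, 0.5 for a newly adopted word) are all illustrative assumptions:

```python
import random

ACTIONS = ["left_arm_up", "right_arm_up", "both_arms_out"]

class Robot:
    def __init__(self):
        # each robot starts by inventing its own private word for every action
        self.lexicon = {a: {self._new_word(): 1.0} for a in ACTIONS}

    @staticmethod
    def _new_word():
        return "".join(random.choice("aeioukmnpst") for _ in range(4))

    def word_for(self, action):
        # speak the highest-weighted word; reinvent if all words were dropped
        if not self.lexicon[action]:
            self.lexicon[action][self._new_word()] = 1.0
        return max(self.lexicon[action], key=self.lexicon[action].get)

    def guess(self, word):
        # guess which action a word names; unknown words get a random guess
        best, best_weight = random.choice(ACTIONS), 0.0
        for action, words in self.lexicon.items():
            if words.get(word, 0.0) > best_weight:
                best, best_weight = action, words[word]
        return best

    def update(self, action, word, delta):
        # reinforce a word that worked, weaken and eventually drop one that failed
        words = self.lexicon[action]
        words[word] = words.get(word, 0.0) + delta
        if words[word] <= 0:
            del words[word]

    def adopt(self, action, word):
        # the hearer keeps its own word but records the speaker's word too
        self.lexicon[action].setdefault(word, 0.5)

def play(robots, rounds=5000):
    for _ in range(rounds):
        speaker, hearer = random.sample(robots, 2)
        action = random.choice(ACTIONS)
        word = speaker.word_for(action)
        if hearer.guess(word) == action:
            speaker.update(action, word, +1.0)
            hearer.update(action, word, +1.0)
        else:
            speaker.update(action, word, -0.5)  # downgrade the failed word
            hearer.adopt(action, word)          # learn the speaker's word

random.seed(0)
robots = [Robot() for _ in range(20)]
play(robots)
# after many rounds the group tends toward one dominant word per action
dominant = {a: {r.word_for(a) for r in robots} for a in ACTIONS}
print({a: len(words) for a, words in dominant.items()})
```

Running it for a few thousand rounds typically leaves each action with only one or two dominant words across the whole group, mirroring the convergence the article describes, even though no robot ever sees another's dictionary directly.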
Another interview from The Browser, and extremely interesting and informative:
In a recent post on your website, you said there was “moral rot” in America. And you say: “It’s located in the public behaviour of people who control our economy and are turning our democracy into a financial slush pump.” Can you expand on this?
An economy depends fundamentally on public morality; some shared standards about what sorts of activities are impermissible because they so fundamentally violate trust that they threaten to undermine the social fabric. Without trust it has to depend upon such complex contracts and such weighty enforcement systems that it would crumble under its own weight. What we’ve seen over the last two decades in the United States is a steady decline in the willingness of people in leading positions in the private sector – on Wall Street and in large corporations especially – to maintain those minimum standards. The new rule has become making the highest profits possible regardless of the social consequences.
In the first three decades after World War II – partly because America went through that terrible war and also experienced before that the Great Depression – there was a sense in the business community and on Wall Street of some degree of social responsibility. It wasn’t talked about as social responsibility, because it was assumed to be a bedrock of how people with great economic power should behave. CEOs did not earn more than 40 times what the typical worker earned. Rarely were there mass layoffs by profitable firms. The marginal income tax on the highest income earners in the 1950s was 91%. Even the effective rate, after all deductions and tax credits, was still well above 50%. The game was not played in a cutthroat way. In fact, consumers, workers, the community, were all considered stakeholders of almost equal entitlement as shareholders.
Around about the late 1970s and early 1980s, all of this changed quite dramatically. The change began on Wall Street. Wall Street convinced the Reagan administration, and subsequent administrations and congresses, to deregulate and to undermine the set of regulations that were put in place after the crash of 1929 – particularly during the Roosevelt administration – to prevent a repeat of the excesses of the 1920s. As a result of that move towards deregulation, we saw a steady decline in standards – a kind of race to the bottom – on Wall Street and then in executive suites. In the 1980s we had junk bond scandals combined with insider trading. In the 1990s we had the beginnings of a speculative binge culminating in the dotcom bubble. Sad to say, under the Clinton administration the Glass-Steagall Act – that had been part of the banking act of 1933, separating investment banking from commercial banking – was repealed. In 2001 and 2002 we had Enron and the corporate looting scandals. Not only did this reveal the dark side of executive behaviour among some of the most admired companies in America – Enron had been listed among the nation’s most respected companies before that time – but also the complicity of Wall Street. Wall Street traders were actively involved in the Enron travesty. And then, of course, we had all of the excesses leading up to the crash of 2008.
Where has the moral centre of American capitalism disappeared? It’s ironic that at a time the Republican presidential candidates and state legislators are furiously focusing on private morality – what people do in their bedrooms, contraception, abortion, gay marriage – we have this far more significant crisis in morality. Wall Street is back to its same old tricks. Last week, Greg Smith, a vice-president of Goldman Sachs, accused the firm of putting profits before clients. What else is new? Almost every other Wall Street firm is doing precisely the same thing and they’ve been doing it for years.
Having identified the problem with American capitalism today, what’s the solution? You have said elsewhere that progressives need to save capitalism from its own excesses, but there has been a Democrat in the White House since 2009 and few signs of major reform. Who’s going to tackle this “moral rot”?
It hasn’t happened. The Dodd-Frank bill was an attempt to rein in Wall Street, but Wall Street lobbyists have almost eviscerated that act and have been mercilessly attacking the regulations issued. Republicans have not even appropriated sufficient money to enforce the shards of the act that remain.
The Glass-Steagall Act has to be resurrected. There has to be a limit on the size of big banks. The current big banks have to be broken up using anti-trust laws, as we broke up the oil cartels in the early years of the 20th century. We’ve got to put limits on executive pay and have a much more progressive income tax so that people who are earning tens if not hundreds of millions of dollars a year are paying at a rate that they paid before 1981, which is at least 70% at the highest marginal level. We also need to get money out of politics.
But how are these changes going to come about while the very wealthy can so effectively control and manipulate the political process? . . .
Continue reading. This is important stuff because it points out exactly how the US is being destroyed.
Extremely interesting article in Salon by Linda McQuaig and Neil Brooks; it’s simply an excerpt from their book Billionaires’ Ball: Gluttony and Hubris in an Age of Epic Inequality, which I’m definitely going to read:
There are many words that could be used to describe Barack Obama, but one adjective decidedly doesn’t fit: Aggressive. So it was more than passing strange when a prominent member of Wall Street — Stephen Schwarzman, chairman of the private equity giant Blackstone Group — compared actions by President Obama to one of the most notoriously aggressive acts by one of history’s most aggressive villains. Speaking to the board of a nonprofit group, Schwarzman fiercely denounced initiatives by the Obama administration: “It’s war. It’s like when Hitler invaded Poland in 1939.”
In the arena of political commentary, few things are considered more clearly below-the-belt than comparing an opponent to Hitler. So there was a small stir in August 2010 when it was reported that Schwarzman — whom Time magazine had included on its 100 most influential people list only three years earlier — had likened Obama to the Nazi strongman. Schwarzman acknowledged making the remark and then apologized for it, while reaffirming the sentiment behind it. But what was striking about the Hitler comment — besides its sheer viciousness and absurdity — was what had provoked it. Schwarzman wasn’t complaining about undue military force, torture, or ethnic cleansing. He was likening the president to the most reviled man in history on the grounds that Obama was trying to close a tax loophole that allowed hedge fund and private equity managers (like Schwarzman) to pay tax at a rate that Warren Buffett famously noted was lower than that paid by their secretaries.
In an era marked by gluttony and hubris, Steve Schwarzman has still managed to stand out.
His 60th birthday party in Manhattan in 2007 was so lavish — with live performances by Rod Stewart and Martin Short — it became Wall Street legend. Then there’s Schwarzman’s 35-room Park Avenue residence, his sprawling estate in Saint-Tropez, a spectacular spread in Jamaica, and his massive Palm Beach estate, where the executive chef says it typically costs about $3,000 a weekend to feed just Schwarzman and his wife.
Schwarzman is a major figure in private equity, part of the surging field of “alternative asset” financial institutions that, along with hedge and real estate funds, appeared on the horizon two decades ago and now control trillions of dollars in assets. While hedge funds are well-known for contributing to the subprime mortgage crash, private equity funds are notorious for taking over established firms with borrowed money and essentially pillaging them. The bought-out companies are typically saddled with increased debt from the takeover and forced to make massive dividend and fee payouts to the private equity managers and their investors, while employees are shed and union contracts gutted. The companies are usually chopped up into smaller pieces and sold soon afterwards at inflated prices, creating another windfall for the private equity managers. By 2007, the Blackstone Group had taken control of more than 112 companies worth nearly $200 billion. In 2011, Schwarzman ranked 169th on Forbes’ worldwide billionaire list, worth an estimated $5.9 billion.
Schwarzman may be rougher at the edges than most of the hedge fund and private equity crowd. But his outburst against Obama reminds us of the “war” he and others — by themselves or by proxies — have been engaged in to minimize their contribution to the public treasury. It’s an all-too-familiar tale of how effective the rich are at getting their way, even when the battle is being played out in a very public arena where a small group of billionaires advancing their own self-interest would seem a very tough sell.
* * *
Victor Fleischer didn’t set out to be a 21st-century Robin Hood. His real aim was just to get tenure.
Fleischer joined the New York law firm Davis Polk in the late 1990s, working on the formations of private equity and venture capital funds. He was struck by the very low rates of tax paid by fund managers, even compared to the already low tax rates being paid by executives receiving corporate stock options. Fleischer wasn’t discovering something new; the rules had been in place since 1954. Nor was he outraged or even particularly interested in the question of tax fairness. At the time, he was simply interested in the impact that the tax rules governing so-called “carried interest” might have on the law firm’s clients.
The question stayed in Fleischer’s mind after he left Davis Polk in 2001 and became a law professor specializing in taxation. Hoping to get a paper published to improve his chances of securing tenure, Fleischer put together his thoughts on the taxation of private equity funds. Now that he was no longer constrained by working for people in the private equity field, he started to pay attention to what seemed to him to be a “quirk” in the law that distorted tax principles while undermining distributive justice.
He identified the fact that managers of private equity, venture capital and hedge funds were claiming a significant part of their incomes as capital gains (taxed at 15 percent), rather than treating them as regular income (taxed at 35 percent). That substantial difference in rates was magnified by the enormity of the incomes in question. A private equity manager receiving, say, $600 million as a capital gain would pay $90 million in tax. If the same income were treated as income from salary, it would be taxed at 35 percent (and also be subject to a 2.9 percent payroll tax), bringing the private equity manager’s tax bill to $227.4 million — almost $140 million more.
The ostensible purpose of the lower capital gains rate is to compensate investors for the risk they take in investing their capital. But private equity and fund managers aren’t investing their own capital. They’re investing other people’s capital. They’re simply money managers. By claiming capital gains treatment, they are passing off regular income as capital gains, simply to save themselves taxes. . .
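The arithmetic in that example checks out; here is a quick sketch using the rates quoted in the excerpt (15% capital gains versus 35% ordinary income plus a 2.9% payroll tax):

```python
gain = 600_000_000  # the excerpt's example payout to a fund manager

# treated as a capital gain: 15% rate
capital_gains_tax = gain * 0.15

# treated as ordinary salary income: 35% rate plus 2.9% payroll tax
ordinary_tax = gain * (0.35 + 0.029)

difference = ordinary_tax - capital_gains_tax
print(f"capital gains: ${capital_gains_tax/1e6:.1f}M, "
      f"ordinary income: ${ordinary_tax/1e6:.1f}M, "
      f"difference: ${difference/1e6:.1f}M")
# difference works out to $137.4M, the excerpt's "almost $140 million more"
```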
The outsourcing mania continues apace, with disastrous results all too frequently. Thomas Gammeltoft-Hansen has an op-ed in the NY Times on one sort of outsourcing:
ON Oct. 12, 2010, Jimmy Mubenga was deported from Britain. The 46-year-old Angolan had come to the country as a refugee 16 years earlier. But after his involvement in a pub brawl and a subsequent criminal conviction, the government ordered his deportation. Three private security guards escorted him through Heathrow Airport and onto British Airways Flight 77 to Luanda, Angola. The exact details of what followed are still unclear and currently subject to criminal investigation.
Several passengers onboard the plane reported that Mr. Mubenga repeatedly complained that he could not breathe and that he was being held down with his head between his knees by security guards. As the airplane taxied to the runway in London, Mr. Mubenga lost consciousness and later died.
Immigration control has traditionally been viewed as an inalienable sovereign function of the state. But today migration management has increasingly been taken over by private contractors. Proponents of privatization have been keen to argue that the use of contractors does not mean that governments lose control. Yet, privatization introduces a corporate veil that blurs both public oversight and legal accountability.
Despite efforts to introduce outside supervisors, performance reports and other monitoring mechanisms, the private nature of these companies breaks the ordinary administrative chain of command, placing both governments and the public at a disadvantage in terms of ensuring transparency.
Private companies seldom have an interest in securing public oversight, as any criticism may entail negative economic consequences. Australasian Correctional Management, which ran detention centers in Australia from 1998 to 2004, was known to require medical staff members or teachers entering its facilities to sign confidentiality agreements preventing them from disclosing any information regarding detainees or the administration of the centers. As foreigners, migrants and refugees have always had a hard time gaining access to outside complaint mechanisms and advocacy institutions. As an employee in charge of reviewing disciplinary cases at a Corrections Corporation of America facility in Houston once told a reporter from this paper, “I’m the Supreme Court.”
The corporate veil also distorts lines of legal responsibility. Human rights law is largely designed on the presumption that it is states and not private companies that exercise sovereign powers like detention or border control. Legally holding governments accountable for human rights violations by contractors requires an additional step showing that it is the state and not just the corporation or individual employee that is responsible for the misconduct.
Mr. Mubenga’s case is not unique. Numerous reports have been filed about misconduct, violence and abuse perpetrated by contractors carrying out migration functions. The three security guards responsible for deporting Mr. Mubenga worked for the Anglo-Danish security company G4S. Before Mr. Mubenga’s death, G4S held the exclusive contract with the U.K. Border Agency to provide escorts for immigration detainees deported from the country. The firm subsequently lost this contract, but this didn’t end its involvement in managing migration.
As the world’s largest security company with more than 650,000 employees, . . .
Continue reading. You can see the direction we’re heading as the government more and more falls under the control of big businesses.
Charles Floerster has an interesting article at Nation of Change:
There is a great controversy raging between good and evil at the soda fountain, but the general public does not seem to be aware of it. The consequences of the dietary changes that have occurred since the introduction of artificial sweeteners into our lives have grown dramatically, though in many cases silently. Meanwhile, countless medical doctors, scientists and researchers are aware of the effects, but their voices are largely drowned out by the industry.
As in politics and other endeavors where mind-control plays a prominent role, advertising and propaganda are the most effective tools of those who are pitching a program or a product. The large infusions of cash may serve to cover corporate wrongdoing, agency corruption, incompetence or plain carelessness, but usually the profit motive is the driving force.
Ethical business practices should promote periodic reviews but it appears that the only reviews have been of the annual reports. Why have we not had an on-going agency study or review for the last 25 years or so? Could it be that we are so taken in by the all encompassing custodial nature of total government that we have lost the ability to think and act on matters that concern our most vital possession, our health?
On a personal level there is no in-between on diet drinks: either you like them or you hate them. No matter what the Food and Drug Administration (FDA) says or what any critical medical study shows, people are fiercely loyal to their diet drinks.
There are unimaginable numbers of other products that contain aspartame besides diet drinks, but these products do not generate the intense loyalty that the fizzy cola thirst-quenchers do. Included in these unnecessarily altered products are medicines, toothpaste, yogurt, baked goods and other specialty drinks. Commercialism forges ahead of good science, and another man-made substance of questionable value has been added to the food chain. The detractors don’t buy it, but those addicted purchase it with an irrational compulsion. Like those with a narcotic habit, they don’t seem to mind paying to satisfy the craving. And pay they do, to the tune of billions of dollars a year.
Just what is this magical potion, aspartame? It is the methyl ester of two amino acids, phenylalanine and aspartic acid; in the body it breaks down into those amino acids and methanol. In 1965, James Schlatter, a chemist working at G.D. Searle, discovered the substance quite by accident while working on a drug for another medical purpose. It was found to be many, many times sweeter than sugar but without the calories.
The chemistry of aspartame as it breaks down in the human body is well documented, but acceptance of the science depends on one’s alliance with the industry or with the skeptic side. Regardless of one’s position on the subject, metabolism of aspartame in the human body, and its after-effects or lack thereof, continues to be an intensely controversial subject.
The National Institutes of Health (NIH) describes a metabolite as any substance produced during metabolism (digestion or other bodily chemical processes). In medical terms, a metabolite usually refers to the product that remains after a drug is broken down (metabolized) by the body.
Virtually all industry-funded studies discount any adverse effects of aspartame metabolites. In typical “friendly clinical reviews” of aspartame toxicity, the authors will most likely turn out to be closely tied to the producers of aspartame. Conversely, and almost without fail, independent studies claim serious and deleterious consequences as a result of aspartame consumption.
A similar parallel could be drawn from the life-cycle of a popular non-food product. The Model 35 Beech Bonanza airplane . . .
Continue reading. One theory about the decline of the Roman Empire blames the use of lead pipes for drinking water. (“Plumber,” in fact, derives from plumbum, the Latin word for lead.) Lead poisoning leads to mental problems, including an inclination to violence. (The marked decline of violence in the US has been attributed to the removal of lead additives from gasoline, and thus from the environment.) The problem with poisons in the general environment is that they tend to affect pretty much everyone, so you don’t see a contrast between the healthy and the sick.
I think everyone knows how to divide (say) a piece of cake fairly: A cuts the piece in half, B gets first choice. Problem solved.
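That cut-and-choose procedure can be sketched as a toy program. The valuations below, and the discretization of the cake into slices, are purely illustrative assumptions; the point is that each player ends up with at least half the cake by their own reckoning:

```python
def divide_and_choose(cutter_vals, chooser_vals):
    """Classic cut-and-choose. Each list gives a player's subjective value
    for each small slice of the cake. Returns (cutter_share, chooser_share),
    each measured in that player's own terms."""
    total = sum(cutter_vals)
    # The cutter picks the cut point that makes the two pieces as equal
    # as possible in the cutter's own eyes.
    best_cut, best_gap = 0, float("inf")
    for i in range(len(cutter_vals) + 1):
        left = sum(cutter_vals[:i])
        gap = abs(left - (total - left))
        if gap < best_gap:
            best_cut, best_gap = i, gap
    # The chooser takes whichever piece the chooser values more.
    left_c = sum(chooser_vals[:best_cut])
    right_c = sum(chooser_vals[best_cut:])
    if left_c >= right_c:
        return sum(cutter_vals[best_cut:]), left_c
    return sum(cutter_vals[:best_cut]), right_c

# Hypothetical valuations: the chooser prizes the frosted end.
cutter_share, chooser_share = divide_and_choose([1, 1, 1, 1], [3, 1, 1, 1])
```

The incentive does the work: the cutter, not knowing which piece will be taken, has no reason to make the pieces unequal.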
Dean Baker offers a similarly simple solution to a different problem:
The NYT had a good piece reporting on the fact that public sector pension funds that have invested heavily in alternative investments (e.g. hedge funds, real estate funds and private equity funds) have done much worse than those that just held traditional investments (e.g. stocks and bonds). While the managers of these alternative investments did quite well collecting fees, the governments did not.
There is a simple way to avoid this problem. If the funds made compensation for the managers of these investments almost entirely contingent on their beating a conventional market basket, then the risk would be shared. If managers are not willing to accept such contracts it implies that they don’t believe they will be able to beat the returns on conventional instruments. If the managers don’t believe that they can beat conventional returns, then governments should not either.
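Baker’s proposal amounts to replacing a flat management fee with pay that exists only above the benchmark. A minimal sketch, with the function name and the 20% share of excess returns as purely illustrative assumptions (nothing in the article specifies the numbers):

```python
def manager_fee(fund_return, benchmark_return, assets, share_of_excess=0.20):
    """Pay the manager only a share of returns above a conventional
    benchmark (e.g. a stock/bond index basket). Underperformance pays zero,
    so the manager shares the risk with the pension fund."""
    excess = fund_return - benchmark_return
    return max(0.0, excess) * assets * share_of_excess

# A $1B fund returning 7% against a 5% benchmark pays a fee on the
# 2-point excess; returning 3% against the same benchmark pays nothing.
fee_win = manager_fee(0.07, 0.05, 1e9)
fee_lose = manager_fee(0.03, 0.05, 1e9)
```

Contrast with the typical hedge-fund arrangement, where a fixed percentage of assets is collected every year regardless of whether the benchmark was beaten. A manager unwilling to sign the contingent contract is revealing a belief that the benchmark won’t be beaten.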
Judging by the books you have written and your research, your two great passions are finding out more about consumption and the environment. How do you see the two as linking together?
At the most fundamental level we can think about all environmental pressures on the planet as stemming from consumption because production, after all, is only for the purpose of consumption. That may be a slightly simplistic way of thinking about it. But by consumption I don’t just mean households, I am also thinking about the whole chain of effects that lead to the appearance of a computer or a car or a house. All consumption has environmental impacts and one of the issues I have been most concerned with in my work is the scale of consumption.
So you would argue for lessening the scale of our consumption?
I think lessening is complicated because one of the things we know from a lot of the social science research and common sense is that getting people to give up what they have is very difficult. The bigger issue is getting on to a new path. The wealthy countries have been on a path of steadily rising consumption with parallel environmental impacts and that is what we really need to grapple with now. We have got a population of seven billion people, the vast majority of whom want to consume more. We are already consuming at a level which is beyond what the planet can tolerate, so wealthy countries need to figure out how to reconfigure their consumption patterns and they need to get off that upward trajectory.
Given that we have such a powerful culture of consumption how do you think we can realistically bring about that change?
I think culture is a great word to bring into this conversation. Since World War II our culture has centred on growth and material acquisition. Transforming that culture is at the root of being able to make the transition to an environmentally viable – I don’t want to even use the word sustainable – world. I think this involves giving people new outlets for creativity, new sources of joy, happiness, wonder and social connection.
So essentially you think people should try to change their materialistic outlook on life and try to seek pleasure in other forms?
Yes. And for me one of the best ways to do this is through production. We need to foster a culture where people are creative not just on the consumption side but on the production side as well. A number of trends have started already. If we think about online culture, one of the things we know is that people are passionate about producing content. They are blogging; they are producing videos; they are making music. These are great production projects that people are extremely passionate about. And then we see, on a much smaller but growing scale, a movement offline where people are growing things, making things, crafting things and doing arts. There is a revival of cooking and baking. All this ties in with what I am talking about, which is a shift to a much more producer-oriented culture, which then starts to be more than just hobbies. We can start to reconceptualise the economy in ways that see this producer behaviour as actually capable of giving people money and resources and access to goods and so forth. So, for example, people who learn to make something well can begin to sell or trade with it. Others are starting careers having learned how to build eco-houses, or engage in permaculture, or make pottery or jewellery. Similarly, the “collaborative consumption” movement is partly about people renting or bartering space in their homes, or yards or garages or assets that can be shared, such as cars or machinery.
Let’s see how your books tie into those ideas. Your first choice, Distinction: A Social Critique of the Judgement of Taste by Pierre Bourdieu, explores how our background plays such a dominant role in what we choose to consume.
Bourdieu is very much foundational in terms of understanding consumer culture. In contrast to many accounts that focus on advertising and marketing as somehow beaming desire into people, Bourdieu shows how the patterns of consumption that dominate in a society come out of structures of social inequality. He groups the consumer realm with the economic. I think that is very important because . . .
A very thin bike—and rather beautiful, as well.
If you have a garden or have access to a good supply of organic fruits and vegetables, check out this review — for the photos, if nothing else.
The key factor in my best shaves lately seems to be the ARC Weber—and possibly the blade, since I’m still using the original Astra Superior Platinum that came in it. At any rate, today’s shave scores quite high on smoothness & satisfaction.
The Edwin Jagger synthetic (of the type called “artificial badger”) is a complimentary copy, which is unusual for me. But they offered, and since I had already bought and tried a previous EJ synthetic, I was willing to experiment with this one. The earlier one, which I later sold, was the Chatsworth (at the right); that handle lacks shoulders, so lather tended to slip down the knot and onto the handle. This model, as you see, has more of a shoulder.
The new synthetic did a fine job. It’s very nice that excellent shaving brushes are now available even for those who wish to avoid using animal products. I got a fine lather, and I have to say that I must start using the Geo. F. Trumper Coconut Shaving Cream more frequently: it has a fine fragrance and produces a terrific lather.
Once again, the ARC Weber and ASP blade did a beautiful job: easy shaving, no nicks, no burn, and a totally smooth finish. This morning I used the alum block, and then a splash of the Klar Seifen Sandalwood aftershave. It’s nice, but I think I like the Klassik better.
I am going to have to suggest the ARC Weber as a beginning razor to those who can afford it. While the Edwin Jagger DE8x series (or the Mühle equivalents) are excellent razors, they are not in the same league as the ARC Weber (and, of course, they cost about half what the Weber costs, so it’s only natural that the Weber would be better).
I now want to try the ARC Weber with some shaving soaps, so tomorrow we’re shifting gears and going in a soap direction.