Later On

A blog written for those whose interests more or less match mine.

Archive for August 15th, 2018

Interesting answer on Quora re: American attitudes toward Trump and Obama

I quote:

Why is the right so reluctant to admit Obama did anything right?

Daniel Weinstein, studied at University of California, San Diego

I’ve lived in Australia for the last 44 years. While I’m still American enough to vote, I think that living elsewhere gives me some perspective that might be elusive if I was in the thick of things. I don’t think this question is well served by a political answer. I don’t think politics have much to do with the fanatical opposition to Obama.

The current president seems to have unwavering, unquestioning support from his base. The level of blind acceptance of his many obviously offensive actions and opinions is reminiscent of the Guru Syndrome. The basic premise required by anyone who chooses to follow a Guru is that the Guru is right. The words and actions of the Guru cannot be subjected to any sort of critical analysis. The Guru is right by definition. The very concept of right is a function of whatever the Guru says or does. For reasons that elude me, a significant portion of the American electorate has accepted Donald Trump as their personal Guru.

When it comes to Obama it appears that in some way the opposite occurred. I am just as baffled by the fundamental rejection of all things Obama by a significant segment of the American population as I am by the deification of Trump. But it certainly looked to me like much of the same part of the population that loves Trump made Obama into an anti-Guru. They held him as fundamentally evil and dangerous. They appeared to hold that whatever he did or believed was wrong by definition. He needed to be opposed no matter what because… well, just because.

I’m not in a position to state that it was because he’s black. I have no doubt that had a lot to do with it, but it may not be the whole story. Donald didn’t become president because of his politics, policies or personality. He became president in spite of them. Obama possibly did win because of his politics, policies and personality. He was, and is, inspiring and admirable. For many in the US he was, and is, an outsider. He’s not like us and it is unacceptable that someone who is not one of us be our leader. It’s even more unacceptable that the outsider is inspiring and admirable – that makes him even more dangerous. He is a usurper, he has no right to be here and must be deposed. The right spent 8 years opposing everything Obama did or said, and Donald Trump rode his nasty, racist opposition to all things Obama all the way to the White House.

There is something weird and scary about what’s gone on and is going on in the American experiment at the moment.

Written by LeisureGuy

15 August 2018 at 5:34 pm

Posted in Politics

Why obesity has increased so drastically

I’ve added a link to this Guardian article by George Monbiot to my (rather lengthy) post giving my current diet advice. The article begins:

When I saw the photograph I could scarcely believe it was the same country. A picture of Brighton beach in 1976, featured in the Guardian a few weeks ago, appeared to show an alien race. Almost everyone was slim. I mentioned it on social media, then went on holiday. When I returned, I found that people were still debating it. The heated discussion prompted me to read more. How have we grown so fat, so fast? To my astonishment, almost every explanation proposed in the thread turned out to be untrue.

Unfortunately, there is no consistent obesity data in the United Kingdom before 1988, at which point the incidence was already rising sharply. But in the United States, the figures go back further. They show that, by chance, the inflection point was more or less 1976. Suddenly, at around the time that the photograph was taken, people started becoming fatter – and the trend has continued ever since.

The obvious explanation, many on social media insisted, is that we’re eating more. Several pointed out, not without justice, that food was generally disgusting in the 1970s. It was also more expensive. There were fewer fast food outlets and the shops shut earlier, ensuring that if you missed your tea, you went hungry.

So here’s the first big surprise: we ate more in 1976. According to government figures, we currently consume an average of 2,130 kilocalories a day, a figure that appears to include sweets and alcohol. But in 1976, we consumed 2,280 kcal excluding alcohol and sweets, or 2,590 kcal when they’re included. I have found no reason to disbelieve the figures.

Others insisted that the cause is a decline in manual labour. Again, this seems to make sense, but again the data doesn’t support it. A paper last year in the International Journal of Surgery states that “adults working in unskilled manual professions are over four times more likely to be classified as morbidly obese compared with those in professional employment”.

So how about voluntary exercise? Plenty of people argued that, as we drive rather than walk or cycle, are stuck to our screens and order our groceries online, we exercise far less than we did. It seems to make sense – so here comes the next surprise. According to a long-term study at Plymouth University, children’s physical activity is the same as it was 50 years ago. A paper in the International Journal of Epidemiology finds that, corrected for body size, there is no difference between the amount of calories burned by people in rich countries and those in poor ones, where subsistence agriculture remains the norm. It proposes that there is no relationship between physical activity and weight gain. Many other studies suggest that exercise, while crucial to other aspects of good health, is far less important than diet in regulating our weight. Some suggest it plays no role at all as the more we exercise, the hungrier we become.

Other people pointed to more obscure factors: adenovirus-36 infection, antibiotic use in childhood and endocrine-disrupting chemicals. While there is evidence suggesting they may all play a role, and while they could explain some of the variation in the weight gained by different people on similar diets, none appears powerful enough to explain the general trend.

So what has happened? The light begins to dawn when you look at the nutrition figures in more detail. Yes, we ate more in 1976, but differently. Today, we buy half as much fresh milk per person, but five times more yoghurt, three times more ice cream and – wait for it – 39 times as many dairy desserts. We buy half as many eggs as in 1976, but a third more breakfast cereals and twice the cereal snacks; half the total potatoes, but three times the crisps. While our direct purchases of sugar have sharply declined, the sugar we consume in drinks and confectionery is likely to have rocketed (there are purchase numbers only from 1992, at which point they were rising rapidly. Perhaps, as we consumed just 9kcal a day in the form of drinks in 1976, no one thought the numbers were worth collecting.) In other words, the opportunities to load our food with sugar have boomed. As some experts have long proposed, this seems to be the issue.

The shift has not happened by accident. As Jacques Peretti argued in his film The Men Who Made Us Fat, food companies have invested heavily in designing products that use sugar to bypass our natural appetite control mechanisms, and in packaging and promoting these products to break down what remains of our defences, including through the use of subliminal scents. They employ an army of food scientists and psychologists to trick us into eating more than we need, while their advertisers use the latest findings in neuroscience to overcome our resistance.

They hire biddable scientists and thinktanks to confuse us about the causes of obesity. Above all, just as the tobacco companies did with smoking, they promote the idea that weight is a question of “personal responsibility”. After spending billions on overriding our willpower, they blame us for failing to exercise it.

To judge by the debate the 1976 photograph triggered, it works. . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 3:31 pm

How America Convinced the World to Demonize Drugs

J.S. Rafaeli has an interesting article in Vice, with this subhead:

Much of the world used to treat drug addiction as a health issue, not a criminal one. And then America got its way.

The article begins:

In Baltimore, a young black man is sent to prison for felony cannabis possession. In Glasgow, Scotland, an apartment door is kicked in by the drugs squad. In Afghanistan, a field of poppies is incinerated from the air. In Mexico, police corrupted by drug cartels are implicated in disappearances and massacres.

The War on Drugs is generally presented as a global phenomenon. Each country has its own drug laws and enforces them as they see fit. Despite small regional differences, the world—we are told—has always been united in addressing the dangers of illicit drug use through law enforcement.

This is a lie.

When one traces back the history of what we now call the War on Drugs, one discovers it has a very specific origin: the United States. The global development of the drug war is inseparable from the development of US imperialism, and indeed, is a direct outgrowth of that imperialism.

Prior to the 19th century, drugs now illegal were widely used across the world. Remedies derived from opium and cannabis were used for pain relief, and less widely for “recreation.” Queen Victoria herself was fond of both opium and cannabis, before being introduced to cocaine later in life.

Then came the American railroads.

Thousands of Chinese workers came to America during the mid-1800s to build the Central Pacific Railroad. Once the track was complete, however, they immediately became regarded as a threat to white American workers. In 1882, Congress passed the Chinese Exclusion Act, the only US law to ever successfully ban immigration solely on the basis of race.

One method of stirring up anti-Chinese hatred was to attack the practice of opium smoking. Although morphine and laudanum were popular as a medicine throughout the US, Chinese opium was seen as a threat to American Christian morality, and particularly to American Christian women.

By 1881, as the Exclusion Act was being debated in Congress, reports began flooding out of San Francisco of opium dens where “white women and Chinamen sit side by side under the effects of this drug—a humiliating sight to anyone with anything left of manhood.”

Newspaper editorials thundered that the Chinese opium menace must be wiped out lest it “decimate our youth, emasculate the coming generation, if not completely destroy the population of our coast,” and that for white Americans, smoking opium was “not at all consistent with their duties as Capitalists or Christians.”

Crucially, however, the first modern prohibition regime was not founded in America itself, but in its first overseas colony. In 1898, America conquered the Philippines in the Spanish–American War. Charles H. Brent, the openly racist Episcopal bishop of the Philippines, despised opium users, and appealed to President Roosevelt to ban this “evil and immoral” habit. By 1905, Brent had succeeded in installing the first American prohibition regime—not in the US itself, but in the Philippines.

Unsurprisingly, the ban failed. Bishop Brent decided that continued opium use must be the fault of the booming trade in China, and wrote again to President Roosevelt, urging that the US had a duty to “promote some movement that would gather in its embrace representatives from all countries where the traffic and use of opium is a matter of moment.” The idea of international control of the drug trade had been born.

In the American debate, drug addiction had been framed as an infection and contamination of white America by foreign influences. Now, that vision was internationalized. To protect white American moral purity, the supply of drugs from overseas had to be curtailed at their source. As the campaigner Richard P. Hobson had it, “like the invasions and plagues of history, the scourge of narcotic drug addiction came out of Asia.”

In 1909, America succeeded in convening the first International Commission on Opium in Shanghai. Representing the US was Bishop Brent and the doctor Hamilton Wright, who was to become a major force in the American prohibitionist movement. For the next century, almost every major international conference and commission on drug control was formed through American pressure and influence.

Interestingly, despite what we are told about the “special relationship,” the country that offered the most consistent and organized resistance to the American drive toward drug prohibition was the United Kingdom. Time and again, Great Britain diplomatically frustrated American attempts to impose prohibition regimes and establish international protocols.

This was partly because the British were themselves operating lucrative opium monopolies in their own overseas colonies, but also because they resented “overtones of high-mindedness and superior virtue.” Britain had its own system of dealing with drug addiction—treating it as a medical rather than a law enforcement issue—and, for a long time, resisted the moralizing hysteria of the American approach.

But it was difficult for the US to push the prohibition of drugs on the rest of the world while not enforcing it itself. Wright began spearheading a fresh campaign for full drug prohibition within the US—once again built almost entirely on racial prejudice.

But this time, a new drug had emerged to capture America’s fevered imagination, with a fresh racial minority to persecute for using it. The drug was cocaine, and the minority was African Americans. In 1910, Wright submitted a report to the Senate stating that “this new vice, the cocaine vice… has been a potent incentive in driving the humbler negroes all over the country to abnormal crimes.”

There followed an explosion of headlines linking black people to cocaine use and criminality. The New York Times ran a typical story under the headline “NEGRO COCAINE FIENDS—NEW SOUTHERN MENACE.” The story tells of “a hitherto inoffensive negro” who had reportedly taken cocaine and been sent into a frenzy. The local police chief was forced to shoot him several times to bring him down. Cocaine, it was implied, was turning black men into superhuman brutes. As the medical officer quoted in the article put it, “the cocaine nigger sure is hard to kill.”

This hysteria resulted in the Harrison Narcotics Tax Act of 1914, instituting the prohibition of drugs across the United States. Over the next 50 years, America would aggressively seek to internationalize its form of prohibition across the world. . .

Continue reading. There’s much more, to America’s shame.

Written by LeisureGuy

15 August 2018 at 2:43 pm

Hume the humane

Julian Baggini, a writer and founding editor of The Philosophers’ Magazine whose latest book is A Short History of Truth (2017), writes in Aeon:

Socrates died by drinking hemlock, condemned to death by the people of Athens. Albert Camus met his end in a car that wrapped itself around a tree at high speed. Nietzsche collapsed into insanity after weeping over a beaten horse. Posterity loves a tragic end, which is one reason why the cult of David Hume, arguably the greatest philosopher the West has ever produced, never took off.

While Hume was lying aged 65 on his deathbed at the end of a happy, successful and (for the times) long life, he told his doctor: ‘I am dying as fast as my enemies, if I have any, could wish, and as easily and cheerfully as my best friends could desire.’ Three days before he died, on 25 August 1776, probably of abdominal cancer, his doctor could still report that he was ‘quite free from anxiety, impatience, or low spirits, and passes his time very well with the assistance of amusing books’.

When the end came, Dr Black reported that Hume ‘continued to the last perfectly sensible, and free from much pain or feelings of distress. He never dropped the smallest expression of impatience; but when he had occasion to speak to the people about him, always did it with affection and tenderness … He died in such a happy composure of mind, that nothing could exceed it.’

In his own lifetime Hume’s reputation was mainly as a historian. His career as a philosopher started rather inauspiciously. His first precocious attempt at setting out his comprehensive new system of philosophy, A Treatise of Human Nature (1739-40), published when he was 26, ‘fell dead-born from the press, without reaching such distinction as even to excite a murmur among the zealots’, as he later recalled, with self-deprecating exaggeration.

Over time, however, his standing has grown to the highest level. A few years ago, thousands of academic philosophers were asked which non-living philosopher they most identified with. Hume came a clear first, ahead of Aristotle, Kant and Wittgenstein. Scientists, who often have little time for philosophy, often make an exception for Hume. Even the biologist Lewis Wolpert, who says philosophers are ‘very clever but have nothing useful to say whatsoever’, makes an exception for Hume, admitting that at one stage he ‘fell in love’ with him.

Yet the great Scot remains something of a philosopher’s philosopher. There have been no successful popular books on him, as there have been for the likes of Montaigne, Nietzsche, Socrates, Wittgenstein and the Stoics. Their quotes, not his, adorn mugs and tea towels, their faces gaze down from posters. Hume hasn’t ‘crossed over’ from academic preeminence to public acclaim.

The reasons why this is so are precisely the reasons why it ought not to be. Hume’s strengths as a person and a thinker mean that he does not have the kind of ‘brand’ that sells intellectuals. In short, he is not a tragic, romantic figure; his ideas do not distil into an easy-to-summarise ‘philosophy of life’; and his distaste for fanaticism of any kind made him too sensible and moderate to inspire zealotry in his admirers.

Hume had at least two opportunities to become a tragic hero and avoid the cheerful end he eventually met. When he was 19, he succumbed to what was known as ‘the disease of the learned’, a melancholy that we would today call depression. However, after around nine months, he realised that this was not the inevitable fate of the wise but the result of devoting too much time to his studies. Hume realised that to remain in good health and spirits, it was necessary not only to study, but to exercise and to seek the company of friends. As soon as he started to do this he regained his cheer and kept it pretty much for the rest of his life.

This taught him an important lesson about the nature of the good life. As he later wrote in An Enquiry Concerning Human Understanding (1748): ‘The mind requires some relaxation, and cannot always support its bent to care and industry.’ Philosophy matters, but it is not all that matters, and although it is a good thing, one can have too much of it. ‘Abstruse thought and profound researches I prohibit,’ says Hume, ‘and will severely punish, by the pensive melancholy which they introduce, by the endless uncertainty in which they involve you.’ The life ‘most suitable to the human race’ is a ‘mixed kind’ in which play, pleasure and diversion matter as well as what are thought of as the ‘higher’ pursuits. ‘Be a philosopher,’ advised Hume, ‘but, amidst all your philosophy, be still a man.’

In 1770, Hume was also presented with an opportunity for martyrdom, in somewhat bathetic circumstances. The Nor’ Loch in Edinburgh, where Princes Street Gardens now stands, was being drained as part of the expansion of the city. Walking across it one day, Hume fell into the bog that still remained. He cried for help but unfortunately for him, the women who heard him recognised him as ‘the great infidel’ and were not inclined to save him. Hume reasonably pointed out that all Christians should help anyone irrespective of their beliefs, but their understanding of the parable of the Good Samaritan was not as up to scratch as his and they refused to save him unless he became a Christian there and then, reciting the Lord’s Prayer and the creed.

A Socrates would perhaps have refused and died in the name of truth. Hume, however, was not going to allow the stupidity of others to cut his own life short, so he did what any sensible person should do: he went along with their request without any intention of keeping his promise.

In this he was following the example of the only other philosopher to rival Hume for all-time greatness: Aristotle. Here is another thinker whose stock among cognoscenti couldn’t be higher, but who has failed to capture the public’s imagination (although Edith Hall’s recent book Aristotle’s Way (2018) is trying to change that). Not coincidentally, I think, Aristotle also refused to play the martyr. Like Socrates, he was condemned to death for impiety. Also like Socrates, he had the opportunity to flee the city to safety. Unlike Socrates, that is exactly what he did. So while everyone knows how Socrates died, few know that Aristotle, like Hume, died in his 60s, probably also of stomach cancer.

It is somewhat perverse that the attractiveness of a philosophy seems to be directly correlated with how miserable its author’s life was. However, that is not the only reason why there are few self-ascribed Humeans outside academe. Hume’s philosophy does not add up to an easily digestible system, a set of rules for living. Indeed, Hume is best known for three negative theses.

First, our belief in the power of cause and effect, on which all our reasoning about matters of fact rests, is not justified by either observation or by logical deduction. We only ever see one thing following another: we never observe any power that makes one thing necessitate an effect. Even if we could be satisfied that we had established x caused y, logic can’t establish any general principle of causation, since all the regularities we have observed in nature were in the past, but the principle of cause and effect is assumed to apply in the present and future. Logically, you can never arrive at a truth about the future based entirely on premises that concern the past: what has been is not the same as what will be.

Hume did not deny cause and effect were real. We could not reason about any matter of empirical fact without assuming their reality, as his own writings frequently do. However, he was clear that this linchpin of sensible thinking is not itself established by reason or experience. This is philosophically strong stuff but hardly the source of inspirational Instagram quotes.

Hume is also well known for his arguments against various aspects of religion, although he never came out as a fully fledged atheist. Most famously, he argued that it would never be rational to accept the claim of a miracle, since the evidence that one had occurred would always be weaker than the evidence that such things never happen. It would always be more likely that the witness to a miracle was mistaken or lying than that the miracle actually took place. But again, skepticism about the claims of traditional religion does not amount to a substantive, positive philosophy.

Hume’s third notable negative claim does have the benefit of a stirring slogan, albeit one that is somewhat opaque: ‘Reason is, and ought only to be, the slave of the passions.’ Reason by itself gives us no motivation to act, and certainly no principles on which to base our morality. If we are good it is because we have a basic fellow-feeling that makes us respond with sympathy to the suffering of others and with pleasure at the thought of them thriving. The person who does not see why she should be good is not irrational but heartless.

As these three core claims illustrate, Hume’s philosophy is essentially skeptical, and skepticism seems to take away more than it offers. However, understood correctly, Humean skepticism can and should be the basis for a complete approach to life. It is built on the skeptical foundations of a brutally honest assessment of human nature, which could be seen as the essence of Hume’s project. It is not accidental that his first attempt to set out his philosophy was called A Treatise of Human Nature. Humanity was his primary subject.

Hume saw human beings as we really are, stripped of all pretension. We are not immortal souls temporarily encaged in flesh, nor the pure immaterial minds Descartes believed he had proved we were. Humans are animals – remarkable, highly intelligent ones – but animals nonetheless. Hume did not just bring human beings down to Earth, he robbed us of any enduring essence. Arguing against Descartes’s claim that we are aware of ourselves as pure, undivided egos, Hume countered that when he introspected, he found no such thing. What we call the ‘self’ is just a ‘bundle of perceptions’. Look inside yourself, try to find the ‘I’ that thinks and you’ll only observe this thought, that sensation: an ear worm, an itch, a thought that pops into your head.

Hume was echoing a view that was first articulated by the early Buddhists, whose ‘no-self’ (anattā) view is remarkably similar. He also anticipated the findings of contemporary neuroscience which has found that there is no central controller in the brain, no one place where the sense of self resides. Rather the brain is constantly executing any number of parallel processes. What happens to be most central to consciousness depends on the situation.

As for our intellect, Hume demonstrated how extraordinary it could be by rigorously showing how imperfect it really is. Pure reason, of the kind celebrated by Descartes, was largely impotent. . .

Continue reading. There’s much more.

Written by LeisureGuy

15 August 2018 at 2:36 pm

Posted in Books

Pennsylvania Grand Jury Says Church Had a ‘Playbook for Concealing the Truth’

Scott Dodd reports in the NY Times:

Avoid scandal. Use euphemisms. Ask inadequate questions. Lock complaints away in a “secret archive.” Above all, don’t tell the police.

Those are some of the tactics that leaders of the Roman Catholic Church in Pennsylvania used to conceal child sexual abuse by priests over a period of 70 years, according to a grand jury report released Tuesday.

“It’s like a playbook for concealing the truth,” said the grand jury, whose investigation identified more than 1,000 sexual abuse victims in six Catholic dioceses in Pennsylvania.

Special agents from the F.B.I.’s National Center for the Analysis of Violent Crime reviewed evidence collected by the grand jury, the report says, and identified a series of practices that were regularly used by the six dioceses to cover up reports of abuse.

“While each church district had its idiosyncrasies, the pattern was pretty much the same,” the report says. “The main thing was not to help children, but to avoid ‘scandal.’ That is not our word, but theirs; it appears over and over again in the documents we recovered.”

[Read: Church covered up child sex abuse in Pennsylvania for decades.]

Here is how the grand jury, in caustic terms, described the Catholic Church’s methods for covering up abuse and protecting priests:

First, make sure to use euphemisms rather than real words to describe the sexual assaults in diocese documents. Never say “rape”; say “inappropriate contact” or “boundary issues.”

Second, don’t conduct genuine investigations with properly trained personnel. Instead, assign fellow clergy members to ask inadequate questions and then make credibility determinations about the colleagues with whom they live and work.

Third, for an appearance of integrity, send priests for “evaluation” at church-run psychiatric treatment centers. Allow these experts to “diagnose” whether the priest was a pedophile, based largely on the priest’s “self-reports,” and regardless of whether the priest had actually engaged in sexual contact with a child.

Fourth, when a priest does have to be removed, don’t say why. Tell his parishioners that he is on “sick leave,” or suffering from “nervous exhaustion.” Or say nothing at all.

Fifth, even if a priest is raping children, keep providing him housing and living expenses, although he may be using these resources to facilitate more sexual assaults.

Sixth, if a predator’s conduct becomes known to the community, don’t remove him from the priesthood to ensure that no more children will be victimized. Instead, transfer him to a new location where no one will know he is a child abuser.

Finally and above all, don’t tell the police. Child sexual abuse, even short of actual penetration, is and has for all relevant times been a crime. But don’t treat it that way; handle it like a personnel matter, “in house.”

This from an organization that presumes to lecture others on morality.

Written by LeisureGuy

15 August 2018 at 2:09 pm

Trump Goes for Broke on Claim Military Received No Money Before His Watch. (He’s Still Wrong.)

Linda Qiu reports in the NY Times:

WHAT WAS SAID

“Last year, we secured a historic $700 billion to rebuild our military. And now the National Defense Authorization Act paves the way for 1,700 — listen to this now. So we’ve been trying to get money. They never gave us money for the military for years and years. And it was depleted. We got $700 billion. And next year, already approved, we have $716 billion to give you the finest planes and ships and tanks and missiles anywhere on earth.”

— President Trump, speaking to Army soldiers at Fort Drum, N.Y., on Monday

THE FACTS

False.

Mr. Trump’s claim is wrong on two fronts: that the approved funding levels are “historic” and that the military “never” had money “for years and years.” It’s also not clear what he was referring to when he said the act “paves the way for 1,700.”

The John S. McCain National Defense Authorization Act for fiscal year 2019, which Mr. Trump signed on Monday, provides $716 billion for the Pentagon’s basic operations and war spending, as well as the Department of Energy’s national security programs.

That’s not the largest military budget in recent history, let alone all of American history. Even if inflation is not taken into account, President Barack Obama signed a $726 billion National Defense Authorization Act for the 2011 fiscal year.

Adjusted for inflation, Congress authorized more money for the Pentagon every fiscal year between 2007 and 2012, during the peak of the wars in Iraq and Afghanistan.

Mr. Trump may have been referring to the sequester, in which Congress placed limits on military spending in 2011; they were effectively lifted in February. But his statement — that the Pentagon “never” received money during that time — is patently wrong. As The New York Times has previously reported:

From 2012 to 2017, the Pentagon’s annual budget had decreased as a percent of the economy. But it still hovered around $600 billion — a far cry from “no money” at all.

The United States’ military spending has consistently outstripped the rest of the world’s. In fact, it has been higher than the next seven to 11 countries combined since 2012, according to data from the Stockholm International Peace Research Institute.

OTHER CLAIMS

After the signing of the bill, Mr. Trump made several more inaccurate claims at a fund-raiser in Utica, N.Y.

Source: Senate Armed Services Committee, Pentagon comptroller, Congressional Research Service

Written by LeisureGuy

15 August 2018 at 12:39 pm

Social Connection Makes a Better Brain

This article by Emily Esfahani Smith in the Atlantic is from 2013, but it caught my eye today:

Matthew Lieberman, a distinguished social psychologist and neuroscientist, basically won the lottery. This past summer, he was offered three million dollars for an academic position—one million in raw income and two to do lab research. That’s a king’s ransom for a psychology professor. On average, psychology professors make less than six figures and rely on a patchwork of modest grants to sustain their research. All Lieberman had to do was spend four months this year and next year in Moscow, a nice enough city, doing some research—which he would have done anyway at home at UCLA.

But there was a catch. He would have to be away from his wife Naomi and seven-year-old son Ian for those eight months. They could not join him in Moscow. He had a basic trade-off problem, one that kept him up for many nights: Should I take the money and give up those eight months with my family or should I stay home and give up the money and research opportunities? In one form or another, we’ve all faced this dilemma, if on a more modest scale. Do you work late tonight or join your family for dinner? Do you go to the conference or to your friend’s wedding? Do you prioritize your career or your relationships?

Lieberman’s new book Social: Why Our Brains Are Wired to Connect hits the shelves this month. It’s a book about relationships and why relationships are a central—though increasingly absent—part of a flourishing life. Lieberman draws on psychology and neuroscience research to confirm what Aristotle asserted long ago in his Politics: “Man is by nature a social animal … Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.”

Just as human beings have a basic need for food and shelter, we also have a basic need to belong to a group and form relationships. The desire to be in a loving relationship, to fit in at school, to join a fraternity or sorority, to avoid rejection and loss, to see your friends do well and be cared for, to share good news with your family, to cheer on your sports team, and to check in on Facebook—these things motivate an incredibly impressive array of our thoughts, actions, and feelings.

Lieberman sees the brain as the center of the social self. Its primary purpose is social thinking. One of the great mysteries of evolutionary science is how and why the human brain got to be so large. Brain size generally increases with body size across the animal kingdom. Elephants have huge brains while mice have tiny ones. But humans are the great exception to this rule. Given the size of our bodies, our brains should be much smaller—but they are by far the largest in the animal kingdom relative to our body size. The question is why.

Scientists have debated this question for a long time, but the research of anthropologist Robin Dunbar is fairly conclusive on this point. Dunbar has found that the strongest predictor of a species’ brain size—specifically, the size of its neocortex, the outermost layer—is the size of its social group. We have big brains in order to socialize. Scientists think the first hominids with brains as large as ours appeared about 600,000-700,000 years ago in Africa. Known as Homo heidelbergensis, they are believed to be the ancestors of Homo sapiens and the Neanderthals. Revealingly, they appear to be the first hominids to have had division of labor (they worked together to hunt), central campsites, and they may have been the first to bury their dead.

One of the most exciting findings to emerge from neuroscience in recent years underlines the brain’s inherently social nature. When neuroscientists monitor what’s going on in someone’s brain, they are typically interested in what happens in it when people are involved in an active task, like doing a math problem or reaching for a ball. But neuroscientists have looked more closely at what the brain does during non-active moments, when we’re chilling out and the brain is at rest. Every time we are not engaged in an active task—like when we take a break between two math problems—the brain falls into a neural configuration called the “default network.” When you have down time, even if it’s just for a second, this brain system comes on automatically.

What’s remarkable about the default network, according to Lieberman’s research, is that it looks almost identical to another brain configuration—the one used for social thinking or “making sense of other people and ourselves,” as he writes: “The default network directs us to think about other people’s minds—their thoughts, feelings, and goals.” Whenever it has a free moment, the human brain has an automatic reflex to go social. Why would the brain, which forms only 2 percent of our body weight but consumes 20 percent of its energy, use its limited resources on social thinking, rather than conserving its energy by relaxing?

“Evolution has made a bet,” Lieberman tells me, “that the best thing for our brain to do in any spare moment is to get ready for what comes next in social terms.”

Evolution only makes bets if there are payoffs—and when it comes to being social, there are many benefits. Having strong social bonds is as good for you as quitting smoking. Connecting with other people, even in the most basic ways, also makes you happier—especially when you know they need your help.

One study of adults found that the brain’s reward center, which turns on when people feel pleasure, was more active when people gave $10 to charity than when they received $10. In another study, comforting someone in distress activated the reward center in a powerful way. Couples were brought into the lab and the girlfriend was placed inside a brain scanner while the boyfriend sat in a chair right next to her. In some cases, the boyfriend would receive a painful electrical shock.

The girlfriend, who knew when her boyfriend was being shocked, was instructed to either hold her boyfriend’s hand or to hold onto a small ball. When the scientists looked at the girlfriend’s brain activity, they found that her reward system was active when she was holding the hand of her boyfriend both when he was being shocked and when he wasn’t in pain—but it was most active when she held his hand as he was being shocked. Holding your boyfriend’s hand feels nice, but it’s especially meaningful when you know that he needs your love and affection.

***

When economists put a price tag on our relationships, we get a concrete sense of just how valuable our social connections are—and how devastating it is when they are broken. If you volunteer at least once a week, the increase to your happiness is like moving from a yearly income of $20,000 to $75,000. If you have a friend that you see on most days, it’s like earning $100,000 more each year. Simply seeing your neighbors on a regular basis gets you $60,000 a year more. On the other hand, when you break a critical social tie—here, in the case of getting divorced—it’s like suffering a $90,000 per year decrease in your income.

You don’t have to be a social scientist to know how badly a breakup hurts. . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 11:53 am

Rod Rosenstein still doesn’t get the problem with forensics

Radley Balko writes in the Washington Post:

Deputy Attorney General Rod J. Rosenstein gave a speech on Tuesday to the National Symposium on Forensic Science in Washington. This isn’t his first such speech: He gave a similar talk in February to the American Academy of Forensic Sciences conference and another about this time last year to the International Association for Identification.

I critiqued that last speech here at The Watch. In the year since, nothing much has changed. Despite a stream of crime lab scandals, the doubt cast on forensics by DNA exonerations and blistering critiques of entire fields of forensics from the scientific community, Rosenstein insists that we should stop insisting that “forensic science” meet the standards of “science,” and that we should trust the Justice Department to fix these problems internally, without input from independent scientific bodies.

For decades, police and prosecutors have pushed the fields of forensics known as pattern matching as a science.

They got away with it because the scientific community largely steered clear of the criminal-justice system. But in the 1990s, DNA testing — a field that was developed and honed in the scientific community — became common. DNA tests started to show that some of the people that forensics experts had declared guilty were, in fact, innocent. In the years since, the scientific community has become increasingly vocal about, well, the lack of science in forensic science, particularly in pattern-matching disciplines.

In most pattern-matching fields, an analyst looks at two pieces of evidence — fingerprints, bite marks, the ballistics marks on bullets, footprints, tire tracks, hair fibers, clothing fibers, or “tool marks” from a screwdriver, hammer, pry bar or other object — and determines whether they’re a match. In others, like blood-spatter analysis, experts don’t even attempt to match two pieces of evidence. They simply draw conclusions based on assumptions about how blood moves through the air. These are entirely subjective fields. And that’s the heart of the problem. Even objective fields of science are plagued by confirmation bias. Scientists have to be vigilant about combating unconscious bias by conducting double-blind studies and subjecting their work to peer review and statistical analysis. To gain acceptance in the scientific community, studies must also be reproducible. To be legitimate, a scientific test should have a calculable margin for error.

None of this is true in the pattern-matching fields of forensics. So in response, defenders of these disciplines have shifted: These fields aren’t really science. They’re “soft sciences,” similar to fields such as psychiatry or economics. They might not undergo the rigors of the scientific method, the argument goes, but they still have evidentiary value.

This is the line that Rosenstein and his boss, Attorney General Jeff Sessions, have taken at the Justice Department in brushing aside scientists’ criticism. The Obama administration created the National Commission on Forensic Science so that scientists could assess the reliability and validity of some of these areas of forensics. One of Sessions’s first acts as attorney general was to allow the commission’s charter to expire without renewal. In his talk last year, Rosenstein announced a new program that would evaluate forensic fields, but it would be within the Justice Department, it would not include any “hard” scientists, and it would be led by a career prosecutor with a history of opposing efforts to bring transparency, accountability and scientific accuracy to forensics. Here’s Rosenstein’s argument from his talk on Tuesday.

Most of you work on the front lines of the criminal justice system, where forensic science has been under attack in recent years. Some critics would like to see forensic evidence excluded from state and federal courtrooms.

You regularly face Frye and Daubert motions that challenge the admission of routine forensic methods.

Many of the challenged methods involve the comparison of evidence patterns like fingerprints, shell casings, and shoe marks to known sources.  Critics argue that the methods have not undergone the right type or amount of validation, or that they involve too much human interpretation and judgment to be accepted as “scientific” methods.

Those arguments are based on the false premise that a scientific method must be instrument-based, automated, and quantitative, excluding human interpretation and judgment. Such critiques contributed to a recent proposal to amend Federal Rule of Evidence 702 for cases involving forensic evidence. The effort stems from an erroneously narrow view of the nature of science and its application to forensic evidence.

Federal Rule of Evidence 702 uses the phrase “scientific, technical, or other specialized knowledge,” which makes clear that it is designed to permit testimony that calls on skills and judgment beyond the knowledge of laypersons, and not merely of scientists who work in laboratories.

Forensic science is not only quantitative or automated. It need not be entirely free from human assumptions, choices, and judgments. That is not just true of forensic science. It is also the case in other applied expert fields like medicine, computer science, and engineering.

Often when pattern-matching analysts testify, they go to great lengths to describe how careful and precise they are at collecting and preserving evidence. They talk about all the precautions and steps they take before performing their analysis. It can sound impressive — and it’s all entirely beside the point. You can be the most careful, precise and cautious expert witness on the planet when it comes to preparing evidence for analysis, but if your actual analysis is no more than “eyeballing it,” your method of analysis still isn’t science.

Rosenstein’s speech on Tuesday has a similar effect. It’s all true, it all sounds impressive … and it all misses the point entirely. That the federal rules of evidence allow for expert testimony that “is not only quantitative or automated” is precisely the problem. That’s how the system got into trouble.

Rosenstein then went on to describe what the Justice Department is doing to improve forensic testimony, such as closer monitoring and evaluation of the testimony of FBI experts, and instituting uniform language that experts should use to quantify their level of certainty. Both initiatives, he said, are “designed to maintain the consistency and quality of our lab reports and testimonial presentations to ensure that they meet the highest scientific and ethical standards.”

Again, both of these initiatives sound impressive. But if the testimony of pattern-matching experts is being evaluated by other pattern matching experts, by federal law enforcement agents who buy into pattern-matching analysis, or really by anyone who stands to benefit from a less-skeptical outlook on forensics, you aren’t really changing anything. I’ve used this analogy many times, but it fits: If you were to assemble a commission to evaluate the scientific validity of tarot card reading, you wouldn’t populate that commission with other tarot card readers. Yet this is one of the most common critiques law enforcement officials make of the various scientific bodies that have issued warnings about forensics — that they lack any members who actually practice the fields of forensics being criticized.

There’s a similar issue with uniformity of language. Yes, if there were a standard set of phrases all forensic analysts used to express their level of certainty about a piece of evidence, that would be preferable to not having such a system. But if the analysis itself is based on little more than each expert’s subjective judgment — if there’s no measurable, quantifiable, reproducible explanation for why a hair sample is “consistent with” a suspect rather than “a match” to the suspect — then everything boils down to the credibility of that expert.

None of this is to say that all pattern-matching fields are useless. Some — like bite-mark matching — have little to no value at all and should be prohibited from courtrooms. Other fields could be useful in excluding possible suspects but are less reliable at identifying one suspect to the exclusion of all others, such as hair fiber analysis. And a few, like fingerprint analysis, could still be useful for that sort of identification, though even here analysts often overstate their certainty.

So how should we assess which fields of forensics are legitimate and which aren’t? Since Rosenstein and other advocates object to the term “scientific” — though note that in the very same speech, Rosenstein can’t help using the term to describe the Justice Department’s reforms — let’s set that debate aside. If we’re going to allow forensic expert witnesses to “match” two or more pieces of evidence in order to implicate a suspect, what is it that we want that testimony to be? If it isn’t that it be scientific, or that it adhere to Justice Department standards, or that it be within the guidelines of some obscure forensic governing body, what is it?

I think there are two things we’re looking for. First, we want these analysts to be right. If an expert says the evidence implicates a suspect, we want that suspect actually to be guilty. If a fingerprint analyst says a print found at the crime scene matches a suspect, we want that suspect to at least have been at the crime scene.

Second, we want expert testimony to be reliable. In too many areas of pattern-matching forensics, you’ll often have two reputable, certified experts offer diametrically opposing testimony about the same piece of evidence. If two well-regarded experts can look at the same piece of evidence and come to opposite conclusions, there isn’t enough certainty about that particular field to include it in a court of law. (Of course, if two experts contradict one another at trial, that also invokes the first rule — one of them must be wrong.) At this point, jurors are no longer assessing the facts; they’re assessing which expert they find more credible. And when we assess experts’ credibility, we tend to look at all sorts of factors that have little to do with the facts, such as their clothes, their mannerisms and the attorney questioning them. In fact, witnesses who offer their opinions with resolute yet baseless certainty will often seem more credible to jurors than experts who couch their opinions in the careful language of a scientist.

So here’s a proposal: For each field of pattern-matching forensics, we need an independent body to administer a proficiency test that measures accuracy, reliability or both. In the field of ballistics, for example, it wouldn’t be difficult to ask analysts to match a given number of bullets to a given number of guns. If they don’t meet a minimum level of accuracy, they’d be barred from testifying in court. (Given the stakes, that minimum standard should probably be close to 100 percent.) You could do the same for many other fields: If you’re giving testimony about footprint matches that sends people to prison, it doesn’t seem overly onerous to ask you to first prove that you know how to match footprints.

For some fields — such as bite-mark or blood-spatter analysis, or tool marks on human skin — an accuracy test would be difficult: . . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 9:38 am

Something good about Sarah Huckabee Sanders

I thought good actions should be recognized. Blake Peterson reports in ProPublica:

As the midterm elections approach, Republican state officials and lawmakers have stepped up efforts to block students from voting in their college towns. Republicans in Texas pushed through a law last year requiring voters to carry one of seven forms of photo identification, including handgun licenses but excluding student IDs. In June, the GOP-controlled legislature in North Carolina approved early voting guidelines that have already resulted in the closing of polling locations at several colleges. And last month, New Hampshire’s Republican governor signed a law that prevents students from voting in the state unless they first register their cars and obtain driver’s licenses there.

One nationally prominent Republican, however, once took the opposite stance on student voting. As an undergraduate at Ouachita Baptist University in Arkadelphia, Arkansas, Sarah Huckabee — now White House Press Secretary Sarah Huckabee Sanders — sued to allow students to vote after being one of more than 900 purged from the county’s rolls.

“It’s almost like taxation without representation,” she said at the time. “They thought that because we were young that they could walk all over us, but obviously that’s not the case.”

Illustrating the adage that politics makes strange bedfellows, the 2002 lawsuit paired a then-20-year-old Sanders with the American Civil Liberties Union. It began, as disputes over student voting often do, with a town-and-gown conflict. Reversing the usual pattern, a Democrat rather than a Republican instigated the student disenfranchisement.

For Sanders, the daughter of Arkansas’ then-governor Mike Huckabee, the little-known episode helped her carve out a niche as a political activist in her own right. It remains relevant today both because of her influential post in the Trump administration and because it suggests that Republican efforts to restrict student voting are largely pragmatic — intended to maximize the party’s electoral chances — and could change as circumstances warrant. It also indicates that Democratic support for on-campus voting may similarly hinge on the expectation that most students lean to the left.

While the Trump administration hasn’t weighed in specifically on student voting rights, it has supported states that impose voter identification requirements or purge voter rolls. By contrast, the Obama administration pushed to expand access to the polls. Contacted by ProPublica, Sanders requested a list of written questions, but then did not respond to them.

“It’s not lost on us that Sanders has joined an administration that is actively defending unlawful voter purges and voter disenfranchisement,” said Rita Sklar, who was executive director of the ACLU of Arkansas when it represented Sanders, and holds the same position today. “Maybe she can talk to her boss about it.”

The events that cast Sanders as a voting rights advocate stemmed from an election in Clark County, Arkansas, where Ouachita Baptist is located. In 1998, before Sanders enrolled there, an Ouachita junior named Jonathan Huber, who grew up in Louisiana, was elected to the county governing board.

While the student body at OBU, a small, religious college in southwest Arkansas, favored Republicans, the surrounding county historically voted Democratic. Huber, who told ProPublica he was the first Republican to win an election in the county since Reconstruction, credited his victory to the hundreds of college students he helped register to vote.

The students’ electoral muscle angered Floyd Thomas Curry, a Democratic attorney who lived in Huber’s district. In 2002, when Huber — who by then had graduated from Ouachita and settled in the area — was up for re-election to the governing board, Curry sued the county. He argued that, as temporary residents, students did not qualify to vote there.

“It became clear that my right to vote was just going down the drain,” Curry said at the time. “Even if these students voted the way I do, it’s still diluting my vote.”

A circuit court judge agreed. Just two weeks before the election, past the voter registration deadline and with early voting underway, Judge John A. Thomas ordered the county clerk to purge from the voter rolls anyone other than faculty or staff who registered using on-campus addresses, thus disenfranchising Sanders and 911 other students from OBU and neighboring Henderson State University. Thomas’s ruling cited an Arkansas law that temporary residents, including students, should vote in their hometowns — which, for Sanders, was Little Rock.

Sanders’ father, Gov. Mike Huckabee, assailed the decision. In the midst of a successful bid for re-election, he denounced the court order as an “absolute outrage,” “one of the worst things that’s happened in Arkansas politics in a long time,” and an example of the perils of “one-party fiefdom.” Shortly thereafter, he phoned Sklar, the executive director of the ACLU of Arkansas.

“Well, Rita,” Sklar recalled the governor saying. “I guess there’s something else we agree on beyond not executing juveniles.” Huckabee did not respond to requests for comment.

Days later, the ACLU filed a class-action lawsuit in federal district court in Little Rock on behalf of the disenfranchised students, with Sanders and four others named as plaintiffs. “It was an opportunity to bring a fairly standard student voting rights case about residency but in the reverse context where the victims, if you will, were conservative rather than liberal,” said Bryan Sells, then a staff attorney on the ACLU’s Voting Rights Project, who represented the students.

The ACLU’s involvement demonstrated its willingness to represent groups on both sides of the political spectrum. “It’s the kind of thing you wish people remembered when they start accusing the ACLU of being the legal arm of the Democratic Party,” Sklar said.

The ACLU contended that the constitutional guarantee of the right to vote overrode the Arkansas statute. While a state may impose reasonable residency restrictions, Sells argued, it cannot presume an entire class of voters — in this case students — are not residents without presenting a compelling reason. The lead plaintiff, Adam Copeland, grew up in foster care and had no home other than his on-campus housing.

Another plaintiff, J.D. Hays Jr., grew up a half-mile away from campus but had registered to vote using his college address. His father . . .

Continue reading.

Written by LeisureGuy

15 August 2018 at 8:37 am

The Grooming Company synthetic, Mystic Water Orange Vanilla, Maggard V3A, and 4711

This is the brush whose knot had to be reglued, and it’s doing fine. This Mystic Water shave stick did a good job and I like the fragrance: not so strongly orange. The V3A is a remarkably good head—I should add that to the “recommended razors” list. Despite the “A” (for “aggressive”), it is in fact a very comfortable razor with no tendency to nick, and also very efficient (and that’s the part to which “aggressive” doubtless refers). Three passes, a good splash of 4711, and next is my walk.

Written by LeisureGuy

15 August 2018 at 6:24 am

Posted in Shaving
