Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Books’ Category

Whys of seeing: Experimental psychology and art


Ellen Winner, professor of psychology at Boston College and senior research associate at Project Zero at the Harvard Graduate School of Education, has in Aeon an extract of her most recent book, How Art Works: A Psychological Exploration:

Many philosophical questions about the arts would benefit from some serious empirical research. One thinker who welcomed empirical findings was the art historian E H Gombrich (1909-2001), who was influenced by findings in experimental psychology showing that perception is a matter of inference rather than direct seeing. But all too often philosophers have relied on intuitions and hunches without seeking information about how people actually interact with works of art. If we want to understand the arts, it’s time to take experimental psychology seriously.

Today, experimental philosophers and philosophically inclined psychologists are designing experiments that can help to answer some of the big philosophical questions about the nature of art and how we experience it – questions that have puzzled people for centuries, such as: why do we prefer original works of art to forgeries? How do we decide what is good art? And does engaging with the arts make us better human beings?

Christ and the Disciples at Emmaus, believed to have been painted by Johannes Vermeer in the 17th century, hung in the Museum Boijmans Van Beuningen in Rotterdam for seven years; in 1937, it was admired by the Vermeer expert Abraham Bredius as ‘the masterpiece of Johannes Vermeer of Delft’. But in 1945, Han van Meegeren confessed that he had forged this painting (and many others), and should thus be deemed as great an artist as Vermeer. But this did not happen. The same work formerly revered was now derided.

There are two kinds of art forgeries: invented forgeries in the style of an established artist, and copy forgeries, which are reproductions of existing works. Most commonly, forgers such as van Meegeren produce invented forgeries. Copy forgeries are less common; these are more difficult to get away with since it is often known where the original resides. Moreover, because it is impossible to make a perfect copy by hand, one can always see (or hope to see) differences between the original and the copy, and use these differences to disparage the copy. The art critic Clive Bell in 1914 suggested that exact copies always lack life: the lines and forms in the original are caused by emotions in the artist’s mind that are not present in the copier’s mind. The philosopher Nelson Goodman in 1976 argued that, even if we can detect no physical difference between the original and the copy, just knowing that one painting is the original and the other is the copy tells us that there could be subtle differences that we cannot see now but that we might learn to see later. This knowledge shapes our aesthetic experience of what we believe to be a direct copy.

The puzzle posed by forgery is this: why does our perception and evaluation of an artwork change simply by learning it is a forgery? After all, the work itself has not changed. Philosophers have taken two broad positions on this question.

According to the formalist position, when the original and the forgery are visually indistinguishable, they are not aesthetically different. For example, Monroe Beardsley in 1959 argued that we should form our aesthetic judgments only by attending to the perceptual properties of the picture before us, and not by considering when or how the work was made or who it was made by. So why did people change their evaluation of the Vermeer painting once van Meegeren confessed to being the artist? According to Alfred Lessing, writing in 1965, this response can be chalked up to social pressures: ‘Considering a work of art aesthetically superior because it is genuine, or inferior because it is forged, has little or nothing to do with aesthetic judgment or criticism. It is rather a piece of snobbery.’ This view assumes that artworks have perceptual properties that are unaffected by our knowledge about the background of the work.

According to the historicist position, what we perceive in a work is influenced by what we know about it. Despite the original and the forgery being visually indistinguishable, they are aesthetically different precisely because of what the formalists deny is relevant – our beliefs about who made the work, when, and how. The German critic Walter Benjamin in the 1930s argued that our aesthetic response takes into account the object’s history, ‘its unique existence in a particular place’. He believed that a forgery has a different history and thus lacks the ‘aura’ of the original. The philosopher and critic Arthur Danto took a similar historicist position in 1964 when he asked us to consider why a Brillo box by Andy Warhol that is visually identical to a Brillo box in a supermarket is a work of art. To determine that the box in the museum is a work of art ‘requires something the eye cannot descry – an atmosphere of artistic theory, a knowledge of the history of art: an artworld’. Denis Dutton claimed in 2009 that we perceive a forgery to be aesthetically inferior to an original because we take into account the kind of achievement the work represents – the artist’s process – and a forgery represents a lesser kind of achievement than an original.

Psychologists have stepped into the fray to determine whether the label ‘forgery’ affects our response to a work of art – and, if so, why. The first question is easier to answer than the second. Studies show that just telling people that a work is a forgery (or even the less-charged term ‘copy’) causes them to rate that work lower on a host of aesthetic dimensions. Artworks labelled forgeries or copies are rated as less good, less beautiful, less awe-inspiring, less interesting, lower in monetary value, and even physically smaller than the same image shown to other respondents as an ‘original’. In addition, brain activation changes: while the visual areas of the brain didn’t change in response to whether Rembrandt paintings were labelled ‘authentic’ or ‘copy’, the label ‘authentic’ resulted in greater activation of the orbitofrontal cortex – an area associated with reward and monetary gain.

Clearly, people don’t behave how the formalists thought that they should. What is causing their appreciation to be diminished? One possibility is that our sense of forgery’s moral evil unconsciously influences our aesthetic response. Another is that our knowledge of forgery’s worthlessness on the art market has the same kind of unconscious effect. But if we could strip forgery of its connection with deception and lack of monetary value, would it still be devalued? And, if so, can we demonstrate that the historicist position is correct?

With my research team, I put this to the test by showing people two duplicate images of an unfamiliar art work side by side, telling them that the painting on the left was the first in a planned series of 10 identical works by a painter. Participants were then told one of three different stories about who made the work on the right: that it was by the artist, by the artist’s assistant, or by a forger. For those told it was the artist’s assistant, we specified that the assistant’s copy had the artist’s stamp on it, and that having a team of assistants was typical artistic practice (hence not fraudulent). The auction price of $53,000 was listed below all images (right and left) except for the forgery, which was listed at only $200.

We asked people to rate the copy relative to the original on six dimensions:

Which one is more creative?
Which one do you like more?
Which one is more original?
Which one is more beautiful?
Which one is the better work of art?
Which one is more likely to be influential?

Responses fell into two categories: broadly evaluative (what formalists called aesthetic) – with reference to beauty, goodness and liking; and historical-evaluative (what historicists called historical) – with reference to creativity, originality and influence. We reasoned that forgeries would always be the most devalued of the three kinds of copies because of their immorality and their lack of monetary worth. The artist’s copy, however, is like a forgery without these two marks against it. Thus, our key comparison was between responses to the artist’s versus the assistant’s copy, relative to the original.

We found that, on broadly evaluative dimensions, the artist’s and the assistant’s copies were rated identically – with no distinctions in beauty, liking or goodness. Thus, our participants behaved like formalists. Previous studies reporting lower beauty ratings for images labelled forgeries had presented works one at a time. But here, when the original and the copy were presented simultaneously, people were forced to concede that there was no beauty difference.

On historical-evaluative ratings, however, the story was different. People rated the assistant’s copy as less creative, original and influential than the artist’s copy – even though both works were copies, both signed by the artist, and both worth the same monetarily. People now behaved as historicists, consistent with Danto’s position that visually identical Brillo boxes are not artistically identical.

These findings tell us that, when moral and monetary considerations are ruled out, there is still something wrong with a forgery. It’s not quite what Dutton thought, because while an original certainly represents a different kind of achievement from a forgery, there is really no difference in achievement between an artist’s copy and an assistant’s copy. Both are copies, after all. So what is it that’s wrong then?

I submit that it’s the aura that Benjamin talked about, which is dependent most critically on who made the work. Benjamin’s idea of ‘aura’ is consistent with what psychologists call essentialism – the view that certain special objects (eg, my wedding ring, or my childhood teddy bear) gain their identity from their history, and have an underlying nature that cannot be directly observed, a view developed extensively by the psychologist Susan Gelman. This is why we reject perfect replicas of such objects: we want the original. We appear to treat works of art this way too – as if they contain the essence of the artist, or the artist’s mind. We prefer the copy by the artist to the copy by the assistant because only the former contains that essence. This leads to the conclusion that just knowing that we are looking at a painting by Vermeer (even if it is a copy of a Vermeer by Vermeer) makes us feel like we are communing with Vermeer. Do we really want to find out that we were actually communing with van Meegeren?

These findings predict that we will not respond well to what the future is bringing us: three-dimensional prints of paintings virtually indistinguishable from the originals, and works of art generated by computers. These works will not allow us to infer the mind of the human artist.

The American art critic Peter Schjeldahl put this well when he wrote in 2008:

The spectre of forgery chills the receptiveness – the will to believe – without which the experience of art cannot occur. Faith in authorship matters. We read the qualities of a work as the forthright decisions of a particular mind, wanting to let it commandeer our own minds, and we are disappointed when it doesn’t.

If we read into a work of art the artist’s decisions, as Schjeldahl writes, then we are inferring a mind behind the work. Can we do this for abstract art? And, if so, can this help us distinguish art by great abstract expressionists from superficially similar art by children and animals?

Tension between those who revere and those who deride abstract art can be seen even among the most highly regarded art historians. In Art and Illusion (2000), Gombrich focused on representational art as a great human achievement, and disparaged abstract art as a display of the artist’s personality rather than skill. Contrast this to the attitude of the late American art historian Kirk Varnedoe, who was chief curator of painting and sculpture at the Museum of Modern Art from 1988 to 2001. In Pictures of Nothing (2006), Varnedoe responds explicitly to Gombrich’s challenge, writing that abstract art is a signal human achievement created in a new language, and filled with symbolic meaning. The ‘mind-boggling range of drips, stains, blobs, blocks, bricks, and blank canvases’ seen in museums of modern art are not random spills, he wrote. Rather, like all works of art, they are ‘vessels of human intention’ and they ‘generate meaning ahead of naming’. They represent a series of deliberate choices by the artist, and involve invention and evoke meanings – for example, energy, space, depth, repetition, serenity, discord.

Chimps, monkeys and elephants have all been given paints, brushes and paper on which to make marks. And their paintings, like those of preschoolers, bear a superficial resemblance to abstract expressionist paintings. Who hasn’t heard someone deride abstract art as requiring no skill at all, with statements such as ‘My kid could have done that!’

We wanted to find out whether people see more than they think they do in abstract art – whether they can see the mind behind the work. We created pairs of images that looked eerily alike at first glance. Each pair consisted of a painting by a famous abstract expressionist whose works were found in at least one major art-history textbook (eg, Mark Rothko, Hans Hofmann, Sam Francis, Cy Twombly, Franz Kline and others) and a painting either by a child or a nonhuman animal (chimp, gorilla, monkey or elephant). The question we asked was whether people would prefer, and judge as better, works by artists over works by children and animals. And, if so, on what basis? . . .

Continue reading.

Written by LeisureGuy

17 January 2019 at 12:34 pm

Posted in Art, Books, Science

This is what happens when you take Ayn Rand seriously


Denise Cummins writes at PBS.org:

“Ayn Rand is my hero,” yet another student tells me during office hours. “Her writings freed me. They taught me to rely on no one but myself.”

As I look at the freshly scrubbed and very young face across my desk, I find myself wondering why Rand’s popularity among the young continues to grow. Thirty years after her death, her book sales still number in the hundreds of thousands annually — having tripled since the 2008 economic meltdown. Among her devotees are highly influential celebrities, such as Brad Pitt and Eva Mendes, and politicos, such as current Speaker of the House Paul Ryan and Republican presidential candidate Ted Cruz.

The core of Rand’s philosophy — which also constitutes the overarching theme of her novels — is that unfettered self-interest is good and altruism is destructive. This, she believed, is the ultimate expression of human nature, the guiding principle by which one ought to live one’s life. In “Capitalism: The Unknown Ideal,” Rand put it this way:

Collectivism is the tribal premise of primordial savages who, unable to conceive of individual rights, believed that the tribe is a supreme, omnipotent ruler, that it owns the lives of its members and may sacrifice them whenever it pleases.

By this logic, religious and political controls that hinder individuals from pursuing self-interest should be removed. (It is perhaps worth noting here that the initial sex scene between the protagonists of Rand’s book “The Fountainhead” is a rape in which “she fought like an animal.”)


The fly in the ointment of Rand’s philosophical “objectivism” is the plain fact that humans have a tendency to cooperate and to look out for each other, as noted by many anthropologists who study hunter-gatherers. These “prosocial tendencies” were problematic for Rand, because such behavior obviously militates against “natural” self-interest and therefore should not exist. She resolved this contradiction by claiming that humans are born as tabula rasa, a blank slate (as many of her time believed), and prosocial tendencies, particularly altruism, are “diseases” imposed on us by society, insidious lies that cause us to betray biological reality. For example, in her journal entry dated May 9, 1934, Rand mused:

For instance, when discussing the social instinct — does it matter whether it had existed in the early savages? Supposing men were born social (and even that is a question) — does it mean that they have to remain so? If man started as a social animal — isn’t all progress and civilization directed toward making him an individual? Isn’t that the only possible progress? If men are the highest of animals, isn’t man the next step?

The hero of her most popular novel, “Atlas Shrugged,” personifies this “highest of animals”: John Galt is a ruthless captain of industry who struggles against stifling government regulations that stand in the way of commerce and profit. In a revolt, he and other captains of industry each close down production of their factories, bringing the world economy to its knees. “You need us more than we need you” is their message.

To many of Rand’s readers, a philosophy of supreme self-reliance devoted to the pursuit of supreme self-interest appears to be an idealized version of core American ideals: freedom from tyranny, hard work and individualism. It promises a better world if people are simply allowed to pursue their own self-interest without regard to the impact of their actions on others. After all, others are simply pursuing their own self-interest as well.

Modern economic theory is based on exactly these principles. A rational agent is defined as an individual who is self-interested. A market is a collection of such rational agents, each of whom is also self-interested. Fairness does not enter into it. In a recent Planet Money episode, David Blanchflower, a Dartmouth professor of economics and former member of the Bank of England’s Monetary Policy Committee, laughed out loud when one of the hosts asked, “Is that fair?”

“Economics is not about fairness,” he said. “I’m not going there.”

Economists alternately find alarming and amusing a large body of results from experimental studies showing that people don’t behave according to the tenets of rational choice theory. We are far more cooperative and willing to trust than is predicted by the theory, and we retaliate vehemently when others behave selfishly. In fact, we are willing to pay a penalty for an opportunity to punish people who appear to be breaking implicit rules of fairness in economic transactions.

So what if people behaved according to Rand’s philosophy of “objectivism”? What if we indeed allowed ourselves to be blinded to all but our own self-interest?

An example from industry

In 2008, Sears CEO Eddie Lampert decided to restructure the company according to Rand’s principles.

Lampert broke the company into more than 30 individual units, each with its own management and each measured separately for profit and loss. The idea was to promote competition among the units, which Lampert assumed would lead to higher profits. Instead, this is what happened, as described by Mina Kimes, a reporter for Bloomberg Business:

An outspoken advocate of free-market economics and fan of the novelist Ayn Rand, he created the model because he expected the invisible hand of the market to drive better results. If the company’s leaders were told to act selfishly, he argued, they would run their divisions in a rational manner, boosting overall performance.

Instead, the divisions turned against each other — and Sears and Kmart, the overarching brands, suffered. Interviews with more than 40 former executives, many of whom sat at the highest levels of the company, paint a picture of a business that’s ravaged by infighting as its divisions battle over fewer resources.

A close-up of the debacle was described by Lynn Stuart Parramore in a Salon article from 2013:

It got crazy. Executives started undermining other units because they knew their bonuses were tied to individual unit performance. They began to focus solely on the economic performance of their unit at the expense of the overall Sears brand. One unit, Kenmore, started selling the products of other companies and placed them more prominently than Sears’ own products. Units competed for ad space in Sears’ circulars… Units were no longer incentivized to make sacrifices, like offering discounts, to get shoppers into the store.

Sears became a miserable place to work, rife with infighting and screaming matches. Employees, focused solely on making money in their own unit, ceased to have any loyalty to the company or stake in its survival.

We all know the end of the story: Sears share prices fell, and the company appears to be headed toward bankruptcy. The moral of the story, in Parramore’s words:

What Lampert failed to see is that humans actually have a natural inclination to work for the mutual benefit of an organization. They like to cooperate and collaborate, and they often work more productively when they have shared goals. Take all of that away and you create a company that will destroy itself.

An example from Honduras

In 2009, Honduras experienced a coup d’état when the Honduran Army ousted President Manuel Zelaya on orders from the Honduran Supreme Court. What followed was succinctly summarized by Honduran attorney Oscar Cruz:

The coup in 2009 unleashed the voracity of the groups with real power in this country. It gave them free rein to take over everything. They started to reform the Constitution and many laws — the ZEDE comes in this context — and they made the Constitution into a tool for them to get rich.

As part of this process, the Honduran government passed a law in 2013 that created autonomous free-trade zones that are governed by corporations instead of the countries in which they exist. So what was the outcome? Writer Edwin Lyngar described vacationing in Honduras in 2015, an experience that turned him from Ayn Rand supporter to Ayn Rand debunker. In his words:

The greatest examples of libertarianism in action are the hundreds of men, women and children standing alongside the roads all over Honduras. The government won’t fix the roads, so these desperate entrepreneurs fill in potholes with shovels of dirt or debris. They then stand next to the filled-in pothole soliciting tips from grateful motorists. That is the wet dream of libertarian private sector innovation.

He described the living conditions this way:

On the mainland, there are two kinds of neighborhoods, slums that seem to go on forever and middle-class neighborhoods where every house is its own citadel. In San Pedro Sula, most houses are surrounded by high stone walls topped with either concertina wire or electric fence at the top. As I strolled past these castle-like fortifications, all I could think about was how great this city would be during a zombie apocalypse.

Without collective effort, large infrastructure projects like road construction and repair languish. A resident “pointed out a place for a new airport that could be the biggest in Central America, if only it could get built, but there is no private sector upside.”

A trip to a local pizzeria was described this way:

We walked through the gated walls and past a man in casual slacks with a pistol belt slung haphazardly around his waist.  Welcome to an Ayn Rand libertarian paradise, where your extra-large pepperoni pizza must also have an armed guard.

This is the inevitable outcome of unbridled self-interest set loose in unregulated markets.

Yet devotees of Ayn Rand still argue that unregulated self-interest is the American way, that government interference stifles individualism and free trade. One wonders whether these same people would champion the idea of removing all umpires and referees from sporting events. What would mixed martial arts or football or rugby be like without those pesky referees constantly getting in the way of competition and self-interest?

Perhaps another way to look at this is to ask why our species of hominid is the only one still in existence on the planet, despite there having been many other hominid species during the course of our own evolution. . .

Continue reading.

Written by LeisureGuy

16 January 2019 at 9:43 am

How a Blue Tooth Led Scholars to a Medieval Manuscript Mystery


Jessica Leigh Hester writes in Atlas Obscura:

At first glance, there was nothing unusual-looking about the old tooth, or the skeleton of the woman it came from. Neither tooth nor bones showed signs of deformity, disease, or trauma. The tooth was pointy, yellowed, average—and that’s exactly why scientists wanted a closer look.

The plan had been to use the tooth—buried with its onetime owner in a monastery cemetery in Dalheim, Germany in the 11th or early 12th century—to better understand diet and health in the Middle Ages. Teeth, particularly gunk-encrusted ones, can reveal all sorts of habits and behaviors, because tartar, or hardened plaque, is “the only part of your body that fossilizes when you’re still alive,” says Christina Warinner, an archaeogeneticist at the Max Planck Institute for the Science of Human History in Germany. Bacteria, pollen, and little bits of food can all be trapped in this matrix, making teeth “a little time capsule of your life history,” she says. While archaeologists are often focused on pottery sherds or pieces of metal or stone, Warinner adds, “small artifacts, the kind that are too small to see, often preserve better than anything else.” As the team would discover, this microscopic debris can hold puzzles and clues about life and labor centuries ago.

Warinner and her collaborators first looked at the tooth, from the woman designated B78, under a microscope for a window into the daily life of the women who lived and worked in the small monastery. But the researchers soon realized they were looking at something more unusual. According to Warinner, her colleague Anita Radini, an archaeologist at the University of York, said, “I don’t know what’s going on, but it’s blue.” At first, Warinner thought this might be an exaggeration—maybe it was just a little grayish? But it turned out to be flecked with bits that were resplendently, unmistakably blue—the color of a cloudless sky. “Under the microscope,” Warinner says, “it was clear [B78] had been rather extraordinary.”

Radini, Warinner, and their team analyzed some of the several hundred blue particles suspended in B78’s hardened plaque, and determined them to be lazurite, the naturally occurring mineral that gives lapis lazuli its brilliant blue hue. They suspected that the crystals came from interacting with a rich ultramarine pigment, made by grinding lapis lazuli into a fine powder. The semiprecious stone had been traded into Europe from Afghanistan, perhaps through Alexandria, Venice, or other trading hubs. But how did it get into B78’s mouth, and stay there?

In a new paper in Science Advances, the team offers a number of theories, which could help rewrite what we know about the roles that women played in the creation of Middle Age manuscripts. The authors suggest that the pigment got onto B78’s teeth as she habitually touched paintbrush bristles to her mouth to taper them to a point, or when she prepared the blue pigment, a process that is known to create clouds of azure dust. In either case, they suggest, she was likely to have been intimately involved with illuminating manuscripts, either decorating them herself or prepping the materials that others used.

“I can’t think of another reason a sufficient quantity of lapis would be ingested, unless it was supposed to have apotropaic [magical or protective] qualities,” says Suzanne Karr Schmidt, curator of rare books and manuscripts at the Newberry Library in Chicago, who wasn’t involved in the research. The paper’s authors also consider and dismiss the possibilities that the pigment was consumed for medicinal purposes (since the practice wasn’t widespread in Germany at the time), or accidentally ingested during “devotional osculation,” the ritualistic kissing of an illuminated prayer book. (These kissing rituals didn’t become particularly popular until the 14th and 15th centuries, the authors note, and probably would have also resulted in the kisser picking up other pigments or materials beyond the dazzling blue.)

For years, scholars believed that women weren’t often actively engaged in the process of creating illuminated manuscripts, the creators of which frequently went uncredited. “There really aren’t many signed illuminations from any period, though I can’t think of any female examples,” Schmidt says. In that era, she says, women were more often associated with textiles or, occasionally, from the 1400s on, with embellishing manuscripts with hand-coloring or borders. Only 1 percent of books made prior to the 12th century can be attributed to women, the authors write, and historically it has been assumed that uncredited examples were made by men.

But Alison Beach, a historian at Ohio State University and a coauthor on the paper, has turned up a few examples of female illuminators staking a claim to their work, and compared these known examples against unsigned ones. “Start with what you know,” Beach says, “a name, a hand, and move away from there and try to make a match.” By comparing handwriting, she has been able to attribute some unsigned manuscripts to female illuminators. Overall, “women produced far more books than has been appreciated before,” Warinner says. The clues tend to cluster in Germany, Beach says, and a single 12th-century female scribe in Bavaria is known to have produced more than 40 books, including an illuminated gospel. Warinner and her collaborators argue that the stained tooth could begin to illuminate the hidden artistic contributions other women made during this period.

Further information about the women of Dalheim has vanished into the ether. It began as a parish church, and then became a female monastery, home to roughly 14 women at a time. . .

Continue reading.

Written by LeisureGuy

14 January 2019 at 9:58 am

Posted in Art, Books, Daily life, Science

Free Go book as PDF: “Go Studies: A History of Adventure”


I just learned of this site: ExploreBaduk.com, with Baduk being the Korean name for Go. And the first thing I saw was a free book on Go: Go Studies: A History of Adventure. Scroll down that page to get to the download buttons.

It’s really a terrific site, with a lot of very good content. Take a look. Remember your New Year’s resolution to learn Go. 🙂

Written by LeisureGuy

13 January 2019 at 1:43 pm

Posted in Games, Books, Go

On discovering one has type 2 diabetes


I just had a realization as I wrote a response to a person who was asking for diet advice because they had just discovered that they had diabetes. I wrote:

 When I found out that I was diabetic, I was crushed. It should not have been a surprise—I was overweight and (I now realize) making poor food choices—but I didn’t realize that it was so bad. And once I learned it was basically not going to change—once you have diabetes, you can’t put it back—I became fairly depressed for a while.

But then I decided to get serious about it, and I did a lot of reading and ended up with the approach I describe in that post, and things are going pretty well.

One big thing: if a food contains refined sugar, do not put it into your mouth. Period. See Quantity of Sugar in Food Supply Linked to Diabetes Rates.

If a food is made of refined flour, avoid it: no bagels, bread, pasta, and so on. Occasionally I might have a hamburger, but even then I might remove the top bun and eat it as an open-face sandwich.

I avoid potatoes in all forms and rice in all forms: both of those drive up my blood glucose quickly. (Bread does, too.)

At Thanksgiving dinner and Christmas dinner, I will have a bit of dessert, but basically for dessert I eat berries. (See Low-Carb Fruits and Berries – the Best and the Worst.)

The more you learn about food and its effects on your body, the more confident and secure you will feel. It just occurred to me that learning you have diabetes is stunning because you lose your locus of control, and that loss causes depression (cf. Learned Optimism by Seligman). By reading, learning, and rebuilding your approach to food, you regain your locus of control and feel that once again you are in charge of your body, a feeling I lost when I was told I had diabetes.

The new insight was why I was depressed and how I got out of it, both of which were about my locus of control.

Written by LeisureGuy

13 January 2019 at 9:04 am

If You Want to Get Better at Something, Ask Yourself These Two Questions

leave a comment »

Interesting article in the Harvard Business Review by Peter Bregman. From the article:

. . . Whatever it is, you can become better at it. But here’s the thing I know just as clearly as I know you can get better at anything: you will not get better if 1) you don’t want to and 2) you aren’t willing to feel the discomfort of doing things differently. . .

. . . Learning anything new is, by its nature, uncomfortable. You will need to act in ways that are unfamiliar. Take risks that are new. Try things that, in many cases, will be initially frustrating because they won’t work the first time. You are guaranteed to feel awkward. You will make mistakes. You may be embarrassed or even feel shame, especially if you are used to succeeding a lot—and all my clients are used to succeeding a lot.

If you remain committed through all of that, you’ll get better. . .

In this connection, let me highly recommend Mindset, by the Stanford psychologist Carol Dweck.

Written by LeisureGuy

11 January 2019 at 4:29 pm

Lest we forget: “Vice” vs. the Real Dick Cheney

leave a comment »

Nicholas Lemann writes in the New Yorker:

Adam McKay, the director of “Vice,” has an exuberant and fantastic filmmaking style that inoculates him against the kind of indignant fact-checking to which Hollywood depictions of history are often subjected. Who wants to be an old grump and point out that, for example, there is no evidence that Dick Cheney, the movie’s antihero, suggested to the President that they head out to the White House lawn for a round of circle jerk, or that Dick and Lynne Cheney spoke to each other in bed in mock-Shakespearean pentameter? But “Vice” isn’t asking to be judged purely as a work of fiction, either; its implicit claim is that it plays around with the facts about Cheney in order to get closer to the truth.

By that standard, there’s no problem about the regular flights into speculation and satire, but there is one major false note in “Vice.” That’s when a young Cheney rather plaintively asks his mentor, the congressman turned White House aide Donald Rumsfeld, “What do we believe in?” Rumsfeld bursts into uncontrollable laughter, turns away, and disappears into his office. Through the closed door we can still hear him cackling. Actually, it’s clear that Cheney, even that early, was a deeply committed and ideological conservative—one whose phlegmatic demeanor and eagerness to master the details of government masked who he really was for a very long time.

In the early nineteen-sixties, Cheney dropped out of Yale twice, but one professor there made a deep impression on him. That was H. Bradford Westerfield, a diplomatic historian who believed that it was possible that the United States would fall victim to a Communist takeover. “Ominously, the infectious defeatism drifts across the Atlantic and begins to insinuate itself into the mind of America,” he warned in his book “The Instruments of America’s Foreign Policy.” Another crucial experience for the Cheneys—both of whom were children of career federal civil servants—was their brief tour of duty in Madison, Wisconsin, at the height of the sixties, when they were enrolled in graduate school, at the University of Wisconsin.

Many years later, Lynne Cheney told me, “I distinctly remember going to class, and having to walk through people in whiteface, conducting guerrilla theatre, often swinging animal entrails over their heads, as part of a protest against Dow Chemical. And then the shocking thing was that you would enter the classroom and here would be all these nice young people who honestly wanted to learn to write an essay.” Dick Cheney, during an internship in Washington, D.C., took a delegation from Capitol Hill to a Students for a Democratic Society meeting in Madison, so that they could see the unvarnished face of student radicalism, and also to a faculty meeting, where he was struck by the professors’ lack of alarm over the left’s activities. Cheney and Rumsfeld’s first jobs in a Presidential Administration were at the Office of Economic Opportunity, during Richard Nixon’s first term—Rumsfeld was the director and Cheney was his deputy. This is presented in “Vice” as an anodyne bureaucratic assignment, but, because the O.E.O. had been created to carry out Lyndon Johnson’s War on Poverty, their jobs entailed dismantling the most sixties-infused agency of the federal government. From Cheney’s point of view, the work had the quality of removing the serpent from the breast of state.

The episode that best foreshadowed the Cheney we came to know in the years after the 9/11 attacks occurred at the end of his service as Secretary of Defense, under George H. W. Bush—another job that “Vice” understands in terms of power, not ideas. As the Soviet Union was collapsing, Cheney, with the help of aides such as Lewis (Scooter) Libby and Paul Wolfowitz, who later joined him in the George W. Bush Administration, commissioned a study with the bland title “Defense Planning Guidance.” It envisioned a post-Cold War world in which there would only ever be one superpower, the United States: “Our first objective is to prevent the re-emergence of a new rival,” the document said. It was skeptical of power exercised by the United Nations and other multinational alliances, as opposed to that exercised by the United States unilaterally. Cheney’s circle did not support the first President Bush’s decision to conclude the Gulf War without toppling Saddam Hussein and installing a new government in Iraq. The 9/11 attacks provided Cheney and his allies with an unexpected opportunity to enact their long-standing views.

“Vice” treats conservatism as a combination of resistance to the civil-rights movement, the Koch brothers’ eagerness to reduce taxes and regulations, and pure opportunism. Cheney’s conservatism, at heart, is none of these. It is what might be called threatism. Powerful, determined, immensely destructive forces—the Soviet Union, radical Islam, the domestic left—want to destroy American freedom and democracy. Complacent politicians, especially liberal ones, are incapable either of understanding this or of summoning the will to combat it. For the small cadre who do understand, it is imperative to use power unusually quietly, expertly, and aggressively. . .

Continue reading.

Written by LeisureGuy

5 January 2019 at 2:34 pm
