Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Books’ Category

Interesting critical review of “How Not to Die”


Denise Minger, author of Death by Food Pyramid: How Shoddy Science, Sketchy Politics and Shady Special Interests Have Ruined Our Health, has an interesting critical review of How Not to Die. I think it’s worth reading to avoid going overboard. For example, from her review:

Throughout How Not to Die, Greger distills a vast body of literature into a simple, black-and-white narrative — a feat only possible through cherry picking, one of the nutrition world’s most gainfully employed fallacies.

Cherry picking is the act of selectively choosing or suppressing evidence to fit a predefined framework. In Greger’s case, that means presenting research when it supports plant-based eating and ignoring it (or creatively spinning it) when it doesn’t.

In many cases, spotting Greger’s picked cherries is as simple as checking the book’s claims against their cited references. These foibles are small but frequent.

For example, as evidence that high-oxalate vegetables aren’t a problem for kidney stones (a bold claim, given the wide acceptance of foods like rhubarb and beets as risky for stone formers), Greger cites a paper that doesn’t actually look at the effects of high-oxalate vegetables — only total vegetable intake (pages 170-171).

Along with stating “there is some concern that greater intake of some vegetables … might increase the risk of stone formation as they are known to be rich in oxalate,” the researchers suggest the inclusion of high-oxalate veggies in participants’ diets could have diluted the positive results they found for vegetables as a whole: “It is also possible that some of the [subjects’] intake is in the form of high-oxalate containing foods which may offset some of the protective association demonstrated in this study” (1).

In other words, Greger selected a study that not only couldn’t support his claim, but where the researchers suggested the opposite.

Similarly, citing the EPIC-Oxford study as evidence that animal protein increases kidney stone risk, he states: “subjects who didn’t eat meat at all had a significantly lower risk of being hospitalized for kidney stones, and for those who did eat meat, the more they ate, the higher their associated risks” (page 170).

The study actually found that, while heavy meat eaters did have the highest risk of kidney stones, people who ate small amounts of meat fared better than those who ate none at all — a hazard ratio of 0.52 for low meat eaters versus 0.69 for vegetarians (2).

In other cases, Greger seems to redefine what “plant-based” means in order to collect more points for his dietary home team.

For instance, he credits a reversal of diabetic vision loss to two years of plant-based eating — but the program he cites is Walter Kempner’s Rice Diet, whose foundation of white rice, refined sugar, and fruit juice hardly supports the healing power of whole plants (page 119) (3).

Later, he again references the Rice Diet as evidence that “plant-based diets have been successful in treating chronic kidney failure” — with no caveat that the highly processed, vegetable-free diet in question is a far cry from the one Greger recommends (page 168) (4).

In other instances, Greger cites anomalous studies whose only virtue, it seems, is that they vindicate his thesis.

These cherry-picks are hard to spot even for the most dutiful reference checker, since the disconnect isn’t between Greger’s summary and the studies, but between the studies and reality.

As one example: in discussing cardiovascular disease, Greger challenges the idea that omega-3 fats from fish offer disease protection, citing a 2012 meta-analysis of fish oil trials and studies advising people to load up on the ocean’s fattiest bounty (page 20) (5).

Greger writes that the researchers “found no protective benefit for overall mortality, heart disease mortality, sudden cardiac death, heart attack, or stroke” — effectively showing that fish oil is, perhaps, just snake oil (page 20).

The catch? This meta-analysis is one of the most heavily criticized publications in the omega-3 sea — and other researchers wasted no time calling out its errors.

In an editorial letter, one critic pointed out that among the studies included in the meta-analysis, the average omega-3 intake was 1.5 g per day — only half the amount recommended to reduce the risk of heart disease (6). Because so many studies used a clinically irrelevant dosage, the analysis might have missed the cardioprotective effects seen at higher omega-3 intakes.

Another respondent wrote that the results “should be interpreted with caution” due to the study’s numerous shortcomings — including the use of an unnecessarily stringent cutoff for statistical significance (P < 0.0063, instead of the more common P < 0.05) (7). At more widely used P-values, the study might have deemed some of its findings significant — including a 9% reduction in cardiac death, a 13% reduction in sudden death, and an 11% reduction in heart attack associated with fish oil from food or supplements.

And yet another critic noted that any benefit of omega-3 supplementation would be hard to demonstrate among people using statin drugs, which have pleiotropic effects that resemble — and possibly mask — the mechanisms involved with omega-3s (7). This is important, because in several of the no-benefit omega-3 trials, up to 85% of the patients were on statins (8).

In the spirit of accuracy, Greger could have cited a more recent omega-3 review that dodges the previous study’s errors and — quite intelligently — explains the inconsistent outcomes among omega-3 trials (8).

In fact, the authors of this paper encourage the consumption of two to three servings of oily fish per week — recommending that “physicians continue to recognize the benefits of omega-3 PUFAs to reduce cardiovascular risk in their high risk patients” (8).

Maybe that’s why Greger didn’t mention it!

Beyond misrepresenting individual studies (or accurately citing questionable ones), How Not to Die features pages-long slogs through the fallacious cherry orchard. In some cases, entire discussions of a topic are built on incomplete evidence.

Some of the most egregious examples include: . . .

Read the whole thing. And the review is not all criticism. For example, later in the review:

As we’ve seen, Greger’s references don’t always support his claims, and his claims don’t always match reality. But when they do, it’d be smart to listen up.

Throughout How Not to Die, Greger explores many oft-ignored and myth-shrouded issues in the nutrition world — and in most cases, fairly represents the science he draws from.

Amid mounting fears about sugar, Greger helps vindicate fruit — discussing the potential for low-dose fructose to benefit blood sugar, the lack of fruit-induced harm for diabetics, and even a study in which 17 volunteers ate twenty servings of fruit per day for several months, with “no overall adverse effects for body weight, blood pressure, insulin, cholesterol, and triglyceride levels” (pages 291-292) (60, 61).

He rescues phytates — antioxidant compounds that can bind to certain minerals — from the vast mythology about their harm, discussing the many ways they can protect against cancer (pages 66-67).

He casts doubt on fears surrounding legumes — sometimes maligned for their carbohydrate and antinutrient content — by exploring their clinical effects on weight maintenance, insulin, blood sugar control and cholesterol (page 109).

And, most importantly to omnivores, his penchant for cherry picking occasionally pauses long enough to make room for a legitimate concern about meat. Two examples:

1. Infections From Meat

Beyond the dead, ever-beaten horses of saturated fat and dietary cholesterol, meat carries a legitimate risk that How Not to Die drags into the spotlight: human-transmissible viruses.

As Greger explains, many of humanity’s most loathed infections originated from animals — ranging from goat-given tuberculosis to measles from cattle (page 79). But a growing body of evidence suggests humans can acquire diseases not just from living in close proximity to farm animals, but also from eating them.

For many years, urinary tract infections (UTIs) were believed to originate from our own renegade E. coli strains finding their way from the gut to the urethra. Now, some researchers suspect UTIs are a form of zoonosis — that is, an animal-to-human disease.

Greger points to a recently discovered clonal link between E. coli in chicken and E. coli in human UTIs, suggesting that at least one source of infection is chicken meat that we handle or eat — not our resident bacteria (page 94) (62).

Worse yet, chicken-derived E. coli appears resistant to most antibiotics, making its infections particularly hard to treat (page 95) (63).

Pork, too, can serve as a source of multiple human illnesses. Yersinia poisoning — linked almost universally to contaminated pork — brings more than a brief fling with digestive distress: Greger notes that within one year of infection, Yersinia victims have a 47-times higher risk of developing autoimmune arthritis, and may also be more likely to develop Graves’ disease (page 96) (64, 65).

Recently, pork has come under fire for another health hazard as well: hepatitis E. Now considered potentially zoonotic, hepatitis E infection is routinely traced to pig liver and other pork products, with about one in ten pig livers from American grocery stores testing positive for the virus (page 148) (66, 67).

Although most viruses (hepatitis E included) are deactivated by heat, Greger warns that hepatitis E can survive the temperatures reached in rare-cooked meat — making pink pork a no-go (page 148) (68).

And when the virus survives, it means business. Areas with high pork consumption have consistently elevated rates of liver disease, and while that can’t prove cause and effect, Greger notes that the relationship between pork consumption and death from liver disease “correlates as tightly as per capita alcohol consumption and liver fatalities” (page 148) (69). In a statistical sense, each devoured pork chop raises the risk of dying from liver cancer as much as drinking two cans of beer (page 148) (70).

All that said, animal-derived infections are far from a strike against omnivory, per se. Plant foods offer plenty of transmissible illnesses of their own (71). And the animals at highest risk of transmitting pathogens are — in nearly every case — raised in overcrowded, unhygienic, poorly ventilated commercial operations that serve as cesspools for pathogens (72).

Although How Not to Die remains tight-lipped on any benefits of humanely raised livestock, this is one area where quality can be a lifesaver.

2. Cooked Meat and Carcinogens

Meat and heat make a flavorful duo, but as Greger points out, high-temperature cooking poses some unique risks for animal foods.

In particular, he cites what the Harvard Health Letter called a meat-preparation paradox: “Cooking meat thoroughly reduces the risk of contracting foodborne infections, but cooking meat too thoroughly may increase the risk of foodborne carcinogens” (page 184).

A number of these foodborne carcinogens exist, but the ones exclusive to animal foods are called heterocyclic amines (HCAs).

HCAs form when muscle meat — whether from creatures of the land, the sea, or the sky — is exposed to high temperatures, roughly 125-300 degrees C or 257-572 degrees F. Because a critical component of HCA development, creatine, is found only in muscle tissue, even the most woefully overcooked veggies won’t form HCAs (73).

As Greger explains, HCAs were quite whimsically discovered in 1939 by a researcher who gave mice breast cancer by “painting their heads with extracts of roasted horse muscle” (page 184) (74).

In the decades since, HCAs have proven to be a legitimate hazard for omnivores who like their meat high up on the “done” spectrum.

Greger provides a solid list of studies — decently conducted, equitably described — showing a link between high-temperature-cooked meat and breast cancer, colon cancer, esophageal cancer, lung cancer, pancreatic cancer, prostate cancer, and stomach cancer (page 184) (75). In fact, cooking method appears to be a major mediator for the association between meat and various cancers that pop up in epidemiological studies — with grilled, fried, and well-done meat boosting risk significantly (76).

And the link is far from just observational. PhIP, a well-studied type of HCA, has been shown to spur breast cancer growth almost as potently as estrogen — while also acting as a “complete” carcinogen that can initiate, promote, and spread cancer within the body (page 185) (77).

The solution for meat eaters? A cooking method revamp. Greger explains that roasting, pan frying, grilling, and baking are all common HCA makers, and the longer a food hangs out in the heat, the more HCAs emerge (page 185). Low-temperature cooking, on the other hand, appears dramatically safer.

In what might be the closest thing to an animal food endorsement he ever offers, Greger writes, “Eating boiled meat is probably the safest” (page 184). . . .

Written by LeisureGuy

20 May 2019 at 12:13 pm

Finding pleasure in the discomfort of learning new skills


I often recommend Mindset, by Carol Dweck. It’s an entertaining read, and it is informative, describing well the phenomenon of learning resistance. For some people, learning something new is the opposite of a pleasure: new ideas may contradict old notions with which they’ve become comfortable—so comfortable, in fact, that they see them as part of who they are, their identity. The new idea can seem like a direct personal attack.

And that’s just new ideas: new skills are even worse, since the initial stages of learning a skill make us feel awkward and embarrassed by our struggles. This seems true even when we are young: those who have raised young children will have observed how out of temper—frustrated, irritable, and angry—a very young child becomes just as s/he embarks upon learning to walk. The old way is no good, the new way is too hard, and tears and tantrums are frequent during the transition from crawling to toddling.

Of course, the very young have little choice in the matter: they must learn to sit up, to crawl, to walk, to talk, to feed themselves, to go to potty, to dress themselves. Each step is for a while a separate struggle, but you’ve probably noticed that those same skills, now mastered, no longer arouse strong emotions.

And indeed, adults—who in general have a choice in what they do—spend most of their time in practicing skills long since mastered. (One exception I’ve noticed in business is that lower-ranking adults often do not have a choice in learning a skill: a clerk is simply required to master the intricacies of the new copier or the new phone system. I once observed a company president who wanted to transfer a call on the new phone system somewhat piteously call out to his secretary to please come in and do it for him.)

The result of spending virtually all of their time exercising skills already acquired is that adults who are beginners in a new skill are often terrible students: they know clearly what they want to do, and they are acutely aware of what they see as failures (rather than as practice trials) as they try to learn to play the piano, or to speak a foreign language, or to cook a meal (for those adults who are just learning to cook), and so on. That awkwardness and embarrassment and uncertainty seem almost toxic to them if they have become unaccustomed to the early stages of learning. They view their experience in those early stages as a struggle rather than an exploration. New ventures are traditionally viewed as hazardous, and novelty in one’s own routines is often initially distasteful. (Those who have enjoyed Patrick O’Brian’s series of novels about Jack Aubrey and Stephen Maturin will be familiar with the benediction upon parting, “Que no haya novedad”—May no new thing arise.)

The key, as is so often the case, is an adjustment of attitude, described well in Dweck’s book: to accept eagerly that awkwardness as the harbinger of a new skill, and initially to focus your attention on your progress (which in the first stages of learning is remarkable) rather than on your results (which in the first stages are best viewed only as a measure of progress).

If you have continued to learn new things, these difficulties are not such an issue, since (a) they are familiar and (b) you have, through practice, learned how to learn. But those who have successfully avoided learning new things will have lost the skill (and the familiarity) of learning, so that for them the feelings aroused by the initial stages of learning become almost detestable because they are so unfamiliar and uncomfortable: they are unable to detect the implicit promise. They are acutely aware of the irritating grain of sand and don’t realize that the final result is a pearl of great value: a skill whose exercise has become a source of great enjoyment (cf. traditional shaving and the Guide displayed at the right).

I’ve just embarked upon learning how to follow a plant-based diet, and I am finding the usual (and now familiar) awkwardness. I continue to be careful about my intake of net carbs and to avoid simple carbs, so that much is familiar, but I’m having to learn a new pattern. When my diet was omnivorous, I had a (mostly unconscious) repertoire of meal patterns: I could throw together a decent meal with little thought. I knew the drill.

Those patterns naturally involved using animal protein (meat, seafood, eggs, and/or cheese), which often functioned as the pivot point of the meal, and without those, my old patterns fall apart and I find I’m somewhat at a loss. It’s like the old knock-knock joke that begins, “I have a new knock-knock joke. You start.” When the other person automatically says, “Knock, knock” and you respond “Who’s there?”, there’s the baffled full stop—who is there? The vacuum in the meal pattern does feel awkward. But already I’m finding/creating new patterns, and meals are becoming easier to create. The transition for me is well underway.

It takes a little time to sort out, but I started this less than a week ago by reading How Not to Die: Discover the Foods Scientifically Proven to Prevent and Reverse Disease. I’ve had six days of preparing plant-based meals for myself, and already I’m beginning to find my footing—getting my sea legs, as it were (I do love O’Brian’s novels)—as I set off in a new direction. For me this process is pleasurable: I love the experience of things starting to come together and make sense, and in fact make a new sort of sense. I’m almost bubbling with happiness as I dream up new combinations to try and as I taste new dishes. A couple of days ago I wrote about my new standard breakfast, but I now realize I can improve it even more by adding a pinch of herbs and spices.

I’m excited. It’s not every day that one has the chance to go in a new direction, and the fact that it’s healthful makes it all the better. 🙂

Written by LeisureGuy

20 May 2019 at 9:58 am

Interesting book: “How Not to Die,” by Michael Greger M.D.


How Not to Die: Discover the Foods Scientifically Proven to Prevent and Reverse Disease, by Michael Greger M.D., is essentially a very readable collection of research findings on nutrition, the gist of which is that a plant-based diet is much more healthful than a diet that includes animal-derived foods (meat, fish, dairy, eggs). Greger also has a website, which contains a large library of brief (typically 5-minute) videos on a wide range of nutrition facts. Here’s a list of topics.

I’m convinced enough to switch my diet while still maintaining some of my own restrictions:

a. I don’t eat any foods that contain refined sugar, and I don’t use sweeteners (because I deliberately developed a taste for savory in preference to sweet), and I avoid simple carbs that are quickly digested (because I have type 2 diabetes).

b. I favor foods low in net carbs (total carbs minus dietary fiber). Thus I favor fruit low in net carbs (berries, for example). For foods high in carbs (e.g., grains), I buy them in a form, and prepare them in a way, that minimizes the impact of the carbs. For example, I eat whole-kernel grains (oat groats, hulled barley, wheat berries), and I chill those after I cook them, which makes the starch resistant so that it is not digested so quickly. (When I eat them, I take a 1/2-cup serving and reheat it.) I don’t eat corn or rice or white potatoes, which are high in starch, though I do eat sweet potatoes occasionally. I do eat beans, which are high in fiber, but those too I refrigerate after cooking before I eat them, and if I eat them on a salad I don’t reheat them.
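For those who like to see the arithmetic spelled out, the net-carb calculation above is just subtraction, done per serving. Here is a minimal sketch; the gram values are illustrative round numbers I chose for the example, not figures from any nutrition label or database:

```python
# Net carbs = total carbohydrates minus dietary fiber, in grams per serving.
# The values below are illustrative, not authoritative nutrition data.
foods = {
    "oat groats (1/2 cup cooked)": {"total_carbs": 29.0, "fiber": 4.0},
    "blackberries (1 cup)": {"total_carbs": 14.0, "fiber": 8.0},
    "white rice (1/2 cup cooked)": {"total_carbs": 22.0, "fiber": 0.0},
}

def net_carbs(total_carbs: float, fiber: float) -> float:
    """Return net carbs in grams: total carbohydrates minus dietary fiber."""
    return total_carbs - fiber

# List foods from lowest to highest net carbs.
for name, n in sorted(foods.items(), key=lambda kv: net_carbs(**kv[1])):
    print(f"{name}: {net_carbs(**n):.1f} g net carbs")
```

With these example numbers, the berries come out lowest even though their total carbs are similar to the rice, which is why high-fiber foods fare so well under this rule.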

c. I use WW Freestyle online (no meetings) to assist in portion control, tracking points. A great many foods (200 or more) are zero points, so this is not burdensome, particularly since I quickly learn the point value of foods I frequently eat (cf. how supermarket checkout clerks learn the product codes for fruits and vegetables).

I find the effort interesting and actually enjoy figuring out meals that are satisfying, healthful, and meet my criteria. It’s early days yet, so I am still figuring out a working menu, but it’s no more problematic than learning to play chess.

To facilitate meal preparation, I prep and cook foods ahead of time and keep them in the fridge. Right now I have greens (kale and red chard), Lima beans, hulled barley, and oat groats.

Because I’m buying more from the bulk bins, I stopped by the hardware store and bought a dozen canning jars ($9). We’re a two-person family, so I got pint/500ml jars, but a family of four would find quart/1L jars more suitable. I like that they are clear glass (and not plastic). I strongly recommend also getting a canning funnel, a wide-mouth funnel that fits the jars perfectly. It makes filling them easier and less spill-prone.

Based on my reading, I have jars for quinoa, navy beans (highest in fiber), Lima beans (good source of iron), oat groats, wheat berries, hulled barley, pecans, walnuts, peanuts, pepitas, flaxseed, and chia seed, and I’m sure I’ll be finding other foods suitable for a jar.

UPDATE: A Quoran pointed out to me a very interesting critical review of Greger’s book and approach. See this post. In his enthusiasm for (and fervid support of) a non-animal diet (a term that seems better than “plant-based,” I think), Dr. Greger misstates the conclusions of some studies and flatly ignores others. I’m now also reading Minger’s book, and I think I’ll be following a “plant-primary” diet that will not avoid all animal-derived foods. I’m still working out the details, and I do think that giving my diet a higher proportion of greens, nuts and seeds, vegetables, fruit, and the like is good, but I don’t think I will go whole-hog (as it were).

Written by LeisureGuy

18 May 2019 at 11:18 am

Was the writer of Shakespeare’s plays a woman?


Elizabeth Winkler writes in the Atlantic:

On a spring night in 2018, I stood on a Manhattan sidewalk with friends, reading Shakespeare aloud. We were in line to see an adaptation of Macbeth and had decided to pass the time refreshing our memories of the play’s best lines. I pulled up Lady Macbeth’s soliloquy on my iPhone. “Come, you spirits / That tend on mortal thoughts, unsex me here,” I read, thrilled once again by the incantatory power of the verse. I remembered where I was when I first heard those lines: in my 10th-grade English class, startled out of my adolescent stupor by this woman rebelling magnificently and malevolently against her submissive status. “Make thick my blood, / Stop up th’ access and passage to remorse.” Six months into the #MeToo movement, her fury and frustration felt newly resonant.

Pulled back into plays I’d studied in college and graduate school, I found myself mesmerized by Lady Macbeth and her sisters in the Shakespeare canon. Beatrice, in Much Ado About Nothing, raging at the limitations of her sex (“O God, that I were a man! I would eat his heart in the marketplace”). Rosalind, in As You Like It, affecting the swagger of masculine confidence to escape those limitations (“We’ll have a swashing and a martial outside, / As many other mannish cowards have / That do outface it with their semblances”). Isabella, in Measure for Measure, fearing no one will believe her word against Angelo’s, rapist though he is (“To whom should I complain? Did I tell this, / Who would believe me?”). Kate, in The Taming of the Shrew, refusing to be silenced by her husband (“My tongue will tell the anger of my heart, / Or else my heart concealing it will break”). Emilia, in one of her last speeches in Othello before Iago kills her, arguing for women’s equality (“Let husbands know / Their wives have sense like them”).

I was reminded of all the remarkable female friendships, too: Beatrice and Hero’s allegiance; Emilia’s devotion to her mistress, Desdemona; Paulina’s brave loyalty to Hermione in The Winter’s Tale; and plenty more. (“Let’s consult together against this greasy knight,” resolve the merry wives of Windsor, revenging themselves on Falstaff.) These intimate female alliances are fresh inventions—they don’t exist in the literary sources from which many of the plays are drawn. And when the plays lean on historical sources (Plutarch, for instance), they feminize them, portraying legendary male figures through the eyes of mothers, wives, and lovers. “Why was Shakespeare able to see the woman’s position, write entirely as if he were a woman, in a way that none of the other playwrights of the age were able to?” In her book about the plays’ female characters, Tina Packer, the founding artistic director of Shakespeare & Company, asked the question very much on my mind.

Doubts about whether William Shakespeare (who was born in Stratford-upon-Avon in 1564 and died in 1616) really wrote the works attributed to him are almost as old as the writing itself. Alternative contenders—Francis Bacon; Christopher Marlowe; and Edward de Vere, the 17th earl of Oxford, prominent among them—continue to have champions, whose fervor can sometimes border on fanaticism. In response, orthodox Shakespeare scholars have settled into dogmatism of their own. Even to dabble in authorship questions is considered a sign of bad faith, a blinkered failure to countenance genius in a glover’s son. The time had come, I felt, to tug at the blinkers of both camps and reconsider the authorship debate: Had anyone ever proposed that the creator of those extraordinary women might be a woman? Each of the male possibilities requires an elaborate theory to explain his use of another’s name. None of the candidates has succeeded in dethroning the man from Stratford. Yet a simple reason would explain a playwright’s need for a pseudonym in Elizabethan England: being female.

Long before Tina Packer marveled at the bard’s uncanny insight, others were no less awed by the empathy that pervades the work. “One would think that he had been Metamorphosed from a Man to a Woman,” wrote Margaret Cavendish, the 17th-century philosopher and playwright. The critic John Ruskin said, “Shakespeare has no heroes—he has only heroines.” A striking number of those heroines refuse to obey rules. At least 10 defy their fathers, bucking betrothals they don’t like to find their own paths to love. Eight disguise themselves as men, outwitting patriarchal controls—more gender-swapping than can be found in the work of any previous English playwright. Six lead armies.

The prevailing view, however, has been that no women in Renaissance England wrote for the theater, because that was against the rules. Religious verse and translation were deemed suitable female literary pursuits; “closet dramas,” meant only for private reading, were acceptable. The stage was off-limits. Yet scholars have lately established that women were involved in the business of acting companies as patrons, shareholders, suppliers of costumes, and gatherers of entrance fees. What’s more, 80 percent of the plays printed in the 1580s were written anonymously, and that number didn’t fall below 50 percent until the early 1600s. At least one eminent Shakespeare scholar, Phyllis Rackin, of the University of Pennsylvania, challenges the blanket assumption that the commercial drama pouring forth in the period bore no trace of a female hand. So did Virginia Woolf, even as she sighed over the obstacles that would have confronted a female Shakespeare: “Undoubtedly, I thought, looking at the shelf where there are no plays by women, her work would have gone unsigned.”

A tantalizing nudge lies buried in the writings of Gabriel Harvey, a well-known Elizabethan literary critic. In 1593, he referred cryptically to an “excellent Gentlewoman” who had written three sonnets and a comedy. “I dare not Particularise her Description,” he wrote, even as he heaped praise on her.

All her conceits are illuminate with the light of Reason; all her speeches beautified with the grace of Affability … In her mind there appeareth a certain heavenly Logic; in her tongue & pen a divine Rhetoric … I dare undertake with warrant, whatsoever she writeth must needs remain an immortal work, and will leave, in the activest world, an eternal memory of the silliest vermin that she should vouchsafe to grace with her beautiful and allective style, as ingenious as elegant.

Who was this woman writing “immortal work” in the same year that Shakespeare’s name first appeared in print, on the poem “Venus and Adonis,” a scandalous parody of masculine seduction tales (in which the woman forces herself on the man)? Harvey’s tribute is extraordinary, yet orthodox Shakespeareans and anti-Stratfordians alike have almost entirely ignored it.

Until recently, that is, when a few bold outliers began to advance the case that Shakespeare might well have been a woman. One candidate is . . .

Continue reading.

Written by LeisureGuy

16 May 2019 at 11:38 am

Posted in Art, Books

Is evolutionary science due for a major overhaul?


Kevin Laland, professor of behavioural and evolutionary biology at the University of St Andrews in Scotland, and project leader of the extended evolutionary synthesis research programme, whose latest book is Darwin’s Unfinished Symphony: How Culture Made the Human Mind (2017), writes at Aeon:

When researchers at Emory University in Atlanta trained mice to fear the smell of almonds (by pairing it with electric shocks), they found, to their consternation, that both the children and grandchildren of these mice were spontaneously afraid of the same smell. That is not supposed to happen. Generations of schoolchildren have been taught that the inheritance of acquired characteristics is impossible. A mouse should not be born with something its parents have learned during their lifetimes, any more than a mouse that loses its tail in an accident should give birth to tailless mice.

If you are not a biologist, you’d be forgiven for being confused about the state of evolutionary science. Modern evolutionary biology dates back to a synthesis that emerged around the 1940s-60s, which married Charles Darwin’s mechanism of natural selection with Gregor Mendel’s discoveries of how genes are inherited. The traditional, and still dominant, view is that adaptations – from the human brain to the peacock’s tail – are fully and satisfactorily explained by natural selection (and subsequent inheritance). Yet as novel ideas flood in from genomics, epigenetics and developmental biology, most evolutionists agree that their field is in flux. Much of the data implies that evolution is more complex than we once assumed.

Some evolutionary biologists, myself included, are calling for a broader characterisation of evolutionary theory, known as the extended evolutionary synthesis (EES). A central issue is whether what happens to organisms during their lifetime – their development – can play important and previously unanticipated roles in evolution. The orthodox view has been that developmental processes are largely irrelevant to evolution, but the EES views them as pivotal. Protagonists with authoritative credentials square up on both sides of this debate, with big-shot professors at Ivy League universities and members of national academies going head-to-head over the mechanisms of evolution. Some people are even starting to wonder if a revolution is on the cards.

Continue reading.

Written by LeisureGuy

11 May 2019 at 9:28 am

Posted in Books, Evolution, Science

The price of plenty: how beef changed America

with one comment

Joshua Specht writes in the Guardian:

The meatpacking mogul Jonathan Ogden Armour could not abide socialist agitators. It was 1906, and Upton Sinclair had just published The Jungle, an explosive novel revealing the grim underside of the American meatpacking industry. Sinclair’s book told the tale of an immigrant family’s toil in Chicago’s slaughterhouses, tracing the family’s physical, financial and emotional collapse. The Jungle was not Armour’s only concern. The year before, the journalist Charles Edward Russell’s book The Greatest Trust in the World had detailed the greed and exploitation of a packing industry that came to the American dining table “three times a day … and extorts its tribute”.

In response to these attacks, Armour, head of the enormous Chicago-based meatpacking firm Armour & Co, took to the Saturday Evening Post to defend himself and his industry. Where critics saw filth, corruption and exploitation, Armour saw cleanliness, fairness and efficiency. If it were not for “the professional agitators of the country”, he claimed, the nation would be free to enjoy an abundance of delicious and affordable meat.

Armour and his critics could agree on this much: they lived in a world unimaginable 50 years before. In 1860, most cattle lived, died and were consumed within a few hundred miles’ radius. By 1906, an animal could be born in Texas, slaughtered in Chicago and eaten in New York. Americans rich and poor could expect to eat beef for dinner. The key aspects of modern beef production – highly centralised, meatpacker-dominated and low-cost – were all pioneered during that period.

For Armour, cheap beef and a thriving centralised meatpacking industry were the consequence of emerging technologies such as the railroad and refrigeration coupled with the business acumen of a set of honest and hard-working men like his father, Philip Danforth Armour. According to critics, however, a capitalist cabal was exploiting technological change and government corruption to bankrupt traditional butchers, sell diseased meat and impoverish the worker.

Ultimately, both views were correct. The national market for fresh beef was the culmination of a technological revolution, but it was also the result of collusion and predatory pricing. The industrial slaughterhouse was a triumph of human ingenuity as well as a site of brutal labour exploitation. Industrial beef production, with all its troubling costs and undeniable benefits, reflected seemingly contradictory realities.

Beef production would also help drive far-reaching changes in US agriculture. Fresh-fruit distribution began with the rise of the meatpackers’ refrigerator cars, which they rented to fruit and vegetable growers. Production of wheat, perhaps the US’s greatest food crop, bore the meatpackers’ mark. In order to manage animal feed costs, Armour & Co and Swift & Co invested heavily in wheat futures and controlled some of the country’s largest grain elevators. In the early 20th century, an Armour & Co promotional map announced that “the greatness of the United States is founded on agriculture”, and depicted the agricultural products of each US state, many of which moved through Armour facilities.

Beef was a paradigmatic industry for the rise of modern industrial agriculture, or agribusiness. As much as a story of science or technology, modern agriculture is a compromise between the unpredictability of nature and the rationality of capital. This was a lurching, violent process that saw meatpackers displace the risks of blizzards, drought, disease and overproduction on to cattle ranchers. Today’s agricultural system works similarly. In poultry, processors like Perdue and Tyson use an elaborate system of contracts and required equipment and feed purchases to maximise their own profits while displacing risk on to contract farmers. This is true with crop production as well. As with 19th-century meatpacking, relatively small actors conduct the actual growing and production, while companies like Monsanto and Cargill control agricultural inputs and market access.

The transformations that remade beef production between the end of the American civil war in 1865 and the passage of the Federal Meat Inspection Act in 1906 stretched from the Great Plains to the kitchen table. Before the civil war, cattle raising was largely regional, and in most cases, the people who managed cattle out west were the same people who owned them. Then, in the 1870s and 80s, improved transport, bloody victories over the Plains Indians, and the American west’s integration into global capital markets sparked a ranching boom. Meanwhile, Chicago meatpackers pioneered centralised food processing. Using an innovative system of refrigerator cars and distribution centres, they began to distribute fresh beef nationwide. Millions of cattle were soon passing through Chicago’s slaughterhouses each year. By 1890, the Big Four meatpacking companies – Armour & Co, Swift & Co, Morris & Co and the GH Hammond Co – directly or indirectly controlled the majority of the nation’s beef and pork.

But in the 1880s, the big Chicago meatpackers faced determined opposition at every stage from slaughter to sale. Meatpackers fought with workers as they imposed a brutally exploitative labour regime. Meanwhile, attempts to transport freshly butchered beef faced opposition from railroads who found higher profits transporting live cattle east out of Chicago and to local slaughterhouses in eastern cities. Once pre-slaughtered and partially processed beef – known as “dressed beef” – reached the nation’s many cities and towns, the packers fought to displace traditional butchers and woo consumers sceptical of eating meat from an animal slaughtered a continent away.

The consequences of each of these struggles persist today. A small number of firms still control most of the country’s – and by now the world’s – beef. They draw from many comparatively small ranchers and cattle feeders, and depend on a low-paid, mostly invisible workforce. The fact that this set of relationships remains so stable, despite the public’s abstract sense that something is not quite right, is not the inevitable consequence of technological change but the direct result of the political struggles of the late 19th century.

In the slaughterhouse, someone was always willing to take your place. This could not have been far from the mind of 14-year-old Vincentz Rutkowski as he stooped, knife in hand, in a Swift & Co facility in summer 1892. For up to 10 hours each day, Vincentz trimmed tallow from cattle paunches. The job required strong workers who were low to the ground, making it ideal for boys like Rutkowski, who had the beginnings of the strength but not the size of grown men. For the first two weeks of his employment, Rutkowski shared his job with two other boys. As they became more skilled, one of the boys was fired. Another few weeks later, the other was also removed, and Rutkowski was expected to do the work of three people.

The morning that final co-worker left, on 30 June, Rutkowski fell behind the disassembly line’s frenetic pace. After just three hours of working alone, the boy failed to dodge a carcass swinging toward him. It struck his knife hand, driving the tool into his left arm near the elbow. The knife cut muscle and tendon, leaving Rutkowski with lifelong injuries.

The labour regime that led to Rutkowski’s injury was integral to large-scale meatpacking. A packinghouse was a masterpiece of technological and organisational achievement, but that was not enough to slaughter millions of cattle annually. Packing plants needed cheap, reliable and desperate labour. They found it via the combination of mass immigration and a legal regime that empowered management, checked the nascent power of unions and provided limited liability for worker injury. The Big Four’s output depended on worker quantity over worker quality.

Meatpacking lines, pioneered in the 1860s in Cincinnati’s pork packinghouses, were the first modern production lines. The innovation was that they kept products moving continuously, eliminating downtime and requiring workers to synchronise their movements to keep pace. This idea was enormously influential. In his memoirs, Henry Ford explained that his idea for continuous motion assembly “came in a general way from the overhead trolley that the Chicago packers use in dressing beef”.

Packing plants relied on a brilliant intensification of the division of labour. This division increased productivity because it simplified slaughter tasks. Workers could then be trained quickly, and because the tasks were also synchronised, everyone had to match the pace of the fastest worker.

When cattle first entered one of these slaughterhouses, they encountered an armed man walking toward them on an overhead plank. Whether by a hammer swing to the skull or a spear thrust to the animal’s spinal column, the (usually achieved) goal was to kill with a single blow. Assistants chained the animal’s legs and dragged the carcass from the room. The carcass was hoisted into the air and brought from station to station along an overhead rail.

Next, . . .

Continue reading. There’s much more.

Written by LeisureGuy

11 May 2019 at 8:39 am

Prologue to “Winners Take All: The Elite Charade of Changing the World”

leave a comment »

Anand Giridharadas has written an important book for the present moment, Winners Take All: The Elite Charade of Changing the World. To get a better understanding of why I write that, here is the prologue to the book. Read and see what you think.
All around us in America is the clank-clank-clank of the new—in our companies and economy, our neighborhoods and schools, our technologies and social fabric. But these novelties have failed to translate into broadly shared progress and the betterment of our overall civilization. American scientists make the most important discoveries in medicine and genetics and publish more biomedical research than those of any other country—but the average American’s health remains worse and slower-improving than that of peers in other rich countries, and in certain years life expectancy actually declines. American inventors create astonishing new ways to learn thanks to the power of video and the Internet, many of them free of charge—but the average twelfth grader tests more poorly in reading today than in 1992. The country has had a “culinary renaissance,” as one publication puts it, one farmers’ market and Whole Foods at a time—but it has failed to improve the nutrition of most people, with the incidence of obesity and related conditions rising over time. The tools for becoming an entrepreneur appear to be more accessible than ever, for the student who learns coding online or the Uber driver—but the share of young people who own a business has fallen by two-thirds since the 1980s. America has birthed a wildly successful online book superstore called Amazon, and another company, Google, has scanned more than twenty-five million books for public use—but illiteracy has remained stubbornly in place and the fraction of Americans who read at least one work of literature a year has dropped by almost a quarter in recent decades. The government has more data at its disposal and more ways of talking and listening to citizens—but only one-quarter as many people find it trustworthy as did in the tempestuous 1960s.

A successful society is a progress machine. It takes in the raw material of innovations and produces broad human advancement. America’s machine is broken. When the fruits of change have fallen on the United States in recent decades, the very fortunate have basketed almost all of them. For instance, the average pretax income of the top tenth of Americans has doubled since 1980, that of the top 1 percent has more than tripled, and that of the top 0.001 percent has risen more than sevenfold—even as the average pretax income of the bottom half of Americans has stayed almost precisely the same. These familiar figures amount to three and a half decades’ worth of wondrous, head-spinning change with zero impact on the average pay of 117 million Americans. Meanwhile, the opportunity to get ahead has been transformed from a shared reality to a perquisite of already being ahead. Among Americans born in 1940, those raised at the top of the upper middle class and the bottom of the lower middle class shared a roughly 90 percent chance of realizing the so-called American dream of ending up better off than their parents. Among Americans born in 1984 and maturing into adulthood today, the new reality is split-screen. Those raised near the top of the income ladder now have a 70 percent chance of realizing the dream. Meanwhile, those close to the bottom, more in need of elevation, have a 35 percent chance of climbing above their parents’ station. And it is not only progress and money that the fortunate monopolize: Rich American men, who tend to live longer than the average citizens of any other country, now live fifteen years longer than poor American men, who endure only as long as men in Sudan and Pakistan.

Thus many millions of Americans, on the left and right, feel one thing in common: that the game is rigged against people like them. Perhaps this is why we hear constant condemnation of “the system,” for it is the system that people expect to turn fortuitous developments into societal progress. Instead, the system—in America and around the world—has been organized to siphon the gains from innovation upward, such that the fortunes of the world’s billionaires now grow at more than double the pace of everyone else’s, and the top 10 percent of humanity have come to hold 90 percent of the planet’s wealth. It is no wonder that the American voting public—like other publics around the world—has turned more resentful and suspicious in recent years, embracing populist movements on the left and right, bringing socialism and nationalism into the center of political life in a way that once seemed unthinkable, and succumbing to all manner of conspiracy theory and fake news. There is a spreading recognition, on both sides of the ideological divide, that the system is broken and has to change.

Some elites faced with this kind of gathering anger have hidden behind walls and gates and on landed estates, emerging only to try to seize even greater political power to protect themselves against the mob. But in recent years a great many fortunate people have also tried something else, something both laudable and self-serving: They have tried to help by taking ownership of the problem.

All around us, the winners in our highly inequitable status quo declare themselves partisans of change. They know the problem, and they want to be part of the solution. Actually, they want to lead the search for solutions. They believe that their solutions deserve to be at the forefront of social change. They may join or support movements initiated by ordinary people looking to fix aspects of their society. More often, though, these elites start initiatives of their own, taking on social change as though it were just another stock in their portfolio or corporation to restructure. Because they are in charge of these attempts at social change, the attempts naturally reflect their biases.

The initiatives mostly aren’t democratic, nor do they reflect collective problem-solving or universal solutions. Rather, they favor the use of the private sector and its charitable spoils, the market way of looking at things, and the bypassing of government. They reflect a highly influential view that the winners of an unjust status quo—and the tools and mentalities and values that helped them win—are the secret to redressing the injustices. Those at greatest risk of being resented in an age of inequality are thereby recast as our saviors from an age of inequality. Socially minded financiers at Goldman Sachs seek to change the world through “win-win” initiatives like “green bonds” and “impact investing.” Tech companies like Uber and Airbnb cast themselves as empowering the poor by allowing them to chauffeur people around or rent out spare rooms. Management consultants and Wall Street brains seek to convince the social sector that they should guide its pursuit of greater equality by assuming board seats and leadership positions. Conferences and idea festivals sponsored by plutocrats and big business host panels on injustice and promote “thought leaders” who are willing to confine their thinking to improving lives within the faulty system rather than tackling the faults. Profitable companies built in questionable ways and employing reckless means engage in corporate social responsibility, and some rich people make a splash by “giving back”—regardless of the fact that they may have caused serious societal problems as they built their fortunes. Elite networking forums like the Aspen Institute and the Clinton Global Initiative groom the rich to be self-appointed leaders of social change, taking on the problems people like them have been instrumental in creating or sustaining. 
A new breed of community-minded so-called B Corporations has been born, reflecting a faith that more enlightened corporate self-interest—rather than, say, public regulation—is the surest guarantor of the public welfare. A pair of Silicon Valley billionaires fund an initiative to rethink the Democratic Party, and one of them can claim, without a hint of irony, that their goals are to amplify the voices of the powerless and reduce the political influence of rich people like them.

The elites behind efforts like these often speak in a language of “changing the world” and “making the world a better place” more typically associated with barricades than ski resorts. Yet we are left with the inescapable fact that in the very era in which these elites have done so much to help, they have continued to hoard the overwhelming share of progress, the average American’s life has scarcely improved, and virtually all of the nation’s institutions, with the exception of the military, have lost the public’s trust.

Are we ready to hand over our future to the elite, one supposedly world-changing initiative at a time? Are we ready to call participatory democracy a failure, and to declare these other, private forms of change-making the new way forward? Is the decrepit state of American self-government an excuse to work around it and let it further atrophy? Or is meaningful democracy, in which we all potentially have a voice, worth fighting for?

There is no denying that today’s elite may be among the more socially concerned elites in history. But it is also, by the cold logic of numbers, among the more predatory in history. By refusing to risk its way of life, by rejecting the idea that the powerful might have to sacrifice for the common good, it clings to a set of social arrangements that allow it to monopolize progress and then give symbolic scraps to the forsaken—many of whom wouldn’t need the scraps if the society were working right. This book is an attempt to understand the connection between these elites’ social concern and predation, between the extraordinary helping and the extraordinary hoarding, between the milking—and perhaps abetting—of an unjust status quo and the attempts by the milkers to repair a small part of it. It is also an attempt to offer a view of how the elite see the world, so that we might better assess the merits and limitations of their world-changing campaigns.

There are many ways to make sense of all this elite concern and predation. One is that the elites are doing the best they can. The world is what it is; the system is what it is; the forces of the age are bigger than anyone can resist; the most fortunate are helping. This view may allow that this helpfulness is just a drop in the bucket, but it is something. The slightly more critical view is that this elite-led change is well-meaning but inadequate. It treats symptoms, not root causes; it does not change the fundamentals of what ails us. According to this view, elites are shirking the duty of more meaningful reform.

But there is still another, darker way of judging what goes on when elites put themselves in the vanguard of social change: that it not only fails to make things better, but also serves to keep things as they are. After all, it takes the edge off of some of the public’s anger at being excluded from progress. It improves the image of the winners. With its private and voluntary half-measures, it crowds out public solutions that would solve problems for everyone, and do so with or without the elite’s blessing. There is no question that the outpouring of elite-led social change in our era does great good and soothes pain and saves lives. But we should also recall Oscar Wilde’s words about such elite helpfulness being “not a solution” but “an aggravation of the difficulty.” More than a century ago, in an age of churn like our own, he wrote, “Just as the worst slave-owners were those who were kind to their slaves, and so prevented the horror of the system being realised by those who suffered from it, and understood by those who contemplated it, so, in the present state of things in England, the people who do most harm are the people who try to do most good.”

Wilde’s formulation may sound extreme to modern ears. How can there be anything wrong with trying to do good? The answer may be: when the good is an accomplice to even greater, if more invisible, harm. In our era that harm is the concentration of money and power among a small few, who reap from that concentration a near monopoly on the benefits of change. And do-gooding pursued by elites tends not only to leave this concentration untouched, but actually to shore it up. For when elites assume leadership of social change, they are able to reshape what social change is—above all, to present it as something that should never threaten winners. In an age defined by a chasm between those who have power and those who don’t, elites have spread the idea that people must be helped, but only in market-friendly ways that do not upset fundamental power equations. The society should be changed in ways that do not change the underlying economic system that has allowed the winners to win and fostered many of the problems they seek to solve. The broad fidelity to this law helps make sense of what we observe all around: the powerful fighting to “change the world” in ways that essentially keep it the same, and “giving back” in ways that sustain an indefensible distribution of influence, resources, and tools. Is there a better way?

The secretary-general of the Organisation for Economic Co-operation and Development (OECD), a research and policy organization that works on behalf of the world’s richest countries, recently compared the prevailing elite posture to that of the fictional Italian aristocrat Tancredi Falconeri, who declared, “If we want things to stay as they are, things will have to change.” If this view is correct, then much of the charity and social innovation and give-one-get-one marketing around us may not be reform measures so much as forms of conservative self-defense—measures that protect elites from more menacing change. Among the kinds of issues being sidelined, the OECD leader, Ángel Gurría, wrote, are “rising inequalities of income, wealth and opportunities; the growing disconnect between finance and the real economy; mounting divergence in productivity levels between workers, firms and regions; winner-take-most dynamics in many markets; limited progressivity of our tax systems; corruption and capture of politics and institutions by vested interests; lack of transparency and participation by ordinary citizens in decision-making; the soundness of the education and of the values we transmit to future generations.” Elites, Gurría writes, have found myriad ways to “change things on the surface so that in practice nothing changes at all.” The people with the most to lose from genuine social change have placed themselves in charge of social change, often with the passive assent of those most in need of it.

It is fitting that an era marked by these tendencies should culminate in the election of Donald Trump. Trump is at once an exposer, an exploiter, and an embodiment of the cult of elite-led social change. He tapped, as few before him successfully had, into a widespread intuition that elites were phonily claiming to be doing what was best for most Americans. He exploited that intuition by whipping it into frenzied anger and then directing most of that anger not at elites but at the most marginalized and vulnerable Americans. And he came to incarnate the very fraud that had fueled his rise and that he had exploited. He became, like the elites he assailed, the establishment figure who falsely casts himself as a renegade. He became the rich, educated man who styles himself as the ablest protector of the poor and uneducated—and who insists, against all evidence, that his interests have nothing to do with the change he seeks. He became the chief salesman for the theory, rife among plutocratic change agents, that what is best for powerful him is best for the powerless, too. Trump is the reductio ad absurdum of a culture that tasks elites with reforming the very systems that have made them and left others in the dust.

One thing that unites those who voted for Trump and those who despaired at his being elected is a sense that the country requires transformational reform. The question we confront is whether moneyed elites, who already rule the roost in the economy and exert enormous influence in the corridors of political power, should be allowed to continue their conquest of social change and of the pursuit of greater equality. The only thing better than controlling money and power is to control the efforts to question the distribution of money and power. The only thing better than being a fox is being a fox asked to watch over hens.

What is at stake is whether the reform of our common life is led by governments elected by and accountable to the people, or rather by wealthy elites claiming to know our best interests. We must decide whether, in the name of ascendant values such as efficiency and scale, we are willing to allow democratic purpose to be usurped by private actors who often genuinely aspire to improve things but, first things first, seek to protect themselves. Yes, government is dysfunctional at present. But that is all the more reason to treat its repair as our foremost national priority. Pursuing workarounds of our troubled democracy makes democracy even more troubled. We must ask ourselves why we have so easily lost faith in the engines of progress that got us where we are today—in the democratic efforts to outlaw slavery, end child labor, limit the workday, keep drugs safe, protect collective bargaining, create public schools, battle the Great Depression, electrify rural America, weave a nation together by road, pursue a Great Society free of poverty, extend civil and political rights to women and African Americans and other minorities, and give our fellow citizens health, security, and dignity in old age.

This book offers a series of portraits of this elite-led, market-friendly, winner-safe social change. In these pages, you will meet people who ardently believe in this form of change and people who are beginning to question it. You will meet a start-up employee who believes her for-profit company has the solution to the woes of the working poor, and a billionaire investor in her company who believes that only vigorous public action can stem the rising tide of public rage. You will meet a thinker who grapples with how much she can challenge the rich and powerful if she wants to keep getting their invitations and patronage. You will meet a campaigner for economic equality whose previous employers include Goldman Sachs and McKinsey, and who wonders about his complicity in what he calls “the Trying-to-Solve-the-Problem-with-the-Tools-That-Caused-It issue.” You will meet one of the most powerful figures in the philanthropy world, who stuns his rich admirers by refusing to honor the taboo against speaking of how they make their money. You will meet a former American president who launched his career with a belief in changing the world through political action, and then, as he began to spend time with plutocrats in his post-presidential life, gravitated toward private methods of change that benefit rather than scare them. You will meet a widely lionized “social innovator” who quietly nurses doubts about whether his commercial approach to world-changing is what it is cracked up to be. You will meet an Italian philosopher who reminds us what gets sidelined when the moneyed take over change.

What these various figures have in common is that they are grappling with certain powerful myths—the myths that have fostered an age of extraordinary power concentration; that have allowed the elite’s private, partial, and self-preservational deeds to pass for real change; that have let many decent winners convince themselves, and much of the world, that their plan to “do well by doing good” is an adequate answer to an age of exclusion; that put a gloss of selflessness on the protection of one’s privileges; and that cast more meaningful change as wide-eyed, radical, and vague.

It is my hope in writing what follows to reveal these myths to be exactly that. Much of what appears to be reform in our time is in fact the defense of stasis. When we see through the myths that foster this misperception, the path to genuine change will come into view. It will once again be possible to improve the world without permission slips from the powerful.

Written by LeisureGuy

7 May 2019 at 11:18 am
