Later On

A blog written for those whose interests more or less match mine.

Archive for October 2019

Another whole-food plant-based melange

leave a comment »

I like to record these so I can refer to them for ideas. This one used some vegetables from a Costco run, as noted.

I used the No. 12 Field Company skillet, which I heated in the oven to 350ºF before putting it on a hot burner.

While it heated, I chopped the vegetables and sorted them into bowls: garlic in a little bowl, leeks and scallions in another, and the rest in a big bowl.

• 6 cloves Russian red garlic, minced – this will soon be gone for another year, alas
• about 3/4 cup chopped leeks from Costco – better to buy a whole leek and chop it myself
• 1 bunch large scallions, chopped including leaves
• 1/2 small delicata squash, seeded and diced
• 1 handful green beans, cut into 1″ lengths – Costco
• 1 handful asparagus, cut into 1″ lengths – Costco
• 1/4 head red cabbage, shredded
• 3 large King Oyster mushrooms, chopped
• about 1.5 cups oyster mushrooms, chopped
• about 1/2 cup diced carrot
• about 1/2 cup diced daikon radish
• about 1 cup chopped broccoli – Costco
• 12 mini-San Marzano tomatoes, sliced
• 4-5 ounces homemade tempeh, diced
• 4-5 ounces fried firm tofu, diced
• about 1.5″ fresh turmeric, minced
• 1.5 tablespoons dried marjoram
• 1.5 tablespoons dried spearmint
• 2 tablespoons freshly ground black pepper
• 1 teaspoon crushed red pepper flakes
• 2 tablespoons horseradish
• 3 tablespoons no-salt-added tomato paste
• 3/4 cup low-sodium vegetable broth

When the time came, I turned on the burner, turned off the oven, moved skillet to burner, and put the handle glove on the handle. I added:

• 2 tablespoons extra-virgin olive oil
• leeks and scallions

I cooked that for a few minutes then added the garlic and gave that about a minute. Then I dumped in the rest and sautéed for several minutes, stirring frequently and carefully: it’s piled pretty high, but as it cooks it shrinks in volume.

After several minutes of sautéing, I added the vegetable broth and began simmering it, still stirring frequently. As it cooked, I prepared:

• lemons, diced like this

I removed from the lemon halves the seeds that I saw, then put the pieces in the beaker and used the immersion blender to blend them. This I poured over the cooked melange as a final touch to brighten the flavor.

The idea is to use a good variety of vegetables. I wanted to add a 300g block of frozen chopped spinach, but I’ll do that for another melange. It does make a good addition. I would have used a couple of jalapeño peppers, but I’m out (thus the crushed red pepper flakes). When I made this last time I had a summer crookneck yellow squash, so I diced that and included it. Basically, you can use whatever vegetables you want. In my experience, the result tastes good and has a nice variety of textures.

I may have omitted a vegetable or two. If I remember any, I’ll update the recipe.

To serve it I put about 1/4 cup cooked kamut in a bowl, put on it 1 tablespoon nutritional yeast and 1 tablespoon ground flax seed, then about 3/4 cup of the melange and mixed it together. At the table I sprinkled some of this chili pepper on it.

I also have been using shirataki noodles (zero calories, in effect), and so far the Liviva brand. The “spaghetti” is good, but I’m going to get the rice-shaped since it is easier to mix.

Written by Leisureguy

31 October 2019 at 6:27 pm

Tech tips for National Novel-Writing Month (aka NaNoWriMo)

leave a comment »

This year marks the 20th anniversary of the National Novel Writing Month project, which challenges people to write a 50,000-word novel in November. NaNoWriMo, as it is known, is a nonprofit that supports creative writing and education. Those who sign up for the group’s free annual event get community support, progress tracking and motivational advice to complete a book draft.

If you think you have a novel in you, here is a quick guide to digital tools to help you along your way.

(And if the thought of cranking out an average of 1,667 words a day in the NaNoWriMo challenge doesn’t fit in with your schedule or you need more prep time — don’t despair. You could write it at your own pace.)

The NaNoWriMo site has useful writing guides and other material to help you map out your plot and develop your characters before you dive in.

Check your app store for software like Novelist (free for Android), which has a text editor function and templates for organizing plot points, themes and characters — along with tools for tracking word-count goals and backing up your work. Writing Shed ($10 to $14 for iOS, iPadOS and Mac), Bear ($1.50 a month for iOS, iPadOS and Mac) and Writer Tools (free to $5 a month for Android) are similar options.

For those serious about book writing, Scrivener is a full-fledged composition and manuscript-management program. It uses a clever interface to break long documents into sections so you can write, research and rearrange word chunks more easily. Scrivener runs on Windows, Mac and iOS systems; prices start at $20 with a free trial available.

If you’ve already done your prep work, fire up your word processor. Most common programs — including Apple’s Pages, Dropbox Paper, Google Docs, Microsoft Word and Zoho Writer — work on computers and mobile devices. You can write on the screen you’re currently using and have the text update everywhere, although you may need an Office 365 subscription for syncing Word files. . .

Continue reading.

Written by Leisureguy

31 October 2019 at 4:21 pm

Posted in Books, Software, Technology

The lure of secret passages and secret rooms

leave a comment »

The books I read in my youth fairly often featured a secret passage or secret room, and today there are companies that specialize in such things.

Written by Leisureguy

31 October 2019 at 4:16 pm

Posted in Daily life

Dahlia Lithwick: Why I Haven’t Gone Back to SCOTUS Since Kavanaugh

leave a comment »

Dahlia Lithwick writes in Slate:

It’s been just over a year since I sat in the hearing room and watched the final act of Brett Kavanaugh’s confirmation hearing. I listened from the back as Christine Blasey Ford and then-Judge Kavanaugh each faced the Senate Judiciary Committee to tell irreconcilable versions of what happened in the summer of 1982. The morning was spent as I’d anticipated: all of us—the press corps, the country—listening, some clearly in agony, to Ford’s account. And then Kavanaugh came in and started screaming. The reporters at the tables around me took him in with blank shock, mindlessly typing the words he was yelling.

The enduring memory, a year later, is that my 15-year-old son texted—he was watching it in school—to ask if I was “perfectly safe” in the Senate chamber. He was afraid for the judge’s mental health and my physical health. I had to patiently explain that I was in no physical danger of any kind, that there were dozens of people in the room, and that I was at the very back, with the phalanx of reporters. My son’s visceral fears don’t really matter in one sense, beyond the fact that I was forced to explain to him that the man shouting about conspiracies and pledging revenge on his detractors would sit on the court for many decades; and in that one sense, none of us, as women, were ever going to be perfectly safe again.

Kavanaugh is now installed for a lifetime at the highest court in the land. Ford is still unable to resume her life or work for fear of death threats. And the only thing the hearings resolved conclusively is that Senate Republicans couldn’t be bothered to figure out what happened that summer of 1982, or in the summers and jobs and weekends that followed. In the year-plus since, I have given many speeches in rooms full of women who still have no idea what actually happened in that hearing room that day, or why a parody of an FBI investigation was allowed to substitute for fact-finding, or why Debbie Ramirez and her Yale classmates were never even taken seriously, and why three books so far and two more books to come are doing the work of fact-finding that government couldn’t be bothered to undertake. Women I meet every week assure me that they are never going to feel perfectly safe again, which makes my son somewhat prescient. Two out of the nine sitting justices have credibly been accused of sexual impropriety against women. They will be deciding fundamental questions about women’s liberty and autonomy, having both vowed to get even for what they were “put through” when we tried to assess whether they were worthy of the privilege and honor of a seat on the highest court in the country.

My job as a Supreme Court reporter used to be to explain and translate the institution to people locked out of its daily proceedings. I did that reasonably well for 19 years, I suppose. Years upon years of sometimes partisan, often political brawling, from Bush v. Gore to the Affordable Care Act to Obergefell—and abortion, yes. But always swathed in black robes and velvet curtains, in polite questions, and case names and at least the appearance that this was all cool science, as opposed to blood sport.

What I have not acceded to is the routinization and normalization of the unprecedented seat stolen from President Barack Obama in 2016 for no reason other than Mitch McConnell wanted it, and could. And what I have also not acceded to is the routinization and normalization of an unprecedented seating of someone who managed to himself evade the very inquiries and truth-seeking functions that justice is supposed to demand. And so, while I cannot know conclusively what happened in the summer of 1982, or at the sloppy drunk parties in the years that followed at Yale, or in the falling-down summer evenings at tony D.C. law firms, or with the gambling debts, or with the leaked Judiciary Committee emails, I can say that given Senate Republicans’ refusal to investigate, acknowledge, or even turn over more than 100,000 pages of documents relating to Kavanaugh, it is surely not my job to, in the parlance of Justice Antonin Scalia, America’s favorite grief counselor, “get over it.”

The American public seems to be getting over the Kavanaugh hearings. New polling certainly suggests as much. And, having spent the bulk of last term lying low both doctrinally and also publicly, Kavanaugh appears to be ready to emerge now, in the form of a soaring Federalist Society butterfly. By his watch, apparently, it’s time, and so he will be a featured speaker at the swanky Federalist Society dinner next month (tickets are $250 for nonmembers and $200 for members). William Barr’s Justice Department last week awarded the “Attorney General’s Award for Distinguished Service,” the department’s second highest honor, to the team of attorneys that worked on Kavanaugh’s nomination last year. It was a closed ceremony.

Two of the three female justices spoke out this summer to support their new colleague. They hailed him as a mentor to his female clerks or as a collegial member of the Nine and urged us, in the case of Justice Sonia Sotomayor, to look to the future and turn the page. It is, of course, their actual job to get over it. They will spend the coming years doing whatever they can to pick off a vote of his, here and there, and the only way that can happen is through generosity and solicitude and the endless public performance of getting over it. I understand this.

As a Supreme Court reporter, I am also expected to afford the new justice that same generosity and solicitude. As a journalist, I am finding it hard to do. After all, he is a man who has already publicly condemned his critics to suffer his wrath for embarrassing him. He is a man who has promised that his doubters and detractors will “reap the whirlwind.” He should know full well that after such behavior, he will be celebrated as a hero by some, and he should understand that for millions of others, the choice will be whether to let him back into the centrist, reasonable D.C. insider fold or to push him to become what Clarence Thomas became after his own hearings: a vengeance machine that neither forgives nor forgets. Nobody other than the most radical conservative wants another vengeance machine on the high court, not one who could otherwise be a fifth vote on occasion. So the name of the game is forgiveness and forgetting, in service of long-term tactical appeasement.

That is the problem with power: It incentivizes forgiveness and forgetting. It’s why the dozens of ethics complaints filed after the Kavanaugh hearings complaining about the judge’s behavior have been easily buried in a bottomless file of appeasement, on the grounds that he’s been seated and it’s too late. The problem with power is that there is no speaking truth to it when it holds all the cards. And now, given a lifetime appointment to a position that is checked by no one, Washington, the clerkship machinery, the cocktail party circuit, the elite academy all have a vested interest in getting over it and the public performance of getting over it. And a year perhaps seems a reasonable time stamp for that to begin.

The problem with power is that Brett Kavanaugh now has a monopoly on normalization, letting bygones be bygones, and turning the page. American women also have to decide whether to get over it or to invite more recriminations. That is, for those keeping track, the very definition of an abusive relationship. You stick around hoping that he’s changed, or that he didn’t mean it, or that if you don’t anger him again, maybe it’ll all be fine when the court hears the game-changing abortion appeal this year.

I wish we could have learned what Brett Kavanaugh has actually done, said, worked on, enabled, covered for, empowered. Perhaps the next book will reveal more. Perhaps the one after that. The collective public conclusion of the most recent book, by Kate Kelly and Robin Pogrebin, seems to be that he was a sloppy, reckless, drunk youth who has largely become better, and that it is perhaps unfair to hold men to standards that we somehow always forgive when they are still boys. We didn’t get to have that conversation either. And the people who most deserve to decide whether he is, in fact, cured of these alleged acts of youthful carelessness, violence, and predation—the women who say he has harmed them—have, other than Ford, neither been heard nor recognized. I’m not certain they subscribe to the narrative that he was a naughty boy now recovered. He spent his confirmation hearing erasing them, and his boosters and fans have made their lives since unbearable. At any rate, they are also powerless, now, to change what has occurred.

It is not my job to decide if Brett Kavanaugh is guilty. It’s impossible for me to do so with incomplete information, and with no process for testing competing facts. But it’s certainly not my job to exonerate him because it’s good for his career, or for mine, or for the future of an independent judiciary. Picking up an oar to . . .

Continue reading.

Written by Leisureguy

31 October 2019 at 2:53 pm

Tempeh batch 6 in hand

leave a comment »

This is after 72 hours, and the gray and black spots are fine: the mold has just reached the stage of sporulating. From Tempeh FAQ:

Under a microscope, the rhizopus mold looks like a tiny mushroom colony. The white mycelium is what holds the beans into a compact cake. The black spots are spores, similar to what is found on the underside of a mushroom. As the tempeh matures in the incubator, it begins its reproduction process, and starts to sporulate. The black or gray patches are completely harmless and actually indicate that the tempeh has matured to its peak of flavor. In Indonesia, where tempeh originated, some recipes call for overripe, or very dark tempeh.

If your tempeh has colors other than black, white or gray, please do not eat it. The aroma should be mushroom-like and texture should not be slimy. A very slight smell of ammonia is normal, as this is produced by the tempeh as it grows.

This is the slab taken from the dish and held in my hand. It has good structural cohesion and rigidity.

The experiment of just leaving it on the counter, loosely tented with aluminum foil in which slots had been slashed, worked fine. Next time I think I’ll move it to the counter after the first 24 hours, as the instructions suggest.

I had a little already. Yummy.

Update: Just had some more. I sliced off thin slabs and sautéed them in olive oil until browned on both sides. Delicious.

Written by Leisureguy

31 October 2019 at 1:33 pm

Fire Crackers, made from saltines: Sounds tasty (but I won’t eat)

leave a comment »

Just too much high-glycemic refined flour for me, not to mention the salt. But there was a time…

I do defend the right of people to choose their own foods, and this does sound tasty for those who eat this sort of thing.

Anna Hezel writes in Taste:

To most Americans, buying saltines is a little like buying paper towels. They’re there on the grocery store shelf for $2 a box when you need them, but they’re rarely a source of excitement or inspiration. Most of the time, we take the salty, flaky crackers for granted as a reliable source of calories on a sick day, or as the gratis two-piece packet alongside a bowl of chicken noodle soup at a diner.

But as many Southern home cooks know, you haven’t really had saltines until you’ve emptied a full box of them into a giant ziplock bag, poured in an entire pint of olive oil and a packet of dehydrated ranch seasoning, and then watched each saturated square turn from ivory to glossy, golden brown in the belly of a roaring oven.

“There is a long history of enterprising home cooks—Southern and otherwise—using crackers as ingredients in other dishes,” says Sheri Castle, the author of Instantly Southern. Castle’s version of this particular snack (sometimes known as fire crackers, comeback crackers, or party crackers) uses a seasoning blend of garlic powder, onion powder, red pepper flakes, and ranch dressing mix.

Texas snack food company Savory Fine Foods has even capitalized upon the semi-homemade phenomenon by selling packets of “Party Cracker Seasoning” (to be mixed with the copious glugs of canola oil) in flavors like Sweet Bar-B-Q, Garden Dill, and Texas Chipotle. The recipe on the back of each packet tells you to skip the baking part altogether and start partying right away once the crackers have been marinated overnight.

Although Castle is hesitant to claim any regional ownership over the standby, she says, “Most old Southern community cookbooks and family recipe boxes include some version of enhanced crackers such as these.” Some involve wrapping crackers in bacon, spreading them with butter before baking, or deep-frying. “Ice water saltines” or “soufflé crackers” turn the rigid cracker into a warm, puffy, rich pastry by dipping it in ice water and then baking with plenty of butter. . .

Continue reading. There’s more that’s of interest, including the recipe.

Written by Leisureguy

31 October 2019 at 1:28 pm

Follow-up interview with Dr. Neil Barnard on “What the Health”

leave a comment »

What the Health, once free on YouTube, has been removed, though it’s still available on various streaming services, including Netflix. (BTW, if you’re looking for a film on-line, I highly recommend as a search engine.) I found this interview with Dr. Barnard clarifying and interesting.

The movie documents the serious conflicts of interest that health organizations face when a substantial part of their funding comes from segments of the food industry (the beef council, the egg producers, the dairy lobby, and so on), many of which are allowed to write papers for the health organizations in exchange for their funding. This results in the sort of spin on research findings described in the previous post.

Watch the interview and see what you think.

Written by Leisureguy

31 October 2019 at 12:02 pm

Spinning Science and the Elusive Quest for Objectivity

leave a comment »

David Rettew MD writes in Psychology Today:

We like to think of scientific studies and the articles that come from these studies as cold, hard, objective facts. Sure, scientific data can be spun and cherry-picked by pundits trying to advance particular agendas—but that is thought to come from other people who introduce bias after the fact, not in the actual production of scientific evidence itself.

Such a perspective regarding the purity of scientific information did take a hit a couple decades ago in the world of medicine, when it came to light that studies of medications and other medical devices were often skewed to show products in their most favorable light. This was done through a number of techniques that ranged from statistical maneuverings to simply not publishing negative papers in the first place. Many of the authors of these studies had financial ties to the products they were investigating. If you were a pharmaceutical company excited about promoting a promising new antidepressant—and your most recent clinical trial showed the drug was a complete flop—maybe you could, for example, just reveal your data at the annual New Guinea Entomology Conference or, better yet, simply pretend the study never happened at all.

These little tricks prompted some serious changes to the way scientific data was produced and published. Scientists started to be required to disclose, in writing, all possible financial relationships that could present a conflict of interest, and journals began insisting that treatment studies be registered and described in detail before they started in order for them to be qualified for publication.

Now it’s all on the up-and-up, right? Well, maybe not so fast. A recent scientific study about scientific studies looked at the degree to which bias or “spin” is still present in our literature. The authors pulled 116 clinical trial articles from prominent psychology and psychiatry journals that tested specific types of treatment, like medication or type of psychotherapy. For an article to qualify, the main pre-defined outcome needed to be negative—meaning that, overall, the active treatment was found to not be statistically different from placebo or a control group. Then, the authors looked at the published summary of the article, what is called the abstract, to see if this negative result was fairly communicated—versus being twisted by including language that actively downplayed the result or interpreted it in a much more positive manner.

The result was that over half of the articles (56 percent) contained spin, which was most commonly placed in the Conclusions section of the abstract (where people who can’t even bother to read the entire summary go to get a quick answer). The most common type of spin was focusing on “secondary” outcomes that were positive at the expense of primary ones that weren’t.

In other words, say you were conducting a study of a medication to treat anxious adults and you measure anxiety with two different rating scales. Before you carried out the study, you would have to pick one of them as your primary measure that would ultimately determine whether or not your medication worked; you’d also be allowed to have another secondary measure (or two) under the argument that these scales captured something a little different. If your primary measure showed no effect from the medication but one of your secondary ones did, the spin would be in devoting a disproportionate amount of focus to the positive secondary scale.

Interestingly, having the study funded by a commercial entity like a pharmaceutical company, as opposed to a government-funded study, did NOT predict which studies had a spin. Indeed, the vast majority of studies with spin were not industry-funded.

Some important caveats are worth noting. First, one person’s “spin” is another person’s expanded information. The presence of spin as defined in the study certainly does not equate to the intentional manipulation of data or an effort to “fool” the reading public. Devoting some attention to secondary outcomes may be entirely appropriate and worthwhile. If you are doing a study on the treatment of ADHD, for example, it might be important to mention a treatment’s effect on anxiety levels—even if that is not the main focus of the study.

At the same time, however, this study should serve as a reminder to both authors and readers that truly objective scientific data is still hard to achieve, and can be influenced by more than just financial ties to a drug company. Many readers of scientific information are well aware of the phenomenon in which prominent scientists—who are known to have a particular position on a controversial topic—often seem to be able to conduct studies that serve to confirm their beliefs.  Again, this does not mean that there is scientific fraud, but rather that the human element may be more difficult to remove than we think.

Subtle bias can come from non-financial sources as well, including scientific ones. Once a person goes on public record with a view (or a book) that, for example, video games lead to violence (or they don’t) or that cognitive-behavior therapy is the best treatment for depression, it can be very hard to shift even a little from that position, lest that person be seen as (gasp) incorrect or, even worse, wishy-washy.

What to do about all this? From a regulatory standpoint, . . .

Continue reading.

I certainly understand the power of the public record. After some years of being on a low-carb high-fat diet and writing about it—and about how my blood glucose levels were at last under control—I did find it a bit challenging to switch to a different diet—namely, a whole-food plant-based diet—and then to announce the switch, and that’s as a modest private citizen with no public and professional reputation on the line. (And I’m glad I made the switch, FWIW: I find that I eat a greater variety of interesting food and I feel free of the tyranny of the meat-centered meal. YMMV.)

This post documented my pivot, and it was not an easy post to write. I recall a part of me chastising myself as I was writing because I was disagreeing with what I had written before. We have a hunger for consistency, but that can become an obstacle to learning and to change. From Walt Whitman’s “Song of Myself,” Part 51:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

Another recollection: reading of a class at the Columbia School of Journalism in which students, desperately wishing to do more research and interview more people, huddled at their keyboards as the professor at the head of the class slapped a ruler on the desk, chanting, “Go with what you’ve got! Go with what you’ve got!” In life, we pretty much always are going with what we’ve got, and when we get more, we sometimes must change direction and move on. That can be hard if you’ve too firmly planted your flag on some particular promontory. You always must go with what you’ve got—while always trying to learn more.


Written by Leisureguy

31 October 2019 at 11:49 am

Omega’s Mighty Midget, Martin de Candre, and the OneBlade

with 2 comments

Omega’s badger-and-boar combo brush is a mighty nice little guy. I think it’s (for me) one of the “standard” brushes any well-stocked shaving cabinet should have. I soak it (because of the boar) and it works up a good lather immediately and holds plenty. And the lather today was excellent. This is the Martin de Candre regular shaving soap, with a lavender, mint, and rosemary fragrance (much the same as Nancy Boy, now that I think of it). I also had a tub of the fougère fragrance (which they call “fern” on their English-language page), but I like the original fragrance better. I find MdC to be a terrific shaving soap. I recall when we in the US were just starting to hear about it, with the fear that what we heard was hype. So I bought a tub, tried it, and found it was the real deal: a really excellent shaving soap. It doesn’t have the exotic ingredients of today’s artisanal soaps (it’s basically stearic acid and coconut oil), but it does make a lovely lather and has its own intriguing backstory: “boiled in seawater,” “dates back probably to 1890!” …

Well, not so much: it dates back to 1980, and there is no mention of seawater in the current description. I did find this comment from stringbag on Reddit: “Martin de Candre and Marseille soaps in general use a fully boiled method of production, precipitating the soap out of solution using salt. Martin de Candre claim to use sea water for this purpose.” But when asked to verify the seawater claim, he couldn’t find it. He noted, “I can tell you that Marseille soap is traditionally made with seawater, and MdC is a Marseille soap maker that happens to make a few shaving soaps.”

James Woods at Bearded Blade has a good write-up of the shaving soap, and Martin de Candre’s own story is as follows:

The Original or the first Martin de Candre Shaving Soap, the one we created in the 1980s. The discreet and elegant freshness of its perfume made from Lavender essential oils, Mint and Rosemary, was immediately successful! Within a few years, it has become the flagship product of Savonnerie … A beautiful story!

It is called “Original”, from the Latin “originalis” that we can translate as: “which was not made according to a model and which serves or will serve as a model” … It is the figurehead Martin de Candre shaving soaps.

The main advantages of MdC shaving soap:

A hot saponification, made by us in our workshop, with 100% vegetable oils (olive, copra) and potash (Alcali). Our 40 years of experience allow us to guarantee perfect control of this process.

– A fully hot saponified paste guarantees an excellent conservation, without the addition of any preservative.

– At the end of the cooking process, we add in the still hot paste essential oils which allows us to perfume our soaps at their very heart.

Weeks of drying, so that the soap loses weight, its water. Taking the time to dry the soap is a “madness” when today, everything must go very fast … but there is no secret: the paste and the perfume are greatly improved!

-A brand product, made by a French craftsman, a luxurious product … and yet one of the most economical: it lasts a good year according to your feedback and comments.

-And finally let’s talk about its lather:

The lather takes shape quickly, under the beautiful gestures of the shaving brush, you get a smooth shave, very close, and without skin aggression. The lather is abundant, it “sticks” to the skin: we get on the face like a soft and unctuous cream. This lather is really beautiful but YOU are undoubtedly speaking about it better than anyone!

The lather (which for some reason they call “foam”—I edited that passage) really is excellent, and because it is so good I was able to let it pile up on the OneBlade and do one rinse per pass after all. I don’t know that I could do that with every lather, but with this morning’s MdC lather it was easy—although there seemed to be a lot of lather gathered there on the single-sided razor.

I can see that a blade change is needed. I got an excellent shave, but it required more work and polishing than yesterday’s shave, and it seems definitely time to change the blade. The ideal would be one shave per blade: $1/day for the blade. And I can see the appeal of the razor: it really is exceptionally comfortable, and the “automatic transmission” aspect—the pivoting head—means you can just glide through the shave with long, swooping strokes.

A splash of l’Occitane Verbena as aftershave carried through the French theme and left me feeling great and ready for the day.

Written by Leisureguy

31 October 2019 at 10:47 am

Posted in Shaving

Another new-to-me food: Miyeok Julgi Bokkeum

with one comment

You probably have it all the time. 🙂 As I’ve commented before, I enjoy buying at the supermarket some food I’ve never had and bringing it home to research, prepare, and eat. Sometimes it goes well, sometimes it goes great, and rarely does it disappoint (though I admit there have been some disappointments). The hits then become part of the regular rotation, like tong ho, greens of the chrysanthemum family: extremely tasty.

So when I saw Salted Brown Seaweed Stem in a handy pack, I couldn’t resist. When I first opened the pack, though, I sort of wished I had: a tangled mass of tough seaweed bristling with large salt crystals. But I hit the internet and found this explanation and recipe, and now I’m about to cook it.

Rinsing it well in three bowls of cold water and then soaking it for forty minutes in another bowl of cold water removed the salt and also tenderized it. It was still a tangle, but I just put it in a heap and chopped it.

I will update this after I eat some. I’m following the recipe, except that I won’t add salt (low-salt diet) and I’ll use regular mirin rather than cooking mirin; since regular mirin is sweet, I’ll also omit the sugar.

Update: Not bad. A little coarser than the (cold) seaweed salad I used to get at the sushi place, but the same line of country. Doubtless a good source of iodine. Will repeat with the other brand they had.

Written by Leisureguy

30 October 2019 at 5:59 pm

Music as a meme

leave a comment »

Music is something that resides in human knowledge, and one person teaches another, in the way that memes reproduce. Sounds are not taught — sounds are physical phenomena — but music must be learned and resides within human culture, not in the world outside human understanding (the world in which sounds exist). Music emerged from cultural evolution and is a cultural construct, which is why the music of different cultures sounds so strange to those who have not learned that music (cf. Japanese music, Navajo music, African music, and Beethoven). People learn to make certain kinds of sounds, and learn to apprehend those as music, just as people learn to make certain sounds with their mouths — the sounds being real phenomena — and learn to apprehend those as language, for language (like music) resides totally within human understanding and has no real physical existence.

Elena Renken writes in Quanta:

In the lowlands of Bolivia, the most isolated of the Tsimané people live in communities without electricity; they don’t own televisions, computers or phones, and even battery-powered radios are rare. Their minimal exposure to Western culture happens mostly during occasional trips to nearby towns. To the researchers who make their way into Tsimané villages by truck and canoe each summer, that isolation makes the Tsimané an almost uniquely valuable source of insights into the human brain and its processing of music.

Most studies about music perception examine people accustomed to Western music, so only a few enclaves like these remote Tsimané villages allow scientists to make comparisons across cultures. There they can try to tease apart the effects of exposure to music from the brain’s innate comprehension of it — or at least start dissecting the relationship between the two. “We need to understand that interplay between our genes and our experience,” said Josh McDermott, an associate professor of brain and cognitive sciences at the Massachusetts Institute of Technology. He is the senior author of a recent paper involving the Tsimané in the journal Current Biology which suggests that a feature of music most of us might consider to be intrinsic — the perceived organization of musical pitches into octaves — is a cultural artifact.

Musical systems around the world and across historical eras have been diverse, but octaves are commonly a feature of them. The acoustic structure of octaves is always the same: The frequency of a note in one octave is half the frequency of the same note in the octave above. For example, middle C, or C4, is 261.63 hertz, while C5, one octave up, is 523.25 hertz. These physical qualities of sound in the ear have routinely led to assumptions that octave equivalence — the perception of pitches in different octaves as variations on the same note — is universal, according to Elizabeth Margulis, a professor of music at Princeton University.
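The octave arithmetic the article describes can be sketched in a few lines of Python. This is an illustrative aside, not the researchers' code; the A4 = 440 Hz reference and 12-tone equal temperament used here are standard Western tuning conventions.

```python
# Octave equivalence as a frequency ratio: pitches one octave apart
# differ by a factor of 2, and in 12-tone equal temperament each
# semitone step multiplies the frequency by 2**(1/12).

A4 = 440.0  # standard tuning reference, in Hz

def frequency(semitones_from_a4: int) -> float:
    """Frequency in Hz of a note a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

middle_c = frequency(-9)  # C4 is 9 semitones below A4
c5 = frequency(3)         # C5 is 3 semitones above A4

print(round(middle_c, 2))       # 261.63, matching the article
print(round(c5, 2))             # 523.25
print(round(c5 / middle_c, 2))  # 2.0 — one octave = doubled frequency
```

The physical regularity is exact; what the study questions is whether hearing C4 and C5 as "the same note" is innate or learned.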

McDermott and an international team of colleagues have now tested that assumption with their experiments, in which they asked Tsimané volunteers to listen and sing. A machine played two notes, one after the other, and the subject would sing them back into a microphone. The researchers played pairs of notes different distances apart on the scale and in different pitch ranges.

Computer analyses that compared the Tsimané participants with those in the United States found that both groups generally preserved the pitch intervals between the notes played to them — for example, maintaining the difference between a middle C and middle A. Both groups could also discriminate well between pitches only up to about 4,000 hertz, near the highest key on a piano, C8. For pairs of notes higher than that, everyone seemed to have trouble characterizing the differences.

A curious difference emerged, however, in how they sang the notes back. When the notes played were very high or low, U.S. participants accurately shifted the notes into an octave within their vocal range. The Tsimané didn’t. To them, it seemingly wasn’t clear what notes in their range best corresponded to the ones they heard. Their responses didn’t seem to reflect a perception of octave structure at all.

The researchers went so far as to coach the Tsimané to switch octaves. They gave feedback, like “excellent!” (Anic jäm’ in Tsimané) or “OK” (Dam’ jäm’), depending on how close their responses were to the notes of the prompt. The villagers did not get closer, however. It appeared that the same notes in different octaves, like high C and middle C, didn’t sound alike to the Tsimané as they did to people in the U.S.

The researchers acknowledged in the study that the results might reflect differences in how people sing, and not in how people perceive pitch. But they argued that the totality of collected data, including some more direct measures of octave perception, put the weight of evidence on the side of a perceptual explanation. The Tsimané have shown perceptual differences before: In an earlier study by McDermott and his colleagues, the Tsimané rated dissonant combinations of pitches, which Western listeners find grating, to be just as pleasant as more consonant chords.

Moreover, if the Tsimané’s performance in the tests has more to do with singing ability than their perceptual experience, then it would mean that all people have absolute pitch — an idea that is not well supported experimentally, according to McDermott and his colleagues. Instead, the researchers speculate that the brain’s default approach to identifying pitch is based on relative differences between notes, while absolute pitch, which can help with recognizing notes scattered across octaves, is something learned through experience. They suggest that Tsimané have generally not had those experiences. . .

Continue reading.

Written by Leisureguy

30 October 2019 at 3:09 pm

Independent evolution of life (like the independent evolution of the eye)

with 2 comments

Being able to see provides such a survival advantage, particularly among terrestrial animals, that any light-sensing cell will, through natural selection, tend to develop into an eye. Eyes have evolved totally independently in different species of animals at least 40 times and probably as many as 65 times. Obviously the eyes don’t look alike—the eyes of an octopus, a spider, a snake, and a human all look very different from one another—but they all fulfill the same function: seeing things at a distance.

And, it turns out, the same is true (in a way) of life. Even given very different starting conditions and different substances available, evolution seems to move toward the emergence of living things, life obviously being a survival advantage for conglomerations of chemicals, molecules, and membranes. I was rereading this post from a couple of years ago in which I blogged an article about an independent evolution of life itself, not just of an eye: an animal whose basic makeup shows a totally different starting point but for which evolution resulted in similarity of functions—similar to the independent evolution of eyes.

It almost seems that life, given reasonable conditions, will inevitably arise because it lies in the path of least effort.

Written by Leisureguy

30 October 2019 at 1:20 pm

Posted in Evolution, Science

Batch 6 after 48 hours

leave a comment »

I have to take it out of the oven to roast some food for dinner, but that should be okay. The instructions that came with the starter culture in fact advise that after 24 hours the mold is generating enough of its own heat that it can be at room temperature. I think I’ll give it another 24 hours at room temperature and see what happens. As you can see, there are some places where more mold seems desirable.

Written by Leisureguy

30 October 2019 at 1:09 pm

Posted in Daily life, Food, Non-animal diet

Tagged with

Language, generalized

leave a comment »

I was reading this passage from The Hand: How Its Use Shapes the Brain, Language, and Human Culture, by Frank R. Wilson:

While I was in San Diego in 1973 I ran into Ursula Bellugi, a psycholinguist whom I had met before. She took me to her lab, where there were some deaf people signing. While I watched, she translated into English what they were saying. It took me some time to absorb what she had shown me; Ursula explained that sign language is not a code on English—she said, “It seems to be a language. There are rules for making up words and rules for making sentences out of the words, but the rules have to do with space and shape—it’s an entirely different way of doing language.” I was really stunned. It was like being told there’s another ocean that you had never heard of. After a few days of looking into it and digesting it, I began to realize that this meant that language was not about speaking and hearing, which had always been my assumption. It meant that the brain had the capacity for language, and if you can’t put it out through the mouth, you put it out through the hands.

It struck me that language is simply one example of how the brain engages the body to communicate. Take, for example, fencing. Initially one must learn positions, basic movements, simple attacks, just as one learning a language must initially learn to make the right sounds, to form words, and to string words together into sentences that communicate.

But once the basics are internalized, a conversation can ensue. Two fencers are not thinking in words but are directly expressing their brain activity (“thoughts” in the most general sense) into movement: attacks, parries, sequences of exchanges. They are doing something that each can follow and to which each responds, just as with a conversation in language. The same holds for the play of two experts in Go or chess: they no longer are thinking so much of individual moves as of the flow of the game and the ideas — the specific Go or chess ideas — being expressed in the game. They are playing the game as a language.

And the same holds in many other fields: in jazz one must first master the instrument, the scales, and the way to respond to others, but then a kind of conversation in musical ideas can ensue. The players play as a language is spoken, though the ideas are music, not words.

And the same with fashion, and with cooking, and with dance: first learn the basics, and then you no longer think of those but use them to express ideas that inhabit that particular sphere (of fashion or cooking or dance).

The brain has evolved this ability to create a communicative structure within any medium, and use that medium as we use (say) words and language. This is the power of memes: once the meme is taught, it becomes fodder for the brain’s ability to use memes to make new patterns that communicate. All memes can perhaps be used like this: once a person has been taught how to use a hammer and chisel, the possibility of creating sculpture arises.

Written by Leisureguy

30 October 2019 at 10:36 am

Posted in Daily life, Memes

A Vie-Long from a defunct forum and the wonderful Yaqi DOC

leave a comment »

This little brush, issued by a now-defunct forum, is a Vie-Long horsehair. This morning I seem not to have fully loaded it—I quit loading just when I should have added a little water and continued—but I still got a reasonably good lather, and I love the fragrance (espresso and vanilla) of Stubble Trubble’s Up & Adam.

The Yaqi double-open-comb head is remarkably good and Yaqi razors are well worth considering, IMO. Three passes, perfect result, no damage.

A good splash of Phoenix Artisan’s Spring-Heeled Jack to carry the coffee fragrance forward, and the day begins. Today I’m making fondant potatoes to try for a possible Thanksgiving dish.

Written by Leisureguy

30 October 2019 at 9:00 am

The Politics of Images in Machine Learning Training Sets

leave a comment »

Kate Crawford and Trevor Paglen write at Excavating AI:

You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning.

Something is wrong with this picture.

Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?

In short, how did we get here?

There’s an urban legend about the early days of machine vision, the subfield of artificial intelligence (AI) concerned with teaching machines to detect and interpret images. In 1966, Marvin Minsky was a young professor at MIT, making a name for himself in the emerging field of artificial intelligence.[1] Deciding that the ability to interpret images was a core feature of intelligence, Minsky turned to an undergraduate student, Gerald Sussman, and asked him to “spend the summer linking a camera to a computer and getting the computer to describe what it saw.”[2] This became the Summer Vision Project.[3] Needless to say, the project of getting computers to “see” was much harder than anyone expected, and would take a lot longer than a single summer.

The story we’ve been told goes like this: brilliant men worked for decades on the problem of computer vision, proceeding in fits and starts, until the turn to probabilistic modeling and learning techniques in the 1990s accelerated progress. This led to the current moment, in which challenges such as object detection and facial recognition have been largely solved.[4] This arc of inevitability recurs in many AI narratives, where it is assumed that ongoing technical improvements will resolve all problems and limitations.

But what if the opposite is true? What if the challenge of getting computers to “describe what they see” will always be a problem? In this essay, we will explore why the automated interpretation of images is an inherently social and political project, rather than a purely technical one. Understanding the politics within AI systems matters more than ever, as they are quickly moving into the architecture of social institutions: deciding whom to interview for a job, which students are paying attention in class, which suspects to arrest, and much else.

For the last two years, we have been studying the underlying logic of how images are used to train AI systems to “see” the world. We have looked at hundreds of collections of images used in artificial intelligence, from the first experiments with facial recognition in the early 1960s to contemporary training sets containing millions of images. Methodologically, we could call this project an archeology of datasets: we have been digging through the material layers, cataloguing the principles and values by which something was constructed, and analyzing what normative patterns of life were assumed, supported, and reproduced. By excavating the construction of these training sets and their underlying structures, many unquestioned assumptions are revealed. These assumptions inform the way AI systems work—and fail—to this day.

This essay begins with a deceptively simple question: What work do images do in AI systems? What are computers meant to recognize in an image and what is misrecognized or even completely invisible? Next, we look at the method for introducing images into computer systems and look at how taxonomies order the foundational concepts that will become intelligible to a computer system. Then we turn to the question of labeling: how do humans tell computers which words will relate to a given image? And what is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality? Finally, we turn to the purposes that computer vision is meant to serve in our society—the judgments, choices, and consequences of providing computers with these capacities.

Training AI

Building AI systems requires data. Supervised machine-learning systems designed for object or facial recognition are trained on vast amounts of data contained within datasets made up of many discrete images. To build a computer vision system that can, for example, recognize the difference between pictures of apples and oranges, a developer has to collect, label, and train a neural network on thousands of labeled images of apples and oranges. On the software side, the algorithms conduct a statistical survey of the images, and develop a model to recognize the difference between the two “classes.” If all goes according to plan, the trained model will be able to distinguish the difference between images of apples and oranges that it has never encountered before.
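The collect-label-train-classify pipeline described above can be sketched with a toy stand-in. This is not the essay's code: real systems train neural networks on raw pixels, whereas here each "image" is reduced to two hypothetical features (say, average redness and elongation) and the model is a simple perceptron — but the workflow is the same: labeled examples in, a statistical decision rule out, then classification of unseen examples.

```python
# Toy sketch of supervised classification (hypothetical features, not
# real image data): collect labeled examples, fit a model, classify.
import random

random.seed(0)

# 1. Collect labeled training data. Synthetic: "apples" cluster at
#    high redness / low elongation, "oranges" at the opposite corner.
def make_example(label):
    if label == "apple":
        return ([random.gauss(0.8, 0.05), random.gauss(0.2, 0.05)], 1)
    return ([random.gauss(0.4, 0.05), random.gauss(0.7, 0.05)], 0)

train = ([make_example("apple") for _ in range(50)]
         + [make_example("orange") for _ in range(50)])

# 2. Train: a perceptron nudges its weights on each misclassified example.
w, b = [0.0, 0.0], 0.0
for _ in range(20):  # epochs
    for x, y in train:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred  # -1, 0, or +1
        w = [w[0] + 0.1 * err * x[0], w[1] + 0.1 * err * x[1]]
        b += 0.1 * err

# 3. Classify examples the model has never encountered before.
def classify(x):
    return "apple" if w[0] * x[0] + w[1] * x[1] + b > 0 else "orange"

print(classify([0.85, 0.15]))  # red & round -> "apple"
print(classify([0.35, 0.75]))  # elongated   -> "orange"
```

Note what the essay's critique targets: everything here hinges on the labels and the two classes chosen in step 1 — the model can only ever reproduce the taxonomy it was handed.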

Training sets, then, are the foundation on which contemporary machine-learning systems are built.[5] They are central to how AI systems recognize and interpret the world. These datasets shape the epistemic boundaries governing how AI systems operate, and thus are an essential part of understanding socially significant questions about AI.

But when we look at the training images widely used in computer-vision systems, we find a bedrock composed of shaky and skewed assumptions. For reasons that are rarely discussed within the field of computer vision, and despite all that institutions like MIT and companies like Google and Facebook have done, the project of interpreting images is a profoundly complex and relational endeavor. Images are remarkably slippery things, laden with multiple potential meanings, irresolvable questions, and contradictions. Entire subfields of philosophy, art history, and media theory are dedicated to teasing out all the nuances of the unstable relationship between images and meanings.[6] . . .

Continue reading.

Written by Leisureguy

29 October 2019 at 3:33 pm

Posted in Software, Technology

What John Rawls Missed

leave a comment »

Jedidiah Britton-Purdy writes in the New Republic:

John Rawls, who died in 2002, was the most influential American philosopher of the twentieth century. His great work, A Theory of Justice, appeared in 1971 and defined the field of political philosophy for generations. It set out standards for a just society in the form of two principles. First, a just society would protect the strongest set of civil liberties and personal rights compatible with everyone else having the same rights. Second, it would tolerate economic inequalities only if they improved the situation of the poorest and most marginalized (for example, by paying doctors well to encourage people to enter a socially necessary profession).

Taken seriously, Rawls’s principles would require a radical transformation: no hedge funds unless allowing them to operate will benefit the homeless? No Silicon Valley IPOs unless they make life better for farmworkers in the Central Valley? A just society would be very different from anything the United States has ever been. Rawls argued that justice would be compatible with either democratic socialism or a “property-owning democracy” of roughly equal smallholders. One thing was clear: America could not remain as it was, on pain of injustice.

It did not remain as it was, but Rawls’s vision did not triumph either. A Theory of Justice was published in 1971, just before economic inequality began its long ascent from its lowest level in history to today’s Second Gilded Age. Richard Nixon’s “Southern strategy” was reorganizing American politics around resistance to equal rights. Within a decade, Ronald Reagan and Margaret Thatcher would lead the English-speaking world sharply away from anything resembling Rawls’s egalitarianism. Yet his philosophical stature only increased. Even his critics—the libertarian Robert Nozick, the feminist Susan Moller Okin, the communitarian Michael Sandel—ended up confirming the central and inescapable place of his thought. By the end of his life, philosophical thinking about equality, war, political authority, dissent and obedience, and global order took place on a terrain that Rawls towered over—in the shadow of justice.

That shadow provides the title of Katrina Forrester’s extraordinary study of Rawls’s thought and its legacy. Over the last 50 years, she argues, Rawls’s centrality has shaped the very idea of what philosophy is. Working in his aftermath, political philosophers have tended to emphasize ideals of consensus-seeking deliberation, legalistic formulations of political problems, and the dilemmas of individual choice in terrible situations such as war. Certain other questions have been quietly kept out: notably, the central place of conflict and collective action in politics, the tendency of capitalist democracy to fall into plutocracy, and the deep role of racism and colonialism in shaping American society and world order.

Yet as Forrester’s book demonstrates, Rawls’s approach to philosophizing about politics was never the only one, however much his influence has made it seem so. Instead, his theory of justice emerged from his distinctive experience of the exceptional decades after World War II. By tracing those historical circumstances—the political and economic assumptions of the postwar years, as well as the ways philosophy was done then—Forrester shows how Rawls’s thinking, with its strengths and blind spots, came to seem natural. Her aim is to open space for problems that Rawls neglected. What would it mean to pursue a just society while grappling with how deeply unjust and divided ours is, with how it got and stays that way?

Although Rawls’s principles of justice were in many ways radical, they were not novel. He is often thought of as the philosopher of 1960s Great Society reformism, because his principles seemed to elaborate on the goals of the civil rights movement and the war on poverty. What was new was Rawls’s mode of argument. He asked a question fundamental in political philosophy: Can any society be justified to all its members, in light of the inequalities it contains, the burdens it imposes (who empties the bedpans and gets up at midnight to make sure the subways keep running?), and the violence it deals out through police, prisons, and wars? If it cannot be just, then some of us are living in a kind of prison, and others are the wardens. If, however, justification is possible, then we might be able to create a world in which we actually approach one another as free and equal persons.

To imagine such a world, we have to shake off the habits of this one and picture ourselves as able to reset all our institutions and social relations, keeping only those that are just—that is, are justifiable to everyone who has to live within them. Rawls proposed a technique for doing this, a thought experiment that he called the “original position.” It invites us to imagine a situation in which people are to choose the world in which they will live. The key is that they choose behind a “veil of ignorance,” that is, they do not know where they would fall in the distribution of privilege and vulnerability in the societies they imagine. Would you choose to live in the United States today if you didn’t know whether you would be Elon Musk or an undocumented immigrant?

Rawls argued that, faced with this uncertainty, people would choose the world that provided the best position for the least advantaged, worst-off class of people. If you don’t know where you will fall, you will want the worst possibility to be as acceptable as possible. Economics-minded critics argued that this was too risk-averse, that one might gamble for the Silicon Valley jackpot at the risk of picking lettuce instead. But this criticism misconstrued the project: Rawls’s argument was a way of setting out exactly what it meant to justify a social world even to the people picking lettuce. If the question is, “Can this world be justified to me as a free and equal person?” Rawls was not prepared to accept, “Yes, because you might have been Elon Musk!” as an answer.

Conservative critics such as the Straussian Allan Bloom (later famous for his polemic The Closing of the American Mind) accused Rawls of cherry-picking principles to suit the liberal prejudices of the moment. In Rawls’s hands, the original position gave philosophy’s imprimatur to the democratic welfare state as well as to the civil disobedience of the civil rights movement and resistance to the Vietnam War. Friendlier readers interpreted Rawls in light of the conflicts of the early 1970s too. Philosopher Marshall Cohen’s New York Times review of A Theory of Justice welcomed a defense of American liberalism “at a time when these principles are persistently being obscured and betrayed”—presumably in Vietnam and at home by the Nixon administration.

Both of these responses, Forrester argues, miss key features of Rawls’s project. Her story begins in the decade after World War II, when Rawls undertook the work that became A Theory of Justice. A watershed event for Rawls was the 1953 publication of Ludwig Wittgenstein’s Philosophical Investigations, which along with Wittgenstein’s other late work helped to inspire a broader philosophical turn to “ordinary language.” When Rawls visited Oxford in the academic year of 1952 to ’53, this approach was richly elaborated there. It was the new philosophical frontier of the age, full of untried possibility.

Ordinary-language philosophers turned away from highly technical questions about the fundamental nature of language (What makes a sentence true? Does every word in a true statement refer to some definite object in the world?). Instead they asked how language works from the point of view of a clear-minded speaker and listener. Everyone lives inside a language, they reasoned, knows how to use its grammar, and recognizes misuse and confusion. We have to get over the philosophical impulse to seize sentences and sweat them, inquisition-style, until they confirm their truth or confess their falsehood. Philosophy is less about achieving a new kind of knowledge, more about making clear what we already know. Philosophers began to think about language and social practices such as law the way we think about games. There is no such thing as hitting a triple outside of baseball: Try as you might, you cannot do it alone, or in a group of people who have never heard of baseball and want you, please, to take your stick off the soccer pitch. But once you are playing baseball, it is clear whether or not a triple has been hit. Even close cases, such as a photo-finish race to beat a throw from an outfielder, just confirm that we know what a triple is.

The legal philosopher H.L.A. Hart argued that law, too, is a game in this way. There is no “natural law” that tells you whether you “really” must obey a law you dislike, as both dissenters and defenders of existing law had long hoped to show. But once you are involved in legal argument, you tacitly accept that certain things count as law. If you start to insist that Leviticus trumps the San Francisco municipal code, then you have become the person waving a baseball bat on the soccer pitch, hoping to get to third base in a game where third base does not exist. Forrester argues that Rawls wanted to elucidate society itself as a “game” of this sort. Social morality, which is the topic of justice, had its own tacit rules, and drawing those out could help to make clear what people already knew when not distracted by self-interest or prejudice. Like the rule book for a well-established sport, the original position and the principles that Rawls drew from it did not dictate some new morality. They helped to spell out the terms of a social practice.

If Rawls’s approach to justice emerged from the philosophical currents of the 1950s, it also formed in response to political concerns. Born in Baltimore in 1921, Rawls saw the rise of the administrative state through the 1930s and ’40s, as New Deal programs led to the establishment of an alphabet soup of government agencies to implement them: the SEC, the FHA, the PWA, the NLRB, and many more. Although Rawls was not an anti-New Deal reactionary, he shared the worries of some liberals and centrists that the expanded American state would end up interfering with personal autonomy through perennial supervision of the economy. He preferred to think that if the state established the right set of operating principles and guardrails, people would be able to get along on their own, with no more than modest political intrusion or contest.

It was bold, if not implausible, to posit a neutral and abiding set of principles in American society, which was torn by bloody labor conflict in the ’30s and ’40s, and sent its pacifists and revolutionaries to prison or worse. But Rawls wasn’t alone in doing so: The decades in which he developed his theory formed the high-water mark of the “consensus” schools of American political science and history. It became conventional to say that Americans had mostly agreed on the essential principles of liberty, equality, and democracy—and, less abstractly, private property, regulated markets, and courts of law. Conflict was the exception. Radical dissenters were outliers. The idea of consensus was essential to Rawls’s project: If Americans deeply agreed on justice, then the hidden logic of that agreement, drawn out through the original position, could both guide and limit the state.

A Theory of Justice was both radical and conservative. Yes, it proposed a sweeping . . .

Continue reading.

Written by Leisureguy

29 October 2019 at 2:14 pm

Tempeh Batch 6 at 24 hours

leave a comment »

It’s interesting that the mold seems to favor beans over grain—and substantially so thus far. I was wishing I had cooked 1.5 cups beans and 1.5 cups kamut instead of 2 cups beans and 1 cup kamut, but now I’m not so sure. We’ll see how it fares. I certainly hope kamut is not an antifungal.

But it’s early hours. We’ll see what it looks like tomorrow.

Written by Leisureguy

29 October 2019 at 2:06 pm

Boeing Is MAXed Out on Smoking Guns

leave a comment »

Maureen Tkacik writes in the New Republic:

The name Mark Forkner is by now familiar to even relatively casual followers of the Boeing 737 MAX saga. Forkner is the former chief technical pilot who conducted a series of Grey Goose–addled Skype chats with a colleague about the new 737’s now infamous self-hijacking software MCAS. The supposed piloting-correction device was, Forkner wrote, “running rampant” in the simulator; for good measure, he also noted that he had “lied to the regulators (unknowingly)” about it. These revelations sent Boeing’s stock plunging over news the feds had finally unearthed a “smoking gun” proving the company knew in advance that its badly flawed piece of software could go “crazy” and cause a crash.

Naturally, Boeing pushed back against the headlines, arguing that the chats did not constitute a “smoking gun” because they concerned a pilot operating in a simulator, not an actual cockpit. For once, many of the best chroniclers of the MAX catastrophe agreed with the company.

Still, we should not be led by this fleeting moment of consensus: Most everyone weighing in on the Forkner chats was wrong, or missing the point. At the end of the day, we don’t need a smoking gun to determine exactly what Boeing knew before the first crash, for the simple reason that we saw in real time how company officials responded after the first crash—i.e., with a veritable arsenal of smoking guns in the form of obvious lies and easily contradicted misinformation. This is also the most crucial lesson to fix in mind as Congress renews a round of hearings on the MAX fiasco on Tuesday.

Here’s one extremely revealing diptych from last fall, for example: When Boeing reps were discussing the crash in conference calls and town hall meetings with airlines and commercial pilots in November, they vehemently blamed incompetent pilots, according to a lawsuit from the Southwest Airlines pilots’ union. In talks with the Federal Aviation Administration, however, Boeing advised issuing an emergency directive “reminding” those same pilots of an obscure protocol they might need to employ “in the event of uncommanded nose down trim”—i.e., in case the plane randomly started to nosedive in unprompted fashion.

The original text of the directive contained a damning reference to the now-notorious MCAS system, but at the last minute someone removed all references to the software from the directive. This disappearing act was, in turn, an eerie repeat of the program’s mysterious near-total omission from the plane’s official 1,600-page manual.

So when prosecutors started digging into the mystery of why the murderous software had been omitted from so much of the official MAX literature and test requirements in early 2019, Boeing handed over the inboxes of Mark Forkner, who had spearheaded certain aspects of the plane’s FAA certification and left the company in rather abrupt fashion a few months before the crashes.

Seen in this context, Forkner’s improbable advancement to center stage in the MCAS narrative is another smoking gun of sorts—that is, once you realize that David Gerger, the attorney he has retained to represent him in the probe, is world famous in white collar criminal defense for being, inter alia, the lawyer who got his client Andy Fastow, Enron’s chief financial officer/debt concealment mastermind, off with a six-year sentence the same year Enron CEO Jeff Skilling got 24. The FBI has interviewed scores of former and current Boeing employees in conjunction with the various investigations into the MAX, but Forkner is the only one known to have retained a criminal attorney. In any event, once you read the chat transcript, it’s abundantly clear that the guy is just about the opposite of Fastow. The chat, which occurred between Forkner and his fellow technical pilot Patrik Gustavsson, took place on November 15, 2017, and happened to coincide with the very moment both pilots learn, via someone named “Vince,” that the MCAS software has been drastically changed from its previous iteration, wherein it operated only at extremely high speeds and existed primarily to enable the plane to properly execute military maneuvers required by the FAA testing regime:

Forkner: Oh shocker alerT!

MCAS is now active down to M .2 It’s running rampant in the sim on me at least that’s what Vince thinks is happening

Gustavsson: Oh great, that means we have to update the speed trim descritption [sic] in vol 2

Forkner: so I basically lied to the regulators (unknowingly)

Gustavsson: it wasnt [sic] a lie, no one told us that was the case

Forkner: Vince is going to get me some spreadsheet table that shows when it’s supposed to kick in. why are we just now hearing about this?

Gustavsson: I don’t know, the test pilots have kept us out of the loop

It’s really only christine that is trying to work with us, but she has been too busy

Forkner: they’re all so damn busy, and getting pressure from the program

“Out of the loop” was an understatement: The MCAS changes, which had the software operating at speeds as low as Mach 0.2 (or about 150 miles per hour), were just the tip of the iceberg. The change in MCAS’s speed capability involved, in fact, a whole host of cascading implications that were hardly self-evident; they’d all been approved nearly a year earlier, and finalized in March—the same month, in other words, that Forkner convinced the FAA to remove mention of the program from the flight manual. Forkner had spent the summer meeting with airlines and regulators, easing anxieties about the new plane, without any notion that any of this was going on. His one crime appears to have been reiterating his unknowing lie to the FAA once more in early 2017, in the process of confirming the omission of MCAS from the official flight manual. His logic had by then been rendered obsolete by events: MCAS didn’t need a manual entry, he argued, because it only ran “well outside the normal operating envelope” of the plane. Indeed, it seems highly doubtful, given Forkner’s self-professed and demonstrated inability to extract information from his colleagues, that he had any grasp of how MCAS actually worked. That’s especially the case, one can assume, since his colleagues seemed relatively committed to keeping him in the dark about the software and he was busy trying to get the simulator to calm down. Say what you will about previous corporate scandal fall guys; at least Goldman Sachs’s notorious derivatives trader Fabrice Tourre basically understood how a synthetic credit default swap worked. I strongly doubt the same can be said for Mark Forkner and MCAS.
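The article pegs Mach 0.2 at “about 150 miles per hour.” As a sanity check on that parenthetical, here is a minimal sketch of the conversion, assuming sea-level standard conditions (the speed of sound varies with altitude and temperature, so the figure is only approximate; the constant names are my own, not from the article):

```python
# Speed of sound at sea level under standard atmospheric conditions, in m/s.
SPEED_OF_SOUND_MS = 340.3
# Conversion factor: metres per second to miles per hour.
MS_TO_MPH = 2.23694

def mach_to_mph(mach: float) -> float:
    """Convert a Mach number to miles per hour at sea level."""
    return mach * SPEED_OF_SOUND_MS * MS_TO_MPH

print(round(mach_to_mph(0.2)))  # prints 152
```

At sea level this gives roughly 152 mph, consistent with the article's "about 150 miles per hour"; at the colder, thinner air of cruise altitude the same Mach number corresponds to a lower true airspeed.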

Moreover, lying (especially this brand of regulatory white-lying-by-omission-of-actionable-knowledge) to aviation regulators is simply part of staying employed at Boeing. Consider in this regard a recently publicized whistleblower complaint filed internally by current Boeing engineer Curtis Ewbank. The complaint mostly concerns his attempts to equip the MAX with an additional safeguard that arguably could have prevented the crashes, but it also describes a separate episode in which he was assigned to research in-flight 737 auto-throttle malfunctions following an inquiry from the European Union’s aviation regulator. Ewbank says he was directed to withhold information about the five additional malfunctions he uncovered on grounds that Boeing would “fix it” internally. And yet, for a sin of omission that surely seemed far more benign to someone deliberately kept unacquainted with the plane’s engineering, Forkner appears to have been Boeing’s designated in-house scapegoat—at least among those who don’t happen to be dead pilots.

More recently, Boeing chairman David Calhoun has also . . .

Continue reading.

Written by Leisureguy

29 October 2019 at 2:03 pm

The Market for Voting Machines Is Broken. This Company Has Thrived in It.

leave a comment »

Jessica Huseman reports in ProPublica:

In the glare of the hotly contested 2018 elections, things did not go ideally for ES&S, the nation’s largest manufacturer of voting technology.

In Georgia, where the race for governor had drawn national interest amid concerns about election integrity, ES&S-owned technology was in use when more than 150,000 voters inexplicably did not cast a vote for lieutenant governor. In part because the aged ES&S-managed machines did not produce paper backups, it wasn’t clear whether mechanical or human errors were to blame. Litigation surrounding the vote endures to this day.

In Indiana, ES&S’ systems were plagued by mishaps at the local level. In Johnson County, for instance, the company’s brand-new machines faltered in ways that made it difficult to know whether some people had voted more than once.

“ES&S misjudged the need for appropriate resources to serve Johnson County on Election Day 2018,” a report issued by state election officials later concluded. Johnson County subsequently terminated its contract with ES&S and, this September, paid more than $1.5 million to purchase an entirely new set of equipment.

The uneven performance by ES&S in 2018, however, did little to dent its position as one of the most popular and powerful voting technology companies in the U.S. Any number of prior controversies hadn’t either.

The vote in 2006 in Sarasota, Florida, was just one. There, ES&S machines lost around 18,000 votes; it is still unclear why. The loss was far more than the margin of victory, and a lawsuit followed that ultimately resolved little. The company said in a statement that a variety of testing done on its machines supports its claim that the devices were not at fault, but the county wound up canceling its dealings with the firm shortly afterward.

Despite such stumbles, ES&S — based in Omaha, Nebraska, and employing roughly 500 people — controls around 50% of the country’s election system market, the company says, meaning that some 70 million Americans vote using the company’s equipment.

The question of the nation’s election integrity has rarely been more urgent. President Donald Trump has repeatedly made baseless claims of voter fraud. The special counsel investigation of Russian meddling in the 2016 vote produced indictments of more than a dozen foreign nationals. How and what federal authorities are doing to prevent future scandals — incidents of outside interference or basic breakdowns in accurately counting votes — has become yet one more partisan issue in a bitterly divided Congress.

A ProPublica examination of ES&S shows it has fought hard to keep its dominance in the face of repeated controversies. The company has a reputation among both its competitors and election officials for routinely going to court when it fails to win contracts or has them taken away, suing voting jurisdictions, rivals, advocates for greater election security and others.

In September 2018, ES&S filed a federal lawsuit against Cook County, Illinois, after the county awarded a $30 million voting machine contract to another company. ES&S later dropped the lawsuit, but the dispute delayed the implementation of Cook County’s new machines, and the Chicago mayoral election this spring ultimately was conducted using the same machines that were meant to be replaced.

ES&S’ lawsuits and threats of lawsuits have helped delay or thwart progress toward better voting technology even when the litigation is unsuccessful, more than two dozen election officials and voting technology experts said in interviews.

“I’d love to see a vibrant marketplace of voting technologies, with companies competing to make better products at lower prices. We have that in mobile phones, personal computers and automobiles, but we don’t have it at all with voting technologies,” said Dan Wallach, a computer science professor at Rice University who studies election systems and has testified in many such lawsuits. “These companies’ litigiousness creates a barrier to competition that becomes a barrier to improving our elections.”

ES&S is owned by the McCarthy Group, a private equity firm, and thus its financial records — revenue, profits, salaries — are not public. A spokesperson declined to make ES&S officials available for an interview for this article, and the company instead issued written responses to questions. It said its machinery and technology were ultimately found blameless in some of the election controversies over the years. In Georgia, it said that while the company was responsible for the maintenance of the machines, it was unable to make updates to the software. The state of Georgia, it said, was ultimately responsible for the mishaps.

“ES&S believes strongly in the security, accuracy and reliability of our machines,” the company said. “We’re committed to our role in democracy and protecting the right to vote.”

Asked about its history of litigation, the company said it sued only as a way to keep competitors honest: “We believe American voters deserve accountability in this industry.”

ES&S’ chief competitors — Hart InterCivic and Dominion — are not strangers to mishaps and aggressive tactics. Dominion’s most recent voting technology failed certification in Texas, and the state also saw hundreds of aging Hart machines confusing voters and leading to accusations of vote flipping. Both companies have sued customers and each other. But industry experts insist ES&S, in part because it has been around and dominant for so long, is in a category of its own.

Whatever the merits of ES&S’ performance, election officials and experts interviewed by ProPublica say the problems with the country’s voting systems go well beyond one company. They say the very nature of the industry and the way it’s regulated work against innovation and reward the tiny handful of often trouble-plagued companies that have been around for decades.

Along with going to court, ES&S has held onto the lion’s share of the nation’s election technology business by using a variety of controversial tactics, its critics say. For years, ES&S has required states and counties that buy its machines to sign long-term deals that often obligate them to purchase a vast array of other equipment and supplies from the company. ES&S also has made it a practice to hire former election officials as lobbyists in statehouses around the country. And it has donated to individual campaigns and spent money to lobby local and federal politicians at levels far higher than its competitors.

ES&S defended its practices, saying that lobbyists play an important role in monitoring new legislation, and that ES&S is in compliance with all state and federal laws related to lobbying. “Lobbyists help ensure legislators have accurate information for use in serving the best interest of the citizens they represent,” it said. “Lobbying is a normal practice employed by all election vendors and is common across all industries.”

In August 2018, Louisiana announced it would replace its old voting machines and awarded a $95 million contract to a rival of ES&S, which was the lowest bidder. ES&S filed a complaint that accused the state of writing its request for proposals so that only the other company’s machines would satisfy the terms. Shortly after, Gov. John Bel Edwards canceled the deal, effectively siding with ES&S and forcing the state to start the process over again.

“The governor’s administration just sided with a company that was $40 million more expensive,” Louisiana Secretary of State Kyle Ardoin said in a statement after the cancelation.

In a statement, the governor’s office said that the cancelation was justified. The office also laid the blame at the feet of the secretary of state’s office, which it said added “additional requirements” to the bid “just days before the responses were due.”

Louisiana campaign finance records show that an ES&S lobbyist in Baton Rouge has donated $13,250 to Edwards’ campaigns since 2014.

Louisiana still has not obtained new voting machines, and the state is expected to cast ballots in 2020 on the same machines it’s had since 2005.

Sen. Ron Wyden, D-Ore., is among a number of legislators pushing the federal government to better oversee the quality of local election technology, from voting machines to voter registration databases.

To begin the process of assessing the security and reliability of voting machines across the country, Wyden this year sent a detailed . . .

Continue reading. There’s much more.

Written by Leisureguy

29 October 2019 at 1:40 pm
