Later On

A blog written for those whose interests more or less match mine.

Archive for December 28th, 2020

How to tell whether you’re a jerk

Just in time for New Year’s resolutions, Eric Schwitzgebel has a useful article in Nautilus. (Jerkitude might be related to the Dunning-Kruger effect: in my experience, people who are jerks are confident that they are not, while those who worry that they might be jerks usually are not.)

Here’s something you probably didn’t do this morning: Look in the mirror and ask, am I a jerk?

It seems like a reasonable question. There are, presumably, genuine jerks in the world. And many of those jerks, presumably, have a pretty high moral opinion of themselves, or at least a moderate opinion of themselves. They don’t think of themselves as jerks, because jerk self-knowledge is hard to come by.

Psychologist Simine Vazire at the University of California, Davis argues that we tend to have good self-knowledge of our own traits when those traits are both evaluatively neutral (in the sense that it’s not especially good or bad to have those traits), and straightforwardly observable.

For example, people tend to know whether they are talkative. It’s more or less okay to be talkative and more or less okay to be quiet, and in any case your degree of talkativeness is pretty much out there for everyone to see. Self-ratings of talkativeness tend to correlate fairly well with peer ratings and objective measures. Creativity, on the other hand, is a much more evaluatively loaded trait—who doesn’t want to think of themselves as creative?—and much less straightforward to assess. In keeping with Vazire’s model, we find poor correlations among self-ratings, peer ratings, and psychologists’ attempts at objective measures of creativity.

The question “am I really, truly a self-important jerk?” is highly evaluatively loaded, so you will be highly motivated to reach a favored answer: “No, of course not!” Being a jerk is also not straightforwardly observable, so you will have plenty of room to reinterpret evidence to suit: “Sure, maybe I was a little grumpy with that cashier, but she deserved it for forgetting to put my double shot in a tall cup.”

Academically intelligent people, by the way, aren’t immune to motivated reasoning. On the contrary, recent research by Dan M. Kahan of Yale University suggests that reflective and educated people might be especially skilled at rationalizing their preexisting beliefs—for example, interpreting complicated evidence about gun control in a manner that fits their political preferences.

I suspect there is a zero correlation between people’s self-opinion about their degree of jerkitude and their true overall degree of jerkitude. Some recalcitrant jerks might recognize that they are so, but others might think themselves quite dandy. Some genuine sweethearts might fully recognize how sweet they are, while others might have far too low an opinion of their own moral character.

There’s another obstacle to jerk self-knowledge, too: We don’t have a good understanding of the essence of jerkitude—not yet, at least. There is no official scientific designation that matches the full range of ordinary application of the term “jerk” to the guy who rudely cuts you off in line, the teacher who casually humiliates the students, and the co-worker who turns every staff meeting into a battle.

The scientifically recognized personality categories closest to “jerk” are the “dark triad” of narcissism, Machiavellianism, and psychopathic personality. Narcissists regard themselves as more important than the people around them, which jerks also implicitly or explicitly do. And yet narcissism is not quite jerkitude, since it also involves a desire to be the center of attention, a desire that jerks don’t always have. Machiavellian personalities tend to treat people as tools they can exploit for their own ends, which jerks also do. And yet this too is not quite jerkitude, since Machiavellianism involves self-conscious cynicism, while jerks can often be ignorant of their self-serving tendencies. People with psychopathic personalities are selfish and callous, as is the jerk, but they also incline toward impulsive risk-taking, while jerks can be calculating and risk-averse.

Another related concept is the concept of the asshole, as explored recently by the philosopher Aaron James of the University of California, Irvine. On James’s theory, assholes are people who allow themselves to enjoy special advantages over others out of an entrenched sense of entitlement. Although this is closely related to jerkitude, again it’s not quite the same thing. One can be a jerk through arrogant and insulting behavior even if one helps oneself to no special advantages.

Given the many roadblocks standing in the way, what is a potential jerk interested in self-evaluation to do?

The first step to the solution is to . . .

Continue reading.

Written by Leisureguy

28 December 2020 at 10:54 am

Dr. David Suzuki on the covid-19 vaccine

An interesting comment posted on Facebook:

Recently the Suzuki Elders received an email asking if we knew what Dr David Suzuki thought about the Covid-19 vaccine(s). The person asked, “My husband and I are debating whether or not to have the coronavirus vaccine administered to our family. We wondered, ‘What would David Suzuki do?’” Here, written in his usual fulsome manner, is David Suzuki’s response. We then asked for permission to post this letter to the larger public through our Elder Facebook page, and Dr Suzuki agreed.

December 10 2020

I have a couple of responses to your query about the COVID vaccine. Vaccination, like antibiotics, is one of the great innovations of medicine, and the story of how it came to be is a wonderful one. You may know it, but basically smallpox was a terrible disease that practically wiped out Indigenous people who had not encountered it before. In the 1700s it was reported that milkmaids contracted cowpox from milking cows. They would get lesions on their hands and arms but would recover, and they never contracted smallpox, which was a deadly disease, killing between 20% and 60% of its victims, while a third of the survivors went blind and almost all had disfiguring scars from the pox. Edward Jenner deliberately infected a boy with cowpox and, when the boy recovered, injected him with smallpox (something that would never be done today), and the boy was immune.

That began vaccination, which has saved millions of lives, and in 1980 smallpox was eradicated worldwide. It’s now extinct. Now a big push is on to do the same with polio.

So, I am a big admirer of vaccination. It involves using the body’s own mechanism of immunity by injecting an antigen, usually a coat protein of a virus or sometimes a heat-killed virus itself. The body recognizes the foreign material and creates antibodies to eliminate it. So, we have inbuilt defenses that vaccination accelerates. There have been contaminants in the past resulting from the way antigens are processed chemically. After widespread use, the Salk vaccine was found to carry a live virus that ultimately proved to be harmless. And there have been trace amounts of chemicals like mercury. But the whole basis of the anti-vax movement was a report that has been proved to be bogus, yet it is repeated over and over.

The speed with which the new vaccines have been developed is astounding. After more than 40 years, there is still no vaccine for HIV. The reason it has taken so long to get approval for the new ones is that there is a very elaborate assessment process to ensure safety.

Now the Pfizer and Moderna vaccines are radically different from the traditional antigen injection. They involve injecting the gene (mRNA) specifying the coat protein (spike); the gene gets into our cells, where the spike antigen is produced, and that, it turns out, is a very powerful way of getting our immune system to respond. The efficacy of this method is amazingly high. There might be some consequences that we can’t find until the treatment has gone on for years (esoteric issues like what happens to the mRNA: can it get into the nucleus of a cell and integrate into its DNA?). What excites me is that this new approach could allow us to create vaccines very rapidly for any new viruses that emerge in the future.

I’m sorry I’ve gone on so long. Most of medicine is about relieving symptoms when we are sick and depending on the healing capacity of the body, but vaccination is really a medical intervention that works. Would I take the new vaccine from Pfizer or Moderna? In a flash. I’m in a high-risk category, and while I know I’m in the last part of my life, I don’t want to risk hurrying the end. Would I have any concerns about unexpected deleterious effects? Nothing is absolutely sure in medicine, but I have no worries at all. Get it to me quick.

There is an aspect of anti-vaxxers (I know you’re not coming at it from conspiracy) that I have to rant about. A lot of folks are saying it’s their right to decide whether or not to get a shot. It’s all about freedom. The thing that bugs me is that freedom comes with responsibility; otherwise it’s just license to do anything. If people resist mandated vaccination as a constitutional right, what about the right of everyone else who is sharing the same air? I hope they have a complete airtight case around them so they only breathe their own air. And they should not be allowed to use public medical facilities if they do get sick, because they’ve opted out of the system by abrogating their responsibilities.

Thank you for your query. Please know I am not a medical doctor.

– David Suzuki

Written by Leisureguy

28 December 2020 at 10:41 am

The way we train AI is fundamentally flawed

Will Douglas Heaven writes in MIT Technology Review:

It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an AI trained to spot signs of disease in high-quality medical images will struggle with blurry or cropped images captured by a cheap camera in a busy clinic.

Now a group of 40 researchers across seven different teams at Google have identified another major cause for the common failure of machine-learning models. Called “underspecification,” it could be an even bigger problem than data shift. “We are asking more of machine-learning models than we are able to guarantee with our current approach,” says Alex D’Amour, who led the study.

Underspecification is a known issue in statistics, where observed effects can have many possible causes. D’Amour, who has a background in causal reasoning, wanted to know why his own machine-learning models often failed in practice. He wondered if underspecification might be the problem here too. D’Amour soon realized that many of his colleagues were noticing the same problem in their own models. “It’s actually a phenomenon that happens all over the place,” he says.

D’Amour’s initial investigation snowballed and dozens of Google researchers ended up looking at a range of different AI applications, from image recognition to natural language processing (NLP) to disease prediction. They found that underspecification was to blame for poor performance in all of them. The problem lies in the way that machine-learning models are trained and tested, and there’s no easy fix.

The paper is a “wrecking ball,” says Brandon Rohrer, a machine-learning engineer at iRobot, who previously worked at Facebook and Microsoft and was not involved in the work.

Same but different

To understand exactly what’s going on, we need to back up a bit. Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you’re done.

What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don’t affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world.
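
To make that concrete, here is a minimal sketch of the phenomenon (my own toy example, not the Google team’s setup): it trains the same small scikit-learn neural network several times, changing only the random seed, and then checks both how the runs score on a shared held-out test set and how often they disagree on individual inputs. The synthetic dataset, model size, and seed count are all arbitrary choices made for illustration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in data; any train/test split would do for the point being made.
    X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=0)

    models = []
    for seed in range(5):  # five training runs, identical except for the seed
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
        clf.fit(X_train, y_train)
        print(f"seed {seed}: test accuracy = {clf.score(X_test, y_test):.3f}")
        models.append(clf)

    # All runs "pass the test" about equally well, yet they are not the same
    # function: they disagree on a noticeable fraction of individual inputs.
    preds = np.stack([m.predict(X_test) for m in models])
    disagreement = (preds != preds[0]).any(axis=0).mean()
    print(f"fraction of test inputs where the runs disagree: {disagreement:.3f}")

Equal aggregate accuracy, in other words, does not pin down a single model.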

In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.

This is not the same as data shift, where training fails to produce a good model because the training data does not match real-world examples. Underspecification means something different: even if a training process can produce a good model, it could still spit out a bad one because it won’t know the difference. Neither would we.

The researchers looked at the impact of underspecification on a number of different applications. In each case they used the same training processes to produce multiple machine-learning models and then ran those models through stress tests designed to highlight specific differences in their performance.

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test—suggesting that they were equally accurate—their performance varied wildly in the stress test.
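
As a hedged, toy-scale analogue of that experiment (the real study used ImageNet-scale models and purpose-built stress tests, which the snippet below does not reproduce), one can train the same classifier under a handful of seeds and compare the spread of accuracies on clean held-out data with the spread on a perturbed copy of it; here simple Gaussian noise stands in for the corruptions a real stress test would apply.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Crude stand-in for a stress test: the same test inputs with added noise.
    rng = np.random.default_rng(0)
    X_stress = X_te + rng.normal(scale=1.5, size=X_te.shape)

    clean, stressed = [], []
    for seed in range(10):  # runs differ only in the initialization seed
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
        clf.fit(X_tr, y_tr)
        clean.append(clf.score(X_te, y_te))
        stressed.append(clf.score(X_stress, y_te))

    print(f"clean accuracy range:    {min(clean):.3f} to {max(clean):.3f}")
    print(f"stressed accuracy range: {min(stressed):.3f} to {max(stressed):.3f}")

The spread on the clean test set is typically tiny; under the perturbation it tends to open up, which is the underspecification pattern the article describes, though the exact magnitude here is just whatever this toy setup happens to produce.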

The stress test used . . .

Continue reading. There’s more.

Later in the article:

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

We might need to rethink how we evaluate neural networks, says Rohrer. “It pokes some significant holes in the fundamental assumptions we’ve been making.”

Written by Leisureguy

28 December 2020 at 10:04 am

Augmented Hoppin’ John post

I learned more about how and why Hoppin’ John has changed from what it was back in the day (late 19th century), and about how to replicate the earlier version, so I updated my Hoppin’ John post with the new information.

Written by Leisureguy

28 December 2020 at 9:55 am

Posted in Food, Recipes & Cooking

The very excellent Fine slant and an excellent collaboration between Declaration Grooming and Chatillon Lux

That Omega 21762 boar brush is wonderfully soft and gentle, and, well loaded with Declaration Grooming’s bison-tallow formula of Unconditional Surrender, it made a fine, fragrant lather.

Fine’s slant is one of the best slants available and a total joy if you use the correct angle (handle held far from the face so the razor rides the cap and not the guard), light pressure (the razor itself is light, which is a good reminder), and a brand of blade that works well for you in that razor. With three easy passes, I had a totally smooth face.

Chatillon Lux is responsible for the fragrance, and their aftershave lotion/splash is not only fragrant but also very nice for the skin. A great finish to a fine shave.

Written by Leisureguy

28 December 2020 at 9:48 am

Posted in Shaving
