Later On

A blog written for those whose interests more or less match mine.

Archive for January 10th, 2019

This is a Big Deal: Despite past denials, D.C. Cardinal Donald Wuerl knew of sexual misconduct allegations against ex-cardinal Theodore McCarrick and reported them to the Vatican


You really need to read the entire, factually stated, totally damning article by Michelle Boorstein in the Washington Post, but let me just get you started:

D.C. Cardinal Donald Wuerl knew of sexual misconduct allegations against ex-cardinal Theodore McCarrick and reported them to the Vatican in 2004, church officials confirmed Thursday evening, despite portraying himself as unaware of McCarrick complaints since last summer.

Robert Ciolek, a former priest who reached a settlement with the church in 2005 after reporting abuse and misconduct by clerics including McCarrick, told The Post he recently learned that the Pittsburgh Diocese has a file that shows that Wuerl was aware of his allegations against McCarrick. The file includes documentation that Wuerl, who was bishop of Pittsburgh at the time, shared the information with the then-Vatican ambassador, Gabriel Montalvo.

The content of the document, which Ciolek told The Post he saw in December, clashes sharply with Wuerl’s public statements about McCarrick since the older cleric was suspended in June due to a complaint he groped an altar boy decades ago.

The explosive allegations against McCarrick, which include two other instances of abusing minors as well as harassment of seminarians, touched off a full-blown crisis in the American Catholic Church. All along, Wuerl has largely rejected charges that he played a role in it.

McCarrick’s case is reportedly now about to be decided in one of the highest-profile clergy sex abuse trial processes to come before Rome.

Both the archdiocese of D.C. and the Pittsburgh diocese Thursday night acknowledged Wuerl knew and told the Vatican, and said they were simply trying to protect Ciolek’s confidentiality.

Ciolek dismissed that Thursday. “There was nothing that precluded them from talking to anyone” about his case, he said. “Wuerl at worst could have said: ‘I am aware but I can’t name that person.’”

Wuerl himself was forced to retire early in the fall due to his alleged mishandling of clergy sexual misconduct when he was bishop of Pittsburgh, and his replacement is rumored to be named in coming weeks. A grand jury report by Pennsylvania prosecutors detailing extensive clergy abuse and cover-up and criticizing Wuerl for not doing more has heaped fuel on the belief that the Catholic Church hasn’t reformed itself at its highest levels.

Many Catholics have been angered and hurt by reports that the popular, prominent McCarrick regularly engaged in misconduct and that it was an open secret among the hierarchy.

Wuerl, a close ally of Pope Francis . . .

Continue reading.

Written by LeisureGuy

10 January 2019 at 6:41 pm

A New Approach to Understanding How Machines Think


John Pavlus writes in Quanta:

If a doctor told you that you needed surgery, you would want to know why — and you’d expect the explanation to make sense to you, even if you’d never gone to medical school. Been Kim, a research scientist at Google Brain, believes that we should expect nothing less from artificial intelligence. As a specialist in “interpretable” machine learning, she wants to build AI software that can explain itself to anyone.

Since its ascendance roughly a decade ago, the neural-network technology behind artificial intelligence has transformed everything from email to drug discovery with its increasingly powerful ability to learn from and identify patterns in data. But that power has come with an uncanny caveat: The very complexity that lets modern deep-learning networks successfully teach themselves how to drive cars and spot insurance fraud also makes their inner workings nearly impossible to make sense of, even by AI experts. If a neural network is trained to identify patients at risk for conditions like liver cancer and schizophrenia — as a system called “Deep Patient” was in 2015, at Mount Sinai Hospital in New York — there’s no way to discern exactly which features in the data the network is paying attention to. That “knowledge” is smeared across many layers of artificial neurons, each with hundreds or thousands of connections.

As ever more industries attempt to automate or enhance their decision-making with AI, this so-called black box problem seems less like a technological quirk than a fundamental flaw. DARPA’s “XAI” project (for “explainable AI”) is actively researching the problem, and interpretability has moved from the fringes of machine-learning research to its center. “AI is in this critical moment where humankind is trying to decide whether this technology is good for us or not,” Kim says. “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology. We might just drop it.”

Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.

TCAV was originally tested on machine-learning models trained to recognize images, but it also works with models trained on text and certain kinds of data visualizations, like EEG waveforms. “It’s generic and simple — you can plug it into many different models,” Kim says.

Quanta Magazine spoke with Kim about what interpretability means, who it’s for, and why it matters. An edited and condensed version of the interview follows.

You’ve focused your career on “interpretability” for machine learning. But what does that term mean, exactly?

There are two branches of interpretability. One branch is interpretability for science: If you consider a neural network as an object of study, then you can conduct scientific experiments to really understand the gory details about the model, how it reacts, and that sort of thing.

The second branch of interpretability, which I’ve been mostly focused on, is interpretability for responsible AI. You don’t have to understand every single thing about the model. But as long as you can understand just enough to safely use the tool, then that’s our goal.

But how can you have confidence in a system that you don’t fully understand the workings of?

I’ll give you an analogy. Let’s say I have a tree in my backyard that I want to cut down. I might have a chainsaw to do the job. Now, I don’t fully understand how the chainsaw works. But the manual says, “These are the things you need to be careful of, so as to not cut your finger.” So, given this manual, I’d much rather use the chainsaw than a handsaw, which is easier to understand, but would make me spend five hours cutting down the tree.

You understand what “cutting” is, even if you don’t exactly know everything about how the mechanism accomplishes that.

Yes. The goal of the second branch of interpretability is: Can we understand a tool enough so that we can safely use it? And we can create that understanding by confirming that useful human knowledge is reflected in the tool.

How does “reflecting human knowledge” make something like a black box AI more understandable?

Here’s another example. If a doctor is using a machine-learning model to make a cancer diagnosis, the doctor will want to know that the model isn’t picking up on some random correlation in the data that we don’t want to pick up. One way to make sure of that is to confirm that the machine-learning model is doing something that the doctor would have done. In other words, to show that the doctor’s own diagnostic knowledge is reflected in the model.

So if doctors were looking at a cell specimen to diagnose cancer, they might look for something called “fused glands” in the specimen. They might also consider the age of the patient, as well as whether the patient has had chemotherapy in the past. These are factors or concepts that the doctors trying to diagnose cancer would care about. If we can show that the machine-learning model is also paying attention to these factors, the model is more understandable, because it reflects the human knowledge of the doctors.

Is this what TCAV does — reveal which high-level concepts a machine-learning model is using to make its decisions?

Yes. Prior to this, interpretability methods only explained what neural networks were doing in terms of “input features.” What do I mean by that? If you have an image, every single pixel is an input feature. In fact, Yann LeCun [an early pioneer in deep learning and currently the director of AI research at Facebook] has said that he believes these models are already superinterpretable because you can look at every single node in the neural network and see numerical values for each of these input features. That’s fine for computers, but humans don’t think that way. I don’t tell you, “Oh, look at pixels 100 to 200, the RGB values are 0.2 and 0.3.” I say, “There’s a picture of a dog with really puffy hair.” That’s how humans communicate — with concepts.

How does TCAV perform this translation between input features and concepts?

Let’s return to the example of a doctor using a machine-learning model that has already been trained to classify images of cell specimens as potentially cancerous. You, as the doctor, may want to know how much the concept of “fused glands” mattered to the model in making positive predictions of cancer. First you collect some images — say, 20 — that have examples of fused glands. Now you plug those labeled examples into the model.

Then what TCAV does internally is called “sensitivity testing.” When we add in these labeled pictures of fused glands, how much does the probability of a positive prediction for cancer increase? You can output that as a number between zero and one. And that’s it. That’s your TCAV score. If the probability increased, it was an important concept to the model. If it didn’t, it’s not an important concept.
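The interview’s description is a shorthand; the published TCAV method (Kim et al.) formalizes it as follows: a linear classifier is trained to separate the concept examples from random examples in the network’s activation space, its normal vector is the “concept activation vector” (CAV), and the score is the fraction of inputs whose class logit increases when activations move in that direction. A toy NumPy sketch of that idea — all data here is fabricated, and the least-squares separator stands in for the paper’s logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear separator between concept and random activations;
    its (unit-normalized) normal vector is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1.0] * len(concept_acts) + [0.0] * len(random_acts))
    # Centered least squares as a stand-in for logistic regression.
    w, *_ = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)
    return w / np.linalg.norm(w)

def tcav_score(cav, class_grads):
    """Fraction of inputs whose class logit increases along the
    concept direction (positive directional derivative)."""
    return float(np.mean(class_grads @ cav > 0))

# Fabricated stand-ins: 20 "fused gland" examples vs. 20 random images,
# represented as points in a 64-dimensional activation space.
concept_acts = rng.normal(1.0, 1.0, size=(20, 64))
random_acts = rng.normal(0.0, 1.0, size=(20, 64))
cav = concept_activation_vector(concept_acts, random_acts)

# Stand-in gradients of the "cancer" logit w.r.t. the activations.
grads = rng.normal(0.5, 1.0, size=(100, 64))
score = tcav_score(cav, grads)  # a number between 0 and 1
```

In practice `concept_acts`, `random_acts`, and `grads` would come from a trained network’s intermediate layer and the gradient of the class logit with respect to it; the names here are illustrative only.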

“Concept” is a fuzzy term. Are there any that won’t work with TCAV?

If you can’t express your concept using some subset of your [dataset’s] medium, then it won’t work. If your machine-learning model is trained on images, then the concept has to be visually expressible. Let’s say I want to visually express the concept of “love.” That’s really hard.

We also carefully validate the concept. We have a statistical testing procedure that rejects the concept vector if it has the same effect on the model as a random vector. If your concept doesn’t pass this test, then the TCAV will say, “I don’t know. This concept doesn’t look like something that was important to the model.”
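That validation step can be sketched as a permutation-style check: compute TCAV scores for many random directions and reject the concept if its score is indistinguishable from that null distribution. This is a hedged toy version — the actual procedure in the TCAV paper trains multiple CAVs and applies a t-test, and all the data below is fabricated:

```python
import numpy as np

rng = np.random.default_rng(1)

def tcav_score(cav, grads):
    # Fraction of inputs whose class logit increases along `cav`.
    return float(np.mean(grads @ cav > 0))

# Stand-in gradients of the class logit for 200 inputs.
grads = rng.normal(0.3, 1.0, size=(200, 32))

# Candidate concept direction (here, one the gradients genuinely
# align with, so the test should pass).
concept_cav = np.ones(32) / np.sqrt(32)
concept_score = tcav_score(concept_cav, grads)

# Scores for many random unit vectors form the null distribution.
null_scores = np.array([
    tcav_score(v / np.linalg.norm(v), grads)
    for v in rng.normal(size=(500, 32))
])

# Crude two-sided empirical p-value: how often does a random
# direction deviate from 0.5 at least as much as the concept does?
p = float(np.mean(np.abs(null_scores - 0.5) >= abs(concept_score - 0.5)))
meaningful = p < 0.05  # reject the concept when this is False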

Is TCAV essentially about creating trust in AI, rather than a genuine understanding of it?

It is not — and I’ll explain why, because it’s a fine distinction to make.

We know from repeated studies in cognitive science and psychology that humans are very gullible. What that means is . . .

Continue reading. There’s much more.

Written by LeisureGuy

10 January 2019 at 5:37 pm

A local gin with a sense of humor


From the label on the bottle:

[This gin] is crafted using hand-foraged herbs and unique botanicals grown in British Columbia. It’s a gin that reflects the origin of its ingredients; with bold flavours as undomesticated as the rugged BC coastline, delicately balanced with the bright freshness of an old growth forest.

Created entirely in-house, the low wines are first distilled on a 1920s British-made still affectionately christened ‘Old George’ before being refined in a new German-built refractory still. Finally, the gin is redistilled with BC-grown botanicals creating a spirit featuring a rich velvety mouth feel and punctuated with coastal rainforest flavours. It’s a gin that is not only influenced by our BC environment, but actually made from it!

The interesting thing about those old-growth BC forests is that the universal logging method in BC was clear-cutting: minimizing the cost of extracting timber, with no regard for reforestation, aesthetics, or the protection of fish and wildlife habitat.

With that in mind, this gin (from Phillips Fermentorium Distilling Co. here in Victoria) has an apt albeit somewhat humorous name:

Written by LeisureGuy

10 January 2019 at 4:44 pm

Posted in Business, Drinks

My aluminum Phoenix DOC with Dr. Jon’s Propaganda


The Bruce synthetic from RazoRock is quite nice in the hand (the handle is a cylinder with concave sides) and on the face, and it easily made a fine lather from Dr. Jon’s Propaganda, whose fragrance notes are vanilla, sandalwood, mandarin orange, patchouli, and musk. I have the earlier version of the soap; the current version adds slippery elm bark, aloe, soy wax, jojoba oil, and sunflower oil to the list of ingredients. Complete ingredients now:

Stearic Acid, Water, Castor Oil, Potassium Hydroxide, Shea Butter, Mango Butter, Babassu Oil, Essential/Fragrance Oils, Sodium Hydroxide, Sunflower Oil, Avocado Oil, Evening Primrose Oil, Meadowfoam Oil, Aloe Vera, Soy Wax, Slippery Elm Bark, Citric Acid.

I imagine the citric acid is to help the soap work better in hard water.

Three passes with Phoenix Artisan’s aluminum version of its double-open-comb razor, loosened just an itsy amount to make it more efficient. With that adjustment it’s a great razor in both efficiency and comfort.

A good splash of Stetson Classic (somehow expecting Paul Sebastian) and the day begins.

Written by LeisureGuy

10 January 2019 at 8:56 am

Posted in Shaving
