Later On

A blog written for those whose interests more or less match mine.

Archive for July 23rd, 2021

A cockatoo meme: Opening rubbish bins


A cockatoo did the work while others observed. (Credit: Barbara Klump/Max Planck Institute of Animal Behavior)

James Gorman reports in the NY Times:

Sulfur-crested cockatoos, which may sound exotic to Americans and Europeans, are everywhere in suburban areas of Sydney. They have adapted to the human environment, and since they are known to be clever at manipulating objects it’s not entirely surprising that they went after a rich food source. But you might say that the spread of their latest trick, to open trash cans, blows the lid off social learning and cultural evolution in animals.

Not only do the birds acquire the skill by imitating others, which is social learning, but the details of technique also evolve to differ among different groups as the innovation spreads, a mark of animal culture.

Barbara C. Klump, a behavioral ecologist at the Max Planck Institute of Animal Behavior in Germany, and the first author of a report on the cockatoo research in the journal Science, said, “It’s actually quite a complex behavior because it has multiple steps.”

. . . Dr. Klump and her colleagues in Germany and Australia plotted the spread of the behavior in greater Sydney over the course of two years. The behavior became more common, but it didn’t pop up in random locations as it might if different birds were figuring out the trash bin technique on their own. It spread outward from its origin, indicating that the cockatoos were learning how to do it from each other.

The cockatoos’ new skill opens up a whole new resource for the birds. This is adaptive cultural evolution, spreading at lightning speed compared to biological evolution. Dr. Klump noted that culture has been called a second inheritance system and that applies to both humans and animals, allowing us and them to quickly adapt and change our behavior.

It’s impossible to know which bird or birds first developed the trash bin technique, but apparently there is not one lone cockatoo genius. During the course of the study, the behavior popped up a second time in a suburb too far away from the first for the spread to be by social learning, Dr. Klump said. The technique was invented again.

Scientists have observed social learning and what they term culture in primates, song birds and other animals. Different groups of chimpanzees show slightly different patterns of tool use, for example, as the cockatoos did. . .

Continue reading.

In the future we will doubtless see a report of a new improved clamp being invented for rubbish bins, and then of some unknown cockatoo inventing a crude and simplified oxy-acetylene torch. Once these sorts of races start, they tend to persist.

Written by Leisureguy

23 July 2021 at 4:31 pm

Posted in Daily life, Memes, Science

Lifting Atlas Stones: A Strongman Competition


The five stones usually range in weight from 160 kg to 200 kg, though Tom Stoltman, in the Castle Stone world-record attempt in May 2020, managed to conquer a 286 kg stone. In this video he also sets a speed record.

Written by Leisureguy

23 July 2021 at 4:11 pm

Posted in Games

The Chatbot Problem


Stephen Marche writes in the New Yorker:

In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.” Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.

In his 1950 science-fiction collection, “I, Robot,” Isaac Asimov outlined his three laws of robotics. They were intended to provide a basis for moral clarity in an artificial world. “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is the first law, which robots have already broken. During the recent war in Libya, Turkey’s autonomous drones attacked General Khalifa Haftar’s forces, selecting targets without any human involvement. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” a report from the United Nations read. Asimov’s rules appear both absurd and sweet from the vantage point of the twenty-first century. What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.

Artificial intelligence is an ethical quagmire. Its power can be more than a little nauseating. But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?

Ethics has never been a strong suit of Silicon Valley, to put the matter mildly, but, in the case of A.I., the ethical questions will affect the development of the technology. When Lemonade, an insurance app, announced that its A.I. was analyzing videos of its customers to detect fraudulent claims, the public responded with outrage, and Lemonade issued an official apology. Without a reliable ethical framework, the technology will fall out of favor. If users fear artificial intelligence as a force for dehumanization, they’ll be far less likely to engage with it and accept it.

Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values. The crisis, as it’s arriving, possesses aspects of a horror film. “As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice,’ ” Christian writes. “We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.” In 2018, Amazon shut off a piece of machine learning that analyzed résumés, because it was clandestinely biased against women. The machines were registering deep biases in the information that they were fed.

Language is a thornier problem than other A.I. applications. For one thing, the stakes are higher. Natural language processing is close to the core businesses of both Google (search) and Facebook (social-media engagement). Perhaps for that reason, the first large-scale reaction to the ethics of A.I. natural language processing could not have gone worse. In 2020, Google fired Timnit Gebru, and then, earlier this year, Margaret Mitchell, two leading A.I.-ethics researchers. Waves of protest from their colleagues followed. Two engineers at Google quit. Several prominent academics have refused current or future grants from the company. Gebru claims that she was fired after being asked to retract a paper that she co-wrote with Mitchell and two others called “On the Dangers of Stochastic Parrots: Can Language Models be Too Big?” (Google disputes her claim.) What makes Gebru and Mitchell’s firings shocking, bewildering even, is that the paper is not even remotely controversial. Most of it isn’t even debatable.

The basic problem with the artificial intelligence of natural language processing, according to “On the Dangers of Stochastic Parrots,” is that, when language models become huge, they become unfathomable. The data set is simply too large to be comprehended by a human brain. And without being able to comprehend the data, you risk manifesting the prejudices and even the violence of the language that you’re training your models on. “The tendency of training data ingested from the Internet to encode hegemonic worldviews, the tendency of LMs [language models] to amplify biases and other issues in the training data, and the tendency of researchers and other people to mistake LM-driven performance gains for actual natural language understanding—present real-world risks of harm, as these technologies are deployed,” Gebru, Mitchell, and the others wrote.

As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gendered othering goes. “On the Dangers of Stochastic Parrots” offers a number of examples: “Biases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities.” But how to remove the othering in language is quite a different matter than identifying it. Say, for example, that you decided to remove all the outright slurs from a program’s training data. “If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light,” Gebru and the others write. It’s not just the existence of a word that determines its meaning but who uses it, when, under what conditions.

The evidence for stochastic parroting is fundamentally incontrovertible, rooted in the very nature of the technology. The tool applied to solve many natural language processing problems is called a transformer, which uses techniques called positioning and self-attention to achieve linguistic miracles. Every token (a term for a quantum of language, think of it as a “word,” or “letters,” if you’re old-fashioned) is affixed a value, which establishes its position in a sequence. The positioning allows for “self-attention”—the machine learns not just what a token is and where and when it is but how it relates to all the other tokens in a sequence. Any word has meaning only insofar as it relates to the position of every other word. Context registers as mathematics. This is the splitting of the linguistic atom.

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? I remember a fact that I learned when I was forced to study Old English for my Ph.D.: in English, the terms for food eaten at the table derive from French—beef, mutton—while the terms for animals in the field derive from Anglo-Saxon—cow, sheep. That difference registers ethnicity and class: the Norman conquerors ate what the Saxon peons tended. So every time you use those most basic words—cow, beef—you express a fundamental caste structure that differentiates consumer from worker. Progressive elements in the United States have made extensive attempts to remove gender duality from pronouns. But it’s worth noting that, in French or in Spanish, all nouns are gendered. A desk, in French, is masculine, and a chair is feminine. The sky itself is gendered: the sun is male, the moon female. Ultimately, what we can fix in language is parochial. Caste and gender are baked into every word. Eloquence is always a form of dominance.

Government is currently offering no solutions. Sam Altman, the C.E.O. of OpenAI, which created the deep-learning network GPT-3, has been very open about his pursuit of any kind of governance whatsoever. In Washington, he has found, discussing the long-term consequences of artificial intelligence leads to “a real eyes-glazed-over look.” The average age of a U.S. senator is sixty-three. They are missing in action.

Let’s imagine an A.I. engineer who wants to create a chatbot that aligns with human values. Where is she supposed to go to determine a reliable metric of “human values”?. . .

Continue reading.
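Marche’s paragraph on tokens, positions, and self-attention is concrete enough to sketch in code. Here’s a minimal toy illustration (mine, not from the article; the sizes are arbitrary, and where a real transformer uses learned projection matrices for queries, keys, and values, this sketch simply reuses the raw embeddings) of the two ideas he names, positional encoding and self-attention:

import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal values that mark where each token sits in the sequence.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(x):
    # Scaled dot-product attention: each token's output is a weighted
    # mix of every token, with weights derived from pairwise affinities.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)            # token-to-token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # softmax over positions
    return w @ x

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))             # four "tokens" in eight dimensions
x = tokens + positional_encoding(4, 8)       # position is added to meaning
print(self_attention(x).shape)               # (4, 8): each row now mixes in all the others

Each output row is literally a weighted sum over every token in the sequence, which is what Marche means when he says that context registers as mathematics.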

Written by Leisureguy

23 July 2021 at 1:23 pm

The Leakage Problem


Pedestrian Observations has an interesting post:

I’ve spent more than ten years talking about the cost of construction of physical infrastructure, starting with subways and then branching out to other things.

And yet there’s a problem of comparable size when discussing infrastructure waste, which, lacking any better term for it, I am going to call leakage. The definition of leakage is any project that is bundled into an infrastructure package that is not useful to the project under discussion and is not costed together with it. A package, in turn, is any program that considers multiple projects together, such as a stimulus bill, a regular transport investment budget, or a referendum. The motivation for the term leakage is that money deeded to megaprojects leaks to unrelated or semi-related priorities. This often occurs for political reasons but apolitical examples exist as well.

Before going over some examples, I want to clarify that the distinction between leakage and high costs is not ironclad. Sometimes, high costs come from bundled projects that are costed together with the project at hand; in the US they’re called betterments, for example the $100 million 3 km bike lane called the Somerville Community Path for the first, aborted iteration of the Green Line Extension in Boston. This blur is endemic to general improvement projects, such as rail electrification, and also to Northeast Corridor high-speed rail plans, but elsewhere, the distinction is clearer.

Finally, while normally I focus on construction costs for public transport, leakage is a big problem in the United States for highway investment, for political reasons. As I will explain below, I believe that nearly all highway investment in the US is waste thanks to leakage, even ignoring the elevated costs of urban road tunnels.

State of good repair

A month ago, I uploaded a video about the state of good repair grift in the United States. The grift is that SOGR is maintenance spending funded out of other people’s money – namely, a multiyear capital budget – and therefore the agency can spend it with little public oversight. The construction of an expansion may be overly expensive, but at the end of the day, the line opens and the public can verify that it works, even for a legendarily delayed project like Second Avenue Subway, the Berlin-Brandenburg Airport, or the soon-to-open Tel Aviv Subway. It’s a crude mechanism, since the public can’t verify safety or efficiency, but it’s impossible to fake: if nothing opens, it embarrasses all involved publicly, as is the case for California High-Speed Rail. No such mechanism exists for maintenance, and therefore, incompetent agencies have free rein to spend money with nothing to show for it. I recently gave an example of unusually high track renewal costs in Connecticut.

The connection with leakage is that capital plans include renewal and long-term repairs and not just expansion. Thus, SOGR is leakage, and when its costs go out of control, they displace funding that could be used for expansion. The NEC Commission proposal for high-speed rail on the Northeast Corridor calls for a budget of $117 billion in 2020 dollars, but there is extensive leakage to SOGR in the New York area, especially the aforementioned Connecticut plan, and thus for such a high budget the target average speed is about 140 km/h, in line with the upgraded legacy trains that high-speed lines in Europe replace.

Regionally, too, the monetary bonfire that is SOGR sucks the oxygen out of the room. The vast majority of the funds for MTA capital plans in New York is either normal replacement or SOGR, a neverending program whose backlog never shrinks despite billions of dollars in annual funding. The MTA wants to spend $50 billion in the next 5 years on capital improvements; visible expansion, such as Second Avenue Subway phase 2, moving block signaling on more lines, and wheelchair accessibility upgrades at a few stations, consists of only a few billion dollars of this package.

This is not purely an American issue. Germany’s federal plan for transport investment calls for 269.6 billion euros in project capital funding from 2016 to 2030, including a small proportion for projects planned now to be completed after 2031; as detailed on page 14, about half of the funds for both road and rail are to go to maintenance and renewal and only 40% to expansion. But 40% for expansion is still substantially less leakage than seen in American plans like that for New York.

Betterments and other irrelevant projects

Betterments straddle the boundary between high costs and leakage. They can be bundled with . . .

Continue reading.

Written by Leisureguy

23 July 2021 at 12:56 pm

Thinking about a summer stew of white beans, lacinato kale, and ‘nduja: Recipe considerations


It seemed as though I was reading about ‘nduja everywhere, so when I saw some in a local market I immediately bought it to try. It’s a soft, spreadable salume, and as the article at the link notes,

‘nduja’s special spreadable texture comes from its meat-to-fat makeup. Dry-cured salumi, like soppressata, typically contain a ratio of roughly three parts lean muscle to one part fat. ‘Nduja inverts that ratio: Its composition is three parts fat for every one part lean muscle.

With such a high fat content, it is very rich indeed — and it also melts if used in cooking, as in a stew, for example.

I also bought some quite handsome spring shallots — they look like spring onions and can be used in much the same way — so I was thinking of how I might use them. I got the idea of making a stew: spring shallots and garlic (I had also picked up some new red Russian garlic) along with beans, greens, and grains — and tomatoes, since Italian is the theme.

The photo above shows the spring shallots and red Russian garlic I’ll be using, along with my Bulat chef’s knife, which I find myself using more and more. It came with a 17° angle, but I redid the edge to 15° and I love using it.

I thought of lacinato kale (or green kale if it looks fresher) and white beans, which with the tomatoes will give me the colors of the Italian flag. For grain I have on hand some cooked unpolished barnyard millet, so a cup of that. Say, two cans of white beans, 1 bunch of lacinato kale, a couple of the spring shallots, cloves from a head of the garlic — maybe two because the heads are small — a few Roma tomatoes, and a chunk of ‘nduja. I did a recipe search for ideas and found this recipe in Serious Eats, a site I trust. I adapted the recipe to be more what I had in mind, and I got this list of ingredients for my version:

• 2 tablespoons extra-virgin olive oil
• 2 spring shallots, chopped
• cloves from 1 head of red Russian garlic
• salt
• black pepper
• dried marjoram
• dried basil
• dried spearmint
• 1/2 cup ‘nduja
• 2 18-oz cans white beans, drained and rinsed (that size is common here)
• 1 cup cooked intact whole grain — unpolished millet this time
• 1 quart no-salt-added vegetable broth
• 3-4 Roma tomatoes
• 1 bunch lacinato kale, stems minced (and cooked with shallots), leaves chopped
• juice of 1 lemon

So my plan is to use the Serious Eats recipe as a guide, but replace 1 can of the beans with intact whole grain, use spring shallots instead of the usual shallots, and add tomatoes. To the gremolata the recipe includes, I think I’ll add some finely chopped red cabbage to make sure I get all the sulforaphane I can. Sulforaphane is one reason cruciferous vegetables have numerous health benefits.

To obtain the sulforaphane, I’ll chop the kale right away and let it rest for about 45 minutes before I cook it so that (as explained in this video) the sulforaphane will have time to form. Sulforaphane is not heat sensitive and so withstands cooking, but — and here’s the tricky part — sulforaphane is not present in the intact fresh vegetable or greens. What those have instead are the sulforaphane precursor glucoraphanin (also not heat sensitive) and the enzyme myrosinase. It’s the myrosinase that converts glucoraphanin to sulforaphane once the vegetable is chopped (or shredded, run through a food processor, blended, crushed, or chewed).

Unfortunately, myrosinase is heat sensitive, so if you chop/shred/process/etc. a cruciferous vegetable and immediately heat it (by cooking it), you destroy the myrosinase before it has a chance to convert the glucoraphanin into sulforaphane; you get no sulforaphane and thus lose its benefits.

One solution is to chop, shred, process, or crush/massage the cruciferous vegetable or greens to start the conversion reaction that transforms glucoraphanin into sulforaphane, and let the reaction run to completion before heating/cooking the vegetable. Once myrosinase has done its job and the sulforaphane’s been made, heat is not a problem because heat doesn’t hurt sulforaphane.

The video includes another way to get the sulforaphane you want without the wait, but I don’t mind waiting. I practice patience.

But if you can’t wait and must cook immediately, then you can add to the cooked cruciferous vegetables/greens some uncooked cruciferous food that still has active myrosinase to carry through the transformation: powdered mustard seed, horseradish, chopped daikon radish, or fresh (uncooked) shredded red cabbage as a topping. It doesn’t take much (1/2 teaspoon of ground mustard is plenty), and you get the sulforaphane with no wait. Note that prepared mustard won’t do it: it has been cooked/heated and so has lost its myrosinase. Mustard powder, though, is simply the result of grinding mustard seeds, so the myrosinase is still active.

Using one of those myrosinase-rich toppings is particularly important if you’re using frozen broccoli, kale, Brussels sprouts, etc., because immediately before the vegetables are frozen, they are blanched, and that heat destroys the myrosinase in them.

Of course, you can finesse the entire issue by not cooking a cruciferous vegetable (so that it’s never exposed to heat). That’s why I eat red cabbage as a slaw or salad: no wait for the conversion reaction to run to completion. But I often cook broccoli, and when I do, I chop it and let it rest for 45 minutes before steaming it.

And, as you by now must know, I also chop garlic and let it rest, though not so long. For garlic, a 15-minute rest is ample. The reason for the rest is much the same as for cruciferous vegetables. Garlic contains the molecule alliin and the enzyme alliinase, and when garlic is chopped or crushed, the alliinase converts alliin into allicin, and it’s allicin that makes garlic able to improve cholesterol profiles, protect against high blood pressure, and improve immunity, according to studies in the Journal of Atherosclerosis and Thrombosis and Journal of Antimicrobial Chemotherapy.

But, as before, though alliin (the precursor) and allicin (the beneficial substance we want) are both heat stable, the enzyme alliinase is not, and heating destroys it. So chop/mince your garlic and let it sit until alliinase has done its job and is no longer needed.

And, of course, I consider Dr. Greger’s Daily Dozen. This recipe checks several categories: ✓beans, ✓greens, ✓cruciferous vegetable, ✓grain, ✓other vegetable, ✓herbs/spices. And I might sprinkle pepitas on a bowl of stew: ✓nuts/seeds.

That’s the sort of thing I consider when I make a recipe: balance of foods and how best to prepare them to gain the nutritional benefit. Tomatoes, for example, are a good source of lycopene, but the lycopene in tomatoes is not available unless the tomatoes are cooked. (Watermelon is an even better source of lycopene, and watermelon’s lycopene is available without cooking. Lycopene is what gives watermelon and tomatoes their red color.)

Today I’ll pick up the beans, kale, and tomatoes (and a watermelon — why not?), and tomorrow I’ll do the cooking. I’m looking forward to it.

Update: Soup has been made. Lessons learned.

Written by Leisureguy

23 July 2021 at 11:47 am

Autonomous Autos Are Already Around


For an automobile to be autonomous seems a natural. Here’s a good video on the topic from Veritasium that was just posted on YouTube:

Written by Leisureguy

23 July 2021 at 10:29 am

Wholly Kaw’s La Supérieure Dulci Tobacco shaving cream with Southern Witchcraft Valley of Ashes aftershave


Quite a mix of languages: “la supérieure” is French, “dulci” is Italian, and “tobacco” is English. (“Tobacco” in Italian is “tabacco.”) I think “Dulci Tabacco” would have been better than switching languages mid-phrase.

But what’s in a name? That which we call a rose by any other name would smell as sweet, and you could call this shaving cream anything you want and it would still make an excellent lather and smell very nice indeed. I’m going to have to use this more often — I really liked the lather and fragrance, and for me it’s a novelty to again have a soft shaving cream.

The brush is a Maggard Razors 22mm synthetic, and with it I continue my week of desert-island brushes. I like it a lot: handle, knot, performance, and price ($10).

The razor handle that carries the Edwin Jagger head today probably looks familiar: it’s the same handle you saw in yesterday’s shave, when it supported the iKon OG-1 head. That head has been retired (with prejudice) and the handle is finding a new role.

And the shave was truly excellent: extremely smooth result, and thanks to both Grooming Dept Moisturizing Pre-Shave and Wholly Kaw’s superior shaving cream, my skin feels astonishingly good.

The aftershave I used, Southern Witchcraft’s Valley of Ashes, no doubt contributed to the fine skin feel as well. I didn’t use Grooming Dept Hydrating Lotion today because the splash does very well on its own. Its ingredients:

Water, stearic acid, shea butter, castor oil, potassium hydroxide, glycerine, coconut oil, sodium lactate, sodium hydroxide, jojoba oil, aloe powder, liquid aloe, isopropyl myristate, kaolin clay, fragrance

The scent notes of this aftershave:

Coal, Tar, Bourbon, Tobacco, Bitter Citrus, Smoke, Leather, Motor Oil, Burning Rubber, Diesel, Clove, Birch Tar, Bergamot

The combination reminds me strongly of Midnight Stag, by Chiseled Face.

What a great shave with which to end the week!

Written by Leisureguy

23 July 2021 at 10:06 am

Posted in Shaving
