Later On

A blog written for those whose interests more or less match mine.

Archive for October 17th, 2020

‘Machines set loose to slaughter’: the dangerous rise of military AI

leave a comment »

Anyone who’s discovered that “autocorrect” often means “autoerror” and who has seen the Terminator movies should be at least mildly uncomfortable about releasing into the wild autonomous machines programmed to kill humans (and “mildly uncomfortable” is much better than being on the receiving end of a bug in the latest release of such a machine’s OS).

I highly recommend the science-fiction novel Kill Decision, by Daniel Suarez.

Frank Pasquale writes in the Guardian:

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.

Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”. One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.

Who knows how many other noxious creatures are now models for avant garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presented a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines. When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing virtual battle-tested simulations to inspire future military investments.

In the past, nation states have come together to prohibit particularly gruesome or terrifying new weapons. By the mid-20th century, international conventions banned biological and chemical weapons. The community of nations has forbidden the use of blinding-laser technology, too. A robust network of NGOs has successfully urged the UN to convene member states to agree to a similar ban on killer robots and other weapons that can act on their own, without direct human control, to destroy a target (also known as lethal autonomous weapon systems, or Laws). And while there has been debate about the definition of such technology, we can all imagine some particularly terrifying kinds of weapons that all states should agree never to make or deploy. A drone that gradually heated enemy soldiers to death would violate international conventions against torture; sonic weapons designed to wreck an enemy’s hearing or balance should merit similar treatment. A country that designed and used such weapons should be exiled from the international community.

In the abstract, we can probably agree that ostracism – and more severe punishment – is also merited for the designers and users of killer robots. The very idea of a machine set loose to slaughter is chilling. And yet some of the world’s largest militaries seem to be creeping toward developing such weapons, by pursuing a logic of deterrence: they fear being crushed by rivals’ AI if they can’t unleash an equally potent force. The key to solving such an intractable arms race may lie less in global treaties than in a cautionary rethinking of what martial AI may be used for. As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.

Are killer robots as horrific as biological weapons? Not necessarily, argue some establishment military theorists and computer scientists. According to Michael Schmitt of the US Naval War College, military robots could police the skies to ensure that a slaughter like Saddam Hussein’s killing of Kurds and Marsh Arabs could not happen again. Ronald Arkin of the Georgia Institute of Technology believes that autonomous weapon systems may “reduce man’s inhumanity to man through technology”, since a robot will not be subject to all-too-human fits of anger, sadism or cruelty. He has proposed taking humans out of the loop of decisions about targeting, while coding ethical constraints into robots. Arkin has also developed target classification to protect sites such as hospitals and schools.

In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence done in the heat of passion, but for the premeditated murderer who coolly planned his attack. The history of warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system is likely to be designed with some kind of override feature, which would be controlled by human operators, subject to all the normal human passions and irrationality.

Any attempt to code law and ethics into killer robots raises enormous practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to programme a robot warrior with reactions to the infinite array of situations that could arise in the heat of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors, an autonomous weapon system in the fog of war is dangerous.

Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where . . .

Continue reading. There’s much more. It’s a long read.

Written by Leisureguy

17 October 2020 at 2:55 pm

Mike Pence’s fly: From Renaissance portraits to Salvador Dalí, artists used flies to make a point about appearances

leave a comment »

Sally Hickson, Associate Professor of Art History, University of Guelph, writes in The Conversation:

After this week’s vice-presidential debate in the United States, the fly that landed on Vice-President Mike Pence’s head was more of a sensation than the details of the debate — at least on social media. The fly has already been immortalized as a Biden/Harris fly swatter (sorry, they’re all sold out) and sparked a Halloween costume.

In many circumstances, flies are unremarkable. That’s probably why a French word for spy is connected to the same word for fly, mouche. When a fly becomes famous, it’s worth wondering why.

Flies have long held symbolic meaning in the history of art. In portraits made in Renaissance Europe, the presence of a fly symbolizes the transience of human life (buzzbuzzpfft!). In the great scheme of things, our lives are no longer than that of a fly. For me as an art historian, the fly was a moment to reflect not only on the history of flies in western painting, but to begin considering what the long history of this symbolism may reveal about why the fly generated so much buzz.

Humility, impermanence, illusion

Take, for example, an extraordinary little painting known today as Portrait of a Woman of the Hofer Family, painted in about 1470 by an artist from the German (Swabian) School, now in the National Gallery in London. Her elaborate white head covering highlights a perfect little fly that’s settled on her, just to remind us that our life, like hers, is impermanent.

The corollary is that we’re supposed to do the best we can with the time we’ve got. When it comes to time and eternity, as painter and poet William Blake wrote: “Am not I / A fly like thee? / Or art not thou / A man like me?” The fly is a little reminder of humility.

Painters could also include a fly to draw attention to themselves, demonstrating with their “trompe-l’oeil” (deceiving the eye) tricks that they could paint in a manner that seemed so real, a viewer of the portrait would be tempted to try to swat the fly away. The 16th-century Italian painter Giorgio Vasari, biographer of Italian Renaissance artists, tells a story about the painter Giotto fooling his teacher Cimabue by adding a realistic-looking fly to a painting.

Salvador Dalí, who was pretty much the lord of the flies (he painted them a lot) included a fly on the watch face of his painting The Persistence of Memory (now housed at the Museum of Modern Art in New York). He also used an army of ants to signify the decay of time and life’s impermanence.

All is not what it appears

Portrait of a Carthusian, the most famous portrait featuring a fly, now in the Metropolitan Museum in New York, was painted by Petrus Christus in 1446. It depicts a bearded monk. The fly . . .

Continue reading.

Written by Leisureguy

17 October 2020 at 2:38 pm

Posted in Art, Daily life

Choosing the wrong alternative: Examples

leave a comment »

I previously blogged an Atlantic article by Ed Yong about how America has trapped itself in a pandemic death spiral through 9 conceptual errors. It’s a very good (and somewhat lengthy) article — and available without a paywall — and I was just rereading it in the light of our recent experience.

The article was published September 9, and it’s now October 17. Among other things, the article noted that the best models predicted a total of 220,000 US deaths from Covid-19 (that is, about 73 times the number killed in the 9/11 attacks) as we reach November.
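That “about 73 times” comparison is easy to verify. A quick back-of-the-envelope check, assuming the commonly cited figure of roughly 2,977 people killed in the 9/11 attacks:

```python
# Sanity check of the "about 73 times 9/11" comparison.
# 2,977 is the commonly cited 9/11 death toll (an assumption here,
# not a figure taken from the article itself).
covid_deaths_predicted = 220_000
sept11_deaths = 2_977

ratio = covid_deaths_predicted / sept11_deaths
print(f"{ratio:.1f}")  # prints 73.9 — i.e., "about 73 times"
```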

The most accurate model to date predicts that the U.S. will head into November with 220,000 confirmed deaths. More than 1,000 health-care workers have died. One in every 1,125 Black Americans has died, along with similarly disproportionate numbers of Indigenous people, Pacific Islanders, and Latinos. And yet, a recent poll found that 57 percent of Republican voters and 33 percent of independents think the number of deaths is acceptable. “In order for us to mobilize around a social problem, we all have to agree that it’s a problem,” Lori Peek says. “It’s shocking that we haven’t, because you really would have thought that with a pandemic it would be easy.” This is the final and perhaps most costly intuitive error …

The count as of today (October 17) is 219,909, so I would say the model is pretty much on target. (I think that everyone knows the actual death toll from Covid-19 is substantially higher: some states and some agencies will attribute a Covid-19 death to another concurrent illness so their statistics will look better.)

At any rate, in rereading the article, many things struck me forcibly anew. Here are two:

This problem is not unique to COVID-19. It’s more compelling to hope that drug-resistant bacteria can be beaten with viruses than to stem the overuse of antibiotics, to hack the climate than to curb greenhouse-gas emissions, or to invest in a doomed oceanic plastic-catcher than to reduce the production of waste. Throughout its entire history, and more than any other nation, the U.S. has espoused “an almost blind faith in the power of technology as panacea,” writes the historian Howard Segal.* Instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer. [He notes “A third of Americans already say they would refuse a vaccine, …” – LG]

… The U.S. might stop treating the pandemic as the emergency that it is. Daily tragedy might become ambient noise. The desire for normality might render the unthinkable normal. Like poverty and racism, school shootings and police brutality, mass incarceration and sexual harassment, widespread extinctions and changing climate, COVID-19 might become yet another unacceptable thing that America comes to accept.

If you haven’t read the article, I highly recommend reading it; if you did read it back then, it’s interesting to reread it to see how what it describes compares to what’s happening now.

Written by Leisureguy

17 October 2020 at 2:05 pm

I have now voted

leave a comment »

Thanks to the Federal Voting Assistance Program, which makes it easy for military and overseas citizens to vote. Pass along the URL to anyone you know who is in the military or lives overseas. It was surprisingly easy and straightforward.

Written by Leisureguy

17 October 2020 at 12:35 pm

Posted in Election, Government

Another take on the WWBT brush and a good comment on lather

with 7 comments

The RazoRock Keyhole brush shown has a 22mm synthetic knot and is a very pleasant brush at a reasonable price (US$10). Like the brushes in shaves this past week, it also uses a wasp-waist ball-top (WWBT) design, but with angles rather than curves.

I got an excellent lather from Phoenix Artisan’s Dark Chocolate shaving soap, a one-off for Valentine’s Day some years back. The fragrance is spot-on, and the drydown from the aftershave is quite pleasant (and not so identifiably dark chocolate).

I mentioned in the title a comment on lather — it’s this one, which provides a good and detailed description of the use of a lathering bowl, including the all-important brush loading to begin with. Thanks to tony5419 for the write-up. His comment stimulated me to try a lathering bowl again, though this morning my shave was well underway before I remembered. Today I’ll move a bowl into the bathroom so on Monday I’ll remember.

The Fatip Testina Gentile is an excellent little razor, though a recent correspondent mentioned some quality issues. I got three of these for the day my three youngest grandsons begin to shave, but I’m thinking now that the RazoRock Baby Smooth (which comes in three finishes: black, white, and blue) might be a better choice.

Three passes produced an excellent result, albeit with two tiny nicks on my chin — and My Nik Is Sealed again proved its worth.

A splash of Dark Chocolate aftershave and the weekend is off to a good (if somewhat late) start.

Written by Leisureguy

17 October 2020 at 11:30 am

Posted in Shaving
