Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Science fiction’ Category

“The End of History”


Written by Leisureguy

26 May 2023 at 9:09 pm

Posted in Science fiction, Video

Personal AI you can interact with.


I would say that we are well beyond the Turing test at this point. The NY Times has an article about Pi, “an AI companion.”

The article is interesting, and so is Pi. Give it a spin.

Written by Leisureguy

4 May 2023 at 5:19 pm

The physical intelligence of ant and robot collectives


Leah Burrows’s article for the Harvard School of Engineering and Applied Sciences sure did remind me of the science-fiction novel by Daniel Suarez titled Kill Decision. (Great novel.)

Written by Leisureguy

22 December 2022 at 7:56 pm

Was Kurt Vonnegut a nice man?

with 6 comments

Recently I have been thinking about regret — in particular, the statement made by some (Edith Piaf being a prime example), “I regret nothing.”

Nothing? Not one instance of being unkind? Even inadvertently? I can think of more examples than I want of hurting someone by failing to be kind, and I regret every one.

But then it struck me that “I regret nothing” is exactly the sentiment of a sociopath, particularly a narcissistic sociopath.

The above came to mind when I read Dorian Lynskey’s profile of Kurt Vonnegut in UnHerd, which begins:

In 1999, the director Baz Luhrmann had a novelty hit with “Everybody’s Free (To Wear Sunscreen)”, a spoken-word litany of whimsical advice for young people: enjoy your youth, keep your old love letters, floss, and so on. The text derived from a column by a journalist called Mary Schmich but it was widely rumoured to be from a commencement address by a celebrated author who was born 100 years ago this week: Kurt Vonnegut. Despite having quit writing two years earlier, he was still delighting students with his witty speeches, of which this appeared to be one. Vonnegut set the record straight but graciously told Schmich: “I would have been proud had the words been mine.”

Nothing illustrates an author’s reputation as clearly as misattributed work. The Sunscreen confusion proved that one of his era’s most scathing satirists had been recast as the cuddly hipster grandpa of American letters. This certainly chimed with one strand of Vonnegut’s work, which is summed up by a famous line from his 1965 novel God Bless You, Mr Rosewater, or Pearls Before Swine (“God damn it babies, you’ve got to be kind”) but that was by no means the whole picture.

Like Dolly Parton, Alan Bennett, George Michael and Anthony Bourdain, Vonnegut has become simplified into an avatar of kindness, his wrinkles ironed flat by the heat of sainthood. This happened long before his death in 2007 and he was a willing conspirator. George Saunders recently spoke about his own reputation as literature’s Mr Nice Guy and gave himself some advice: “one: don’t believe it; two, interrupt it.” The first is easier than the second. One of Vonnegut’s most famous lines is from 1961’s Mother Night: “We are what we pretend to be, so we must be careful about what we pretend to be.” Vonnegut often pretended to be nicer than he was, which was good for both his ego and his income.

If you Google Vonnegut, one of the most-asked questions that comes up is: “Was Vonnegut a nice person?” Tough one. He could certainly be warm, wise and generous, but he could also be a greedy and disloyal business partner, a selfish, unfaithful husband and a crotchety, intimidating father. He suffered from depression and suicidal ideation; his work often flirts with nihilism. Robert B. Weide’s recent documentary Kurt Vonnegut: Unstuck in Time (the title quotes Vonnegut’s masterpiece Slaughterhouse-Five) is candid about the writer’s failings as a family man but Weide, who considered Vonnegut a close friend and mentor, still sands off a lot of rough edges.

In his more objective biography And So It Goes, Charles J. Shields quotes the private notes that  . . .

Continue reading.

Written by Leisureguy

10 November 2022 at 1:06 pm

Science Over Capitalism: Kim Stanley Robinson and the Imperative of Hope


James Bradley’s interview with Kim Stanley Robinson is excerpted from the book Tomorrow’s Parties: Life in the Anthropocene and appears in The MIT Press Reader:

There is no question Kim Stanley Robinson is one of the most important writers working today. Across almost four decades and more than 20 novels, his scrupulously imagined fiction has consistently explored questions of social justice, political and environmental economy, and utopian possibility.

Robinson is probably best known for his Mars trilogy, which envisions the settlement and transformation of Mars over several centuries, and the ethical and political challenges of building a new society. Yet it is possible his most significant legacy will turn out to be the remarkable sequence of novels that began with “2312.” Published across less than a decade, these six books reimagine both our past and our future in startlingly new ways, emphasizing the indivisibility of ecological and economic systems and placing the climate emergency center stage.

The most recent, “The Ministry for the Future,” published in 2020, is a work of extraordinary scale and ambition. Simultaneously a deeply confronting vision of the true scale of the climate crisis, a future history of the next 50 years, and a manifesto outlining the revolutionary change that will be necessary to avert catastrophe, it is by turns terrifying, exhilarating, and finally, perhaps surprisingly, guardedly hopeful. It is also one of the most important books published in recent years.

This interview was conducted between January and March 2021, beginning in the immediate aftermath of the attack on the United States Capitol and the inauguration of President Biden, and ending as a second wave of the COVID pandemic began to gather pace in many countries around the world. As we bounced questions back and forth across the Pacific, a drumbeat of impending disaster grew louder by the day: atmospheric carbon dioxide reached 417 ppm, a level 50 percent higher than preindustrial levels; a study showed the current system responsible for the relative warmth of the Northern Hemisphere — the Atlantic meridional overturning circulation — at its weakest level in a thousand years; and Kyoto’s cherry blossoms bloomed earlier than they have at any time since records began in the ninth century CE.

James Bradley: In several of your recent novels, you’ve characterized the first few decades of the 21st century as a time of inaction and indecision — in “2312,” for instance, you called them “the Dithering” — but in “The Ministry for the Future,” you talk about the 2030s as “the zombie years,” a moment when “civilization had been killed but it kept walking the Earth, staggering toward some fate even worse than death.” I wonder whether you could talk a little bit about that idea. What’s brought us to this point? And what does it mean for a civilization to be dead?

Kim Stanley Robinson: I’m thinking now that my sense of our global civilization dithering, and also trying to operate on old ideas and systems that are clearly inadequate to the present crisis, has been radically impacted by the COVID pandemic, which I think has been somewhat of a wake-up call for everyone — showing that we are indeed in a global civilization in every important sense (food supply, for instance), and also that we are utterly dependent on science and technology to keep eight billion people alive.

So “2312” was written in 2010. In that novel, I provided a timeline of sorts, looking backward from 2312, that was notional and intended to shock, also to fill the many decades it takes to make three centuries, and in a way that got my story in place the way I wanted it. In other words, it was a literary device, not a prediction. But it’s interesting now to look back and see me describing “the Dithering” as lasting so long. These are all affect states, not chronological predictions; I think it’s very important to emphasize science fiction’s double action, as both prophecy and metaphor for our present. As prophecy, SF is always wrong; as metaphor, it is always right, being an expression of the feeling of the time of writing.

So following that, “The Ministry for the Future” was written in 2019, before the pandemic. It expresses both fears and hopes specific to 2019 — and now, because of the shock of the pandemic, it can serve as an image of “how it felt before.” It’s already a historical artifact. That’s fine, and I think it might be possible that the book can be read better now than it could have been in January 2020 when I finished it.

Now I don’t think there will be a period of “zombie years,” and certainly not the 2030s. The pandemic as a shock has sped up civilization’s awareness of the existential dangers of climate change. Now, post COVID, a fictional future history might speak of the “Trembling Twenties” as it’s described in “The Ministry for the Future,” but it also seems it will be a period of galvanized, spasmodic, intense struggle for control over history, starting right now. With that new feeling, the 2030s seem very far off and impossible to predict at all.

JB: In “The Ministry for the Future,” the thing that finally triggers change is the catastrophic heat wave that opens the book. It’s a profoundly upsetting and very powerful piece of writing, partly because an event of the sort it depicts is likely to be a reality within a decade or so. But as somebody whose country has already experienced catastrophic climate disaster in the form of fire and flood and seen little or no change in our political discourse, I found myself wondering whether the idea such a disaster would trigger change mightn’t be too optimistic. Do you think it will take catastrophe to create real change? Or will the impetus come from elsewhere?

KSR: People are good at . . .

Continue reading.

Written by Leisureguy

28 October 2022 at 6:32 pm

Ursula Le Guin’s “The Ones Who Walk Away from Omelas”: Would You Walk Away?


Spencer Case writes in 1,000-Word Philosophy:

When, if ever, is it right to sacrifice someone for the greater good?

Ursula K. Le Guin’s (1929-2018) fantasy short story, “The Ones Who Walk Away from Omelas,” raises this question, among others.[1]

This essay introduces her story and explores its philosophical implications.

1. The Dark Secret

The story begins with an elaborate description of a summer festival in an exquisitely beautiful and happy city called Omelas. It’s as though we’re being shown a travel brochure for a place that seems too good to be true.

Le Guin says if you can imagine an even better city than the one she describes, then think of that instead.

Of course, there’s a twist.

Somewhere in the city is a closet where an emaciated child, referred to only as “it,” is locked up. It’s smeared with its own feces, covered with sores, and constantly afraid. Occasionally, the door opens and people will look at it, kick it, and make it stand up.

It says, “I will be good,” but the door always shuts without anyone making a reply.


Why is the child kept in this state? Because the denizens of Omelas made a deal – with what or whom, we aren’t told, but apparently dark magic was involved.

The deal is that Omelas would be a paradise provided that a child’s happiness is sacrificed. Whether this applies to just this one child, or a succession of children, is unspecified. In any event, every adult knows that a single kind word spoken to this child would violate the terms of the deal.

We don’t know what the consequences of breaking the deal would be because we don’t know what things were like before. But certainly Omelas would be a much less happy place overall, even though this child would be happier.

2. Walking Away

When the children of Omelas reach adolescence, they’re told the dark secret, and some see the child. They react with . . ..

Continue reading.

Le Guin’s story seems to owe more than a little to the Grand Inquisitor story Dostoevsky included in The Brothers Karamazov. 

Written by Leisureguy

21 May 2022 at 12:37 pm

“Ruin”: An animated short now, to be part of a feature film


“Animation” does not mean what it once did — it’s gone far beyond that.

Written by Leisureguy

13 May 2022 at 7:00 pm

The best books on Science Fiction and Philosophy


Five Books has an expert pick 5 “best books” in some field, and then interviews the expert to learn the reasons for the choices. In this case, they note:

Serious philosophy need not take the form of a journal article or monograph, argues the philosopher and U.C. Riverside professor Eric Schwitzgebel, as he selects five science fiction books that succeed both as novels and provocative thought experiments that push us to consider deep philosophical questions from every angle. Interview by Nigel Warburton.

The interview begins:

Before we get into the five books you have chosen, might you tell us: what’s your special interest in science fiction, and how does it tie in with philosophy?

Well, partly, I just love it. There’s something to be said for pursuing what you love. But I also think science fiction is a great fit for philosophy in a couple of ways. Speculative fiction, generally, I think of as fiction that explores scenarios beyond the ordinary run of human experience. Other types of fiction—for example, ‘literary fiction,’ as it’s sometimes called—don’t much depart from the ordinary run of human experience. But in philosophy we’re often fascinated with scenarios that pull apart things that ordinarily go together.

A wonderful example of this is Derek Parfit’s work on personal identity. Ordinarily, a person has one body that continues in a specific way over time, permanently connected to one mind that also continues in a specific way over time. There’s a familiar body and a consistent personality. You have memories that move from one moment to the next and experiences that overlap. All of these things go together, and we think of them as constitutive or partly constitutive or somehow involved with or correlated with being a person. In the ordinary run of human experience, these things never come apart. But in science fiction-type thought experiments you can wonder, okay, what would happen if you took someone’s mind, just to put it crudely, and put it in someone else’s body? Would it be the same person? Would the person follow the mind or would they follow the body? What if you stepped into a machine and two duplicates of you walked out? What if you lived 800 years, gradually acquiring a new personality every century and entirely forgetting your past? You can separate the pieces, considering what really matters about a person’s identity. You can decouple what ordinarily goes together, aiming to find the essence of the matter in the way that philosophers characteristically like to do.

It’s an interesting conundrum, because some science fiction seems to extrapolate from existing science to a future that’s possible and consistent with what we know about science today. That is, a hypothetical situation that is a plausible, possible future world—or maybe not so plausible, but still could happen. But there’s another kind of science fiction which doesn’t seem to be bound by anything we know about science now—it just allows what you might call magical things to happen. I wonder how the two of them relate to philosophy.

Fantasy just allows magical things to happen. And that can be very useful in thinking through philosophical issues because you might be interested in considering things that aren’t scientifically plausible at all, exploring them as conceptual possibilities. Now, within the constraints of scientific plausibility we can find a second big philosophical value in science fiction: thinking about the future. For example, I think it’s likely that in the next several decades, or maybe the next 100 or 200 years, if humanity continues to exist and continues along its current trajectory, we will eventually create artificial beings who are conscious. Maybe they’ll be robots or maybe they’ll be artificial biological organisms. Or they might be a bio-machine hybrid or the result of technology we can’t yet foresee. We might create artificial entities who are people—entities with conscious experiences, self-knowledge, values, who think of themselves as individuals. They might be very much unlike us in other ways—physiologically, physically, maybe in their values, maybe in their styles of thinking.

If that happens, that’s hugely significant. We’d have created a new species of person—people radically different from us, sharing the world with us. Humanity’s children, so to speak. Few things could be more historically momentous than that! But these matters are hard to think about well. Maybe that future is coming. But what might it even look like? What would it do to ethics? To philosophy of mind? To our sense of the purpose and value of humanity itself? Science fiction is a tool for imagining the possible shape of such a future. So that’s just one example of the way in which science fiction can help us think about future possibilities.

What I like about science fiction is there are many different voices, many different styles of experiment, as it were, with a lot of detail. It’s not just a caricature. You have a whole range of imagined worlds and imagined futures to draw on. It’s not as if there’s just one simple trolley problem that you have to play variations on. They’re radically different, densely described scenarios. I think that’s a richness that it’d be hard to generate in a one-liner thought experiment.

I want to distinguish two epistemic dimensions here. One is richness of detail. The other is emotional or imaginative engagement. In my conception, you can think philosophically anywhere along a spectrum. On one side are extremely abstract, cool-minded general statements like ‘maximise the sum of positive to negative emotional experiences’, or ‘the greatest good for the greatest number’. On the other side are richly imagined fictional worlds, worlds that really draw you in and shake you up, like a great novel or television series. In between are short stories and paragraph-long thought experiments like the famous trolley problem.

There are different, complementary epistemic virtues to the different sides of the spectrum. The abstract strips away irrelevancies, or what you think are irrelevancies. But the human mind is not so good at understanding the implications of abstract views. Even really simple things like the Wason Selection Task defeat us pretty badly. The amazing thing about the Wason Selection Task is that it’s a super simple cognitive puzzle. Really simple! And yet we are incredibly bad at it.
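The Wason task Schwitzgebel mentions really is that simple to state in code. Here is a minimal sketch using the classic vowel/even-number version of the task (an assumption on my part; the interview doesn't specify the variant): cards show "E", "K", "4", "7", and the rule is "if a card has a vowel on one side, it has an even number on the other." A card needs flipping only if its hidden face could falsify the rule.

```python
def must_flip(visible: str) -> bool:
    """Return True if this card's hidden face could falsify the rule."""
    if visible.isalpha():
        # A visible vowel could hide an odd number -> must check.
        return visible in "AEIOU"
    # A visible odd number could hide a vowel -> must check.
    # A visible even number can never falsify the rule.
    return int(visible) % 2 == 1

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # ['E', '7']
```

Most people pick "E" and "4"; the correct answer is "E" and "7", which is exactly the point about how badly abstract conditionals defeat us.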

I used to write questions for law school admissions tests, the informal paragraph-long logical reasoning questions. I had a formula for a tricky question. I would write a paragraph in ordinary language on something that people have no intuitions about, like maybe the rise or fall of interest rates and its bearing on Party A and Party B in Bolivian politics. I’d write a paragraph about it. At the end, I’d present five possible answers combining negation and a conditional statement in ordinary language. Option A might be ‘Unless interest rates go down, Party B will not win.’ Option B might be ‘Party A will only lose if interest rates go up.’ Already your brain starts frying, just thinking about these. I can’t immediately tell the options apart, and I just wrote them! A computer could do that with just a few bits of information. It’s just a negation and a conditional.
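Schwitzgebel's point is that the logic frying our brains is trivial for a machine. As a sketch, here is his Option A ("Unless interest rates go down, Party B will not win") rendered as a one-line Boolean test and checked against every possible world; the rendering of "unless" as an implication is my own reading, not part of the interview.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# "Unless interest rates go down, Party B will not win."
# Read as: if rates do NOT go down, then B does NOT win.
violations = []
for rates_down, b_wins in product([False, True], repeat=2):
    holds = implies(not rates_down, not b_wins)
    if not holds:
        violations.append((rates_down, b_wins))

# The statement is falsified in exactly one world: rates did not
# go down, yet Party B won.
print(violations)  # [(False, True)]
```

Four rows of a truth table settle it instantly, which is the "few bits of information" a computer needs while the human reader's brain starts frying.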

That’s a long digression. But the human mind is really bad at abstract reasoning. What we’re good at is social cognition. And imagination. So when you have rich stories, and you engage social cognition and imagination, you meet the human mind where it’s strong. We can consider what would a principle like ‘maximize good consequences’ really amount to? It sounds good in the abstract, but does that mean you’re going to push someone in front of a runaway trolley, killing them to save five others? Well, I’m not sure. Once we start to think of it in that way, we’re engaging a different side of our mind. But the complementary problem is that you can get distracted by details irrelevant to your general thesis. Whether you want to push the guy shouldn’t depend on his race, for example. How much you sympathize with a character in a fiction might depend on whether they’re funny, whether they had a happy or sad childhood. You get influenced by all these details you might not want to be influenced by, and it’s not always clear which ones are relevant.

It’s a question of determining what’s noise and what isn’t. And noise might be necessary to make it plausible in some ways. Realism in writing often requires all kinds of extraneous facts. If you only put in things which play a role in the plot, you give the plot away before you mean to.

As a writer, you have these intuitions about which details it makes sense to include and which don’t work. You might not even know why they seem right or wrong.

Yes. When we’re talking about moral psychology, a rich psychological case study is so much more fascinating than a one liner or a caricature of a person. You come to see the complexity of a dilemma for somebody who’s a real person, or seems to be a real person, rather than for person ‘P’ in a schematic account of a situation. But let’s move on to your first book choice, Ted Chiang’s Stories of Your Life and Others.

Ted Chiang is a short story and novella writer who’s not very prolific. He’s published famously few stories. But a large proportion of his stories have a huge impact and win prizes. His stories are so richly philosophical. The book I’ve chosen is his first story collection. He has another collection that came out recently, Exhalation, which is also excellent. But I have an emotional attachment to the first one because that was the one that kindled my love for Ted Chiang’s work.

Though he was certainly well known in the science fiction short story community, his public fame came with Arrival, a blockbuster movie that was based on one of the stories in this collection: ‘Story of Your Life.’ In that story, an alien species arrives at Earth. The story is from the point of view of a linguist who is trying to decipher their language. Their written language is visual and non-temporal in a fascinating way, and one of the wonderful things about the story is how Chiang thinks it through in fascinating detail, what the grammar of a non-temporal, visual language might be, how it might influence cognition, how to build up a language spatially organised in a two dimensional plane rather than linearly and temporally organised, like human languages.

Are there other notable stories in this collection? Could you talk us through one?

‘Liking What You See’ is a fascinating philosophical thought experiment story set in a near future where people can wear helmets that contain a trans-cranial magnetic stimulator that shuts down the region of their brain that is responsible for making human beauty judgments. It’s told as a documentary with lots of . . .

Read the whole thing.

Written by Leisureguy

27 January 2022 at 1:05 pm

Researchers Build AI That Builds AI


In the Singularity genre of science fiction (for example, the movie Her) the critical moment is when AI starts building AI: AI uses its greater-than-human level of understanding to build a new AI that has a deeper level of understanding than the AI that created it, and then that new AI builds an even better AI — it’s like regenerative feedback. And it happens fast (at least in science fiction).

AlphaZero learns games by being told the rules and then — swiftly — playing millions of games against itself, learning from that experience what constitutes best tactics and best strategy.

Anil Ananthaswamy describes in Quanta what is happening now:

Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them.

Today’s neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn’t easy. “Training could take days, weeks or even months,” said Petar Veličković, a staff research scientist at DeepMind in London.

That may soon change. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.

For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow — which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Veličković.

Getting Hyper

Currently, the best methods for training and optimizing deep neural networks are variations of a technique called stochastic gradient descent (SGD). Training involves minimizing the errors the network makes on a given task, such as image recognition. An SGD algorithm churns through lots of labeled data to adjust the network’s parameters and reduce the errors, or loss. Gradient descent is the iterative process of climbing down from high values of the loss function to some minimum value, which represents good enough (or sometimes even the best possible) parameter values.

But this technique only works once you have a network to optimize. To build the initial neural network, typically made up of multiple layers of artificial neurons that lead from an input to an output, engineers must rely on intuitions and rules of thumb. These architectures can vary in terms of the number of layers of neurons, the number of neurons per layer, and so on.

One can, in theory, start with lots of architectures, then optimize each one and pick the best. “But training [takes] a pretty nontrivial amount of time,” said Mengye Ren, now a visiting researcher at Google Brain. It’d be impossible to train and test every candidate network architecture. “[It doesn’t] scale very well, especially if you consider millions of possible designs.”

So in 2018, Ren, along with his former University of Toronto colleague Chris Zhang and their adviser Raquel Urtasun, tried a different approach. They designed what they called a graph hypernetwork (GHN) to find the best deep neural network architecture to solve some task, given a set of candidate architectures.

The name outlines their approach. “Graph” refers to the idea that the architecture of a deep neural network can be thought of as a mathematical graph — a collection of points, or nodes, connected by lines, or edges. Here the nodes represent computational units (usually, an entire layer of a neural network), and edges represent the way these units are interconnected.
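That nodes-and-edges encoding can be sketched as a plain data structure. The layer names, operations, and skip connection below are invented for illustration; the actual encoding used by graph hypernetworks is richer, but the shape of the idea is the same.

```python
# An architecture expressed as a graph: nodes are computational
# units (layers), directed edges are the connections between them.
architecture = {
    "nodes": {
        "input": {"op": "input", "channels": 3},
        "conv1": {"op": "conv3x3", "channels": 64},
        "conv2": {"op": "conv3x3", "channels": 64},
        "pool":  {"op": "maxpool"},
        "fc":    {"op": "linear", "units": 10},
    },
    "edges": [
        ("input", "conv1"),
        ("conv1", "conv2"),
        ("conv2", "pool"),
        ("conv1", "pool"),   # skip connection: a graph, not a chain
        ("pool", "fc"),
    ],
}

# A hypernetwork takes a graph like this as input and emits a full
# set of parameter tensors for it in a single forward pass. Here we
# just walk the graph to count each node's incoming connections.
fan_in = {name: 0 for name in architecture["nodes"]}
for src, dst in architecture["edges"]:
    fan_in[dst] += 1

print(fan_in["pool"])  # 2: the chain edge plus the skip edge
```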

Here’s how it works. A graph hypernetwork starts with  . . .

Continue reading.

Written by Leisureguy

25 January 2022 at 3:38 pm

Aspiring to a Higher Plane: Going Deeper into Abbott’s “Flatland”


Ian Stewart has a very interesting article in The Public Domain Review (that’s their mix of fonts) about Abbott’s novel Flatland (at the link, available as an ebook in various formats free of charge). The article includes various illustrations from the book, and I’ll not include those, but instead offer the initial text of the article:

Edwin Abbott Abbott, who became Headmaster of the City of London School at the early age of 26, was renowned as a teacher, writer, theologian, Shakespearean scholar, and classicist. He was a religious reformer, a tireless educator, and an advocate of social democracy and improved education for women. Yet his main claim to fame today is none of these: a strange little book, the first and almost the only one of its genre: mathematical fantasy. Abbott called it Flatland, and published it in 1884 under the pseudonym A. Square.

On the surface — and the setting, the imaginary world of Flatland, is a surface, an infinite Euclidean plane — the book is a straightforward narrative about geometrically shaped beings that live in a two-dimensional world. A. Square, an ordinary sort of chap, undergoes a mystical experience: a visitation by the mysterious Sphere from the Third Dimension, who carries him to new worlds and new geometries. Inspired by evangelical zeal, he strives to convince his fellow citizens that the world is not limited to the two dimensions accessible to their senses, falls foul of the religious authorities, and ends up in jail.

The story has a timeless appeal, and has never been out of print since its first publication. It has spawned several sequels and has been the subject of at least one radio programme and two animated films. Not only is the book about hidden dimensions: it has its own hidden dimensions. Its secret mathematical agenda is not the notion of two dimensions, but that of four. Its social agenda pokes fun at the rigid stratification of Victorian society, especially the low status of women, even the wives and daughters of the wealthy.

Flatland’s inhabitants are . . .

Continue reading. There’s more, the illustrations are nifty, and you can get a free copy of the ebook.

Written by Leisureguy

25 January 2022 at 1:27 pm

Mental itch satisfactorily scratched: Novel name found.


In a shaving post a while back I was trying to recall the specifics of a science-fiction novel I read long ago. This morning the answer has been found, and I’ve updated that post:

(It reminds me of a science-fiction short story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

Update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar. The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959. Available on Kindle for $1.

What a relief.

The context was my own meditative observation of the stages of loading a brush with shaving soap.

Written by Leisureguy

19 December 2021 at 7:42 am

More lather fun, and the blissful Monday-morning shave

with 2 comments

When one has been told how to do a repeated task, it takes some practice and experience to discover the absolutely optimal approach. The discovery experience often feels like an extraordinary leap of insight and understanding, and the learner will exclaim, in italics as it were, the very thing he was told, but with the words feeling fresh and vibrant and electric. “Oh! Light pressure!” from a shaver who has been repeatedly told to use light pressure — and who thought he was using light pressure — spoken after the first time he actually does use light pressure.

I’m experiencing some of that in my lather making. Long ago Mantic59 advised shaking the brush until it was barely damp, then start loading the brush, adding very small amounts of water as needed to complete the loading. Once the brush has been loaded, a little more water is generally added as the brush works up the lather (on the face, in a bowl, in a cupped palm). 

Only recently have I found the right approach — that is, “right” in the sense of producing the optimal lather (without its being runny in the slightest) and — equally if not more important to me — a process that is interesting to do and observe.

The trick for me was to change the injunction “Wet the knot well under the hot-water tap and then shake the water out until the brush is only slightly damp” to end instead with “… until the brush is as dry as you can get it by shaking.” The loading in this method works best if you have tried to shake all the water out of the brush. (You can’t — the brush will still be slightly damp — but by aiming at “dry” instead of “slightly damp,” I did get all the residual excess water out of the brush.)

Then when you brush the soap, nothing much happens, so you add just a tiny amount of water. Note that this water is enjoyed by the tip of the brush, not the inner core. As you brush those few drops of water over the soap, you’ll see a little soap transferred from puck to brush. Add another tiny amount of water and brush again. With another two or three repetitions, you will have gotten a good amount of soap loaded into the top of the knot and no extra water in the interior.

At this point I brush a good coating of lather over all my stubble (which has already been wetted and rubbed with a dot of Grooming Dept Moisturizing Pre-Shave, and then splashed with a bit more water). Once the stubble is fully coated, I add one more small amount of hot water to the tip of the brush and work that in.

The process doesn’t take as long as this detailed description might suggest, and in any case once you learn it, practice will make you efficient. What I particularly like about this approach, beyond the quality of lather I get, is observing the stages of loading the brush.

(It reminds me of a science-fiction short story or novella or novel that I read decades ago, in which the protagonist, who was meditating by observing the stages water goes through as it comes to a boil, enters the mindset that the alien overlords want, whereupon he is harvested and incorporated into the network of humans the aliens have built. But although he was meditating, he is essentially a wolf, not a sheep, and he awakens from the state in which the aliens put him, and leads the usual successful rebellion. I cannot remember enough to find the title, but the story was anthologized for sure, probably in New Tales of Space and Time, or one of Groff Conklin’s anthologies.)

update: I just discovered Reddit’s r/tipofmytongue, and I used it to ask whether anyone knew the title. Three minutes later, I had the answer, thanks to user Feraffiphar.  The novel is Wolfbane, by Frederik Pohl and C. M. Kornbluth, published in 1959, available on Kindle for $1.

With lather in place, my Eros slant did its usual superb job — and this razor enjoys and rewards light pressure (the sort of pressure you’re repeatedly told to use and still have trouble with, until you finally get it right and say, “Oh! Light pressure. Why didn’t you say so?”).

Three passes left my face perfectly smooth and undamaged, and a splash of Chatillon Lux’s aftershave toner in the matching fragrance left me ready to face the day — a dark day that may well see the fall of snow. Hope you are warm and dry.

Written by Leisureguy

6 December 2021 at 10:15 am

Astray to the spacedge

leave a comment »

Written by Leisureguy

18 November 2021 at 12:20 pm

Posted in Science fiction, Video

Good list of influential science-fiction books

leave a comment »

Take a look. I downloaded a dozen or so samples. (I have a Kindle, and one way to remember a book I might be interested in buying is to download the sample. It’s more effective than putting the book on a list, and surprisingly often just having the sample stifles the impulse to buy.)

One interesting point: the maker of the list (or at least the writer of the headline) believes that no science-fiction book more influential than those in the list will ever be written. That’s quite a claim, and I wonder how they know.

Written by Leisureguy

13 October 2021 at 4:09 pm

Posted in Books, Science fiction

QNTM on memes, anti-memes, and knowledge that doesn’t want to be shared

leave a comment »

This is a fascinating interview, and I highly recommend reading it or listening to it. The introductory matter:

QNTM is a software engineer and the author of There Is No Antimemetics Division. Here, QNTM speaks to the Browser’s Uri Bram about collaborative fiction, why people with deep and very specific expertise are often great storytellers, and the surprising subjectivity of finding right answers in software development.

[Listen to this interview as a podcast or on Youtube (audio only)]

The interview proper begins:

Uri Bram: Your latest book—which is wonderful—is called There Is No Antimemetics Division. Can you tell us a little bit conceptually about the idea of antimemes?

What is an anti-meme?

QNTM: So if you’re reading this, you probably have a reasonable idea of what a meme is, but there are a couple of different colliding definitions of meme these days.

For my purposes, a meme is a contagious idea — closer to the older definition than to today’s conception of “internet meme.” It’s an idea that catches on due to some kind of hook within the idea itself. It’s a piece of information that you have, but there’s also an aspect where you want to share this information with other people, spread this idea to other people.

The canonical example of a contagious idea would be some kind of evangelical religion, where they would say: “Hey, this is the way the universe is structured. This is how the cosmos exists, but also convert other people to this way of thinking, go out and find people and tell them this as well.”

But there’s a way simpler idea of memes: a contagious song, a catch phrase, a political slogan, or even a symbol that’s easy to draw. Wouldn’t that be a meme as well?

So looking at this I thought that some ideas are more contagious than others and some ideas aren’t contagious at all—they just kind of sit there. So what’s at the other end of the scale: what kind of ideas resist being spread? What information would you intrinsically not want anyone else to find out about? Or maybe you do want to spread it, but you can’t for whatever reason?

In real life, there’s a ton of ideas that fall into this class: random wild data is very difficult to share because it’s just nonsense and it’s not very memorable; boring things are difficult to share; complicated equations are difficult to share because you can’t remember them properly — because we’re humans and that’s not how we remember things.

But also there’s a category of ideas that are hard to share intrinsically like passwords. I’m motivated to keep my password a secret. There are all kinds of official secrets, like government secrets that you’re motivated to keep secret.

And from there, you move into injunctions and superinjunctions and gag orders. Or what kind of journalism is forbidden in the country where you happen to live? What kinds of things are you not allowed to say? What is a taboo? What are the things that are true but we don’t talk about? Although this is orthogonal to the truth: just because something is memetic or anti-memetic doesn’t mean it’s true or false.

Playing with the idea of anti-memes in science fiction.

QNTM: The truth can be very difficult to share. As they say, a lie can circle the globe before the truth can get its boots on. So a falsehood can be very memetic. But I looked at this and thought… “anti-meme” is a neologism, but it’s mainly just a synonym for things we already know exist. We know what secrets are; we know what taboos are. But then I started taking the idea into fiction, and there’s a large amount of science fiction that takes the idea of memes and anti-memes and plays with it.

For instance you could have a concept which exists and is plain as day and is right in front of you, but you can’t remember it and when you turn away, you’ve stopped being able to remember that it was there—even though it was clearly there. An anti-memetic thing could trip you so you fall, but you wouldn’t remember why you fell and then when you stood up again, you wouldn’t even remember that you fell over at all.

So I thought okay, there’s a bit of mileage in there, I can tell a story in this.

If you’ve read the book, chapter one of the book is that concept — but that’s just the start, and then I keep going. Let’s suppose this is a real phenomenon. What kind of organization could deal with this kind of phenomenon? How would that organization have to operate? What kind of person would work there? And as I kept digging into those questions, more and more story just showed up, and I started writing.

Uri Bram: I was recommended this book with no context. I was told there’s this book, you should just read it and go in knowing as little as you can, which I think in itself is kind of interesting on your terms. Not anti-memetic, but there was hidden knowledge or knowledge that they didn’t want to convey.

QNTM: Oh, absolutely. There’s two aspects of this kind of thing. There’s ideas that you want to know, but you can’t hang onto them, they get away from you and what do you do about that? What kind of systems do you have to develop to handle that?

And then on the flip side of it, the second half of the book is about . . .

Continue reading. There’s much more that’s interesting.
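QNTM’s scale from contagious ideas to self-erasing ones can be sketched as a toy model. All numbers below are invented for illustration; nothing here comes from the interview itself. An idea’s fate depends on how readily it is transmitted versus how readily it is retained:

```python
def holders_after(transmissibility, retention, population=1000, steps=20):
    """Deterministic toy model of an idea spreading through a population.

    Each step, current holders pass the idea on in proportion to
    `transmissibility` (new converts limited by how many people are left),
    and holders keep it in proportion to `retention`.
    Both parameters are invented for illustration.
    """
    holders = 1.0  # one person starts with the idea
    for _ in range(steps):
        new = transmissibility * holders * (1 - holders / population)
        holders = retention * (holders + new)
    return holders

meme = holders_after(transmissibility=0.9, retention=0.95)    # catchy and memorable
antimeme = holders_after(transmissibility=0.9, retention=0.1)  # shared, then forgotten
```

With high retention the idea saturates most of the population; with low retention it dies out even though people keep trying to share it, which is QNTM’s anti-meme.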

Written by Leisureguy

6 October 2021 at 9:07 pm

The Most Important Device in the Universe Is Powered by a 555 Timer

leave a comment »

I love the kind of technical shop talk exchanged among people familiar with some line of country remote from my knowledge. Such talk is studded with things I don’t know, though I can follow the trend of the conversation. It’s like a stream: I follow the overall flow, but there are occasional boulders sticking up out of the water.

Certain kinds of science fiction have some of the same appeal, where the writer begins in medias res and casually uses words whose referents the reader is expected to figure out as the story progresses. This is a common technique (cf. William Gibson, Charlie Stross, et al.), and for me it works well, keeping me alert for clues that will explain the terms, which may refer to culture, dress, devices, or whatever.

A recent post is full of that, but it also provides an entertaining look at prop construction and usage in science-fiction movies and TV — the short clip at the end is a must-see, and the comments are also worth reading.

Written by Leisureguy

24 September 2021 at 11:08 am

The Singularity approacheth

leave a comment »

As I’ve mentioned previously, the Singularity in science-fiction is when an AI gets good enough to design a better AI, and then recursion leads to exponential improvement, and the story’s off and running.

Well, it’s here. Matthew Hutson mentions it in an article in the New Yorker, which begins:

Deep learning, the artificial-intelligence technology that powers voice assistants, autonomous cars, and Go champions, relies on complicated “neural network” software arranged in layers. A deep-learning system can live on a single computer, but the biggest ones are spread over thousands of machines wired together into “clusters,” which sometimes live at large data centers, like those operated by Google. In a big cluster, as many as forty-eight pizza-box-size servers slide into a rack as tall as a person; these racks stand in rows, filling buildings the size of warehouses. The neural networks in such systems can tackle daunting problems, but they also face clear challenges. A network spread across a cluster is like a brain that’s been scattered around a room and wired together. Electrons move fast, but, even so, cross-chip communication is slow, and uses extravagant amounts of energy.

Eric Vishria, a general partner at Benchmark, a venture-capital firm in San Francisco, first came to understand this problem in the spring of 2016, while listening to a presentation from a new computer-chip company called Cerebras Systems. Benchmark is known for having made early investments in companies such as Twitter, Uber, and eBay—that is, in software, not hardware. The firm looks at about two hundred startup pitches a year, and invests in maybe one. “We’re in this kissing-a-thousand-frogs kind of game,” Vishria told me. As the presentation started, he had already decided to toss the frog back. “I’m, like, Why did I agree to this? We’re not gonna do a hardware investment,” he recalled thinking. “This is so dumb.”

Andrew Feldman, Cerebras’s co-founder, began his slide deck with a cover slide, then a team slide, catching Vishria’s attention: the talent was impressive. Then Feldman compared two kinds of computer chips. First, he looked at graphics-processing units, or G.P.U.s—chips designed for creating 3-D images. For a variety of reasons, today’s machine-learning systems depend on these graphics chips. Next, he looked at central processing units, or C.P.U.s—the general-purpose chips that do most of the work on a typical computer. “Slide 3 was something along the lines of, ‘G.P.U.s actually suck for deep learning—they just happen to be a hundred times better than C.P.U.s,’ ” Vishria recalled. “And, as soon as he said it, I was, like, facepalm. Of course! Of course!” Cerebras was proposing a new kind of chip—one built not for graphics but for A.I. specifically.

Vishria had grown used to hearing pitches from companies that planned to use deep learning for cybersecurity, medical imaging, chatbots, and other applications. After the Cerebras presentation, he talked with engineers at some of the companies that Benchmark had helped fund, including Zillow, Uber, and Stitch Fix; they told him that they were struggling with A.I. because “training” the neural networks took too long. Google had begun using super-fast “tensor-processing units,” or T.P.U.s—special chips it had designed for artificial intelligence. Vishria knew that a gold rush was under way, and that someone had to build the picks and shovels.

That year, Benchmark and Foundation Capital, another venture-capital company, led a twenty-seven-million-dollar round of investment in Cerebras, which has since raised close to half a billion dollars. Other companies are also making so-called A.I. accelerators; Cerebras’s competitors—Groq, Graphcore, and SambaNova—have raised more than two billion dollars in capital combined. But Cerebras’s approach is unique. Instead of making chips in the usual way—by printing dozens of them onto a large wafer of silicon, cutting them out of the wafer, and then wiring them to one another—the company has made one giant “wafer-scale” chip. A typical computer chip is the size of a fingernail. Cerebras’s is the size of a dinner plate. It is the largest computer chip in the world.

Even competitors find this feat impressive. “It’s all new science,” Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. “It’s an incredible piece of engineering—a tour de force.” At the same time, another engineer I spoke with described it, somewhat defensively, as a science project—bigness for bigness’s sake. Companies have tried to build mega-chips in the past and failed; Cerebras’s plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. “To be totally honest with you, for me, ignorance was an advantage,” Vishria said. “I don’t know that, if I’d understood how difficult it was going to be to do what they did, I would have had the guts to invest.”

Computers get faster and faster—a remarkable fact that’s easy to take for granted. It’s often explained by means of Moore’s Law: the pattern identified in 1965 by the semiconductor pioneer Gordon Moore, according to which the number of transistors on a chip doubles every year or two. Moore’s Law, of course, isn’t really a law. Engineers work tirelessly to shrink transistors—the on-off switches through which chips function—while also refining each chip’s “architecture,” creating more efficient and powerful designs. . .

. . .  Cerebras’s first task was to address the manufacturing difficulties that bedevil bigger chips. A chip begins as a cylindrical ingot of crystallized silicon, about a foot across; the ingot gets sliced into circular wafers a fraction of a millimetre thick. Circuits are then “printed” onto the wafer, through a process called photolithography. Chemicals sensitive to ultraviolet light are carefully deposited on the surface in layers; U.V. beams are then projected through detailed stencils called reticles, and the chemicals react, forming circuits.

Typically, the light projected through the reticle covers an area that will become one chip. The wafer then moves over and the light is projected again. After dozens or hundreds of chips are printed, they’re laser-cut from the wafer. “The simplest way to think about it is, your mom rolls out a round sheet of cookie dough,” Feldman, who is an avid cook, said. “She’s got a cookie cutter, and she carefully stamps out cookies.” It’s impossible, because of the laws of physics and optics, to build a bigger cookie cutter. So, Feldman said, “We invented a technique such that you could communicate across that little bit of cookie dough between the two cookies.”

In Cerebras’s printing system—developed in partnership with T.S.M.C., the company that manufactures its chips—the cookies overlap at their edges, so that their wiring lines up. The result is a single, “wafer-scale” chip, copper-colored and square, which is twenty-one centimetres on a side. (The largest G.P.U. is a little less than three centimetres across.) Cerebras produced its first chip, the Wafer-Scale Engine 1, in 2019. The WSE-2, introduced this year, uses denser circuitry, and contains 2.6 trillion transistors collected into eight hundred and fifty thousand processing units, or “cores.” (The top G.P.U.s have a few thousand cores, and most C.P.U.s have fewer than ten.)

Aart de Geus, the chairman and co-C.E.O. of the company Synopsys, asked me, “2.6 trillion transistors is astounding, right?” Synopsys provides some of the software that Cerebras and other chipmakers use to make and verify their chip designs. In designing a chip, de Geus said, an engineer starts with two central questions: “Where does the data come in? Where is it being processed?” When chips were simpler, designers could answer these questions at drafting tables, with pencils in hand; working on today’s far more complex chips, they type code that describes the architecture they want to create, then move on to using visual and coding tools. “Think of seeing a house from the top,” de Geus said. “Is the garage close to the kitchen? Or is it close to the bedroom? You want it close to the kitchen—otherwise, you will have to carry groceries all through the house.” He explained that, having designed the floor plan, “you might describe what happens inside a room using equations.”

Chip designs are mind-bogglingly intricate. “There’s multiple layers,” de Geus said, with circuits crisscrossing and running on top of one another, like major expressway interchanges. For Cerebras’s engineers, working at wafer scale, that complexity was heightened. Synopsys’s software offered assistance in the form of artificial intelligence: pattern-matching algorithms recognized frequent problems and suggested solutions; optimization routines nudged rooms into faster, more efficient arrangements. If too many lanes of traffic try to squeeze between two blocks of buildings, the software allows engineers to play Robert Moses, shifting the blocks.

In the end, Feldman said, the mega-chip design offers several advantages. Cores communicate faster when they’re on the same chip: instead of being spread around a room, the computer’s brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that’s ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home. In describing the efficiencies of the wafer-scale chip, Feldman offered an analogy: he asked me to imagine groups of roommates (the cores) in a dormitory (a chip) who want to watch a football game (do computing work). To watch the game, Feldman said, the roommates need beer stored in a fridge (data stored in memory); Cerebras puts a fridge in every room, so that the roommates don’t have to venture to the dorm’s common kitchen or the Safeway. This has the added advantage of allowing each core to work more quickly on different data. “So in my dorm room I can have Bud,” Feldman said. “And in your dorm room you can have Schlitz.”

Finally, Cerebras had to surmount the problem of yield. The firm’s engineers use Trilogy’s trick: redundancy. But here they have an advantage over their predecessors. Trilogy was trying to make a general-purpose chip, with many varied components, and so wiring around a single failed element could require connecting to a distant substitute. On Cerebras’s chip, all the cores are identical. If one cookie comes out wrong, the ones surrounding it are just as good.

In June, in a paper published in Nature, Google developers reported that, for the first time, they’d fully automated a process called “chip floorplanning.” A typical chip can contain thousands of memory blocks, tens of millions of logic gates, and tens of kilometres of microscopic wiring. Using the same techniques that their DeepMind colleagues had used to teach a neural network to win at Go, they’d trained an A.I. to floorplan a tensor-processing unit, arranging these elements while preventing data congestion; when they tested the A.I.’s T.P.U. against one that a team of experts had spent several months creating, they found that the computer’s design, drawn up in a matter of hours, matched or exceeded the humans’ in efficient use of area, power, and wire length. Google is currently using the algorithm to design its next T.P.U.

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Designing and manufacturing the chip turned out to be just half of the challenge. Brains use . . .

Continue reading. There’s more.

Emphasis added.
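The yield point in the excerpt (identical cores mean a defect only spoils one redundant “cookie”) can be made concrete with a back-of-the-envelope model. Only the 850,000-core count comes from the article; the per-core defect rate below is an invented, illustrative figure:

```python
import math

def working_probability(cores, defect_rate, spare_cores):
    """P(chip is usable) when up to `spare_cores` defective cores can be
    routed around, using a Poisson approximation for independent defects."""
    lam = cores * defect_rate            # expected number of bad cores
    term = math.exp(-lam)                # P(exactly 0 defects)
    total = term
    for k in range(1, spare_cores + 1):  # accumulate P(k defects) incrementally
        term *= lam / k
        total += term
    return min(total, 1.0)

CORES = 850_000      # WSE-2 core count, per the article
DEFECT_RATE = 1e-5   # assumed per-core defect probability (illustrative only)

monolithic = working_probability(CORES, DEFECT_RATE, spare_cores=0)
redundant = working_probability(CORES, DEFECT_RATE, spare_cores=CORES // 100)
```

With no spare capacity, a wafer-scale chip is almost never flawless (about 8.5 bad cores are expected under these assumptions), but tolerating even 1% defective cores pushes yield to effectively 100%. That is why identical, interchangeable cores matter.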

Written by Leisureguy

20 August 2021 at 6:23 pm

Google AI beats humans at designing computer chips

leave a comment »

Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains in function to the point that it quickly outstrips humanity (cf. Charles Stross’s Singularity Sky, Iron Sunrise, and Accelerando). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast 28 minutes

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?
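That regenerative-feedback worry can be caricatured in a few lines of code. In this toy model, with all parameters invented for illustration, each generation of a design tool improves its successor, and the improvement factor itself grows with current capability:

```python
def capability_curve(generations, feedback=0.05, start=1.0):
    """Toy model: tools that design their own successors.

    gain = 1 + feedback * capability, so better tools improve faster;
    that is the regenerative-feedback loop. Purely illustrative numbers.
    """
    caps = [start]
    for _ in range(generations):
        gain = 1.0 + feedback * caps[-1]  # the tool's quality sets the gain
        caps.append(caps[-1] * gain)
    return caps

with_feedback = capability_curve(20)
plain_exponential = [1.05**n for n in range(21)]  # fixed 5% gain, for comparison
```

Fixed-gain improvement is merely exponential; once the gain feeds back on itself, growth is faster than exponential, which is the mathematical core of the singularity scenario.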

Written by Leisureguy

11 June 2021 at 3:14 pm

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says

leave a comment »

Highly recommended: Kill Decision and also Influx, by Daniel Suarez (and I would also highly recommend his other books, including the duology Daemon and Freedom™). Joe Hernandez reports for NPR:

Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator. Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.

Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.

“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in Bulletin of the Atomic Scientists.

The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.

The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.

The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.

The idea of a “killer robot” has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.

What’s new about the incident in Libya, if confirmed, is . . .

Continue reading. There’s more.

Written by Leisureguy

2 June 2021 at 3:34 pm

Ingenious and stimulating science-fiction story

leave a comment »

The story, “Lena,” is by qntm, translated from the Russian by Boris Ostanin. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.
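As an aside, the compression figures quoted in the story imply a startling ratio. A quick check of the arithmetic, taking PiB and TiB in their binary senses (2^50 and 2^40 bytes):

```python
# Sizes as quoted in the story
original_pib = 974.3   # raw MMAcevedo image, MYBB format
lossless_tib = 6.75    # after modern lossless compression

PIB = 2**50            # pebibyte, in bytes
TIB = 2**40            # tebibyte, in bytes

ratio = (original_pib * PIB) / (lossless_tib * TIB)  # ≈ 147,805:1
```

A lossless ratio of roughly 148,000:1 is far beyond anything achievable on real-world data, which is presumably part of the story’s unsettling point.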

Written by Leisureguy

16 April 2021 at 12:54 pm
