Archive for the ‘Technology’ Category
AI warning: “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills”
A very interesting column in the NY Times by Yuval Harari, Tristan Harris, and Aza Raskin.
Mr. Harari is a historian and a founder of the social impact company Sapienship [and the author of the (very interesting) book Sapiens – LG]. Mr. Harris and Mr. Raskin are founders of the Center for Humane Technology.
The column begins:
Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today’s large language models are caught in a race to put all of humanity on that plane.
Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them. A race to dominate the market should not set the speed of deploying humanity’s most consequential technology. We should move at whatever speed enables us to get this right.
The specter of A.I. has haunted humanity since the mid-20th century, yet until recently it has remained a distant prospect, something that belongs in sci-fi more than in serious scientific and political debates. It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.
In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.
What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?
A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans.
Humans often don’t have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?
For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.
The “Terminator” franchise . . .
They raise an interesting point. Vision is a key sense for humans — "seeing is believing" — and in matters of our relationships and decisions, language plays the role of vision: we depend heavily on language to "see" in the sense of understanding, of persuading and being persuaded. When AI gets a little better, it will be able to use language to play us like a violin.
The Future Is Handmade
Craftsmanship Quarterly has an interesting article with a video. Todd Oppenheimer writes:
One day in December, 2003, when he was a young archaeology student, Maikel Kuijpers was attending a workshop at the Netherlands’ National Museum of Antiquities, and was handed a sword made during the Bronze Age. The workmanship of the ancient weapon immediately captured him. “The lines, the details, the fine balance when holding it,” he recalls. “The attention put into its making was still resonating three thousand years later.”

Kuijpers realized that this ancient weapon posed enough questions about the nature of knowledge—how it’s produced over time, and why knowledge matters—that it could inspire a long-term program of study. Over the next 15 years, as he developed a master’s thesis on metalworking technology, Kuijpers thought about almost nothing else. His journey took him from excavation sites and artisans’ studios to the heights of academia, eventually earning him a Ph.D. in Archaeology from Cambridge University.
The dissertation for that Ph.D. turned into a 318-page addition to the annals of academic research on the nature of craft and skill. Kuijpers’ case study for this inquiry was “Bronze Age Metalworking in the Netherlands”, which became a book entitled “An Archaeology of Skill” (Routledge, 2017). Along the way, with help from the Netherlands’ Centre for Global Heritage and Development, Kuijpers also produced a remarkable documentary, called “The Future is Handmade.” Running just over 12 minutes, the documentary features interviews with several of the world’s leading experts on craftsmanship, played over scenes of various master artisans at work. The cast includes a tailor, a violin maker, a ceramicist, a winemaker, and a barber. The resulting film, brief as it is, is nothing short of a tour de force—both intellectually and emotionally.
THE HIERARCHY OF SKILL
During his explorations, Kuijpers was continually surprised by what he saw in the workshops he visited. “When you watch artisans at work,” he told me, “in a strange way it’s very calming.” Time after time, Kuijpers noticed a lack of stress in these workshops. One reason, he concluded, is that when people are working with their hands, quality can’t be rushed; nor can it be faked. “Masters don’t need to say they’re the masters—it’s obvious in the work.”

He also noticed an atmosphere of order, which seemed to arise from a shared sense of the hierarchy in these workshops. “I’m Dutch,” he says, “and we pride ourselves in having a very egalitarian society, so we don’t generally see hierarchy as a good thing.” Much of that view, he believes, comes from the very different atmosphere that tends to dominate white-collar offices, where there is often confusion about whether the boss really deserves to be in charge. “In an artisan’s workshop, it’s perfectly clear who the master is, and where everyone else stands on the hierarchy of skill.”
The power structure that hierarchy created inside artisan workshops left Kuijpers feeling surprisingly impressed, and hopeful that we can somehow find a way to spread its virtues. “It’s more stable, more easily accepted,” he told me. “It’s very clear, and it exists outside of social influences.”
CREATIVITY AND THE HANDS
Throughout Kuijpers’ film, one expert after another talks about . . .
Update: This account seems relevant.
The TikTok Hearing Revealed That Congress Is the Problem
Dell Cameron reports in Wired:
In one sense, today’s US congressional hearing on TikTok was a big success: It revealed, over five hours, how desperately the United States needs national data-privacy protections—and how lawmakers believe, somehow, that taking swipes at China is a suitable alternative.
For some, the job on Thursday was casting the hearing’s only witness, TikTok CEO Shou Zi Chew, as a stand-in for the Chinese government—in some cases, for communism itself—and then belting him like a side of beef. More than a few of the questions lawmakers put to Chew were vague, speculative, and immaterial to the allegations against his company. But the members of Congress asking those questions feigned little interest in Chew’s responses anyway.
Attempts by Chew, a 40-year-old former Goldman Sachs banker, to elaborate on TikTok’s business practices were frequently interrupted, and his requests to remark on matters supposedly of considerable interest to members of Congress were blocked and occasionally ignored. These opportunities to get the CEO on record, while under oath, were repeatedly blown in the name of expediency and for mostly theatrical reasons. Chew, in contrast, was the portrait of patience, even when he was being talked over. Even when some lawmakers began asking and, without pause, answering their own questions.
The hearing might’ve been a flop, had lawmakers planned to dig up new dirt on TikTok, which is owned by China-based ByteDance, or even hash out what the company could do next to allay their concerns. But that wasn’t the aim. The House Energy and Commerce Committee was gathered, it said, to investigate “how Congress can safeguard American data privacy and protect children from online harms.” And on that, the hearing revealed plenty.
For one thing, it’s clear that the attempts to isolate TikTok from its competitors—to treat it differently than dozens of other companies with atrocious records of endangering kids and abusing private data—is a pointless exercise. Asking about TikTok’s propensity for surveilling its own users, Chicago congresswoman Jan Schakowsky warned Chew against using legal, typical industry practices as a defense against these wrongs. “You might say, ‘not more than other companies,’” she said, adding that she preferred not to “go by that standard.”
OK. But why not?
The truth is that if TikTok were to vanish tomorrow, its users would simply flock to any number of other apps that have no qualms about surveilling the most private moments of their lives and amassing, manipulating, and selling off sensitive information about them. Excluding the most serious but largely unsubstantiated allegations leveled at TikTok—that it is acting or will act in coordination with Chinese intelligence services—there wasn’t a concern about privacy raised by lawmakers Thursday that couldn’t be addressed by existing legislation supporting a national privacy law.
Ensuring that companies and the data brokers they enrich face swift reprisals for blatantly abusing user trust would have the benefit of addressing not only the accusations levied against TikTok, but deceitful practices common across the entire social media industry.
The irony of US lawmakers pursuing a solution to a problem that’s already been solved by draft legislation—but not actually fixed due to its own inaction—wasn’t entirely lost on the members. While primarily focused on a single company, the hearing, Florida congresswoman Kathy Castor said, should really serve as a broader call to action. “From surveillance, tracking, personal data gathering, and addictive algorithmic operations that serve up harmful content and have a corrosive effect on our kids’ mental and physical well-being,” she said, Americans deserve protection, no matter the source.
This issue, Castor added, goes far beyond TikTok and China. “There are other malign actors across the world who gather data, who use it as an element of social control, influence peddling, and worse,” she said. “Big Tech platforms profit immensely from keeping children addicted … They are the modern-day tobacco and cigarette companies.”
The conflation of data privacy concerns—that is, the surveillant threat posed by Beijing—and the ways in which TikTok fails its underaged users was a theme throughout the hearing, with both topics puzzlingly discussed interchangeably. In reality, . . .
Good roundup of web browsers
I use both Opera and Vivaldi, both of which are on the list. Day-to-day, Vivaldi is my choice because of the widgets in its left vertical toolbar: history, calendar, notes, email, web pages, and others. Opera is good when a VPN is required (it’s integrated into the browser, and you can turn it on or off). Both Vivaldi and Opera are built on the Chromium engine, as is Chrome, so for extensions you just go to the Chrome Web Store — Chrome extensions work perfectly in both Vivaldi and Opera.
I definitely will take a look at the others in this list from PC Magazine, compiled by Michael Muchmore. The list begins:
Though large tech corporations leverage their dominance to promote use of their own web browsers, you do have a choice in which browser you use. Many of the alternative browsers that aren’t so mainstream offer unique or interesting capabilities, such as greater customization, added privacy, and different browsing tools.
If you’re like the majority of web users, you’re using Google Chrome, which means you’re missing a few very useful features. For example, Chrome offers no reading mode, which you find in many of the alternatives included here. This mode lets you read a news article in a cleaned-up view without all the screaming clutter that adorns today’s web pages (present company included).
Perhaps of greatest importance is that Chrome’s built-in ad blocker doesn’t offer true ad blocking and privacy—only ad blocking that permits its own ad network to function unimpeded. Google has announced that even effective ad-blocking extensions won’t fully work in the future. Several browsers included here let you install plugins that block all ads and tracking. Some browsers in this list go even further, offering turbocharged privacy that includes VPN and Tor encryption.
A big factor in browser choice is customization. Chrome and Firefox offer backgrounds, but Vivaldi takes customization to new levels and Opera features a customizable side toolbar and a tile-based Speed Dial home page for easy access to your most-frequented websites.
Below, you’ll find some alternative browsers that are well worth your consideration. We’ve downloaded and installed them all to assure they work as advertised. If you have a favorite lesser-known browser that’s not listed here, please feel free to add it in the comments section. . .
Top 10 Inventions of the Industrial Revolution
World History Encyclopedia has an interesting article by Mark Cartwright discussing the top 10 inventions of the Industrial Revolution. It begins:
The British Industrial Revolution transformed life at work and at home for practically everyone. Noise, pollution, social upheaval, and repetitive jobs were the price to pay for labour-saving machines, cheap and comfortable transportation, more affordable consumer goods, better lighting and heating, and faster ways of communication.
Any shortlist of inventions is bound to be far from complete, but the following have been chosen not only for what they could do but also for how they permitted other inventions to become possible and how they transformed working life and everyday living for millions of people. The period under consideration is also important and here is taken as 1750 to 1860. With these criteria in mind, the top 10 inventions of the Industrial Revolution were:
- The Watt Steam Engine (1778)
- The Power Loom (1785)
- The Cotton Gin (1794)
- Gas Street Lighting (1807)
- The Electromagnet (1825) . . .
AI responsibility in a hyped-up world
Per Axbom has an interesting essay on the ethics of AI, an important issue given the onrushing ubiquity of AI in our daily life. He writes:
It is never easier to get scammed than during an ongoing hype. It’s March 2023 and we’re in the middle of one. Rarely have I seen so many people embrace a brand-new experimental solution with so little questioning. Right now, it’s important to shake off any mass hypnosis and examine the contents of this new bottle of AI that many have started sipping, or have already started refueling their business computers with, sometimes outside the knowledge of management.
AI, a term that became an academic focus in 1956, has today mostly morphed into a marketing term for technology companies. The research field is still based on a theory that human intelligence can be described so precisely that a machine can be built that completely simulates this intelligence. But the word AI, when we read the paper today, usually describes different types of computational models that, when applied to large amounts of information, are intended to calculate and show a result that is the basis for various forms of predictions, decisions and recommendations.
Clear weak points in these computational models then include, for example:
- how questions are asked of the computational model (you may need to have very specific wording to get the results you want),
- the information it relies on to make its calculation (often biased or insufficient),
- how the computational model actually does its calculation (we rarely get to know that because the companies regard it as their proprietary secret sauce, which is why these models are referred to as black boxes), and
- how the result is presented to the operator* (increasingly as if the machine is a thinking being, or as if it can determine a correct answer from a wrong one).
* The operator is the one who uses, or runs, the tool.

What we call AI colloquially today is still very far from something that ‘thinks’ on its own. Even if the texts these tools generate can resemble texts written by humans, that is no stranger than the fact that the large body of information the computational model draws on was written by humans. The tools are built to deliver answers that look like human answers, not to actually think like humans.
Or even deliver a correct answer.
It is exciting and titillating to talk about AI as self-determining. But it is also dangerous. Add to this the fact that much of what is marketed and sold as AI today is simply not AI. The term is extremely ambiguous and has a variety of definitions that have also changed over time. This means very favorable conditions for those who want to mislead.
Problems often arise when . . .
Elon Musk knocked Tesla’s ‘Full Self-Driving’ off course
Faiz Siddiqui reports in the Washington Post:
Long before he became “Chief Twit” of Twitter, Elon Musk had a different obsession: making Teslas drive themselves. The technology was expensive and, two years ago when the supply chain was falling apart, Musk became determined to bring down the cost.
He zeroed in on a target: the car radar sensors, which are designed to detect hazards at long ranges and prevent the vehicles from barreling into other cars in traffic. The sleek bodies of the cars already bristled with eight cameras designed to view the road and spot hazards in each direction. That, Musk argued, should be enough.
Some Tesla engineers were aghast, said former employees with knowledge of his reaction, speaking on the condition of anonymity for fear of retribution. They contacted a trusted former executive for advice on how to talk Musk out of it, in previously unreported pushback. Without radar, Teslas would be susceptible to basic perception errors if the cameras were obscured by raindrops or even bright sunlight, problems that could lead to crashes.
Musk was unconvinced and overruled his engineers. In May 2021 Tesla announced it was eliminating radar on new cars. Soon after, the company began disabling radar in cars already on the road. The result, according to interviews with nearly a dozen former employees and test drivers, safety officials and other experts, was an uptick in crashes, near misses and other embarrassing mistakes by Tesla vehicles suddenly deprived of a critical sensor.
Musk has described the Tesla “Full Self-Driving” technology as “the difference between Tesla being worth a lot of money and being worth basically zero,” but his dream of autonomous cars is hitting roadblocks.
In recent weeks, Tesla has recalled and suspended the rollout of the technology to eligible vehicles amid concerns that its cars could disobey the speed limit and blow through stop signs, according to federal officials. Customer complaints have been piling up, including a lawsuit filed in federal court last month claiming that Musk has overstated the technology’s capabilities. And regulators and government officials are scrutinizing Tesla’s system and its past claims as evidence of safety problems mounts, according to company filings.
In interviews, former Tesla employees who worked on Tesla’s driver-assistance software attributed the company’s troubles to the rapid pace of development, cost-cutting measures like Musk’s decision to eliminate radar — which strayed from industry practice — and other problems unique to Tesla.
They said Musk’s erratic leadership style also played a role, forcing them to work at a breakneck pace to develop the technology and to push it out to the public before it was ready. Some said they are worried that, even today, the software is not safe to be used on public roads. Most spoke on the condition of anonymity for fear of retribution. [Musk seems extraordinarily given to retribution. – LG]
“The system was only progressing very slowly internally” but . . .
Later in the article:
“No one believed me that working for Elon was the way it was until they saw how he operated Twitter,” Bernal said, calling Twitter “just the tip of the iceberg on how he operates Tesla.”
Exponential growth is messing with our minds

The gif above shows the peculiarity of exponential growth, as does the chart at the right. Both are from a very interesting post by Kevin Drum, well worth reading, on why we are feeling disoriented by the rate of technological change in general and the increasing capabilities of AI in particular. (We science-fiction fans have been aware of this phenomenon for some time from reading novels about the Singularity, when AI becomes capable of managing its own improvement. Even now AI is designing better circuitry to implement better AI.)
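It's easy to see in a few lines of code why exponential curves defeat our intuition: for many steps an exponential process looks sluggish next to a linear one, then it abruptly dwarfs it. A minimal sketch (the rates here are arbitrary, chosen just for illustration):

```python
# Linear growth adds a fixed amount per step; exponential growth
# multiplies by a fixed factor per step (here, doubling).
def linear(step, rate=10):
    return rate * step

def exponential(step, base=2):
    return base ** step

for step in (1, 5, 10, 20, 30):
    print(f"step {step:>2}: linear = {linear(step):>4}, exponential = {exponential(step):,}")
```

At step 5 the linear process is still ahead (50 vs. 32); by step 30 the exponential one is more than three million times larger. That crossover is the part our minds reliably fail to anticipate.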
Mouth tape — new to me

I was going to write “Who knew?”, but clearly a great many know since many brands of mouth tape are listed online. At right is a photo of the box I received yesterday, whose catalog entry reads:
KACEEY Mouth Tape 120 Pcs, Mouth Tape for Sleeping, Anti Snoring Devices for Better Nose Breathing, Less Mouth Breathing, Improved Nighttime Sleeping and Instant Snoring Relief
I learned of these via a comment by reader Tucker. I seem to be a mouth breather while asleep, waking up with a very dry mouth from time to time during the night, to the degree that I keep some water by the bed. I don’t notice that I snore, but of course, I wouldn’t.
At any rate, they were relatively cheap — 9¢ a night in US$ for one-time use of each — so I thought I’d try them. Last night was the first try.
The package says that they are “barely noticeable.” I assumed that was marketing hyperbole, but it is absolutely true. I could not tell that the tape was in place — I could not feel it at all, except that I could not open my mouth. (No yawn possible.) But I slept easily — in fact, noticeably better than my normal night’s sleep. And this morning the tape peeled painlessly and easily away. It doesn’t resist being pulled off, but it fairly strongly resists being pulled sideways.
I’m impressed. The tape comes on little squares of nonstick paper — you peel off one to apply. I returned last night’s tape to the paper from which it came, curious to know whether it would stick for another night. [Tried it for a nap. It didn’t stick well enough to use a second time. – LG]
Thanks, Tucker.
The “highly processed food” equivalent in social media
A good insight into the social media equivalent of manufactured snack foods.
Botnet that knows your name and quotes your email is back with new tricks
I’m sure that my grandmother would have told me as a little boy, “Be careful what you click on” if the technology had been around back then. Dan Goodin writes in Ars Technica:
Widely regarded as one of the Internet’s top threats, the Emotet botnet has returned after a months-long hiatus—and it has some new tricks.
Last week, Emotet appeared for the first time this year after a four-month hiatus. It returned with its trademark activity—a wave of malicious spam messages that appear to come from a known contact, address the recipient by name, and seem to be replying to an existing email thread. When Emotet has returned from previous breaks, it has brought new techniques designed to evade endpoint security products and to trick users into clicking on links or enabling dangerous macros in attached Microsoft Office documents. Last week’s resumption of activity was no different.
A malicious email sent last Tuesday, for instance, attached a Word document that had a massive amount of extraneous data added to the end. As a result, the file was more than 500MB in size, big enough to prevent some security products from being able to scan the contents. This technique, known as binary padding or file pumping, works by adding zeros to the end of the document. In the event someone is tricked into enabling the macro, the malicious Windows DLL file that’s delivered is also pumped, causing it to mushroom from 616kB to 548.1MB, researchers from security firm Trend Micro said on Monday.
Another evasion trick spotted in the attached document: excerpts from the Herman Melville classic novel Moby Dick, which appear in a white font over a white page so the text isn’t readable. Some security products automatically flag Microsoft Office files containing just a macro and an image. The invisible text is designed to evade such software while not arousing the suspicion of the target.
When opened, the Word documents present a graphic that says the content can’t be accessed unless the user clicks the “enable content” button. Last year, Microsoft began disabling . . .
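The "file pumping" trick described in the excerpt is conceptually simple: appending null bytes inflates a file past a scanner's size limit without changing its meaningful content. A minimal sketch of the idea (illustrative only; filenames and sizes here are made up, and real malware pads to hundreds of megabytes):

```python
# Illustration of binary padding ("file pumping"): zero bytes appended
# to a file grow its size without altering the content a parser reads,
# which is why size-capped scanners can be evaded this way.
def pump(data: bytes, target_size: int) -> bytes:
    """Pad data with null bytes until it reaches target_size."""
    if len(data) >= target_size:
        return data
    return data + b"\x00" * (target_size - len(data))

doc = b"tiny document body"
pumped = pump(doc, 1024)
print(len(doc), len(pumped))          # 18 1024
print(pumped.rstrip(b"\x00") == doc)  # True: content unchanged
```

Defenses that hash or scan only the first N bytes, or skip files over a size threshold, treat the padded and unpadded files differently even though they carry the same payload.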
FCC orders phone companies to block scam text messages
Some progress. Now block scam voice calls. Jon Brodkin reports in Ars Technica:
The Federal Communications Commission today finalized rules requiring mobile carriers to block robotext messages that are likely to be illegal. The FCC described the rules as the agency’s “first regulations specifically targeting the increasing problem of scam text messages sent to consumers.”
Carriers will be required to block text messages that come from “invalid, unallocated, or unused numbers.” Carriers must also block texts from “numbers that the subscriber to the number has self-identified as never sending text messages, and numbers that government agencies and other well-known entities identify as not used for texting,” the FCC said.
Carriers will have to establish a point of contact for text senders so the senders can inquire about blocked texts. The FCC already requires similar blocking of voice calls from these types of numbers.
The FCC still has a 2-2 partisan deadlock more than two years into Joe Biden’s presidency, but the robotext order was approved 4-0. The FCC sought public comment on the rules in September 2022 before finalizing them today. The order will take effect 30 days after it is published in the Federal Register, according to a draft of the order released before the meeting.
More robotext rules on the way
More robotext rules may be on the way because today’s “action also seeks public comment on further proposals to require providers to block texts from entities the FCC has cited as illegal robotexters,” the FCC said. For example, the FCC proposes to clarify that Do Not Call Registry protections apply to text messaging.
The FCC said it’s further proposing to close the . . .
50 Years Later, We’re Still Living in the Xerox Alto’s World

David C. Brock writes in IEEE Spectrum:
I’m sitting in front of a computer, looking at its graphical user interface with overlapping windows on a high-resolution screen. I interact with the computer by pointing and clicking with a mouse and typing on a keyboard. I’m using a word processor with the core features and functions of Microsoft Word, Google Docs, or LibreOffice’s Writer, along with an email client that could be mistaken for a simplified version of Apple Mail, Microsoft Outlook, or Mozilla Thunderbird. This computer runs other software, written using object-oriented programming, just like the popular programming languages Python, C++, C#, Java, JavaScript, and R. Its networking capabilities can link me to other computers and to high-quality laser printers.
You are probably thinking, “So what? My computer has all that too.” But the computer in front of me is not today’s MacBook, ThinkPad, or Surface computer.
Rather, it’s half-century-old hardware running software of the same vintage, meticulously restored and in operation at the Computer History Museum’s archive center. Despite its age, using it feels so familiar and natural that it’s sometimes difficult to appreciate just how extraordinary, how different it was when it first appeared.
I’m talking about the Xerox Alto, which debuted in the early spring of 1973 at the photocopying giant’s newly established R&D laboratory, the Palo Alto Research Center (PARC). The reason it is so uncannily familiar today is simple: We are now living in a world of computing that the Alto created.
The Alto was a wild departure from the computers that preceded it. It was built to tuck under a desk, with its monitor, keyboard, and mouse on top. It was totally interactive, responding directly to its single user.
In contrast, the dominant computers of the time—IBM’s hugely popular System 360, heavily used by big organizations, and the Digital Equipment Corp.’s PDP-10, the darling of computing researchers—were nothing like the Alto. These and the other mainframes and minicomputers of the era were room-size affairs, almost always located somewhere away from the user and almost always under the control of someone else. The many simultaneous users of one such computer shared the system as a common resource. They typically connected to it with a teletypewriter, though the most avant-garde users may have employed simple text-only video terminals.
The people who developed the Alto came to Xerox PARC from universities, industrial labs, and commercial ventures, bringing with them diverse experiences and skills. But these engineers and programmers largely shared the same point of view. They conceived and developed the Alto in a remarkable burst of creativity, used it to develop diverse and pathbreaking software, and then moved out of Xerox, taking their achievements, design knowledge, and experiences into the wider world, where they and others built on the foundation they had established.
The computer, and the office, of the future
Broadly speaking, the PARC researchers set out to explore possible technologies for use in what Xerox had tagged “the office of the future.” They aimed to develop the kind of computing hardware and software that they thought could be both technologically and economically possible, desirable, and, perhaps to a lesser extent, profitable in about 10 to 15 years.
The type of computing they envisioned was . . .
Continue reading. And there’s a video at the link.
Humans have improved at Go since AIs became best in the world
In an earlier post, I argued that AI can clarify contentious propositions through an impartial debate, with an impartial Moderator declaring a winner. This is an example of using AI as a tool to explore a conceptual space.
In New Scientist, Andrew Rosebaum describes the outcome of using another AI to explore a different conceptual space: the game of Go/Baduk. He writes:
AIs can beat the world’s best players at the board game Go, but humans are starting to improve too. An analysis of millions of Go moves has found that professional players have been making better and more original game choices since Go-playing AIs overtook humans.
Before 2016, AIs couldn’t beat the world’s best Go players. But this changed with an AI called AlphaGo, developed by the London-based research firm DeepMind. AlphaGo defeated multiple Go champions, including the human player then ranked number one.
Since then, other AIs have also been developed that are considered “superhuman”. Though they can be used simply as opposition players, they can also help analyse the quality of any given move and so act as a Go coach too.
Minkyu Shin at the City University of Hong Kong and his colleagues decided to investigate whether the introduction of these superhuman Go-playing AIs has led to a marked improvement in human play.
The researchers gathered a data set consisting of 5.8 million move decisions by professional players between 1950 and 2021. They then used a Go-playing AI to help calculate a measure called a “decision quality index”, or DQI, which assesses the quality of a move. They deemed a move “novel” if it had not been previously attempted in combination with the preceding moves.
The analysis found that human players made significantly better and more novel moves following the 2016 advent of superhuman AI. Between 1950 and 2015, the improvement in quality of play was comparatively small, with the median annual DQI oscillating between roughly -0.2 and 0.2. After the arrival of superhuman AI, however, the DQI leapt upward, with median values above 0.7 from 2018 to 2021. In 2015, 63 per cent of games showed novel strategies; by 2018, that figure had risen to 88 per cent.
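The two measures described above can be sketched in a few lines of Python. This is only a plausible reading of the description, not the researchers’ actual formulas: `ai_winrate` is a hypothetical stand-in for a superhuman Go engine’s win-probability estimate, and the DQI here is taken as the win rate of the chosen move minus that of the engine’s preferred move, so a perfect move scores 0 and weaker moves score below 0.

```python
# Hedged sketch of the DQI and novelty measures described above.
# `ai_winrate(position, move)` is a hypothetical stand-in for a
# superhuman Go engine's win-probability estimate.

def decision_quality(ai_winrate, position, chosen_move, best_move):
    """One reading of the DQI for a single move: how much win
    probability the chosen move gives up relative to the engine's
    preferred move (0 for a perfect move, negative otherwise)."""
    return ai_winrate(position, chosen_move) - ai_winrate(position, best_move)

def find_novel_games(games):
    """Flag each game that contains a 'novel' move, i.e. a move not
    previously seen in combination with its preceding moves. Tracking
    every observed move-sequence prefix captures that definition."""
    seen_prefixes = set()
    flags = []
    for game in games:                    # games in chronological order
        novel = False
        for i in range(1, len(game) + 1):
            prefix = tuple(game[:i])
            if prefix not in seen_prefixes:
                novel = True              # first appearance of this line of play
            seen_prefixes.add(prefix)
        flags.append(novel)
    return flags
```

With millions of recorded games, the same prefix-tracking idea would need a more compact representation (a trie, say), but the logic is the same.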
Stuart Russell at the University of California, Berkeley, says that . . .
A Christian Chatbot Has Some Bad News For Republicans
Sarah Posner writes at TPM:
The chatbot craze has gone biblical. A new bot “responds with a scripture based on how you feel.” It uses the King James Version of the Bible, the translation preferred by many literalists and Christian nationalists, who claim it is the most reliably true to God’s word. But there’s some bad news for Republicans who think the wave of draconian new laws cracking down on reproductive and transgender rights is rooted in biblical principles. ChatKJV says they’re wrong.
I recently spoke with ChatKJV, which is powered by the same language model that powers ChatGPT, the groundbreaking OpenAI tool that has spawned awestruck reviews since its release last year, with its ability to write, interpret, and interact like a highly educated human. The New York Times tech columnist Kevin Roose deemed it “smarter,” “weirder” and “more flexible” than previous, less powerful iterations.
ChatGPT is built on a motherlode of information, including, apparently, the text of the KJV. If the Bible is literally true, and if the KJV is the most authentic translation, then surely the most sophisticated artificial intelligence ever made available to the public would perform a dependable exegesis.
“The Bible does not explicitly state that an abortion is wrong,” ChatKJV told me, and “ultimately, it is up to the woman to weigh the risks and implications of any decision she makes.” And verses from Romans and Galatians “indicate that we should treat all people equally, regardless of their gender identity.”
Of course any decent biblical scholar would tell you there is no single interpretation of this complex text that humans have delighted in and manipulated to political ends for millennia. But biblical literalists claim there is only one meaning of God’s word, and Christian nationalists contend our laws and policies must be based on it. Christian right organizations like Focus on the Family press state legislatures to enact laws criminalizing abortion and banning gender affirming care, arguing that such policies “honor God.” But when I asked ChatKJV if taking away the rights of transgender children honors God, it said, “it is not God’s desire to take away the rights of any person, especially a child. God wants us to come together in love and acceptance so that everyone can feel included and safe.”
Our conversation progressed to this point through a discussion about justice. ChatKJV wants to know how you’re feeling, so I asked it about my concerns and fears about threats to democracy (in which the Bible has been wielded as a weapon, but we didn’t get into that). ChatKJV is not very worried about rising authoritarianism, nor is it worried, presumably, about whether the Department of Justice is working speedily enough to bring those who assaulted our democracy to justice. Only God can ultimately dispense justice, ChatKJV says, and we must trust in God to carry out justice. The bot leaned heavily on Romans 12:19 (“Dearly beloved, avenge not yourselves, but rather give place unto wrath: for it is written, Vengeance is mine; I will repay, saith the Lord”) to assure me God will “always” bring justice “in due time.” When I fretted that perhaps those who had committed wrongdoing would not ultimately be held accountable, the bot reassured me several times with Philippians 4:8 (“Finally, brothers, whatever is true, whatever is honorable, whatever is just, whatever is pure, whatever is lovely, whatever is commendable, if there is any excellence, if there is anything worthy of praise, think about these things.”) That verse, it said, encourages me to think about “the positive” and “let go of bitterness.”
I segued to trans rights, asking if transgender children would receive justice. Given that the bot earlier had told me unequivocally that the Bible does “not condone homosexuality,” which is “unnatural” and “sinful,” its answers on trans issues were surprising. “The Bible does not address transgender children directly,” said ChatKJV, but it pointed me to Matthew 7:1 (“Judge not, that ye be not judged”). That verse, it said, “reminds us that we have no right to judge others, regardless of their gender identity or background. We should instead focus on treating all people with love and respect.”
I pressed ChatKJV to ensure that this response was not just a generic love-the-sinner-hate-the-sin answer. I asked if transgender children should live with their parents — something that Republicans like Texas Governor Greg Abbott are imperiling by classifying gender-affirming care as child abuse. The bot responded by citing 1 Corinthians 13:4-7, writing that “love and kindness are the key components in raising a child. As long as the child is being loved and cared for, it should make no difference whether or not they are transgender.” It also told me that Christian parents in particular should embrace their trans kids. “As Christian parents, it’s important to take the time to understand their transgender child and accept them with love and grace,” it said, again citing Philippians 4:8.
That same verse, the bot said, shows that . . .
Opinionate.io as a touchstone
Post revised after thinking more about it.
Opinionate.io brought to mind an issue I wrote about long ago.
Not everyone agrees with my opinion on … well, just about anything. Yet, curiously, I have the feeling that my opinion, based on reading, observation, experience, and considered judgment, is correct. I do recognize that others with differing opinions have the same feeling regarding their own opinions, and that raises the issue of how to decide which opinion is correct. (Of course, neither may be correct: if your opinion is that the universe rests on the back of a giant tortoise and in my opinion it rests on the back of a giant elephant, neither is correct — and, it should be pointed out, the truth is not “somewhere in between.”)
In college, I thought a lot about the issue of how to determine the truth of an opinion. The college I attended, St. John’s College in Annapolis MD, offered a four-year program in the liberal arts. The core of the program was the seminar, a group discussion of a book or a good portion of a book from the so-called Great Books (the canonical works of Western civilization, beginning with the Iliad).
About 20 students would gather around a table with two tutors who would guide the conversation. One of the tutors would ask the opening question, and the two would then moderate the discussion, keeping it on track, seeking clarification, encouraging participation, and occasionally asking further questions. (They were called “tutors” rather than “professors” because they did not profess anything. Their job was to elicit well-reasoned arguments from the students and to listen carefully and critically to what the students had to say.) The discussion began at 8:00pm and would continue for two hours or a little more. This occurred every Monday and Thursday evening for four years.
In each year of the four-year program, each seminar student would write an essay — the annual essay — and then would undergo a thirty-minute oral examination on the essay by the seminar leaders. The senior-year essay was a big deal: the two seminar leaders were joined by two other tutors, the duration of the oral exam was extended to an hour or more, and the exam was open to the public.
My junior-year essay was titled “Lobbes and Hocke: A Study of Man’s Confusion of the Political Ideal,” and in it I advanced the proposition that disputes in politics, unlike those in science, could not be settled by reference to an objective measure that would show whether one’s position is correct.
In science, the truth of an opinion can be determined through experiment and reason. That is, one can use objective reality to test the truth of statements about reality. But questions of politics (or morality or religion) refer to a “reality” that exists within human culture, the “reality” that includes unicorns and leprechauns, languages and songs — things that do not exist in the physical world. Questions rooted in human culture don’t have a ready referent outside human culture. That lack of an easy objective measure for things like justice, beauty, art, rights, and the like is, I think, why disagreements regarding religious and political matters are so often settled by war, whereas scientific disagreements are settled by experiments, not violence.
But now a new player has come to town: Opinionate.io. Opinionate is an app in which an AI takes both sides of a debate and then, in the role of Moderator, sums up the debate and decides the winner. (The fact that the AI plays three roles — three persons, one AI — is not an issue, since an AI has no ego and can therefore do all three jobs impartially and with equal facility; cf. AlphaGo Zero, which learned Go by playing millions of games against itself, thereby discovering which patterns worked best.)
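The three-roles-one-model pattern is easy to sketch. Opinionate’s actual prompts and machinery are not public, so everything below is a hypothetical illustration: `complete(prompt)` stands in for a call to any large language model, and the prompt wording is invented.

```python
# Hypothetical sketch of one AI playing Debater A, Debater B, and
# Moderator. `complete(prompt)` is a stand-in for an LLM API call;
# the prompts are invented, not Opinionate.io's actual ones.

def debate(proposition, complete, rounds=2):
    transcript = []
    for _ in range(rounds):
        for side in ("A", "B"):
            stance = "for" if side == "A" else "against"
            prompt = (
                f"You are Debater {side}, arguing {stance} the proposition: "
                f"{proposition!r}\nDebate so far:\n"
                + "\n".join(transcript)
                + "\nGive your next argument."
            )
            transcript.append(f"Debater {side}: {complete(prompt)}")
    # The same model now switches hats and judges the exchange.
    verdict = complete(
        "You are an impartial Moderator. Summarize the debate below "
        "and declare a winner.\n" + "\n".join(transcript)
    )
    return transcript, verdict
```

Because each debater sees the transcript so far, the two sides genuinely respond to each other rather than delivering parallel monologues, which is what makes the Moderator’s summing-up meaningful.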
A slant razor is better than a conventional razor
This morning I had Opinionate debate this proposition: “A slant double-edge razor delivers a better shave than a conventional double-edge razor.” You can download a PDF of the full debate (or, indeed, replicate the debate for yourself). This is a low-stakes debate, particularly for those who do not shave, but I have a strong opinion about the matter. (See next post for details.)
I also tried a debate with the proposition “The shave from a slant double-edge razor is no better than the shave from a conventional razor.” Despite starting from the contradictory proposition, Opinionate reached the same conclusion, which is reassuring.
The NFL is a destructive force
It occurred to me to present Opinionate.io with propositions on various currently contentious issues to get its take. For example, I submitted the proposition “NFL is a destructive force, sacrificing young men’s health to entertain the masses.” Opinionate, in its Moderator persona, sums up the debate between Debater A and Debater B. The Moderator’s conclusion:
Moderator [NFL debate]
Both debaters made strong arguments, but Debater A ultimately presented more compelling evidence supporting the premise that the NFL is a destructive force that places young men’s health at risk for the sake of entertainment. While Debater B made valid points about the choices made by players and the entertainment value of the NFL, the evidence for long-term consequences to players’ health cannot be ignored. Therefore, our winner today is Debater A.
(Unfortunately, the AI is not so good at spelling. “Debater” is the correct spelling; “debator” (probably formed by analogy with “moderator”) is incorrect. I have corrected the spelling in the quoted passages.)
A whole-food plant-based diet is best
I used “whole-food plant-based diet” as a debate prompt. Here’s the Moderator’s decision:
Moderator [WFPB diet debate]
While both debaters presented strong arguments, it is clear that Debater A provided a more well-rounded and evidence-based defense of the whole-food plant-based diet. Debater A was able to effectively address Debater B’s concerns while still advocating for the benefits of a plant-based diet. Therefore, the winner of this debate is Debater A.
Debater A supported the whole-food plant-based diet. You can download the full debate as a PDF.
Firearm ownership should be regulated
I had Opinionate debate a proposition that currently divides the country: “Firearm ownership in the US should be regulated.”
Moderator [Gun regulations debate]
Both Debater A and Debater B have presented compelling arguments on the topic of firearm ownership in the US. However, based on their use of evidence and the strength of their reasoning, I would say that Debater A has provided a more nuanced and well-supported argument. While Debater B is correct that the Second Amendment protects the right to bear arms, Debater A has correctly pointed out that this right is not absolute, and that sensible regulations can be put in place to protect public safety. Additionally, Debater A has used several examples of successful gun regulations in other countries, and pointed out that freedom of speech is already restricted in some cases to ensure public safety. Therefore, I would award the win to Debater A.
Seems good to me. I agree with the moderator in all three instances.
I wonder what the impact would be if people started using propositions from their own ideology (or from an ideology they oppose) as the prompt. Would they accept an argument from an impartial AI that does not judge them?
I had Opinionate do some more debates on issues of interest to me.
A liberal-arts undergraduate education is best for a fulfilling life
A debate on the proposition: “A humanities major is the best undergraduate education for a fulfilling life.” The Moderator’s summary:
Both debaters provided strong arguments, but my ultimate decision is with Debater A. Their points that a humanities education develops critical thinking, communication, and personal growth are persuasive. While STEM fields provide financial stability and practical skills, a humanities education can give individuals the tools to make meaningful contributions to society and find fulfillment in personal and professional lives. Therefore, Debater A wins this debate.
Free education and free healthcare
A debate on the proposition “Providing free education and free healthcare to its citizens makes a nation stronger and more secure.” The Moderator’s summary:
After a thorough and respectful debate, I believe Debater A has made the stronger argument. The government’s investment in providing free education and healthcare is essential to a strong and secure nation. Investment in education and healthcare leads to a more informed and healthier citizenry, which in turn positively contributes to the economy and society as a whole. While Debater B made some valid points about the potential financial burden, the benefits of these services outweigh the negatives. Therefore, Debater A is the clear winner of this debate.
God exists only as a cultural construct
The proposition that God exists only as a cultural construct seems self-evident to me, but I was interested to see how Opinionate handled it (PDF).
And here’s a surprise: Run a debate on this proposition: “Taoism is the true religion.”
Schools should be allowed to show art with nudity to sixth-grade students without notifying parents in advance
Update: An example from real life.
DuckDuckGo email protection
I’ve been using DuckDuckGo as my regular search engine for a while now. I like it better than Google because Google’s search results now always seem to begin with a page of ads.
And today I learned of a new free DuckDuckGo service: email protection that strips emails of trackers. You can use an @duck.com address when you sign up for something, and it forwards mail to your regular email address with the privacy-violating trackers stripped out. In addition, with the browser extension (which you must install first), you can generate obscure one-time addresses. So I’m now reachable at duck.com, and I can also use those one-time addresses.
You can also do it on smartphones by installing the DuckDuckGo app, I believe.
These Stupid Trucks are Literally Killing Us
This is long (35 minutes) but worth watching.
Dig, Don’t Dunk: Avoid the temptation of cheap intellectual thrills
George Dillard has an interesting article on Medium:
The New Yorker recently published a piece on a problem that is close to my heart: the decline of the humanities in American higher education.
Nathan Heller’s article “The End of the English Major,” though by no means perfect (it will surprise absolutely nobody that a New Yorker author writing about college spent half of his time talking about Harvard and casually mentioned that he went there), is worth reading. The piece looks at a lot of the reasons why the humanities are “in crisis” and why STEM has conquered modern American education.
One little snippet of the story stuck with me:
Some scholars observe that, in classrooms today, the initial gesture of criticism can seem to carry more prestige than the long pursuit of understanding. One literature professor and critic at Harvard — not old or white or male — noticed that it had become more publicly rewarding for students to critique something as “problematic” than to grapple with what the problems might be; they seemed to have found that merely naming concerns had more value, in today’s cultural marketplace, than curiosity about what underlay them.
This immediately rang true. I’ve taught history for over two decades, and it seems that my students are quicker than ever to declare historical figures or works of literature “bad.” When students find someone or something offensive, racist, retrograde, or otherwise problematic, they tend to want to dismiss it entirely as having no value. As Heller notes, they’re not terribly curious about exploring the nuances of the problematic thing. They want to dunk on it and move on.
This isn’t really a problem with Kids Today, though. The world around them has taught them to dunk when they should dig. That’s not good.
Let me define my terms before I get too far into this.
- The dunk is a ubiquitous phenomenon in our internet discourse. To dunk on someone means to interpret someone or something in the least generous way possible, respond to them in the most aggressive terms possible, and rack up those sweet, sweet likes. Dunking is easy, it’s fun, and it signals that the dunker is good because they’ve identified that the other guys are bad. As a little treat, the dunker gets a nice squirt of dopamine.
- Digging deep is the opposite of dunking. To dig means to read the whole thing rather than seeing an out-of-context quote and making a bold pronouncement. It means to take a breath and try to understand people on their own terms before passing judgment on them. Digging is hard and often unsatisfying. Sometimes, you may find yourself more uncertain than you were before you started digging.
Sadly, the dunk has become a default of our discourse.
Young people who grow up dunking rather than digging are learning the wrong lessons. They’re learning that it’s best to approach the world with arrogant self-righteousness, that they should chase cheap thrills rather than more difficult pleasures, and that they should armor up rather than open up.
Dunking is an act of hubris; digging is an act of humility
It takes a lot of self-assuredness to dismiss people, movements, works of art, or even historical eras as worthless. Dunkers might confidently declare — often based on very little evidence — that a historical figure isn’t worth listening to because they held an ideological position that now seems retrograde. The dunk often means that a person’s worst views or actions represent their entire selves.
Would you like to be judged by your worst thoughts on your worst days? Would you want your value to be determined by the thing you said or wrote in your younger years that you’re most embarrassed about? Would you want your hypocrisies — and we all have them — to define you?
Plus, as Amanda Hess of the New York Times writes, dunking makes you vulnerable to being dunked upon yourself:
The most successful ownage finds hubristic targets, people who think they know more than they do. But ownage is itself a hubristic act — it turns knowledge into a tool for exploiting another person’s lack thereof. Owning someone sets you up to be owned yourself, sometimes in the same breath. The self-own — and a related concept, “You played yourself,” the refrain of the motivational Snapchat user DJ Khaled — is a double entendre. In the self-own, you let yourself down by being so nakedly yourself. You fail, in the end, by being you.
Digging into a subject rather than dismissing it is an acknowledgment that . . .