Archive for the ‘Software’ Category
The future of media with AI in the mix
Jim VandeHei writes at Axios:
Artificial intelligence will soon transform media on a scale and pace that rivals the internet two decades ago.
- Why it matters: The media companies that survive — and thrive — will be those that adapt quickly to fast-changing consumer needs.
We have spent months talking to the people building the new AI technologies, and reflecting on how they can help — or harm — your ability to get high-quality content you can trust.
- A lot of you ask about trust in media. So think of this column as prep for substantial changes coming to your news diet.
Eight transformations seem likely and are animating our thinking at Axios:
1. Expertise. The days of gaming social media algorithms are coming to an abrupt — and needed — end. Commoditized or general interest content will fade in value. Any company betting only on high traffic seems doomed. The demand for subject matter expertise will rise fast.
- Prepare for a world of fewer big, generic brands — but more and better niche companies aimed at your passions.
2. Trust. AI will rain a hellfire of fake and doctored content on the world, starting now. That’ll push readers to seek safer and trusted sources of news — directly instead of through the side door of social media.
- Advertisers will shift to safer, well-lit spaces, creating a healthy incentive for some publishers to get rid of the litter you see on their sites today. That shift is already happening.
3. Direct . . .
Twitter Demands Academics Who Won’t Pay $42k/Month Delete Any Twitter Data They Currently Have
Elon Musk is… Well, just read this Techdirt article by Mike Masnick. The article begins:
Elon Musk has insisted that “transparency is the key to trust” in rebuilding Twitter in his image. He says it all the time. But, of course, under Musk, Twitter has been significantly less transparent, choosing to skip its transparency reports, and generally close itself off. But one of the key methods for transparency on Twitter has long been its willingness to allow academic researchers to access its API and do research around Twitter and its users.
This is how, for example, we were able to learn that (contrary to widespread belief), Twitter’s moderation efforts actually favored conservatives (rather than suppressed them), and that the “bias” in its moderation efforts was against misinformation, not any political ideology.
Of course, Musk’s desperate efforts to poke the bird he saddled with massive debt until it makes money mean that he turned off nearly everyone’s access to Twitter’s API (including ours) and demanded a minimum of $42,000 per month from academics. That’s half a million dollars a year. For access to one company’s data. This is… not the kind of money that academic institutions have.
The whole thing seems deliberately designed to cut academics off from Twitter’s data and to be as opaque as possible, rather than transparent.
As if to put an exclamation point on that thinking, the latest is that Twitter is telling academic institutions that haven’t paid (i.e., basically all of them that used to use Twitter’s data for research) that they are required to delete all the data they collected in the past by the end of this month. As the i reports:
But in recent weeks, the company has been contacting researchers, asking them to pay $42,000 a month to access 0.3% of all the tweets posted to the platform – something researchers have previously said is totally unaffordable. Previous contracts for access to the data were set as low as a couple of hundred dollars a month.
An email, seen by the i, says researchers who don’t sign the new contract “will need to expunge all Twitter data stored and cached in your systems”. Researchers will be required to post screenshots “that showcase evidence of removal”. They have been given 30 days after their agreement expires to complete the process.
Now, in talking to people (both former Twitter employees and academic researchers) about this, they do say that the Twitter API contract has long had a clause regarding data deletion. But also, that it has never been used in this manner (only in cases where there were claims of misuse of the data), and that the demand to prove the data has been deleted is particularly egregious and petty.
But, really, it just highlights how little Elon is willing to have outside experts look into the details of how Twitter is working. It’s the opposite of transparency.
And, thus, Elon himself is . . .
Internet Use a Modifiable Dementia Risk Factor in Older Adults?
Batya Swift Yasgur writes in Medscape:
Self-reported, regular internet use, but not overuse, in older adults is linked to a lower dementia risk, new research suggests.
Investigators followed over 18,000 older individuals and found that regular internet use was associated with about a 50% reduction in dementia risk compared with their counterparts who did not use the internet regularly.
They also found that longer duration of regular internet use was associated with a reduced risk of dementia, although excessive daily internet usage appeared to adversely affect dementia risk.
“Online engagement can develop and maintain cognitive reserve — resiliency against physiological damage to the brain — and increased cognitive reserve can, in turn, compensate for brain aging and reduce the risk of dementia,” study investigator Gawon Cho, a doctoral candidate at New York University School of Global Public Health, New York City, told Medscape Medical News.
The study was published online May 3 in the Journal of the American Geriatrics Society.
Unexamined Benefits
Prior research has shown that older adult internet users have “better overall cognitive performance, verbal reasoning, and memory,” compared with nonusers, the authors note.
However, since this body of research consists of cross-sectional analyses and longitudinal studies with brief follow-up periods, the long-term cognitive benefits of internet usage remain “unexamined.”
In addition, despite . . .
Good use of ChatGPT to help emergency room doctors
I blogged earlier about how health professionals rate ChatGPT as showing more empathy than human doctors in conveying information to patients. That finding has now been put to practical use, as described by Josh Tamayo-Sarver, an ER doctor, in an article in Fast Company:
With explosive interest in ChatGPT and other Large Language Model programs, many AI evangelists are touting their use as a diagnostic tool in healthcare—or even broadly declaring it to be the future of medicine. As with many high tech innovations, proclamations like this can often overshadow small but crucial applications that are immediately in our grasp.
I say that as a working ER doctor who is using ChatGPT to help treat patients right now.
But not for diagnosis. As I recently wrote about and discussed on a recent podcast, ChatGPT works fairly well as a diagnostic assistant—but only if you feed it perfect information, and the actual patient has a classic presentation, which is rarely the case. (And you don’t mind a 50% success rate that often misses life-threatening conditions.)
A more immediate utility of ChatGPT in medicine was dramatized to me at around 3am during a recent night shift, treating a 96-year-old with dementia who had difficulty breathing because of fluid in her lungs. Adding to this challenge, her three children (all in their 70s) were also in the ER room.
Understandably concerned, they hovered around my staff and me, to the point where their constant barrage of requests was actually slowing down treatment. To compound the problem, they were absolutely certain what their mother needed.
“Nurse, our mother needs IV fluids to treat her dehydration. Please hurry!” All three of them repeated variations of that request to us for over an hour.
Small problem: Administering IV fluids would have made her much worse or even killed her.
“I know that you are concerned about your mom,” I tried explaining to them. “But she cannot breathe right now because she has pulmonary edema, which is fluid in her lungs. If I hydrate her with IV fluids, it will make her pulmonary edema worse and she might die. Once we get the fluid out of her lungs and she is breathing better, then we can worry about her being dehydrated.”
“But whenever she is sick, she just needs an IV because of dehydration,” the patient’s son insisted, adamant. “Why don’t you just give her some IV fluid? She will be better in no time.”
I tried to rephrase my explanation in multiple different ways, but judging by their blank expressions, none were resonating. This is actually a common situation in the ER. People do not wake up planning on an emergency that brings them to me in the dead of night, and are often in a decompensated emotional state.
To make matters worse, several other patients were in more immediate need of my attention.
Desperate for a solution, I went down the hall to my computer and fired up ChatGPT 4, typing in:
“Explain why you would not give IV fluids to someone with severe pulmonary edema and respiratory distress even though you might be concerned that the patient is dehydrated. Explain it in simple and compassionate terms so that a confused person who cares about their mother can understand.”
Within seconds, ChatGPT produced this: . . .
The argument in favor of piracy in an age of streaming
This thread gives a cogent argument for pirating movies, and the argument is made by a movie creator.
American Psychological Association Report Says That Media & Politicians Are Simply Wrong About Kids & Social Media; Media Then Lies About Report
The world has a superabundance of bad faith actors. Mike Masnick provides an example in his article in Techdirt:
What if the media and the politicians threw a moral panic about kids and social media… and the actual experts didn’t come along? The American Psychological Association has put out a thoughtful, nuanced study about kids and social media that suggests the hyperventilating we’ve heard is misplaced, and that there are some simple, common-sense approaches that parents can and should take to make sure their kids are having a healthy experience with social media.
But it seems that the media is so bought into the moral panic narrative, that they’re completely misrepresenting the study, claiming it supports the moral panic.
The core finding, similar to what we’ve been saying all along and supported by multiple other studies, is that social media is not inherently bad for kids. For the vast majority, it’s neutral or positive. There is a small percentage who seem to have issues with it, and we should focus our attention on dealing with those cases rather than pushing for things like outright bans. From the findings of the APA report:
Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.
Adolescents’ experiences online are affected by both 1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and 2) both visible and unknown features built into social media platforms.
Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.
Of course, if you’ve been paying attention lately, we’ve been talking a lot about the ongoing moral panic regarding kids and social media. We’re hearing over and over again that social media is dangerous for all kids. Full stop. We’ve pointed out repeatedly that the data and research on the issues do not support literally any of the claims that politicians (and the media) are making about the impact of social media on kids.
But neither the media nor politicians seem to much care about the facts here. Claiming that social media is bad for kids and “something must be done” appeases voters who have been sold this line of bullshit. And, of course, if you scratch the surface a little, it’s not difficult to find the legacy entertainment industry pulling strings behind the scenes. After all, they’ve always hated the internet, and they own the major TV news providers as well, so it’s somehow easy for them to present nonsense as fact and have everyone buy it.
For example, around the same time this report came out . . .
Good use of AI: How generative AI is building better antibodies
Ewen Callaway writes in Nature:
At the height of the pandemic, researchers raced to develop some of the first effective treatments against COVID-19: antibody molecules isolated from the blood of people who had recovered from the disease.
Now, scientists have shown that generative artificial intelligence (AI) can provide a shortcut through some of this laborious process, suggesting sequences that boost the potency of antibodies against viruses such as SARS-CoV-2 and ebolavirus. A study published last week in Nature Biotechnology is part of growing efforts to apply ‘neural networks’ similar to those behind the ChatGPT AI platform to antibody design.
Antibody drugs for diseases including breast cancer and rheumatoid arthritis bring in more than US$100 billion in worldwide sales each year. Researchers hope that generative AI — neural networks that can create text, images and other content on the basis of learnt patterns — will speed up development and help to unlock antibody drugs for targets that have resisted conventional design approaches.
“There’s intense interest in discovering and engineering antibodies, and how one makes antibodies better,” says Peter Kim, a biochemist at Stanford University in California, who co-led the Nature Biotechnology paper.
Immune weapons
Antibodies are among the immune system’s key weapons against infection. The proteins have become a darling of the biotechnology industry, in part because they can be engineered to attach to almost any protein imaginable to manipulate its activity. But generating antibodies with useful properties and improving on these involves “a lot of brute-force screening”, says Brian Hie, a computational biologist at Stanford who also co-led the study.
To see whether generative AI tools could cut out some of the grunt work, Hie, Kim and their colleagues used neural networks called protein language models. These are similar to the ‘large language models’ that form the basis of tools such as ChatGPT. But instead of being fed vast volumes of text, protein language models are trained on tens of millions of protein sequences.
Other researchers have used such models to design completely new proteins, and to help predict the structure of proteins with high accuracy. Hie’s team used a protein language model — developed by researchers at Meta AI, a part of tech giant Meta based in New York City — to suggest a small number of mutations for antibodies.
The model was trained on only a few thousand antibody sequences, out of the nearly 100 million protein sequences it learnt from. Despite this, a surprisingly high proportion of the model’s suggestions boosted the ability of antibodies against SARS-CoV-2, ebolavirus and influenza to bind to their targets. . . .
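To make the idea concrete, here is a minimal sketch, in Python, of the kind of filtering a protein language model enables. It is not the authors’ actual pipeline: `predict_residue_probs` below is a hypothetical stand-in for a real protein language model (such as the Meta AI models mentioned above), and the threshold is arbitrary. The point is only that the model narrows thousands of possible single-residue substitutions down to a short list worth testing in the lab.

```python
from typing import Callable, Dict, List, Tuple

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids


def suggest_mutations(
    sequence: str,
    predict_residue_probs: Callable[[str, int], Dict[str, float]],
    margin: float = 1.0,
) -> List[Tuple[int, str, str, float]]:
    """Propose single-point mutations that the language model scores as
    more plausible than the residue already present.

    predict_residue_probs(sequence, i) is assumed to return the model's
    probability for each amino acid at position i (e.g. from a
    masked-language-model forward pass). It is a placeholder here, not a
    real API.
    """
    suggestions = []
    for i, wild_type in enumerate(sequence):
        probs = predict_residue_probs(sequence, i)
        for aa in AMINO_ACIDS:
            if aa != wild_type and probs.get(aa, 0.0) > margin * probs.get(wild_type, 0.0):
                suggestions.append((i, wild_type, aa, probs[aa]))
    # Highest-scoring suggestions first; these few candidates, rather than
    # all 19 x len(sequence) possibilities, would go on to lab screening.
    return sorted(suggestions, key=lambda s: s[3], reverse=True)
```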
Fedi.garden lists good servers in the Fediverse
I continue to enjoy Mastodon a lot. My participation in Facebook, for example, has declined sharply. I never used Twitter much, so that didn’t change: I still don’t use it much. But I find Mastodon both informative and enjoyable — and also remarkably free of snark, anger, and hostility. (If you want to try it, I collected in one post everything I’ve learned and discovered.)
To begin, you choose a server. This is not really a major decision because it is easy to move your account to a different server. In fact, I started on mastodon.party, but became uneasy when the admin went AWOL and didn’t respond. I didn’t know what was up, but since changing servers is easy, I moved to mstdn.ca and took with me the list of people I was following, the people following me, the bookmarks I had collected, and so on. (The post linked above explains how to move from one server to another.)
And the particular server you pick is in general not important because a server is federated with other servers — so, for example, many of those I follow (and those who follow me) are not on the server I use. That’s the power of federation: the servers are linked. A somewhat smaller server is good because the admin is likely to be more responsive.
Note that a “bad” server — say, the equivalent of a Nazi bar — can be defederated so that it is, in effect, cut off from the Fediverse: it cannot communicate with the servers in the Fediverse. One of the jobs of an admin is to maintain what amounts to a blacklist of “bad” servers. This capability is one reason the Mastodon community is so pleasant: bad actors have been exiled.
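Mechanically, defederation is not much more complicated than a domain blocklist that a server consults before accepting traffic from elsewhere in the Fediverse. Here is a toy sketch (made-up domain names; this is not Mastodon’s actual code):

```python
from urllib.parse import urlparse

# Maintained by the server admin; real Mastodon instances keep their
# domain blocks in a database and can share blocklists with each other.
BLOCKED_DOMAINS = {"nazibar.example", "spamfarm.example"}


def accept_activity(actor_url: str) -> bool:
    """Return True if an incoming post or follow from this actor's server
    should be federated, False if its domain has been defederated."""
    domain = (urlparse(actor_url).hostname or "").lower()
    return domain not in BLOCKED_DOMAINS


print(accept_activity("https://mstdn.ca/users/someone"))       # True
print(accept_activity("https://nazibar.example/users/troll"))  # False
```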
Fedi.garden is a directory of good servers — look at the About page at the link for more information. From my Mastodon post linked above:
One useful tool for finding an instance is the instance search page. Or take a look at the official Mastodon server list at joinmastodon.org. (Neither of those two lists shows all servers. Those in the second list have committed to the Mastodon Server Covenant.) Or look at the site Fedi.garden. (Its About page explains more.) See also this post on the importance of choosing a good Mastodon instance/server.
TheTitanBorn offers this thought on choosing an instance:
You don’t have to choose a big server. Bigger server doesn’t mean better experience.
On big servers, the moderators have to spend more time dealing with reports and can’t work with too many queries from users.
Admins and moderators on small servers usually have more time for their members, so the community is closer and friendlier.
There are no ads on Mastodon — another thing that makes the experience enjoyable — but obviously running a server costs money, and so I (and, I hope, most other members) contribute a modest amount to keep the server going. When you join an instance, you can look for information to make a regular monthly contribution through PayPal or buymeacoffee or the like.
If you do join, I’m @leisureguy@mstdn.ca and I’d like to hear from you.
Will A.I. Become the New McKinsey?
Ted Chiang writes in the New Yorker:
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.
Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.
Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I’m not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.
As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn’t really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?
Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one.
Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I’ve become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don’t, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.
You may remember that, in the run-up to the 2016 election, . . .
Personal AI you can interact with.
I would say that we are well beyond the Turing test at this point. The NY Times has an article about Pi, “an AI companion.”
The article is interesting, and so is Pi. Give it a spin.
Thinking of having a beer? This AI-generated commercial will cure you of that.
Watch the commercial in this article. AI still has a way to go before it is really persuasive.
Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely
There seems to be a running thread here about the dangers posed by corporations. I believe that is because the incentives of capitalism reduce to a single narrow measure: profit. That measure does not take into account quality of life, quality of the environment, survivability of civilization, or anything else — just profit. And that leads corporations to dark places and bad actions.
Benj Edwards writes for Ars Technica:
According to the New York Times, AI pioneer Dr. Geoffrey Hinton has resigned from Google so he can “speak freely” about potential risks posed by AI. Hinton, who helped create some of the fundamental technology behind today’s generative AI systems, fears that the tech industry’s drive to develop AI products could result in dangerous consequences—from misinformation to job loss or even a threat to humanity.
“Look at how it was five years ago and how it is now,” the Times quoted Hinton as saying. “Take the difference and propagate it forwards. That’s scary.”
Hinton’s resume in the field of artificial intelligence extends back to 1972, and his accomplishments have influenced current practices in generative AI. In 1986, Hinton, David Rumelhart, and Ronald J. Williams popularized backpropagation, a key technique for training neural networks that is used in today’s generative AI models. In 2012, Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet, which is commonly hailed as a breakthrough in machine vision and deep learning, and it arguably kickstarted our current era of generative AI. In 2018, Hinton won the Turing Award, which some call the “Nobel Prize of Computing,” along with Yoshua Bengio and Yann LeCun.
Hinton joined Google in 2013 after Google acquired Hinton’s company, DNNresearch. His departure a decade later marks a notable moment for the tech industry as it simultaneously hypes and forewarns about the potential impact of increasingly sophisticated automation systems. For instance, the release of OpenAI’s GPT-4 in March led a group of tech researchers to sign an open letter calling for a six-month moratorium on developing new AI systems “more powerful” than GPT-4. However, some notable critics think that such fears are overblown or misplaced.
Hinton did not sign that open letter, but he believes that intense competition between tech giants like Google and Microsoft could lead to a global AI race that can only be stopped through international regulation. He emphasizes collaboration between leading scientists to prevent AI from becoming uncontrollable.
“I don’t think [researchers] should scale this up more until they have understood whether they can control it,” he told the Times.
Hinton is also worried about . . .
More on Notesnook
I downloaded and installed the Notesnook app on my MacBook Air and have been using it since. I like it a lot. One recent discovery:
The Note entry box has the usual formatting toolbar (bold, italic, underline, style (paragraph, heading 1, heading 2, etc.), font, bulleted list, etc.). At the left of that toolbar is a + and if you click the + you see a popup menu.

Click the one you want and try it out — that is, learn by trying things. For example, I found that with the Tab key, I can indent a task to make it a subtask of the task above.
The app is extremely easy to use, and it offers various ways to organize Notes (and thus ideas). Notes can be collected in several independent ways:
- Notebooks
- Tags
- Color (of Note name)
- Favorites
I started with the free version, but I am now trying the subscription version for a month. Right now, I think I’ll continue the subscription. It’s a useful tool.
AI may replace doctors before truck drivers

Take a look at this post by Kevin Drum. The example question with two answers, one from a doctor and one from a chatbot, shows why. Health professionals rated the answers (without knowing their source) separately on two criteria: quality and empathy.
I assume Bluesky hopes you won’t read their terms of service agreement, just accept it
Check out the Twitter thread below, which quotes (with screengrabs) from Bluesky’s terms of service, which terms include:
‘perpetual’ & ‘irrevocable’ license to all your content (posts, name, likeness, pics)
BlueSky can delete your account for any reason, but may refuse to delete it if you ask…
All disputes = individual arbitration
Screenshots of BlueSky content are not allowed
BlueSky’s Privacy Policy says that there’s no such thing as good info security; if anything happens to your data, it’s 100% not their fault
They ‘may’ ‘attempt’ to notify you of a breach
Reminder: if you want to argue this, it’s individual private arbitration. They pick (and pay) the arbitrators.
Fun fact: corporations choose and pay the arbitrators used for arbitration, and arbitrators overwhelmingly decide in favor of the corporation. That may just be a persistent coincidence.
Interesting (and useful) software: Notesnook
Someone on Mastodon mentioned Notesnook as a good Evernote replacement, Notesnook being open-source and fully encrypted and working across multiple devices. It has a free version (with less functionality than the subscription version), so I started using the free version.
I first used it in my browser, but after making a few notebooks and several notes, I decided to subscribe on a monthly basis to explore it in more depth. I figure that I can discontinue it at any point if it’s not the thing for me.
I like it quite a bit as a way to store in an organized way notes I make on the fly. I have notebooks (so far) for Politics, Cooking (which includes food notes as well as recipes), Things to be done (notes on movies to see (separate notes for Netflix, Prime Video, Britbox, etc.), books to read, and so on), Science/Tech (notes on interesting developments), and so on.
A note can be assigned to multiple notebooks — thus a note on climate change might be found in Politics and also in Science/Tech.
It really is a nice little program. Try the free version and if it seems useful, try a subscription for a month or two. I pay using PayPal, so it’s easy to discontinue the payments if I want to drop out.
Update: A new discovery

I just learned how to insert a task list (checkboxes) into a Note — and (as they say) so much more: outline list, horizontal rule, quote, image, etc.
The entry box for a Note has a formatting menu at the top with the usual options: bold, italic, underline, style (paragraph, heading 1, heading 2, etc.), font, bulleted list, etc.
At the left of that toolbar is a + and if you click the + you see the popup menu shown at the left. Click one of those options, and that block is inserted into the Note.
You can play around with those to learn more about how they work. For example, in the Task List, you can press Tab to make a Task a subtask of the Task above. Same with Outline.
I like this app the more I use it. (I am working with the app I downloaded and installed on my Mac.)
The app allows you to organize Notes in a variety of ways that are independent of each other:
- Notebooks
- Tags
- Color (of Note name)
- Favorites
How to begin
I still like the app a lot. With some experience, I would recommend starting in this way:
- Read over the app description
- Download the app and install it using this guide.
- Spend a while trying it out. Check out the online help docs.
- The free version is limited as to the number of notebooks and tags and the like, so if you find it of interest after playing with it a bit, start a free trial of the Premium version. The Premium version has more features and capabilities. If you cancel within 30 days, no charge. It took me less than a week to decide to stick with it.
To cancel your subscription, click the Settings “gear” icon at the bottom left corner of the Notesnook screen. The first items on the popup list are about your subscription and include an easy way to cancel.
The websites used by ChatGPT
I blogged about this Washington Post article earlier, but this gift link lets you use the interactive features of the article.
Inside the secret list of websites that make AI like ChatGPT sound smart
Kevin Schaul, Szu Yu Chen, and Nitasha Tiku report in the Washington Post:
UPDATE: Here’s a gift link to the article, so the interactive features will work. /update
AI chatbots have exploded in popularity over the past four months, stunning the public with their awesome abilities, from writing sophisticated term papers to holding unnervingly lucid conversations.
Chatbots cannot think like humans: They do not actually understand what they say. They can mimic human speech because the artificial intelligence that powers them has ingested a gargantuan amount of text, mostly scraped from the internet.
[Big Tech was moving cautiously on AI. Then came ChatGPT.]
This text is the AI’s main source of information about the world as it is being built, and it influences how it responds to users. If it aces the bar exam, for example, it’s probably because its training data included thousands of LSAT practice sites.
Tech companies have grown secretive about what they feed the AI. So The Washington Post set out to analyze one of these data sets to fully reveal the types of proprietary, personal, and often offensive websites that go into an AI’s training data.
To look inside this black box, we analyzed Google’s C4 data set, a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA. (OpenAI does not disclose what datasets it uses to train the models backing its popular chatbot, ChatGPT.)
The Post worked with researchers at the Allen Institute for AI on this investigation and categorized the websites using data from Similarweb, a web analytics company. About a third of the websites could not be categorized, mostly because they no longer appear on the internet. Those are not shown.
We then ranked the remaining 10 million websites based on how many “tokens” appeared from each in the data set. Tokens are small bits of text used to process disorganized information — typically a word or phrase.
Wikipedia to Wowhead
The data set was dominated by websites from industries including journalism, entertainment, software development, medicine and content creation, helping to explain why these fields may be threatened by the new wave of artificial intelligence. The three biggest sites were patents.google.com (No. 1), which contains text from patents issued around the world; wikipedia.org (No. 2), the free online encyclopedia; and scribd.com (No. 3), a subscription-only digital library. Also high on the list: b-ok.org (No. 190), a notorious market for pirated e-books that has since been seized by the U.S. Justice Department. At least 27 other sites identified by the U.S. government as markets for piracy and counterfeits were present in the data set.
Some top sites seemed arbitrary, like wowhead.com (No. 181), a World of Warcraft player forum; thriveglobal.com (No. 175), a product for beating burnout founded by Arianna Huffington; and at least 10 sites that sell dumpsters, including dumpsteroid.com (No. 183), that no longer appear accessible. Others raised significant privacy concerns. Two sites in the top 100, coloradovoters.info (No. 40) and flvoters.com (No. 73), had privately hosted copies of state voter registration databases. Though voter data is public, the models could use this personal information in unknown ways. . .
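The ranking method the Post describes (count how many tokens each website contributed to the data set, then sort) is simple to picture in code. Here is a minimal sketch with a crude whitespace tokenizer and made-up documents; real pipelines use subword tokenizers, and this is not the Post’s actual analysis.

```python
from collections import Counter
from urllib.parse import urlparse


def rank_sites_by_tokens(documents):
    """Rank websites by how many tokens they contribute to a corpus.

    `documents` is an iterable of (url, text) pairs, like entries in the
    C4 snapshot. Splitting on whitespace is a simplification; the
    ranking idea is the same with a subword tokenizer.
    """
    token_counts = Counter()
    for url, text in documents:
        domain = urlparse(url).hostname or "unknown"
        token_counts[domain] += len(text.split())
    return token_counts.most_common()


# Made-up example documents:
docs = [
    ("https://en.wikipedia.org/wiki/Antibody",
     "An antibody is a large Y-shaped protein used by the immune system"),
    ("https://patents.google.com/patent/US0000000",
     "A method and apparatus for doing something useful"),
]
for domain, n_tokens in rank_sites_by_tokens(docs):
    print(domain, n_tokens)
```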
A collection of podcasts on AI hype
Per Axbom, whom I follow on Mastodon, has started a collection of podcasts about AI hype. The podcasts are from various sources, and he will add new podcasts to the collection as he discovers them. Right now there are 10 podcasts.