Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Facial Recognition Failures Are Locking People Out of Unemployment Systems

leave a comment »

Todd Feathers writes in Vice:

People around the country are furious after being denied their unemployment benefits due to apparent problems with facial recognition technology that claims to prevent fraud.

Unemployment recipients have been complaining for months about the identity verification service ID.me, which uses a combination of biometric information and official documents to confirm that applicants are who they claim to be. The complaints reached another crescendo this week after Axios published a “deep dive” article about the threat of unemployment fraud based on statistics provided to the outlet by ID.me.

Some unemployment applicants have said that ID.me’s facial recognition models fail to properly identify them (generally speaking, facial recognition technology is notoriously less accurate for women and people of color). And after their applications were put on hold because their identity couldn’t be verified, many should-be beneficiaries have had to wait days or weeks to reach an ID.me “trusted referee” who could confirm what the technology couldn’t.

On Twitter, there are dozens of complaints about ID.me per day, and local news articles all over the country have detailed the problem over the course of months. In California, 1.4 million unemployment beneficiary accounts were abruptly suspended on New Year’s Eve and the beneficiaries were required to re-verify their identity using ID.me, a process which many found difficult and resulted in them waiting for weeks to reactivate their accounts while they struggled to make ends meet.

In Colorado, benefit recipients who had no problem establishing their identity before ID.me took over were suddenly rejected and went months without receiving the payments they were eligible for.

The story is similar in Florida, North Carolina, Pennsylvania, Arizona, and many other states.

ID.me CEO Blake Hall told Motherboard that the company’s facial recognition technology does one-to-one matching—comparing one face against a picture of that same face (from a driver’s license, say)—whereas other applications of facial recognition attempt to find a match for a face in a large dataset of faces, known as one-to-many matching.

“The algorithms used for Face Match operate ~99.9% efficacy,” Hall wrote in an email to Motherboard. “There is in fact no relationship between skin tone and Face Match failure on a 1:1 basis” according to a regression analysis the company performed.

That doesn’t mesh with the experiences being shared on Twitter by people like Tim Weaver, a gig economy worker in Las Vegas who was suddenly cut off from his unemployment benefits in late March after ID.me failed to identify him.

Weaver told Motherboard that when he attempted to pass ID.me’s facial recognition test he held a phone in front of him in the instructed position but “it rejected it, didn’t give us a reason, just rejected it. It rejected it three times, and then it locked me out of the system.”

Weaver said he attempted to contact the company’s customer support through its chat feature, which claims to provide assistance 24 hours a day, seven days a week. He tried numerous times at all hours of the day. He tried contacting the state of Nevada for help, but the employees there directed him back to ID.me.

This went on for several weeks, Weaver said, until he tweeted a scathing criticism of the company, which then reached out and—after several more frustrating days—verified Weaver’s identity.

Weaver went for three weeks without receiving his benefit. “I couldn’t pay bills,” he said. “Luckily I had enough food saved up so I didn’t have to worry about that. It’s just ridiculous.”

In his statement to Motherboard, Hall said that facial recognition failures are not a problem with the technology but with . . .

Continue reading. There’s more.

This is bad, and the company is taking no responsibility. Welcome to dystopia.
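A note on the technology: “one-to-one matching” just means comparing two face embeddings (numeric vectors produced by a model) and accepting the match if they are similar enough. Here is a minimal sketch in Python; this is my own illustration, with invented names and an invented threshold, not ID.me’s actual pipeline:

import numpy as np

# Sketch only: assumes some face-recognition model has already turned
# each photo into a fixed-length embedding vector. The function names
# and the threshold below are invented for illustration.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(selfie_emb, id_photo_emb, threshold=0.6):
    # 1:1 matching: is this selfie the same person as this single ID photo?
    return cosine_similarity(selfie_emb, id_photo_emb) >= threshold

def identify_one_to_many(probe_emb, gallery):
    # 1:N matching: which entry in a whole gallery best matches this face?
    return max(gallery, key=lambda name: cosine_similarity(probe_emb, gallery[name]))

Note where the trouble enters: even in the “easier” 1:1 mode, everything depends on the embedding model and the fixed threshold. If the model produces noisier embeddings for some demographic groups, those users fall below the threshold more often, and each failure is a lockout.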

Written by Leisureguy

18 June 2021 at 1:13 pm

Phone Network Encryption Was Deliberately Weakened

leave a comment »

Lorenzo Franceschi-Bicchierai writes in Vice:

A weakness in the algorithm used to encrypt cellphone data in the 1990s and 2000s allowed hackers to spy on some internet traffic, according to a new research paper.

The paper has sent shockwaves through the encryption community because of what it implies: The researchers believe that the mathematical probability of the weakness being introduced by accident is extremely low. Thus, they speculate that a weakness was intentionally put into the algorithm. After the paper was published, the group that designed the algorithm confirmed this was the case.

Researchers from several universities in Europe found that the encryption algorithm GEA-1, which was used in cellphones when the industry adopted GPRS standards in 2G networks, was intentionally designed to include a weakness that at least one cryptography expert sees as a backdoor. The researchers said they obtained two encryption algorithms, GEA-1 and GEA-2, which are proprietary and thus not public, “from a source.” They then analyzed them and realized they were vulnerable to attacks that allowed for decryption of all traffic.

When trying to reverse-engineer the algorithm, the researchers wrote that (to simplify) they tried to design a similar encryption algorithm using a random number generator often used in cryptography and never came close to creating an encryption scheme as weak as the one actually used: “In a million tries we never even got close to such a weak instance,” they wrote. “This implies that the weakness in GEA-1 is unlikely to occur by chance, indicating that the security level of 40 bits is due to export regulations.”

Researchers dubbed the attack “divide-and-conquer,” and said it was “rather straightforward.” In short, the attack allows someone who can intercept cellphone data traffic to recover the key used to encrypt the data and then decrypt all traffic. The weakness in GEA-1, the oldest algorithm, developed in 1998, is that it provides only 40-bit security. That’s what allows an attacker to get the key and decrypt all traffic, according to the researchers.

A spokesperson for the organization that designed the GEA-1 algorithm, the European Telecommunications Standards Institute (ETSI), admitted that the algorithm contained a weakness, but said it was introduced because the export regulations at the time did not allow for stronger encryption.

“We followed regulations: we followed export control regulations that limited the strength of GEA-1,” a spokesperson for ETSI told Motherboard in an email. . .

Continue reading.
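For perspective on what “40-bit security” means, here is a back-of-envelope calculation. This is my own arithmetic, deliberately cruder than the paper’s divide-and-conquer attack: GEA-1’s key is nominally 64 bits, but the flaw collapses the effective search to about 2^40 candidates. The guess rate below is an assumption chosen purely for illustration.

# Rough cost of exhausting a 40-bit effective key space.
keyspace = 2 ** 40                  # ~1.1 trillion candidate keys
guesses_per_second = 1_000_000_000  # assumed: about 1e9 guesses/s
minutes = keyspace / guesses_per_second / 60
print(f"{keyspace:,} keys, about {minutes:.0f} minutes to try them all")
# prints: 1,099,511,627,776 keys, about 18 minutes

By contrast, a full 64-bit search on the same assumed hardware would take roughly 585 years, which is the whole point of the weakness.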

Written by Leisureguy

17 June 2021 at 3:36 pm

How Software Is Eating the Car

leave a comment »

Robert N. Charette writes in IEEE Spectrum:

Predictions of lost global vehicle production caused by the ongoing semiconductor shortage continue to rise. In January, analysts forecast that 1.5 million fewer vehicles would be produced as a result of the shortage; by April that number had steadily climbed to more than 2.7 million units, and by May, to more than 4.1 million units.

The semiconductor shortage has not only underscored the fragility of the automotive supply chain, but also placed an intense spotlight on the auto industry’s reliance on the dozens of concealed computers embedded throughout vehicles today.

“No other industry is undergoing as rapid technological change as the auto industry,” says Zoran Filipi, Chair of the Department of Automotive Engineering at Clemson University’s International Center for Automotive Research. “This is driven by the need to address impending, evermore stringent CO2 and criteria emission regulations, while sustaining unprecedented rate of progress with development of automation and infotainment, and meeting the customer expectations regarding performance, comfort, and utility.” 

The coming years will see even greater change, as more auto manufacturers commit to phasing out their internal combustion engine (ICE) powered vehicles to meet global climate-change targets by replacing them with electric vehicles (EVs) that will eventually be capable of autonomous operation.

The past decade of ICE vehicle development illustrates the rapid progress it has made, as well as where it is heading.

“Once, software was a part of the car. Now, software determines the value of a car,” notes Manfred Broy, emeritus professor of informatics at Technical University, Munich and a leading expert on software in automobiles. “The success of a car depends on its software much more than the mechanical side.” Nearly all vehicle innovations by auto manufacturers, or original equipment manufacturers (OEMs) as they are called by industry insiders, are now tied to software, he says.

Ten years ago, only premium cars contained 100 microprocessor-based electronic control units (ECUs) networked throughout the body of a car, executing 100 million lines of code or more. Today, high-end cars like the BMW 7-series with advanced technology like advanced driver-assist systems (ADAS) may contain 150 ECUs or more, while pick-up trucks like Ford’s F-150 top 150 million lines of code. Even low-end vehicles are quickly approaching 100 ECUs and 100 million lines of code as more features that were once considered luxury options, such as adaptive cruise control and automatic emergency braking, are becoming standard.

Additional safety features that have been mandated since 2010 like electronic stability control, backup cameras, and automatic emergency calling (eCall) in the EU, as well as more stringent emission standards that ICE vehicles can only meet using yet more innovative electronics and software, have further driven ECU and software proliferation.

Consulting firm Deloitte Touche Tohmatsu Limited estimates that as of 2017, some 40% of the cost of a new car can be attributed to semiconductor-based electronic systems, a cost doubling since 2007. It estimates this total will approach 50% by 2030. The company further predicts that each new car today has about $600 worth of semiconductors packed into it, consisting of up to 3,000 chips of all types.

Totaling the number of ECUs and lines of software only hints at the intricate electronic orchestration and software choreography found in vehicles today. By observing how they perform together, the extraordinary complexity that is meant to be invisible from a driver’s perspective begins to emerge. New safety, comfort, performance and entertainment features, the commercial imperative to offer scores of options to buyers resulting in a multiplicity of variants for each make and model, and the shift from gasoline and human drivers to electric and artificially intelligent drivers and the hundreds of millions of lines of new code that will need to be written, checked, debugged and secured against hackers, are making cars into supercomputers on wheels and forcing the auto industry to adapt. But can it? 

Features and Variants Drive Complexity

The drive over the last two decades to provide more safety and entertainment features has transformed automobiles from mere conveyances to mobile computing centers. Instead of . . .

Continue reading. There’s much, much more, including an interesting chart showing what jobs are assigned to microprocessors.

Written by Leisureguy

14 June 2021 at 1:00 pm

Cool pattern generator

leave a comment »

This little pattern generator is fun to play with. Click “Shuffle” to see new random patterns, which you can modify by fiddling with the various controls and options.

Update: Turns out that there are a lot of pattern generators, including (for example) plaid generators.

Written by Leisureguy

14 June 2021 at 9:25 am

Google AI beats humans at designing computer chips

leave a comment »

Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point it quickly outstrips humanity (cf. the Singularity trilogy — Singularity Sky, Iron Sunrise, and Accelerando, by Charles Stross). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast: 28 minutes.

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?
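For those curious what “placing components” looks like as a computational problem, it is combinatorial optimization: minimize total wire length (among other objectives) over an astronomical number of arrangements. Here is a toy sketch of my own using simulated annealing, a classical stand-in; Google’s system uses reinforcement learning, and real placement involves millions of cells and many more constraints:

import math, random

# Toy placement by simulated annealing. 20 "cells" on a 10x10 grid,
# 40 random two-pin "nets"; cost is total Manhattan wire length.
random.seed(0)
N, GRID = 20, 10
nets = [(random.randrange(N), random.randrange(N)) for _ in range(40)]
pos = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N)

def wirelength(p):
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

temp, cost = 5.0, wirelength(pos)
for step in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    pos[i], pos[j] = pos[j], pos[i]      # propose swapping two cells
    new = wirelength(pos)
    if new < cost or random.random() < math.exp((cost - new) / temp):
        cost = new                       # accept (sometimes uphill, early on)
    else:
        pos[i], pos[j] = pos[j], pos[i]  # revert the swap
    temp *= 0.9997                       # cool the temperature
print("final wire length:", cost)

Real tools are enormously more sophisticated, but the shape of the problem (propose, score, accept or revert) is why placement takes humans months and why learned methods can help.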

Written by Leisureguy

11 June 2021 at 3:14 pm

What Really Happened When Google Ousted Timnit Gebru

leave a comment »

Tom Simonite writes in Wired:

ONE AFTERNOON IN late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying.

Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the coleader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper—or else remove her name from its list of authors, along with those of several other members of her team.

The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software—most famously exemplified by a system called GPT-3—that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides. The paper had sailed through the company’s internal review process and had been submitted to a prominent conference. But Kacholia now said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act. Her firm deadline was the day after Thanksgiving.

Gebru’s distress turned to anger as that date drew closer and the situation turned weirder. Kacholia gave Gebru’s manager, Samy Bengio, a document listing the paper’s supposed flaws, but told him not to send it to Gebru, only to read it to her. On Thanksgiving Day, Gebru skipped some festivities with her family to hear Bengio’s recital. According to Gebru’s recollection and contemporaneous notes, the document didn’t offer specific edits but complained that the paper handled topics “casually” and painted too bleak a picture of the new technology. It also claimed that all of Google’s uses of large language models were “engineered to avoid” the pitfalls that the paper described.

Gebru spent Thanksgiving writing a six-page response, explaining her perspective on the paper and asking for guidance on how it might be revised instead of quashed. She titled her reply “Addressing Feedback from the Ether at Google,” because she still didn’t know who had set her Kafkaesque ordeal in motion, and sent it to Kacholia the next day.

On Saturday, Gebru set out on a preplanned cross-country road trip. She had reached New Mexico by Monday, when Kacholia emailed to ask for confirmation that the paper would either be withdrawn or cleansed of its Google affiliations. Gebru tweeted a cryptic reproach of “censorship and intimidation” against AI ethics researchers. Then, on Tuesday, she fired off two emails: one that sought to end the dispute, and another that escalated it beyond her wildest imaginings.

The first was addressed to Kacholia and offered her a deal: Gebru would remove herself from the paper if Google provided an account of who had reviewed the work and how, and established a more transparent review process for future research. If those conditions weren’t met, Gebru wrote, she would leave Google once she’d had time to make sure her team wouldn’t be too destabilized. The second email showed less corporate diplomacy. Addressed to a listserv for women who worked in Google Brain, the company’s most prominent AI lab and home to Gebru’s Ethical AI team, it accused the company of “silencing marginalized voices” and dismissed Google’s internal diversity programs as a waste of time.

Relaxing in an Airbnb in Austin, Texas, the following night, Gebru received a message with a 😮 from one of her direct reports: “You resigned??” In her personal inbox she then found an email from Kacholia, rejecting Gebru’s offer and casting her out of Google. “We cannot agree as you are requesting,” Kacholia wrote. “The end of your employment should happen faster than your email reflects.” Parts of Gebru’s email to the listserv, she went on, had shown “behavior inconsistent with the expectations of a Google manager.” Gebru tweeted that she had been fired. Google maintained—and still does—that she resigned.

Gebru’s tweet lit the fuse on a controversy that quickly inflamed Google. The company has been dogged in recent years by accusations from employees that it mistreats women and people of color, and from lawmakers that it wields unhealthy technological and economic power. Now Google had expelled a Black woman who was a prominent advocate for more diversity in tech, and who was seen as an important internal voice for greater restraint in the helter-skelter race to develop and deploy AI. One Google machine-learning researcher who had followed Gebru’s writing and work on diversity felt the news of her departure like a punch to the gut. “It was like, oh, maybe things aren’t going to change so easily,” says the employee, who asked to remain anonymous because they were not authorized to speak by Google management.

Dean sent out a message urging Googlers to ignore Gebru’s call to disengage from corporate diversity exercises; Gebru’s paper had been subpar, he said, and she and her collaborators had not followed the proper approval process. In turn, Gebru claimed in tweets and interviews that she’d been felled by a toxic cocktail of racism, sexism, and censorship. Sympathy for Gebru’s account grew as the disputed paper circulated like samizdat among AI researchers, many of whom found it neither controversial nor particularly remarkable. Thousands of Googlers and outside AI experts signed a public letter castigating the company.

But Google seemed to double down. Margaret Mitchell, the other coleader of the Ethical AI team and a prominent researcher in her own right, was among the hardest hit by Gebru’s ouster. The two had been a professional and emotional tag team, building up their group—which was one of several that worked on what Google called “responsible AI”—while parrying the sexist and racist tendencies they saw at large in the company’s culture. Confident that those same forces had played a role in Gebru’s downfall, Mitchell wrote an automated script to retrieve notes she’d kept in her corporate Gmail account that documented allegedly discriminatory incidents, according to sources inside Google. On January 20, Google said Mitchell had triggered an internal security system and had been suspended. On February 19, she was fired, with Google stating that it had found “multiple violations of our code of conduct, as well as of our security policies, which included exfiltration of confidential, business-sensitive documents.”

Google had now fully decapitated its own Ethical AI research group. The long, spectacular fallout from that Thanksgiving ultimatum to Gebru left countless bystanders wondering: Had one paper really precipitated all of these events?

The story of what actually happened in the lead-up to Gebru’s exit from Google reveals a more tortured and complex backdrop. It’s the tale of a gifted engineer who was swept up in the AI revolution before she became one of its biggest critics, a refugee who worked her way to the center of the tech industry and became determined to reform it. It’s also about a company—the world’s fifth largest—trying to regain its equilibrium after four years of scandals, controversies, and mutinies, but doing so in ways that unbalanced the ship even further.

Beyond Google, the fate of Timnit Gebru lays bare something even larger: . . .

Continue reading. There’s much more.

Written by Leisureguy

9 June 2021 at 10:58 am

Apple’s tightly controlled App Store is teeming with scams

leave a comment »

Reed Albergotti and Chris Alcantara report in the Washington Post:

Apple chief executive Tim Cook has long argued it needs to control app distribution on iPhones, otherwise the App Store would turn into “a flea market.”

But among the 1.8 million apps on the App Store, scams are hiding in plain sight. Customers for several VPN apps, which allegedly protect users’ data, complained in Apple App Store reviews that the apps told users their devices have been infected by a virus to dupe them into downloading and paying for software they don’t need. A QR code reader app that remains on the store tricks customers into paying $4.99 a week for a service that is now included in the camera app of the iPhone. Some apps fraudulently present themselves as being from major brands such as Amazon and Samsung.

Of the 1,000 highest-grossing apps on the App Store, nearly two percent are scams, according to an analysis by The Washington Post. And those apps have bilked consumers out of an estimated $48 million during the time they’ve been on the App Store, according to market research firm Appfigures. The scale of the problem has never before been reported. What’s more, Apple profits from these apps because it takes a cut of up to 30 percent of all revenue generated through the App Store. Even more common, according to The Post’s analysis, are “fleeceware” apps that use inauthentic customer reviews to move up in the App Store rankings and give apps a sense of legitimacy to convince customers to pay higher prices for a service usually offered elsewhere with higher legitimate customer reviews.

Two-thirds of the 18 apps The Post flagged to Apple were removed from the App Store.

The most valuable company in U.S. history, Apple is facing unprecedented scrutiny for how it wields its power and is fighting to hold onto it, including in a blockbuster trial that concluded last month. Regulators and competitors have zeroed in on the App Store in particular: Unlike app stores on other mobile operating systems, Apple’s store faces no competition and is the only way for iPhone owners to download software to their phones without bypassing Apple’s restrictions. Through it, Apple keeps a tight grip on software distribution and payments on its mobile operating system, called iOS.

Apple has long maintained that its exclusive control of the App Store is essential to protecting customers, and it only lets the best apps on its system. But Apple’s monopoly over how consumers access apps on iPhones can actually create an environment that gives customers a false sense of safety, according to experts. Because Apple doesn’t face any major competition and so many consumers are locked into using the App Store on iPhones, there’s little incentive for Apple to spend money on improving it, experts say.


“If consumers were to have access to alternative app stores or other methods of distributing software, Apple would be a lot more likely to take this problem more seriously,” said Stan Miles, an economics professor at Thompson Rivers University in British Columbia, Canada. . .

Continue reading. There’s much, much more. Apple’s responses are evasive and defensive and Apple employees seem to have been forbidden to comment (except for those employees paid to lie).

Later in the article:

Apple says it is constantly improving its methods for sniffing out scams and usually catches them within a month of hitting the App Store. In a recent news release, Apple said it employed new tools to verify the authenticity of user reviews and last year kicked 470,000 app developer accounts off the App Store. Developers, however, can create new accounts and continue to distribute new apps.

Apple unwittingly may be aiding the most sophisticated scammers by eliminating so many of the less competent ones during its app review process, said Miles, who co-authored a paper called “The Economics of Scams.” [Typical of meme evolution when a selection process eliminates some memes: the surviving memes adapt to evade being selected out — cf. in lifeform evolution how widespread use of antibiotics leads to the rise of antibiotic-resistant bacteria. – LG]

“If people do believe or are not worried about being scammed, then there’s going to be a lot of victimization,” he said. Miles also said Apple could warn consumers that some apps “are probably fraud and so buyer beware and you do your homework before you buy the app and don’t trust our store.”

And later:

The prevalence of scams on Apple’s App Store played a key role at trial. Apple’s lawyers were so focused on the company’s role in making the App Store safe that Epic’s attorneys accused them of trying to scare the court into a ruling in favor of Apple. In other internal emails unearthed during trial that date as far back as 2013, Apple’s Phil Schiller, who runs the App Store, expressed dismay when fraudulent apps made it past App Store review.


After a rip-off version of the Temple Run video game became the top-rated app, according to Schiller’s email exchange, he sent an irate message to two other Apple executives responsible for the store. “Remember our talking about finding bad apps with low ratings? Remember our talk about becoming the ‘Nordstroms’ of stores in quality of service? How does an obvious rip off of the super popular Temple Run, with no screenshots, garbage marketing text, and almost all 1-star ratings become the #1 free app on the store?” Schiller asked his team. “Is no one reviewing these apps? Is no one minding the store?” Apple declined to make Schiller available to comment. At trial, Schiller defended the safety of the app store on the stand. The app review process is “the best way we could come up with … to make it safe and fair.”

Eric Friedman, head of Apple’s Fraud Engineering Algorithms and Risk unit, or FEAR, said that Apple’s screening process is “more like the pretty lady who greets you with a lei at the Hawaiian airport than the drug sniffing dog,” according to a 2016 internal email uncovered during the Epic Games trial. Apple employs a 500-person App Review team, which sifts through submissions from developers. “App Review is bringing a plastic butter knife to a gun fight,” Friedman wrote in another email. Apple declined to make Friedman available to comment. In deposition testimony, Friedman pointed to investments Apple has made to stop fraud. “A lot has changed in the last five years,” he said.

Though the App Store ratings section is filled with customer complaints referring to apps as scams, there is no way for Apple customers to report this to Apple, other than reaching out to a regular Apple customer service representative. Apple used to have a button, just under the ratings and reviews section in the App Store, that said “report a problem,” which allowed users to report inappropriate apps. Based on discussions among Apple customers on Apple’s own website, the feature was removed some time around 2016. Sainz said customers can still report apps through other channels.

And there’s much more. It’s a long article and it shows just what a bad job Apple is doing. Part of that may be because Apple gets a hefty cut of money spent in the App Store and so doesn’t really care to police it effectively: they make money in either case.

Written by Leisureguy

6 June 2021 at 12:49 pm

US Soldiers Expose Nuclear Weapons Secrets Via Flashcard Apps

leave a comment »

It seems as though society has lost control of technology, with technology no longer serving us so much as undermining us. Foeke Postma writes at Bellingcat:

For US soldiers tasked with the custody of nuclear weapons in Europe, the stakes are high. Security protocols are lengthy, detailed and need to be known by heart. To simplify this process, some service members have been using publicly visible flashcard learning apps — inadvertently revealing a multitude of sensitive security protocols about US nuclear weapons and the bases at which they are stored.

While the presence of US nuclear weapons in Europe has long been detailed by various leaked documents, photos and statements by retired officials, their specific locations are officially still a secret with governments neither confirming nor denying their presence.

As many campaigners and parliamentarians in some European nations see it, this ambiguity has often hampered open and democratic debate about the rights and wrongs of hosting nuclear weapons.

However, the flashcards studied by soldiers tasked with guarding these devices reveal not just the bases, but even identify the exact shelters with “hot” vaults that likely contain nuclear weapons.

They also detail intricate security details and protocols such as the positions of cameras, the frequency of patrols around the vaults, secret duress words that signal when a guard is being threatened and the unique identifiers that a restricted area badge needs to have.

Like their analogue namesakes, flashcard learning apps are popular digital learning tools that show questions on one side and answers on the other. By simply searching online for terms publicly known to be associated with nuclear weapons, Bellingcat was able to discover cards used by military personnel serving at all six European military bases reported to store nuclear devices.

Experts approached by Bellingcat said that these findings represented serious breaches of security protocols and raised renewed questions about US nuclear weapons deployment in Europe.

Dr Jeffrey Lewis, founding publisher of Arms Control Wonk.com and Director of the East Asia Nonproliferation Program at the James Martin Center for Nonproliferation Studies, said that the findings showed a “flagrant breach” in security practices related to US nuclear weapons stationed in NATO countries.

He added that “secrecy about US nuclear weapons deployments in Europe does not exist to protect the weapons from terrorists, but only to protect politicians and military leaders from having to answer tough questions about whether NATO’s nuclear-sharing arrangements still make sense today. This is yet one more warning that these weapons are not secure.”

Hans Kristensen, director of the Nuclear Information Project at the Federation of American Scientists, broadly agreed and said that safety is provided by “effective security, not secrecy.”

Some flashcards uncovered during the course of this investigation had been publicly visible online as far back as 2013. Other sets detailed processes that were being learned by users until at least April 2021. It is not known whether secret phrases, protocols or other security practices have been altered since then.

However, all flashcards described within this article appear to have been taken down from the learning platforms on which they appeared after Bellingcat reached out to NATO and the US Military for comment prior to publication. A spokesperson for the Dutch Ministry of Defence stated that . . .

Continue reading. There’s more.

For what it’s worth, my favorite flashcard program is Anki.

Written by Leisureguy

29 May 2021 at 11:43 am

Blog post directory now available

leave a comment »

Photo by Pablo Heimplatz on Unsplash

Look to the right at the list “Reference pages” and lo! you’ll see a new entry: Blog post directory. Click that and you’ll be presented with a scrollable list of blog titles along with the first few lines of each post.

The origin story

I was scrolling through recent posts looking for a particular one, and found the process frustrating because of the length of many posts. So I emailed WordPress tech support to suggest that they offer a blog directory option for those reading the blog. WordPress pointed out that that function is already available, and they told me how to implement it. It was easy, so it’s already done.

WordPress support is absolutely top-notch, the best tech support I’ve ever used. And their blogging software is not only good, it gets better and better.

Enjoy the blog post directory — and, of course, do not neglect the search options: the general search for words in the blog title and the category search.

Written by Leisureguy

25 May 2021 at 9:25 am

The Colonial Pipeline Ransomware Hackers Had a Secret Weapon: Self-Promoting Cybersecurity Firms

leave a comment »

ProPublica has a very interesting report by Renee Dudley and Daniel Golden that begins:

On Jan. 11, antivirus company Bitdefender said it was “happy to announce” a startling breakthrough. It had found a flaw in the ransomware that a gang known as DarkSide was using to freeze computer networks of dozens of businesses in the U.S. and Europe. Companies facing demands from DarkSide could download a free tool from Bitdefender and avoid paying millions of dollars in ransom to the hackers.

But Bitdefender wasn’t the first to identify this flaw. Two other researchers, Fabian Wosar and Michael Gillespie, had noticed it the month before and had begun discreetly looking for victims to help. By publicizing its tool, Bitdefender alerted DarkSide to the lapse, which involved reusing the same digital keys to lock and unlock multiple victims. The next day, DarkSide declared that it had repaired the problem, and that “new companies have nothing to hope for.”

“Special thanks to BitDefender for helping fix our issues,” DarkSide said. “This will make us even better.”

DarkSide soon proved it wasn’t bluffing, unleashing a string of attacks. This month, it paralyzed the Colonial Pipeline Co., prompting a shutdown of the 5,500-mile pipeline that carries 45% of the fuel used on the East Coast, quickly followed by a rise in gasoline prices, panic buying of gas across the Southeast and closures of thousands of gas stations. Absent Bitdefender’s announcement, it’s possible that the crisis might have been contained, and that Colonial might have quietly restored its system with Wosar and Gillespie’s decryption tool.

Instead, Colonial paid DarkSide $4.4 million in Bitcoin for a key to unlock its files. “I will admit that I wasn’t comfortable seeing money go out the door to people like this,” CEO Joseph Blount told The Wall Street Journal.

The missed opportunity was part of a broader pattern of botched or half-hearted responses to the growing menace of ransomware, which during the pandemic has disabled businesses, schools, hospitals, and government agencies across the country. The incident also shows how antivirus companies eager to make a name for themselves sometimes violate one of the cardinal rules of the cat-and-mouse game of cyber-warfare: Don’t let your opponents know what you’ve figured out. During World War II, when the British secret service learned from decrypted communications that the Gestapo was planning to abduct and murder a valuable double agent, Johnny Jebsen, his handler wasn’t allowed to warn him for fear of cluing in the enemy that its cipher had been cracked. Today, ransomware hunters like Wosar and Gillespie try to prolong the attackers’ ignorance, even at the cost of contacting fewer victims. Sooner or later, as payments drop off, the cybercriminals realize that something has gone wrong.

Whether to tout a decryption tool is a “calculated decision,” said Rob McLeod, senior director of the threat response unit for cybersecurity firm eSentire. From the marketing perspective, “You are singing that song from the rooftops about how you have come up with a security solution that will decrypt a victim’s data. And then the security researcher angle says, ‘Don’t disclose any information here. Keep the ransomware bugs that we’ve found that allow us to decode the data secret, so as not to notify the threat actors.’”

Wosar said that publicly releasing tools, as Bitdefender did, has become riskier as ransoms have soared and the gangs have grown wealthier and more technically adept. In the early days of ransomware, when hackers froze home computers for a few hundred dollars, they often couldn’t determine how their code was broken unless the flaw was specifically pointed out to them.

Today, the creators of ransomware “have access to reverse engineers and penetration testers who are very very capable,” he said. “That’s how they gain entrance to these oftentimes highly secured networks in the first place. They download the decryptor, they disassemble it, they reverse engineer it and they figure out exactly why we were able to decrypt their files. And 24 hours later, the whole thing is fixed. Bitdefender should have known better.”

It wasn’t the first time that Bitdefender trumpeted a solution that Wosar or Gillespie had beaten it to. Gillespie had broken the code of a ransomware strain called GoGoogle and was helping victims without any fanfare, when Bitdefender released a decryption tool in May 2020. Other companies have also announced breakthroughs publicly, Wosar and Gillespie said.

“People are desperate for a news mention, and big security companies don’t care about victims,” Wosar said.

Bogdan Botezatu, director of threat research at Bucharest, Romania-based Bitdefender, said the company wasn’t aware of the earlier success in unlocking files infected by DarkSide. Regardless, he said, Bitdefender decided to publish its tool “because most victims who fall for ransomware do not have the right connection with ransomware support groups and won’t know where to ask for help unless they can learn about the existence of tools from media reports or with a simple search.”

Bitdefender has provided free technical support to more than a dozen DarkSide victims, and “we believe many others have successfully used the tool without our intervention,” Botezatu said. Over the years, Bitdefender has helped individuals and businesses avoid paying more than $100 million in ransom, he said.

Bitdefender recognized that DarkSide might correct the flaw, Botezatu said. “We are well aware that attackers are agile and adapt to our decryptors.” But DarkSide might have “spotted the issue” anyway. “We don’t believe in ransomware decryptors made silently available. Attackers will learn about their existence by impersonating home users or companies in need, while the vast majority of victims will have no idea that they can get their data back for free.” . . .

Continue reading. There’s more, and more about what is being done to protect data systems.
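The flaw the researchers exploited, one key reused across victims, deserves a moment. Here is a toy sketch of my own, using a deliberately trivial XOR “cipher”; DarkSide used real cryptography, but the key-reuse principle is the same:

from itertools import cycle

# Toy cipher: XOR with a repeating key. Real ransomware uses real
# ciphers, but the key-reuse lesson is identical.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

gang_key = b"reused-master-key"      # ONE key used for every victim
victims = {"acme": b"payroll records", "initech": b"customer database"}
encrypted = {v: xor_cipher(d, gang_key) for v, d in victims.items()}

# A researcher who recovers the key from a single victim's decryptor...
recovered_key = gang_key
# ...can quietly decrypt every other victim, until someone announces it:
for v, blob in encrypted.items():
    print(v, xor_cipher(blob, recovered_key))

Which is exactly why Wosar and Gillespie worked quietly: every decryption performed with the recovered key stays invisible to the gang only until someone announces the tool.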

Written by Leisureguy

24 May 2021 at 3:24 pm

When an algorithm taps you on the shoulder

leave a comment »

Matt Stroud reports in the Verge:

ROBERT MCDANIEL’S TROUBLES began with a knock on the door. It was a weekday in mid-2013, as he made lunch in the crowded three-bedroom house where he lives with his grandmother and several of his adult siblings.

When he went to answer the door, McDaniel discovered not one person, but a cohort of visitors: two police officers in uniform, a neighbor working with the police, and a muscular guy in shorts and a T-shirt sporting short, graying hair.

Police officers weren’t a new sight for McDaniel. They often drove down his tree-lined street in the Austin neighborhood of Chicago making stops and arrests. Out of the 775 homicides tracked by the Chicago Sun-Times in 2020, 72 happened in Austin. That’s almost 10 percent of the city’s murders, in a region that takes up just 3 percent of its total area. The City of Chicago puts out a “heat map” of where gun crimes occur, with areas of moderate shooting numbers shaded in blue or green. Red splotches represent large numbers — and hottest concentrations — of shootings. On the map, Austin is the color of a fire engine.

Still, this visit from authorities caught McDaniel off guard: at that point in time, he had nothing remotely violent on his criminal record — just arrests for marijuana-related offenses and street gambling. And despite two officers showing up at his front door with the cohort, neither of them, nor anyone else in the cohort, accused McDaniel of breaking the law. They were not there to arrest him. No one was there to investigate a crime. They just wanted to talk.

“I had no idea why these cops were here,” McDaniel says, recounting it to me years later. “I didn’t do shit to bring them here.”

He invited them into his home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a “party to violence,” but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.

McDaniel was both a potential victim and a potential perpetrator, and the visitors on his porch treated him as such. A social worker told him that he could help him if he was interested in finding assistance to secure a job, for example, or mental health services. And police were there, too, with a warning: from here on out, the Chicago Police Department would be watching him. The algorithm indicated Robert McDaniel was more likely than 99.9 percent of Chicago’s population to either be shot or to have a shooting connected to him. That made him dangerous, and top brass at the Chicago PD knew it. So McDaniel had better be on his best behavior.

The idea that a series of calculations could predict that he would soon shoot someone, or be shot, seemed outlandish. At the time, McDaniel didn’t know how to take the news.

But the visit set a series of gears in motion. This Kafka-esque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy. . .

Continue reading. There’s much more, and the story gets even more interesting. The “help” offered causes the problem it was intended to prevent.

Later in the article, Stroud points out one weakness built into the system:

Forecasting isn’t magic; it’s an educated guess about what might happen based on things that have already occurred. The data feeding forecasting software for police are typically built around police stops and arrests. That might sound straightforward and unbiased, but consider that US Department of Justice data show that African Americans are more than twice as likely to be arrested as white people. And if you’re Black, your likelihood of being stopped by a police officer can be nearly four times higher than if you’re white, depending on which city you live in, according to the Stanford Open Policing Project.

Building a forecasting model around data like these can run the risk of stigmatizing entire populations based on discriminatory data; a 2017 study from the Journal of Statistics and Public Policy found that arrests doubled in a quadrant of Los Angeles where its police department tested forecasting software. Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.

Which is all to say that forecasting can put entire populations at risk of over-policing — which has led to countless unnecessary police killings for relatively insignificant infractions. (Think George Floyd. And Michael Brown. Twelve-year-old Tamir Rice. Sandra Bland, Philando Castile, Walter Scott. Thirteen-year-old Adam Toledo, this year, in Chicago. Alton Sterling, Breonna Taylor, Ahmaud Arbery. The list goes on.)

Later still:

IN MCDANIEL’S VIEW, the heat list caused the harm its creators hoped to avoid: it predicted a shooting that wouldn’t have happened if it hadn’t predicted the shooting.

As the heat list continued to operate, researchers tore it to shreds. A 2016 paper published in the Journal of Experimental Criminology came to some troubling conclusions about the list that had, by then, been rebranded as the “Strategic Subject List,” or SSL. Among them: “The individuals on the SSL were considered to be ‘persons of interest’ to the CPD,” meaning that McDaniel’s description of being routinely targeted for surveillance and searches matched what researchers discovered. “Overall,” the report goes on, “there was no practical direction about what to do with individuals on the SSL, little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders.”

The heat list wasn’t particularly predictive, it turned out. It wasn’t high-tech. Cops would just use the list as a way to target people.

There was another problem, too. . .
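The self-fulfilling-prophecy mechanism is easy to simulate. A toy model of my own (not the CPD’s actual Strategic Subject List): give two districts identical underlying crime, start one with more recorded arrests, and let patrols follow the arrest data.

import random

# Toy feedback loop, not the CPD's actual algorithm. Two districts have
# the SAME true crime rate; district A simply starts with more recorded
# arrests, and patrol attention follows the arrest data.
random.seed(1)
true_crime_rate = {"A": 0.1, "B": 0.1}
arrests = {"A": 10.0, "B": 5.0}

for week in range(52):
    total = sum(arrests.values())
    for d in arrests:
        patrol_share = arrests[d] / total                # patrols follow data
        expected = patrol_share * true_crime_rate[d] * 100
        arrests[d] += max(0.0, random.gauss(expected, 1.0))

print(arrests)  # A's recorded "risk" stays far above B's, despite equal crime

The initial disparity never washes out, because the model’s inputs are produced by its own outputs.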

Written by Leisureguy

24 May 2021 at 1:41 pm

Inside an International Tech-Support Scam

leave a comment »

Cybercrime operates at large scale (for example, the current takedown of the oil pipeline that serves the East Coast of the US, which is going to hit hard as fuel supplies run low) and at small scale (against individuals). Doug Shadel and Neil Wertheimer write for AARP:

A light rain fell and a cold gray mist hung over the street as Jim Browning arrived home from work. A middle-aged Irishman with a strong brogue, Jim is a software engineer at a midsize consulting firm, and on this workday, like most, there were few surprises. He shared a pleasant dinner with his wife, and when the dishes were cleared, he retreated to his office, shut the door, opened his computer and went undercover.

Jim Browning is not his real name. The alias is necessary to protect him and his family from criminals and law enforcement, as what he does in the privacy of his office may be morally upright but technically illegal. It’s a classic gray area in the netherworld of computer hacking, as we will explain. What is important to know is that back in 2014, it was the same annoying robocalls that you and I get most days that set Jim on his journey to become a vigilante.

A relative of Jim’s had told him about warnings popping up on his computer, and Jim, too, was besieged with recorded calls saying his computer was on the verge of meltdown, and that to prevent it he should call immediately. As a software expert, Jim knew there was nothing wrong with his system, but the automated calls from “certified technicians” didn’t stop. One night that spring, his curiosity got the better of him. “It was part nosiness and part intellectual curiosity,” Jim said. “I’m a problem solver and I wanted to get to the bottom of what these people wanted.” So he returned one of the calls.

The person who answered asked if he could access Jim’s computer to diagnose the problem. Jim granted access, but he was ready; he had created a “virtual computer” within his computer, a walled-off digital domain that kept Jim’s personal information and key operations safe and secure. As he played along with the caller, Jim recorded the conversation and activity on his Trojan horse setup to find out what he was up to. It took mere moments to confirm his hunch: It was a scam.

Intrigued by the experience, Jim started spending his evenings getting telephone scammers online, playing the dupe, recording the interactions and then posting videos of the encounters on YouTube. It became, if not a second career, an avocation—after-dinner entertainment exposing “tech support” scammers who try to scare us into paying for unnecessary repairs.

“Listening to them at first, honestly, made me sick, because I realized right away all they wanted to do was steal money,” Jim would later tell me. “It doesn’t matter if you are 95 or 15, they will say whatever they need to say to get as much money out of you as possible.” Jim saw, for example, how the callers used psychology to put targets at ease. “They say reassuring phrases like ‘Take your time, sir,’ or ‘Do you want to get a glass of water?’ And they will also try to endear themselves to older people, saying things like ‘You sound like my grandmother,’ or ‘You don’t sound your age—you sound 20 years younger.’ “

Jim’s YouTube videos garnered mild interest — a couple thousand views at best. For Jim, this didn’t matter. The engineer in him enjoyed solving the maze. At the least, he was wasting the scammers’ time. At best, his videos maybe helped prevent some cases of fraud.

Then one day in 2018, Jim’s evening forays took an unexpected turn. A tech support scammer called from India and went through the normal spiel, but then he asked Jim to do something unusual: to log in to the scammer’s computer using a remote-access software program called TeamViewer. Later on, Jim found out why: The developers of TeamViewer had discovered that criminals in India were abusing their software, so they temporarily banned its use from computers initiating connections from India. But there was a loophole: It didn’t stop scammers from asking U.S. and U.K. consumers like Jim to initiate access into computers in India.

Hence, the scammer’s request. The voice on the phone talked Jim through the connection process, then told him to initiate a “switch sides” function so the caller could “be in charge” and look through Jim’s computer.

Presented with this opportunity, Jim acted quickly. Instead of “switching sides,” he took control of the criminal’s computer and locked the scammer out of his own computer. Lo and behold, mild-mannered programmer Jim Browning had complete access to all of the scammer’s files and software. And he was able to see everything the scammer was frantically trying to do to regain control.

This bit of digital jujitsu changed everything. Over the next few months, Jim figured out ways to infiltrate the computers of almost every scammer who tried to victimize him. “My process worked on almost every remote access program out there, certainly the ones most popular with scammers, like TeamViewer, AnyDesk or FastSupport.” He also figured out how to secretly install software that recorded what the scammers were doing — without them even knowing it.

Suddenly, Jim was sitting on some powerful knowledge. But as Spider-Man was told, with great power comes great responsibility. Jim wondered, What should I do with what I’ve learned?

Scammers mock and make fun of victims

By now Jim had reverse engineered his way into dozens of scammers’ computers, sometimes four or five at a time. He would set his software to record, then leave for work as his computers did their thing. When he came home at night, he reviewed the footage. Often, he couldn’t believe what he saw: call after call of boiler room scammers — mostly in India — contacting older people — mostly in the U.S. and U.K. — and scaring them into spending money to fix a fake computer problem, or sending money based on other deceptions.

Jim posted these new videos, which gave an authentic, bird’s-eye view of how scammers operate. As a result, his YouTube channel jumped to tens of thousands of subscribers.

One night in May 2019, Jim found his way into the computer network of a large New Delhi boiler room. While lurking in their network, he noticed the company had installed closed-circuit television (CCTV) cameras so the bosses could monitor their employees. So Jim hacked his way into that network and was able to turn the cameras this way and that, capturing the facial expressions and attitudes of dozens of scammers in action.

In one remarkable scene, he . . .

Continue reading. There’s a lot more, including some actual examples.

Written by Leisureguy

10 May 2021 at 11:46 am

Snapchat Can Be Sued Over Role In Fatal Car Crash, Court Rules

leave a comment »

As a joke, I sometimes would suggest that sharp curves on roads should be posted with a sign giving the highest speed to date someone has traversed the curve. I meant it as a service for those competing for a Darwin award. It was a joke.

But Snapchat seemed to have liked the idea in general. Bobby Allyn reports for NPR:

Three young men got into a car in Walworth County, Wis., in May 2017. They were set on driving at rapid speeds down a long, cornfield-lined road — and sharing their escapade on social media.

As the 17-year-old behind the wheel accelerated to 123 miles per hour, one of the passengers opened Snapchat.

His parents say their son wanted to capture the experience using an app feature — the controversial “speed filter” — that documents real-life speed, hoping for engagement and attention from followers on the messaging app.

It was one of the last things the trio did before the vehicle ran off the road and crashed into a tree, killing all of them.

Was Snapchat partially to blame? The boys’ parents think so. And, in a surprise decision on Tuesday, a federal appeals court ordered that the parents should have the right to sue Snap Inc.

The ruling, from a three-judge panel of the 9th U.S. Circuit Court of Appeals, has set off intense debate among legal watchers about the future of a decades-old law that has shielded tech companies from civil lawsuits.

The boys’ parents sued Snap Inc., the maker of Snapchat, after the tragedy. They alleged that the company “knowingly created a dangerous game” through its filter and bore some responsibility.

The district court responded how courts usually do when a tech platform is sued in a civil lawsuit: by dismissing the case. The judge cited the sweeping immunity that social media companies enjoy under Section 230 of the Communications Decency Act.

The law provides legal immunity to tech companies from libel and other civil suits for what people post on sites, regardless of how harmful it may be.

But the appeals court’s reversal paves a way around the all-powerful law, saying it doesn’t apply because this case is not about what someone posted to Snapchat, but rather the design of the app itself.

Continue reading. There are more details of the decision, and they are interesting — partly because different courts have given different decisions in similar cases. Read the whole thing.

Written by Leisureguy

5 May 2021 at 10:39 am

Code Miko

leave a comment »

Written by Leisureguy

26 April 2021 at 5:12 pm

This book looks at ancient Rome in a new light

leave a comment »

I am indebted to The Younger Daughter in two ways regarding the audiobook The Fate of Rome.

  1. She recommended the book in the first place, and it’s fascinating; and
  2. She told me how I could listen to it free by getting it as an audiobook, using 1 of the 2 free credits I would get for signing up with Audible.com.

So I signed up, and the 2 credits were clearly displayed. I searched for the book title, purchased the audiobook for 1 credit, and I’ve been listening to it. It’s amazing how the change in perspective adds to one’s understanding. One example: while the Romans were building all those excellent roads that lead to Rome, they also were in effect constructing efficient transportation channels that would allow infectious diseases to spread swiftly and widely.

So go ahead and sign up — even if you don’t get any other books, this one is definitely worth the (free) sign-up.

Written by Leisureguy

25 April 2021 at 12:18 pm

AI and big data transform insurance

leave a comment »

Mike Allen has an interesting column of miscellaneous news items that catch his eye, and I thought this item was particularly intriguing:

AI and data science are transforming life insurance — one of the oldest industries in existence, Axios Future correspondent Bryan Walsh writes.

  • With a business model built on predicting the future of its customers, the industry is digging into the prognostication powers of machine learning.
  • Impediments include the industry’s ingrained conservatism — and the dangers of AI bias.

How it works: A McKinsey report envisions a near future in which AI has shifted the insurance industry from the traditional “detect and repair” to “predict and prevent,” with active insurance policies that respond in real time to changing customer behavior.

  • Tools include drones and data collected by the internet of things — the sensors and software behind smart appliances, for instance.
  • Information from such devices will allow insurance companies to “provide personalized wellness products and care suggestions,” according to a separate McKinsey report.

Startups are disrupting the giants:

  • Lemonade — which last year became the first “insurtech” company to go public — takes the hassle out of buying insurance, with a digital-first interface and machine-learning analytics.
  • Legacy life insurance providers use frozen-in-time actuarial tables to write policies. Insurtech companies like Traffk draw on thousands of data points to provide a more personalized analysis.

What we’re watching: Some experts worry more precise insurance rates set by AI could end up discriminating against certain groups, which in turn could draw the attention of regulators.
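The “predict and prevent” idea is easy to picture in code. Here is a minimal sketch of my own in Python (the sensor events, weights, and numbers are all invented for illustration, not drawn from the McKinsey reports): an active policy whose premium is recomputed as readings arrive from a customer’s devices.

# Toy "predict and prevent" policy: each sensor event nudges a running
# risk score, and the premium is a live function of that score.
# Event names and weights below are hypothetical.
from dataclasses import dataclass, field

RISK_WEIGHTS = {
    "smoke_alarm_offline": 0.15,
    "water_leak_detected": 0.25,
    "driving_over_limit": 0.10,
    "workout_logged": -0.05,   # healthy behavior lowers risk
}

@dataclass
class ActivePolicy:
    base_premium: float
    risk_score: float = 1.0
    events: list = field(default_factory=list)

    def record_event(self, event: str) -> None:
        """Fold a sensor event into the running risk score."""
        self.risk_score = max(0.5, self.risk_score + RISK_WEIGHTS.get(event, 0.0))
        self.events.append(event)

    @property
    def premium(self) -> float:
        """Premium tracks the current risk score, not a frozen table."""
        return round(self.base_premium * self.risk_score, 2)

policy = ActivePolicy(base_premium=100.0)
for event in ["water_leak_detected", "workout_logged"]:
    policy.record_event(event)
    print(event, "->", policy.premium)

A legacy policy would set the premium once, from an actuarial table; the point here is just that the premium becomes a live function of incoming behavior data, which is exactly what worries the experts quoted above.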

Written by Leisureguy

24 April 2021 at 11:31 am

Consciousness in the electric brain: Currents? or Field?

leave a comment »

I came across “Brain wifi,” with the subtitle:

Instead of a code encrypted in the wiring of our neurons, could consciousness reside in the brain’s electromagnetic field?

The article, by Johnjoe McFadden, professor of molecular genetics at the University of Surrey, begins:

Some 2,700 years ago in the ancient city of Sam’al, in what is now modern Turkey, an elderly servant of the king sits in a corner of his house and contemplates the nature of his soul. His name is Katumuwa. He stares at a basalt stele made for him, featuring his own graven portrait together with an inscription in ancient Aramaic. It instructs his family, when he dies, to celebrate ‘a feast at this chamber: a bull for Hadad harpatalli and a ram for Nik-arawas of the hunters and a ram for Shamash, and a ram for Hadad of the vineyards, and a ram for Kubaba, and a ram for my soul that is in this stele.’ Katumuwa believed that he had built a durable stone receptacle for his soul after death. This stele might be one of the earliest written records of dualism: the belief that our conscious mind is located in an immaterial soul or spirit, distinct from the matter of the body.

The Katumuwa Stele cast, digitally rendered by Travis Saul. Courtesy of the Oriental Institute of the University of Chicago.

More than 2 millennia later, I was also contemplating the nature of the soul, as my son lay propped up on a hospital gurney. He was undergoing an electroencephalogram (EEG), a test that detects electrical activity in the brain, for a condition that fortunately turned out to be benign. As I watched the irregular wavy lines march across the screen, with spikes provoked by his perceptions of events such as the banging of a door, I wondered at the nature of the consciousness that generated those signals.

Just how do the atoms and molecules that make up the neurons in our brain – not so different to the bits of matter in Katumuwa’s inert stele or the steel barriers on my son’s hospital bed – manage to generate human awareness and the power of thought? In answering that longstanding question, most neurobiologists today would point to the information-processing performed by brain neurons. For both Katumuwa and my son, this would begin as soon as light and sound reached their eyes and ears, stimulating their neurons to fire in response to different aspects of their environment. For Katumuwa, perhaps, this might have been the pinecone or comb that his likeness was holding on the stele; for my son, the beeps from the machine or the movement of the clock on the wall.

Each ‘firing’ event involves the movement of electrically charged atoms called ions in and out of the neurons. That movement triggers a kind of chain reaction that travels from one nerve cell to another via logical rules, roughly analogous to the AND, OR and NOT Boolean operations performed by today’s computer gates, in order to generate outputs such as speech. So, within milliseconds of him glancing at his stele, the firing rate of millions of neurons in Katumuwa’s brain correlated with thousands of visual features of the stele and its context in the room. In this sense of correlating with, those brain neurons would supposedly know at least some aspects of Katumuwa’s stele.

Yet information-processing clearly isn’t sufficient for conscious knowing. Computers process lots of information yet have not exhibited the slightest spark of consciousness. Several decades ago, in an essay exploring the phenomenology of consciousness, the philosopher Thomas Nagel asked us to imagine what it’s like to be a bat. This feature of being-like-something, of having a perspective on the world, captures something about what it means to be a truly conscious ‘knower’. In that hospital room watching my son’s EEG, I wondered what it was like to be one of his neurons, processing the information registering the slamming of a door. As far as we can tell, an individual neuron knows just one thing – its firing rate. It fires or doesn’t fire based on its inputs, so the information it carries is pretty much equivalent to the zero or one of binary computer language. It thereby encodes just a single bit of information. The value of that bit, whether a zero or a one, might correlate with the slamming of a door, but it says nothing about the door’s shape, its colour, its use as a portal between rooms or the noise of its slamming – all features that I’m sure were part of my son’s conscious experience. I concluded that being a single neuron in my son’s brain would not feel like anything.

Of course, you could argue, as neurobiologists usually do, that . . .

Continue reading. There’s much more.
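McFadden’s gate analogy is easy to make concrete. Here is a toy sketch of my own (not from the article) in Python: a single “neuron” that sums weighted inputs and fires when the sum crosses a threshold. With suitable weights, one unit acts as an AND, OR, or NOT gate, and its entire output is the single bit the excerpt describes.

# A toy "neuron" as a threshold unit: it fires (returns 1) when the
# weighted sum of its inputs reaches a threshold, and stays silent (0)
# otherwise -- one bit of output, as the excerpt notes.

def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, single units act as logic gates:
def AND(a, b): return neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={AND(a, b)}  OR={OR(a, b)}")
print(f"NOT 0 = {NOT(0)}, NOT 1 = {NOT(1)}")

Chain enough such units together and you get the kind of information-processing McFadden describes; his point, of course, is that this processing alone may not amount to conscious experience.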

Then in the New Yorker I was reading “Do Brain Implants Change Your Identity?” by Christine Kenneally. It’s an interesting article, but what caught my eye was a description of the conscious experience of an epileptic seizure, which (as the article explains) is an electrical storm in the brain and thus a disruption of the brain’s electromagnetic field. If that is indeed where consciousness resides, it would explain this woman’s description:

. . . The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance. . .

. . . I asked Leggett to describe what it was like to have a seizure. She didn’t know. When one took hold, she was ripped out of her consciousness; she wasn’t there. Afterward, there was a terrible sense of having been absent. She would feel mortified in front of anyone who had witnessed the seizure and alarmed as she took stock of the injuries that she often suffered. Even worse, she said, was that epilepsy stole her memories. Every time she had a seizure and then returned, she seemed to have left some of her memories behind her. . .

Written by Leisureguy

19 April 2021 at 5:42 pm

Brain control of devices

leave a comment »

This video bears an interesting relation to the previous post.

Written by Leisureguy

16 April 2021 at 1:17 pm

Ingenious and stimulating science-fiction story

leave a comment »

The story, “Lena,” is by qntm. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.

Written by Leisureguy

16 April 2021 at 12:54 pm

Can a prime number be illegal? Yes.

leave a comment »

See this Wikipedia article, which begins:

An illegal prime is a prime number that represents information whose possession or distribution is forbidden in some legal jurisdictions. One of the first illegal primes was found in 2001. When interpreted in a particular way, it describes a computer program that bypasses the digital rights management scheme used on DVDs. Distribution of such a program in the United States is illegal under the Digital Millennium Copyright Act.[1] An illegal prime is a kind of illegal number.
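The trick underneath is simply that any file is a number, and the numbers near any given value include primes. Here is a small sketch of my own in Python showing the idea (it assumes the sympy library for the primality test; as I understand it, the first illegal prime exploited the fact that gzip ignores trailing bytes, so padding doesn’t corrupt the encoded program):

# Any file's bytes can be read as one big integer; pad it and search
# nearby odd numbers for a prime that still decodes to the same bytes.
# Assumes sympy is installed (pip install sympy) for isprime().
from sympy import isprime

def find_prime_encoding(data: bytes, max_pad: int = 1 << 20) -> int:
    """Search for a prime of the form <data bytes><8 padding bytes>."""
    base = int.from_bytes(data, "big") << 64   # leave room for padding
    for pad in range(1, max_pad, 2):           # primes > 2 are odd
        if isprime(base + pad):
            return base + pad
    raise RuntimeError("no prime found in the search window")

data = b"example payload"       # stand-in for, say, a gzipped program
p = find_prime_encoding(data)
print(isprime(p))                                    # True
print((p >> 64).to_bytes(len(data), "big") == data)  # True: data intact

Whether the resulting prime is “illegal” then depends entirely on what those leading bytes decode to.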

Written by Leisureguy

16 April 2021 at 9:29 am
