Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Google AI beats humans at designing computer chips

Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point where it quickly outstrips humanity (cf. the Singularity trilogy — Singularity Sky, Iron Sunrise, and Accelerando, by Charles Stross). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast: 28 minutes.

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?
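
To see what kind of search problem this is, here is a toy floorplanner in Python. It uses simulated annealing, a classic heuristic rather than the machine-learning system Google built (reported to use reinforcement learning), and every name and number in it is mine, purely for illustration: place a handful of connected blocks on a grid so that total wire length is minimized.

    import math
    import random

    # Toy floorplanner: place blocks on a grid to minimize total wire length.
    # A simulated-annealing sketch for illustration only -- Google's system
    # uses deep reinforcement learning, not this classic heuristic.

    NETS = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]  # pairs of connected blocks
    GRID = 8                                         # 8x8 placement grid

    def wirelength(pos):
        # Total Manhattan distance over all connected pairs.
        return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
                   for a, b in NETS)

    def anneal(n_blocks=4, steps=20000, t0=5.0):
        cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                              n_blocks)
        pos = dict(enumerate(cells))
        occupied = set(cells)
        cost = wirelength(pos)
        for step in range(steps):
            t = t0 * (1 - step / steps) + 1e-3       # cooling schedule
            i = random.randrange(n_blocks)
            old = pos[i]
            new = (random.randrange(GRID), random.randrange(GRID))
            if new in occupied:                      # keep one block per cell
                continue
            pos[i] = new
            new_cost = wirelength(pos)
            # Always accept improvements; accept regressions with a
            # probability that shrinks as the "temperature" falls.
            if new_cost > cost and random.random() > math.exp((cost - new_cost) / t):
                pos[i] = old                         # reject the move
            else:
                occupied.discard(old)
                occupied.add(new)
                cost = new_cost
        return pos, cost

    print(anneal())

A real chip has billions of components and far messier objectives (timing, congestion, power), which is why placement resisted automation for decades and why a learned placer matters.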

Written by Leisureguy

11 June 2021 at 3:14 pm

What Really Happened When Google Ousted Timnit Gebru

Tom Simonite writes in Wired:

ONE AFTERNOON IN late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying.

Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the coleader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper—or else remove her name from its list of authors, along with those of several other members of her team.

The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software—most famously exemplified by a system called GPT-3—that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides. The paper had sailed through the company’s internal review process and had been submitted to a prominent conference. But Kacholia now said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act. Her firm deadline was the day after Thanksgiving.

Gebru’s distress turned to anger as that date drew closer and the situation turned weirder. Kacholia gave Gebru’s manager, Samy Bengio, a document listing the paper’s supposed flaws, but told him not to send it to Gebru, only to read it to her. On Thanksgiving Day, Gebru skipped some festivities with her family to hear Bengio’s recital. According to Gebru’s recollection and contemporaneous notes, the document didn’t offer specific edits but complained that the paper handled topics “casually” and painted too bleak a picture of the new technology. It also claimed that all of Google’s uses of large language models were “engineered to avoid” the pitfalls that the paper described.

Gebru spent Thanksgiving writing a six-page response, explaining her perspective on the paper and asking for guidance on how it might be revised instead of quashed. She titled her reply “Addressing Feedback from the Ether at Google,” because she still didn’t know who had set her Kafkaesque ordeal in motion, and sent it to Kacholia the next day.

On Saturday, Gebru set out on a preplanned cross-country road trip. She had reached New Mexico by Monday, when Kacholia emailed to ask for confirmation that the paper would either be withdrawn or cleansed of its Google affiliations. Gebru tweeted a cryptic reproach of “censorship and intimidation” against AI ethics researchers. Then, on Tuesday, she fired off two emails: one that sought to end the dispute, and another that escalated it beyond her wildest imaginings.

The first was addressed to Kacholia and offered her a deal: Gebru would remove herself from the paper if Google provided an account of who had reviewed the work and how, and established a more transparent review process for future research. If those conditions weren’t met, Gebru wrote, she would leave Google once she’d had time to make sure her team wouldn’t be too destabilized. The second email showed less corporate diplomacy. Addressed to a listserv for women who worked in Google Brain, the company’s most prominent AI lab and home to Gebru’s Ethical AI team, it accused the company of “silencing marginalized voices” and dismissed Google’s internal diversity programs as a waste of time.

Relaxing in an Airbnb in Austin, Texas, the following night, Gebru received a message with a 😮 from one of her direct reports: “You resigned??” In her personal inbox she then found an email from Kacholia, rejecting Gebru’s offer and casting her out of Google. “We cannot agree as you are requesting,” Kacholia wrote. “The end of your employment should happen faster than your email reflects.” Parts of Gebru’s email to the listserv, she went on, had shown “behavior inconsistent with the expectations of a Google manager.” Gebru tweeted that she had been fired. Google maintained—and still does—that she resigned.

Gebru’s tweet lit the fuse on a controversy that quickly inflamed Google. The company has been dogged in recent years by accusations from employees that it mistreats women and people of color, and from lawmakers that it wields unhealthy technological and economic power. Now Google had expelled a Black woman who was a prominent advocate for more diversity in tech, and who was seen as an important internal voice for greater restraint in the helter-skelter race to develop and deploy AI. One Google machine-learning researcher who had followed Gebru’s writing and work on diversity felt the news of her departure like a punch to the gut. “It was like, oh, maybe things aren’t going to change so easily,” says the employee, who asked to remain anonymous because they were not authorized to speak by Google management.

Dean sent out a message urging Googlers to ignore Gebru’s call to disengage from corporate diversity exercises; Gebru’s paper had been subpar, he said, and she and her collaborators had not followed the proper approval process. In turn, Gebru claimed in tweets and interviews that she’d been felled by a toxic cocktail of racism, sexism, and censorship. Sympathy for Gebru’s account grew as the disputed paper circulated like samizdat among AI researchers, many of whom found it neither controversial nor particularly remarkable. Thousands of Googlers and outside AI experts signed a public letter castigating the company.

But Google seemed to double down. Margaret Mitchell, the other coleader of the Ethical AI team and a prominent researcher in her own right, was among the hardest hit by Gebru’s ouster. The two had been a professional and emotional tag team, building up their group—which was one of several that worked on what Google called “responsible AI”—while parrying the sexist and racist tendencies they saw at large in the company’s culture. Confident that those same forces had played a role in Gebru’s downfall, Mitchell wrote an automated script to retrieve notes she’d kept in her corporate Gmail account that documented allegedly discriminatory incidents, according to sources inside Google. On January 20, Google said Mitchell had triggered an internal security system and had been suspended. On February 19, she was fired, with Google stating that it had found “multiple violations of our code of conduct, as well as of our security policies, which included exfiltration of confidential, business-sensitive documents.”

Google had now fully decapitated its own Ethical AI research group. The long, spectacular fallout from that Thanksgiving ultimatum to Gebru left countless bystanders wondering: Had one paper really precipitated all of these events?

The story of what actually happened in the lead-up to Gebru’s exit from Google reveals a more tortured and complex backdrop. It’s the tale of a gifted engineer who was swept up in the AI revolution before she became one of its biggest critics, a refugee who worked her way to the center of the tech industry and became determined to reform it. It’s also about a company—the world’s fifth largest—trying to regain its equilibrium after four years of scandals, controversies, and mutinies, but doing so in ways that unbalanced the ship even further.

Beyond Google, the fate of Timnit Gebru lays bare something even larger:  . . .

Continue reading. There’s much more.

Written by Leisureguy

9 June 2021 at 10:58 am

Apple’s tightly controlled App Store is teeming with scams

Reed Albergotti and Chris Alcantara report in the Washington Post:

Apple chief executive Tim Cook has long argued it needs to control app distribution on iPhones, otherwise the App Store would turn into “a flea market.”

But among the 1.8 million apps on the App Store, scams are hiding in plain sight. Customers for several VPN apps, which allegedly protect users’ data, complained in Apple App Store reviews that the apps told users their devices had been infected by a virus to dupe them into downloading and paying for software they don’t need. A QR code reader app that remains on the store tricks customers into paying $4.99 a week for a service that is now included in the camera app of the iPhone. Some apps fraudulently present themselves as being from major brands such as Amazon and Samsung.

Of the 1,000 highest-grossing apps on the App Store, nearly two percent are scams, according to an analysis by The Washington Post. And those apps have bilked consumers out of an estimated $48 million during the time they’ve been on the App Store, according to market research firm Appfigures. The scale of the problem has never before been reported. What’s more, Apple profits from these apps because it takes a cut of up to 30 percent of all revenue generated through the App Store. Even more common, according to The Post’s analysis, are “fleeceware” apps that use inauthentic customer reviews to move up in the App Store rankings and give apps a sense of legitimacy to convince customers to pay higher prices for a service usually offered elsewhere with higher legitimate customer reviews.

Two-thirds of the 18 apps The Post flagged to Apple were removed from the App Store.

The most valuable company in U.S. history, Apple is facing unprecedented scrutiny for how it wields its power and is fighting to hold onto it, including in a blockbuster trial that concluded last month. Regulators and competitors have zeroed in on the App Store in particular: Unlike app stores on other mobile operating systems, Apple’s store faces no competition and is the only way for iPhone owners to download software to their phones without bypassing Apple’s restrictions. Through it, Apple keeps a tight grip on software distribution and payments on its mobile operating system, called iOS.

Apple has long maintained that its exclusive control of the App Store is essential to protecting customers, and it only lets the best apps on its system. But Apple’s monopoly over how consumers access apps on iPhones can actually create an environment that gives customers a false sense of safety, according to experts. Because Apple doesn’t face any major competition and so many consumers are locked into using the App Store on iPhones, there’s little incentive for Apple to spend money on improving it, experts say.

“If consumers were to have access to alternative app stores or other methods of distributing software, Apple would be a lot more likely to take this problem more seriously,” said Stan Miles, an economics professor at Thompson Rivers University in British Columbia, Canada. . .

Continue reading. There’s much, much more. Apple’s responses are evasive and defensive and Apple employees seem to have been forbidden to comment (except for those employees paid to lie).

Later in the article:

Apple says it is constantly improving its methods for sniffing out scams and usually catches them within a month of hitting the App Store. In a recent news release, Apple said it employed new tools to verify the authenticity of user reviews and last year kicked 470,000 app developer accounts off the App Store. Developers, however, can create new accounts and continue to distribute new apps.

Apple unwittingly may be aiding the most sophisticated scammers by eliminating so many of the less competent ones during its app review process, said Miles, who co-authored a paper called “The Economics of Scams.” [Typical of meme evolution when a selection process eliminates some memes: the surviving memes adapt to evade being selected out — cf. in lifeform evolution how widespread use of antibiotics leads to the rise of antibiotic-resistant bacteria. – LG]

“If people do believe or are not worried about being scammed, then there’s going to be a lot of victimization,” he said. Miles also said Apple could warn consumers that some apps “are probably fraud and so buyer beware and you do your homework before you buy the app and don’t trust our store.”

And later:

The prevalence of scams on Apple’s App Store played a key role at trial. Apple’s lawyers were so focused on the company’s role in making the App Store safe that Epic’s attorneys accused them of trying to scare the court into a ruling in favor of Apple. In other internal emails unearthed during trial that date as far back as 2013, Apple’s Phil Schiller, who runs the App Store, expressed dismay when fraudulent apps made it past App Store review.

After a rip-off version of the Temple Run video game became the top-rated app, according to Schiller’s email exchange, he sent an irate message to two other Apple executives responsible for the store. “Remember our talking about finding bad apps with low ratings? Remember our talk about becoming the ‘Nordstroms’ of stores in quality of service? How does an obvious rip off of the super popular Temple Run, with no screenshots, garbage marketing text, and almost all 1-star ratings become the #1 free app on the store?” Schiller asked his team. “Is no one reviewing these apps? Is no one minding the store?” Apple declined to make Schiller available to comment. At trial, Schiller defended the safety of the app store on the stand. The app review process is “the best way we could come up with … to make it safe and fair.”

Eric Friedman, head of Apple’s Fraud Engineering Algorithms and Risk unit, or FEAR, said that Apple’s screening process is “more like the pretty lady who greets you with a lei at the Hawaiian airport than the drug sniffing dog,” according to a 2016 internal email uncovered during the Epic Games trial. Apple employs a 500-person App Review team, which sifts through submissions from developers. “App Review is bringing a plastic butter knife to a gun fight,” Friedman wrote in another email. Apple declined to make Friedman available to comment. In deposition testimony, Friedman pointed to investments Apple has made to stop fraud. “A lot has changed in the last five years,” he said.

Though the App Store ratings section is filled with customer complaints referring to apps as scams, there is no way for Apple customers to report this to Apple, other than reaching out to a regular Apple customer service representative. Apple used to have a button, just under the ratings and reviews section in the App Store, that said “report a problem,” which allowed users to report inappropriate apps. Based on discussions among Apple customers on Apple’s own website, the feature was removed some time around 2016. Sainz said customers can still report apps through other channels.

And there’s much more. It’s a long article and it shows just what a bad job Apple is doing. Part of that may be because Apple gets a hefty cut of money spent in the App Store and so doesn’t really care to police it effectively: they make money in either case.

Written by Leisureguy

6 June 2021 at 12:49 pm

US Soldiers Expose Nuclear Weapons Secrets Via Flashcard Apps

It seems as though society has lost control of technology, with technology no longer serving us so much as undermining us. Foeke Postma writes at Bellingcat:

For US soldiers tasked with the custody of nuclear weapons in Europe, the stakes are high. Security protocols are lengthy, detailed and need to be known by heart. To simplify this process, some service members have been using publicly visible flashcard learning apps — inadvertently revealing a multitude of sensitive security protocols about US nuclear weapons and the bases at which they are stored.

While the presence of US nuclear weapons in Europe has long been detailed by various leaked documents, photos and statements by retired officials, their specific locations are officially still a secret with governments neither confirming nor denying their presence.

As many campaigners and parliamentarians in some European nations see it, this ambiguity has often hampered open and democratic debate about the rights and wrongs of hosting nuclear weapons.

However, the flashcards studied by soldiers tasked with guarding these devices reveal not just the bases, but even identify the exact shelters with “hot” vaults that likely contain nuclear weapons.

They also detail intricate security details and protocols such as the positions of cameras, the frequency of patrols around the vaults, secret duress words that signal when a guard is being threatened and the unique identifiers that a restricted area badge needs to have.

Like their analogue namesakes, flashcard learning apps are popular digital learning tools that show questions on one side and answers on the other. By simply searching online for terms publicly known to be associated with nuclear weapons, Bellingcat was able to discover cards used by military personnel serving at all six European military bases reported to store nuclear devices.

Experts approached by Bellingcat said that these findings represented serious breaches of security protocols and raised renewed questions about US nuclear weapons deployment in Europe.

Dr Jeffrey Lewis, founding publisher of Arms Control Wonk.com and Director of the East Asia Nonproliferation Program at the James Martin Center for Nonproliferation Studies, said that the findings showed a “flagrant breach” in security practices related to US nuclear weapons stationed in NATO countries.

He added that “secrecy about US nuclear weapons deployments in Europe does not exist to protect the weapons from terrorists, but only to protect politicians and military leaders from having to answer tough questions about whether NATO’s nuclear-sharing arrangements still make sense today. This is yet one more warning that these weapons are not secure.”

Hans Kristensen, director of the Nuclear Information Project at the Federation of American Scientists, broadly agreed and said that safety is provided by “effective security, not secrecy.”

Some flashcards uncovered during the course of this investigation had been publicly visible online as far back as 2013. Other sets detailed processes that were being learned by users until at least April 2021. It is not known whether secret phrases, protocols or other security practices have been altered since then.

However, all flashcards described within this article appear to have been taken down from the learning platforms on which they appeared after Bellingcat reached out to NATO and the US Military for comment prior to publication. A spokesperson for the Dutch Ministry of Defence stated that . . .

Continue reading. There’s more.

For what it’s worth, my favorite flashcard program is Anki.
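
For the curious: Anki’s own scheduler descends from the SuperMemo SM-2 algorithm, but the simpler Leitner system sketched below captures the core idea of these apps. Cards you answer correctly move to boxes that are reviewed at longer intervals; a miss sends the card back to box one. This is a minimal sketch in Python; the class and field names are mine, not Anki’s.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    # Leitner-box spaced repetition: a minimal sketch, not Anki's actual
    # SM-2-derived scheduler. Box n is reviewed every INTERVALS[n] days;
    # a wrong answer sends the card back to box 0.
    INTERVALS = [1, 2, 4, 8, 16]

    @dataclass
    class Card:
        front: str
        back: str
        box: int = 0
        due: date = field(default_factory=date.today)

    def review(card, correct):
        card.box = min(card.box + 1, len(INTERVALS) - 1) if correct else 0
        card.due = date.today() + timedelta(days=INTERVALS[card.box])

    def due_today(deck):
        return [c for c in deck if c.due <= date.today()]

    deck = [Card("capital of Kenya?", "Nairobi"), Card("7 x 8", "56")]
    for card in due_today(deck):
        review(card, correct=True)   # pretend we answered correctly
        print(card.front, "-> box", card.box, "due", card.due)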

Written by Leisureguy

29 May 2021 at 11:43 am

Blog post directory now available

Photo by Pablo Heimplatz on Unsplash

Look to the right at the list “Reference pages” and lo! you’ll see a new entry: Blog post directory. Click that and you’ll be presented with a scrollable list of blog titles along with the first few lines of each post.

The origin story

I was scrolling through recent posts looking for a particular one, and found the process frustrating because of the length of many posts. So I emailed WordPress tech support to suggest that they offer a blog directory option for those reading the blog. WordPress pointed out that that function is already available, and they told me how to implement it. It was easy, so it’s already done.

WordPress support is absolutely top-notch, the best tech support I’ve ever used. And their blogging software is not only good, it gets better and better.

Enjoy the blog post directory — and, of course, do not neglect the search options: the general search for words in the blog title and the category search.

Written by Leisureguy

25 May 2021 at 9:25 am

The Colonial Pipeline Ransomware Hackers Had a Secret Weapon: Self-Promoting Cybersecurity Firms

ProPublica has a very interesting report by Renee Dudley and Daniel Golden that begins:

On Jan. 11, antivirus company Bitdefender said it was “happy to announce” a startling breakthrough. It had found a flaw in the ransomware that a gang known as DarkSide was using to freeze computer networks of dozens of businesses in the U.S. and Europe. Companies facing demands from DarkSide could download a free tool from Bitdefender and avoid paying millions of dollars in ransom to the hackers.

But Bitdefender wasn’t the first to identify this flaw. Two other researchers, Fabian Wosar and Michael Gillespie, had noticed it the month before and had begun discreetly looking for victims to help. By publicizing its tool, Bitdefender alerted DarkSide to the lapse, which involved reusing the same digital keys to lock and unlock multiple victims. The next day, DarkSide declared that it had repaired the problem, and that “new companies have nothing to hope for.”

“Special thanks to BitDefender for helping fix our issues,” DarkSide said. “This will make us even better.”

DarkSide soon proved it wasn’t bluffing, unleashing a string of attacks. This month, it paralyzed the Colonial Pipeline Co., prompting a shutdown of the 5,500-mile pipeline that carries 45% of the fuel used on the East Coast, quickly followed by a rise in gasoline prices, panic buying of gas across the Southeast and closures of thousands of gas stations. Absent Bitdefender’s announcement, it’s possible that the crisis might have been contained, and that Colonial might have quietly restored its system with Wosar and Gillespie’s decryption tool.

Instead, Colonial paid DarkSide $4.4 million in Bitcoin for a key to unlock its files. “I will admit that I wasn’t comfortable seeing money go out the door to people like this,” CEO Joseph Blount told The Wall Street Journal.

The missed opportunity was part of a broader pattern of botched or half-hearted responses to the growing menace of ransomware, which during the pandemic has disabled businesses, schools, hospitals, and government agencies across the country. The incident also shows how antivirus companies eager to make a name for themselves sometimes violate one of the cardinal rules of the cat-and-mouse game of cyber-warfare: Don’t let your opponents know what you’ve figured out. During World War II, when the British secret service learned from decrypted communications that the Gestapo was planning to abduct and murder a valuable double agent, Johnny Jebsen, his handler wasn’t allowed to warn him for fear of cluing in the enemy that its cipher had been cracked. Today, ransomware hunters like Wosar and Gillespie try to prolong the attackers’ ignorance, even at the cost of contacting fewer victims. Sooner or later, as payments drop off, the cybercriminals realize that something has gone wrong.

Whether to tout a decryption tool is a “calculated decision,” said Rob McLeod, senior director of the threat response unit for cybersecurity firm eSentire. From the marketing perspective, “You are singing that song from the rooftops about how you have come up with a security solution that will decrypt a victim’s data. And then the security researcher angle says, ‘Don’t disclose any information here. Keep the ransomware bugs that we’ve found that allow us to decode the data secret, so as not to notify the threat actors.’”

Wosar said that publicly releasing tools, as Bitdefender did, has become riskier as ransoms have soared and the gangs have grown wealthier and more technically adept. In the early days of ransomware, when hackers froze home computers for a few hundred dollars, they often couldn’t determine how their code was broken unless the flaw was specifically pointed out to them.

Today, the creators of ransomware “have access to reverse engineers and penetration testers who are very very capable,” he said. “That’s how they gain entrance to these oftentimes highly secured networks in the first place. They download the decryptor, they disassemble it, they reverse engineer it and they figure out exactly why we were able to decrypt their files. And 24 hours later, the whole thing is fixed. Bitdefender should have known better.”

It wasn’t the first time that Bitdefender trumpeted a solution that Wosar or Gillespie had beaten it to. Gillespie had broken the code of a ransomware strain called GoGoogle and was helping victims without any fanfare, when Bitdefender released a decryption tool in May 2020. Other companies have also announced breakthroughs publicly, Wosar and Gillespie said.

“People are desperate for a news mention, and big security companies don’t care about victims,” Wosar said.

Bogdan Botezatu, director of threat research at Bucharest, Romania-based Bitdefender, said the company wasn’t aware of the earlier success in unlocking files infected by DarkSide. Regardless, he said, Bitdefender decided to publish its tool “because most victims who fall for ransomware do not have the right connection with ransomware support groups and won’t know where to ask for help unless they can learn about the existence of tools from media reports or with a simple search.”

Bitdefender has provided free technical support to more than a dozen DarkSide victims, and “we believe many others have successfully used the tool without our intervention,” Botezatu said. Over the years, Bitdefender has helped individuals and businesses avoid paying more than $100 million in ransom, he said.

Bitdefender recognized that DarkSide might correct the flaw, Botezatu said. “We are well aware that attackers are agile and adapt to our decryptors.” But DarkSide might have “spotted the issue” anyway. “We don’t believe in ransomware decryptors made silently available. Attackers will learn about their existence by impersonating home users or companies in need, while the vast majority of victims will have no idea that they can get their data back for free.” . . .

Continue reading. There’s more, and more about what is being done to protect data systems.
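
A footnote for the technically inclined: the flaw the article describes (“reusing the same digital keys to lock and unlock multiple victims”) is easy to demonstrate in miniature. The Python sketch below uses the cryptography package’s Fernet recipe and an invented scenario, not DarkSide’s actual scheme; the point is only that recovering one reused key yields a universal decryptor.

    # Toy illustration of key reuse: if one symmetric key locks every
    # victim, extracting it from a single infection unlocks all of them.
    # Invented scenario -- not DarkSide's actual cryptography.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # the gang reuses this key everywhere
    locker = Fernet(shared_key)

    victims = {
        "victim_a": locker.encrypt(b"payroll database"),
        "victim_b": locker.encrypt(b"factory control configs"),
    }

    # A researcher who recovers the key from any one victim...
    recovered = Fernet(shared_key)

    # ...can quietly decrypt every other victim's files.
    for name, blob in victims.items():
        print(name, "->", recovered.decrypt(blob).decode())

Well-built ransomware avoids this by generating a fresh key per victim and encrypting that key to a master public key, which is exactly why a quiet fix on the defenders’ side was so valuable while it lasted.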

Written by Leisureguy

24 May 2021 at 3:24 pm

When an algorithm taps you on the shoulder

Matt Stroud reports in the Verge:

ROBERT MCDANIEL’S TROUBLES began with a knock on the door. It was a weekday in mid-2013, as he made lunch in the crowded three-bedroom house where he lives with his grandmother and several of his adult siblings.

When he went to answer the door, McDaniel discovered not one person, but a cohort of visitors: two police officers in uniform, a neighbor working with the police, and a muscular guy in shorts and a T-shirt sporting short, graying hair.

Police officers weren’t a new sight for McDaniel. They often drove down his tree-lined street in the Austin neighborhood of Chicago making stops and arrests. Out of the 775 homicides tracked by the Chicago Sun-Times in 2020, 72 happened in Austin. That’s almost 10 percent of the city’s murders, in a region that takes up just 3 percent of its total area. The City of Chicago puts out a “heat map” of where gun crimes occur, with areas of moderate shooting numbers shaded in blue or green. Red splotches represent large numbers — and hottest concentrations — of shootings. On the map, Austin is the color of a fire engine.

Still, this visit from authorities caught McDaniel off guard: at that point in time, he had nothing remotely violent on his criminal record — just arrests for marijuana-related offenses and street gambling. And despite two officers showing up at his front door with the cohort, neither of them, nor anyone else in the cohort, accused McDaniel of breaking the law. They were not there to arrest him. No one was there to investigate a crime. They just wanted to talk.

“I had no idea why these cops were here,” McDaniel says, recounting it to me years later. “I didn’t do shit to bring them here.”

He invited them into his home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a “party to violence,” but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.

McDaniel was both a potential victim and a potential perpetrator, and the visitors on his porch treated him as such. A social worker told him that he could help him if he was interested in finding assistance to secure a job, for example, or mental health services. And police were there, too, with a warning: from here on out, the Chicago Police Department would be watching him. The algorithm indicated Robert McDaniel was more likely than 99.9 percent of Chicago’s population to either be shot or to have a shooting connected to him. That made him dangerous, and top brass at the Chicago PD knew it. So McDaniel had better be on his best behavior.

The idea that a series of calculations could predict that he would soon shoot someone, or be shot, seemed outlandish. At the time, McDaniel didn’t know how to take the news.

But the visit set a series of gears in motion. This Kafka-esque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy. . .

Continue reading. There’s much more, and the story gets even more interesting. The “help” offered causes the problem it was intended to prevent.

Later in the article, Stroud points out one weakness built into the system:

Forecasting isn’t magic; it’s an educated guess about what might happen based on things that have already occurred. The data feeding forecasting software for police are typically built around police stops and arrests. That might sound straightforward and unbiased, but consider that US Department of Justice data show that African Americans are more than twice as likely to be arrested than white people. And if you’re Black, your likelihood of being stopped by a police officer can be nearly four times higher than if you’re white, depending on which city you live in, according to the Stanford Open Policing Project.

Building a forecasting model around data like these can run the risk of stigmatizing entire populations based on discriminatory data; a 2017 study from the Journal of Statistics and Public Policy found that arrests doubled in a quadrant of Los Angeles where its police department tested forecasting software. Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.

Which is all to say that forecasting can put entire populations at risk of over-policing — which has led to countless unnecessary police killings for relatively insignificant infractions. (Think George Floyd. And Michael Brown. Twelve-year-old Tamir Rice. Sandra Bland, Philando Castile, Walter Scott. Thirteen-year-old Adam Toledo, this year, in Chicago. Alton Sterling, Breonna Taylor, Ahmaud Arbery. The list goes on.)

Later still:

IN MCDANIEL’S VIEW, the heat list caused the harm its creators hoped to avoid: it predicted a shooting that wouldn’t have happened if it hadn’t predicted the shooting.

As the heat list continued to operate, researchers tore it to shreds. A 2016 paper published in the Journal of Experimental Criminology came to some troubling conclusions about the list that had, by then, been rebranded as the “Strategic Subject List,” or SSL. Among them: “The individuals on the SSL were considered to be ‘persons of interest’ to the CPD,” meaning that McDaniel’s description of being routinely targeted for surveillance and searches matched what researchers discovered. “Overall,” the report goes on, “there was no practical direction about what to do with individuals on the SSL, little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders.”

The heat list wasn’t particularly predictive, it turned out. It wasn’t high-tech. Cops would just use the list as a way to target people.

There was another problem, too. . .
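
The feedback loop Stroud describes is simple enough to simulate. In the toy model below (Python; every number and name is invented for illustration, not a model of any real department), two districts have identical true offense rates, but patrol hours are allocated in proportion to last year’s arrests, and arrests can only be recorded where patrols are. The initial 60/40 bias in the historical record never washes out; the “data-driven” allocation just keeps reproducing it, plus noise.

    import random

    # Toy predictive-policing feedback loop. Both districts have the SAME
    # true offense rate; only the historical arrest counts differ at the
    # start. Illustrative numbers only.
    random.seed(1)

    TRUE_RATE = 0.05                       # chance a patrol-hour records an offense
    PATROL_HOURS = 1000
    arrests = {"north": 60, "south": 40}   # biased starting record

    for year in range(10):
        total = sum(arrests.values()) or 1
        new = {}
        for district, past in arrests.items():
            hours = int(PATROL_HOURS * past / total)   # allocate by past arrests
            new[district] = sum(random.random() < TRUE_RATE
                                for _ in range(hours))
        arrests = new
        share = arrests["north"] / max(sum(arrests.values()), 1)
        print(f"year {year}: north's share of recorded arrests = {share:.0%}")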

Written by Leisureguy

24 May 2021 at 1:41 pm

Inside an International Tech-Support Scam

Cybercrime operates large-scale (for example, the current takedown of the oil pipeline that serves the East Coast of the US, which is going to hit hard as fuel supplies run low) and small-scale (individuals). Doug Shadel and Neil Wertheimer write for AARP:

A light rain fell and a cold gray mist hung over the street as Jim Browning arrived home from work. A middle-aged Irishman with a strong brogue, Jim is a software engineer at a midsize consulting firm, and on this workday, like most, there were few surprises. He shared a pleasant dinner with his wife, and when the dishes were cleared, he retreated to his office, shut the door, opened his computer and went undercover.

Jim Browning is not his real name. The alias is necessary to protect him and his family from criminals and law enforcement, as what he does in the privacy of his office may be morally upright but technically illegal. It’s a classic gray area in the netherworld of computer hacking, as we will explain. What is important to know is that back in 2014, it was the same annoying robocalls that you and I get most days that set Jim on his journey to become a vigilante.

A relative of Jim’s had told him about warnings popping up on his computer, and Jim, too, was besieged with recorded calls saying his computer was on the verge of meltdown, and that to prevent it he should call immediately. As a software expert, Jim knew there was nothing wrong with his system, but the automated calls from “certified technicians” didn’t stop. One night that spring, his curiosity got the better of him. “It was part nosiness and part intellectual curiosity,” Jim said. “I’m a problem solver and I wanted to get to the bottom of what these people wanted.” So he returned one of the calls.

The person who answered asked if he could access Jim’s computer to diagnose the problem. Jim granted access, but he was ready; he had created a “virtual computer” within his computer, a walled-off digital domain that kept Jim’s personal information and key operations safe and secure. As he played along with the caller, Jim recorded the conversation and activity on his Trojan horse setup to find out what he was up to. It took mere moments to confirm his hunch: It was a scam.

Intrigued by the experience, Jim started spending his evenings getting telephone scammers online, playing the dupe, recording the interactions and then posting videos of the encounters on YouTube. It became, if not a second career, an avocation—after-dinner entertainment exposing “tech support” scammers who try to scare us into paying for unnecessary repairs.

“Listening to them at first, honestly, made me sick, because I realized right away all they wanted to do was steal money,” Jim would later tell me. “It doesn’t matter if you are 95 or 15, they will say whatever they need to say to get as much money out of you as possible.” Jim saw, for example, how the callers used psychology to put targets at ease. “They say reassuring phrases like ‘Take your time, sir,’ or ‘Do you want to get a glass of water?’ And they will also try to endear themselves to older people, saying things like ‘You sound like my grandmother,’ or ‘You don’t sound your age—you sound 20 years younger.’ “

Jim’s YouTube videos garnered mild interest — a couple thousand views at best. For Jim, this didn’t matter. The engineer in him enjoyed solving the maze. At the least, he was wasting the scammers’ time. At best, his videos maybe helped prevent some cases of fraud.

Then one day in 2018, Jim’s evening forays took an unexpected turn. A tech support scammer called from India and went through the normal spiel, but then he asked Jim to do something unusual: to log in to the scammer’s computer using a remote-access software program called TeamViewer. Later on, Jim found out why: The developers of TeamViewer had discovered that criminals in India were abusing their software, so they temporarily banned its use from computers initiating connections from India. But there was a loophole: It didn’t stop scammers from asking U.S. and U.K. consumers like Jim to initiate access into computers in India.

Hence, the scammer’s request. The voice on the phone talked Jim through the connection process, then told him to initiate a “switch sides” function so the caller could “be in charge” and look through Jim’s computer.

Presented with this opportunity, Jim acted quickly. Instead of “switching sides,” he took control of the criminal’s computer and locked the scammer out of his own computer. Lo and behold, mild-mannered programmer Jim Browning had complete access to all of the scammer’s files and software. And he was able to see everything the scammer was frantically trying to do to regain control.

This bit of digital jujitsu changed everything. Over the next few months, Jim figured out ways to infiltrate the computers of almost every scammer who tried to victimize him. “My process worked on almost every remote access program out there, certainly the ones most popular with scammers, like TeamViewer, AnyDesk or FastSupport.” He also figured out how to secretly install software that recorded what the scammers were doing — without them even knowing it.

Suddenly, Jim was sitting on some powerful knowledge. But as Spider-Man was told, with great power comes great responsibility. Jim wondered, What should I do with what I’ve learned?

Scammers mock and make fun of victims

By now Jim had reverse engineered his way into dozens of scammers’ computers, sometimes four or five at a time. He would set his software to record, then leave for work as his computers did their thing. When he came home at night, he reviewed the footage. Often, he couldn’t believe what he saw: call after call of boiler room scammers — mostly in India — contacting older people — mostly in the U.S. and U.K. — and scaring them into spending money to fix a fake computer problem, or sending money based on other deceptions.

Jim posted these new videos, which gave an authentic, bird’s-eye view of how scammers operate. As a result, his YouTube channel jumped to tens of thousands of subscribers.

One night in May 2019, Jim found his way into the computer network of a large New Delhi boiler room. While lurking in their network, he noticed the company had installed closed-circuit television (CCTV) cameras so the bosses could monitor their employees. So Jim hacked his way into that network and was able to turn the cameras this way and that, capturing the facial expressions and attitudes of dozens of scammers in action.

In one remarkable scene, he . . .

Continue reading. There’s a lot more, including some actual examples.

Written by Leisureguy

10 May 2021 at 11:46 am

Snapchat Can Be Sued Over Role In Fatal Car Crash, Court Rules

As a joke, I sometimes would suggest that sharp curves on roads should be posted with a sign giving the highest speed to date someone has traversed the curve. I meant it as a service for those competing for a Darwin award. It was a joke.

But Snapchat seemed to have liked the idea in general. Bobby Allyn reports for NPR:

Three young men got into a car in Walworth County, Wis., in May 2017. They were set on driving at rapid speeds down a long, cornfield-lined road — and sharing their escapade on social media.

As the 17-year-old behind the wheel accelerated to 123 miles per hour, one of the passengers opened Snapchat.

His parents say their son wanted to capture the experience using an app feature — the controversial “speed filter” — that documents real-life speed, hoping for engagement and attention from followers on the messaging app.

It was one of the last things the trio did before the vehicle ran off the road and crashed into a tree, killing all of them.

Was Snapchat partially to blame? The boys’ parents think so. And, in a surprise decision on Tuesday, a federal appeals court ordered that the parents should have the right to sue Snap Inc.

The ruling, from a three-judge panel of the 9th U.S. Circuit Court of Appeals, has set off intense debate among legal watchers about the future of a decades-old law that has shielded tech companies from civil lawsuits.

The boys’ parents sued Snap Inc., the maker of Snapchat, after the tragedy. They alleged that the company “knowingly created a dangerous game” through its filter and bore some responsibility.

The district court responded how courts usually do when a tech platform is sued in a civil lawsuit: by dismissing the case. The judge cited the sweeping immunity that social media companies enjoy under Section 230 of the Communications Decency Act.

The law provides legal immunity to tech companies from libel and other civil suits for what people post on sites, regardless of how harmful it may be.

But the appeals court’s reversal paves a way around the all-powerful law, saying it doesn’t apply because this case is not about what someone posted to Snapchat, but rather the design of the app itself.

Continue reading. There are more details of the decision, and they are interesting — partly because different courts have given different decisions in similar cases. Read the whole thing.

Written by Leisureguy

5 May 2021 at 10:39 am

Code Miko

Written by Leisureguy

26 April 2021 at 5:12 pm

This book looks at ancient Rome in a new light

I am indebted to The Younger Daughter in two ways regarding the audiobook The Fate of Rome.

  1. She recommended the book in the first place, and it’s fascinating; and
  2. She told me how I could listen to it free by getting it as an audiobook with 1 of the 2 free credits I would get for signing up with Audible.com.

So I signed up, and the 2 credits were clearly displayed. I searched for the book title, purchased the audiobook for 1 credit, and I’ve been listening to it. It’s amazing how the change in perspective adds to one’s understanding. One example: while the Romans were building all those excellent roads that lead to Rome, they also were in effect constructing efficient transportation channels that would allow infectious diseases to spread swiftly and widely.

So go ahead and sign up — even if you don’t get any other books, this one is definitely worth the (free) sign-up.

Written by Leisureguy

25 April 2021 at 12:18 pm

AI and big data transform insurance

Mike Allen has an interesting column of miscellaneous news items that catch his eye, and I thought this item was particularly intriguing:

AI and data science are transforming life insurance — one of the oldest industries in existence, Axios Future correspondent Bryan Walsh writes.

  • With a business model built on predicting the future of its customers, the industry is digging into the prognostication powers of machine learning.
  • Impediments include the industry’s ingrained conservatism — and the dangers of AI bias.

How it works: A McKinsey report envisions a near future where AI has shifted the insurance industry from the traditional “detect and repair” to “predict and prevent,” with active insurance policies that respond in real time to changing customer behavior.

  • Tools include drones, data collected by the internet of things — sensors and software behind smart appliances, for instance.
  • Information from such devices will allow insurance companies to “provide personalized wellness products and care suggestions,” according to a separate McKinsey report.

Startups are disrupting the giants:

  • Lemonade — which last year became the first “insurtech” company to go public — overhauls the hassle of buying insurance, with a digital-first interface and machine-learning analytics.
  • Legacy life insurance providers use frozen-in-time actuarial tables to write policies. Insurtech companies like Traffk draw on thousands of data points to provide a more personalized analysis.

What we’re watching: Some experts worry more precise insurance rates set by AI could end up discriminating against certain groups, which in turn could draw the attention of regulators.
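
To make “predict and prevent” concrete, here is a minimal sketch of what a behavior-responsive policy could look like. Everything in it (the premium formula, the field names, the numbers) is invented for illustration, not any insurer’s actual model. It exists mainly to show how directly a proxy variable like daily step count becomes a price, which is where the bias worry enters: steps may track health, but they can also track age, disability, or occupation.

    from dataclasses import dataclass

    # Invented sketch of a policy that reprices monthly from wearable data.
    # Formula and numbers are made up for illustration.
    @dataclass
    class Policyholder:
        base_premium: float      # actuarial baseline, dollars per month
        avg_daily_steps: float   # from a wearable -- a crude health proxy

    def monthly_premium(p):
        # Discount of up to 15% as average steps approach 10,000 a day.
        activity = min(p.avg_daily_steps / 10_000, 1.0)
        return round(p.base_premium * (1 - 0.15 * activity), 2)

    print(monthly_premium(Policyholder(80.0, 9_500)))  # active customer: cheaper
    print(monthly_premium(Policyholder(80.0, 2_000)))  # pays more -- but low
                                                       # steps may reflect age or
                                                       # disability, not risk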

Written by Leisureguy

24 April 2021 at 11:31 am

Consciousness in the electric brain: Currents? or Field?

I came across “Brain wifi,” with the subtitle:

Instead of a code encrypted in the wiring of our neurons, could consciousness reside in the brain’s electromagnetic field?

The article, by Johnjoe McFadden, professor of molecular genetics at the University of Surrey, begins:

Some 2,700 years ago in the ancient city of Sam’al, in what is now modern Turkey, an elderly servant of the king sits in a corner of his house and contemplates the nature of his soul. His name is Katumuwa. He stares at a basalt stele made for him, featuring his own graven portrait together with an inscription in ancient Aramaic. It instructs his family, when he dies, to celebrate ‘a feast at this chamber: a bull for Hadad harpatalli and a ram for Nik-arawas of the hunters and a ram for Shamash, and a ram for Hadad of the vineyards, and a ram for Kubaba, and a ram for my soul that is in this stele.’ Katumuwa believed that he had built a durable stone receptacle for his soul after death. This stele might be one of the earliest written records of dualism: the belief that our conscious mind is located in an immaterial soul or spirit, distinct from the matter of the body.

The Katamuwa Stele cast, digitally rendered by Travis Saul. Courtesy of the Oriental Institute of the University of Chicago.

More than 2 millennia later, I was also contemplating the nature of the soul, as my son lay propped up on a hospital gurney. He was undergoing an electroencephalogram (EEG), a test that detects electrical activity in the brain, for a condition that fortunately turned out to be benign. As I watched the irregular wavy lines march across the screen, with spikes provoked by his perceptions of events such as the banging of a door, I wondered at the nature of the consciousness that generated those signals.

Just how do the atoms and molecules that make up the neurons in our brain – not so different to the bits of matter in Katumuwa’s inert stele or the steel barriers on my son’s hospital bed – manage to generate human awareness and the power of thought? In answering that longstanding question, most neurobiologists today would point to the information-processing performed by brain neurons. For both Katumuwa and my son, this would begin as soon as light and sound reached their eyes and ears, stimulating their neurons to fire in response to different aspects of their environment. For Katumuwa, perhaps, this might have been the pinecone or comb that his likeness was holding on the stele; for my son, the beeps from the machine or the movement of the clock on the wall.

Each ‘firing’ event involves the movement of electrically charged atoms called ions in and out of the neurons. That movement triggers a kind of chain reaction that travels from one nerve cell to another via logical rules, roughly analogous to the AND, OR and NOT Boolean operations performed by today’s computer gates, in order to generate outputs such as speech. So, within milliseconds of him glancing at his stele, the firing rate of millions of neurons in Katumuwa’s brain correlated with thousands of visual features of the stele and its context in the room. In this sense of correlating with, those brain neurons would supposedly know at least some aspects of Katumuwa’s stele.

Yet information-processing clearly isn’t sufficient for conscious knowing. Computers process lots of information yet have not exhibited the slightest spark of consciousness. Several decades ago, in an essay exploring the phenomenology of consciousness, the philosopher Thomas Nagel asked us to imagine what it’s like to be a bat. This feature of being-like-something, of having a perspective on the world, captures something about what it means to be a truly conscious ‘knower’. In that hospital room watching my son’s EEG, I wondered what it was like to be one of his neurons, processing the information registering the slamming of a door. As far as we can tell, an individual neuron knows just one thing – its firing rate. It fires or doesn’t fire based on its inputs, so the information it carries is pretty much equivalent to the zero or one of binary computer language. It thereby encodes just a single bit of information. The value of that bit, whether a zero or a one, might correlate with the slamming of a door, but it says nothing about the door’s shape, its colour, its use as a portal between rooms or the noise of its slamming – all features that I’m sure were part of my son’s conscious experience. I concluded that being a single neuron in my son’s brain would not feel like anything.

Of course, you could argue, as neurobiologists usually do, that . . .

Continue reading. There’s much more.
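
An aside on the excerpt’s Boolean analogy: the sense in which a neuron computes AND, OR, or NOT is captured by the classic McCulloch-Pitts threshold unit, in which a neuron “fires” (outputs 1) exactly when its weighted inputs reach a threshold. A textbook sketch in Python, not anything from the article:

    # McCulloch-Pitts threshold neuron: fires (1) when the weighted sum of
    # inputs reaches the threshold -- the rough Boolean-gate analogy the
    # excerpt mentions, and a reminder that each unit carries a single bit.
    def neuron(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a:    neuron([a],    [-1],   threshold=0)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), " NOT 1:", NOT(1))

And that single output bit is McFadden’s point: whatever it is like to be one such unit, it is not like hearing a door slam.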

Then in the New Yorker I was reading “Do Brain Implants Change Your Identity?” by Christine Kenneally. It’s an interesting article, but what caught my eye was a description of the conscious experience of an epileptic seizure, which is (as the article explains) an electric storm in the brain, which of course would disrupt the electromagnetic field. If that indeed is where consciousness resides, that would explain this woman’s description:

. . . The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance. . .

. . . I asked Leggett to describe what it was like to have a seizure. She didn’t know. When one took hold, she was ripped out of her consciousness; she wasn’t there. Afterward, there was a terrible sense of having been absent. She would feel mortified in front of anyone who had witnessed the seizure and alarmed as she took stock of the injuries that she often suffered. Even worse, she said, was that epilepsy stole her memories. Every time she had a seizure and then returned, she seemed to have left some of her memories behind her. . .

Written by Leisureguy

19 April 2021 at 5:42 pm

Brain control of devices

This video bears an interesting relation to the previous post.

Written by Leisureguy

16 April 2021 at 1:17 pm

Ingenious and stimulating science-fiction story

The story, “Lena,” is by qntm. It begins:

This article is about the standard test brain image. For the original human, see Miguel Acevedo.

MMAcevedo (Mnemonic Map/Acevedo), also known as Miguel, is the earliest executable image of a human brain. It is a snapshot of the living brain of neurology graduate Miguel Álvarez Acevedo (2010–2073), taken by researchers at the Uplift Laboratory at the University of New Mexico on August 1, 2031. Though it was not the first successful snapshot taken of the living state of a human brain, it was the first to be captured with sufficient fidelity that it could be run in simulation on computer hardware without succumbing to cascading errors and rapidly crashing. The original MMAcevedo file was 974.3PiB in size and was encoded in the then-cutting-edge, high-resolution MYBB format. More modern brain compression techniques, many of them developed with direct reference to the MMAcevedo image, have compressed the image to 6.75TiB losslessly. In modern brain emulation circles, streamlined, lossily-compressed versions of MMAcevedo run to less than a tebibyte. These versions typically omit large amounts of state data which are more easily supplied by the virtualisation environment, and most if not all of Acevedo’s memories.

The successful creation of MMAcevedo was hailed as a breakthrough achievement in neuroscience, with the Uplift researchers receiving numerous accolades and Acevedo himself briefly becoming an acclaimed celebrity. Acevedo and MMAcevedo were jointly recognised as Time’s “Persons of the Year” at the end of 2031. The breakthrough was also met with severe opposition from human rights groups.

Between 2031 and 2049, MMAcevedo was duplicated more than 80 times, so that it could be distributed to other research organisations. Each duplicate was made with the express permission of Acevedo himself or, from 2043 onwards, the permission of a legal organisation he founded to manage the rights to his image. Usage of MMAcevedo diminished in the mid-2040s as more standard brain images were produced, these from other subjects who were more lenient with their distribution rights and/or who had been scanned involuntarily. In 2049 it became known that MMAcevedo was being widely shared and experimented upon without Acevedo’s permission. Acevedo’s attempts to curtail this proliferation had the opposite of the intended effect. A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used, with the result that MMAcevedo is now by far the most widely distributed, frequently copied, and closely analysed human brain image.

Acevedo died from coronary heart failure in 2073 at the age of 62. . .

Read the whole thing at the link.

Written by Leisureguy

16 April 2021 at 12:54 pm

Can a prime number be illegal? Yes.

See this Wikipedia article, which begins:

An illegal prime is a prime number that represents information whose possession or distribution is forbidden in some legal jurisdictions. One of the first illegal primes was found in 2001. When interpreted in a particular way, it describes a computer program that bypasses the digital rights management scheme used on DVDs. Distribution of such a program in the United States is illegal under the Digital Millennium Copyright Act.[1] An illegal prime is a kind of illegal number.
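
The construction is simpler than it sounds: treat the forbidden program’s bytes as one huge integer, shift it left to free up some low-order digits, and search that small window for a prime; dropping the low bytes recovers the payload exactly. A sketch in Python using sympy, with a deliberately benign payload:

    # Turn arbitrary bytes into a prime that still carries them: the classic
    # construction behind "illegal primes." Sketch only; the payload here is
    # harmless text rather than anything legally fraught.
    import sympy

    def data_to_prime(data):
        n = int.from_bytes(data, "big") << 16   # two free low bytes to tweak
        for low in range(3, 1 << 16, 2):        # odd candidates in the window
            if sympy.isprime(n + low):
                return n + low
        raise ValueError("no prime in window (vanishingly unlikely)")

    def prime_to_data(p, length):
        return (p >> 16).to_bytes(length, "big")  # drop the two low bytes

    msg = b"hello, world"
    p = data_to_prime(msg)
    assert sympy.isprime(p) and prime_to_data(p, len(msg)) == msg
    print(p)

By the prime number theorem, a window of 65,536 candidates around a number this size is all but guaranteed to contain a prime, so the search essentially never fails.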

Written by Leisureguy

16 April 2021 at 9:29 am

Locusts Swarmed East Africa, and This Tech Helped Squash Them

In the NY Times Rachel Nuwer describes a very interesting approach toward controlling a plague of locusts in Africa:

. . . In 2020, billions of the insects descended on East African countries that had not seen locusts in decades, fueled by unusual weather connected to climate change. Kenya had last dealt with a plague of this scale more than 70 years ago; Ethiopia and Somalia, more than 30 years ago. Nineteen million farmers and herders across these three countries, which bore the brunt of the damage, saw their livelihoods severely affected.

. . . But as bad as 2020’s swarms were, they and their offspring could have caused much worse damage. While the weather has helped slow the insects’ reproduction, the success, Mr. Cressman said, has primarily resulted from a technology-driven anti-locust operation that hastily formed in the chaotic months following the insects’ arrival to East Africa. This groundbreaking approach proved so effective at clamping down on the winged invaders in some places that some experts say it could transform management of other natural disasters around the world.

“We’d better not let this crisis go to waste,” said David Hughes, an entomologist at Penn State University. “We should use this lesson as a way not just to be adapted to the next locust crisis, but to climate change, generally.”

Desert locusts are the Dr. Jekylls and Mr. Hydes of the insect world. Normally, the grasshopper-like plant eaters spend their time living solitarily across the deserts of North Africa, Southwest Asia and the Middle East. But when rains arrive, they change from a muted brown into a fiery yellow and become gregarious, forming groups of more than 15 million insects per square mile. Such a swarm can consume the equivalent amount of food in a single day as more than 13,000 people.

The locust plague that hit East Africa in 2020 was two years in the making. In 2018, two major cyclones dumped rain in a remote area of Saudi Arabia, leading to an 8,000-fold increase in desert locust numbers. By mid-2019, winds had pushed the insects into the Horn of Africa, where a wet autumn further boosted their population. An unusual cyclone in Somalia in early December finally tipped the situation into a true emergency.

“Ten years ago, there would have been between zero and one cyclones coming off the Indian Ocean,” Dr. Hughes said. “Now there’s eight to 12 per year — a consequence of climate change.”

Countries like Sudan and Eritrea that regularly deal with small, seasonal swarms have teams of locust trackers who are trained to find the insects and recognize which life cycle stage they are in. They use a tablet-based program to transmit locust data by satellite to national and international authorities so experts can design appropriate control strategies.

But people outside of those frontline locust nations who may want to start using this system today would encounter a typical technology problem: The version of the tablets that the locust-tracking program was written for is no longer manufactured, and newer tablets are not compatible with the software. And even if the hardware were available, in 2020, East Africa lacked experts who could identify locusts.

“We’d never had a dress rehearsal for the real thing,” said Alphonse Owuor, a United Nations Food and Agriculture Organization specialist in Somalia. “We had people who were very familiar with locusts in theory, but who didn’t have the experience or equipment required to carry out this massive operation.”

With swarms suddenly covering an area of Kenya larger than New Jersey, officials were tasked with creating a locust-combating operation virtually from scratch. Collecting dependable, detailed data about locusts was the first crucial step.

“Saying ‘Oh, there’s locusts in northern Kenya’ doesn’t help at all,” Mr. Cressman said. “We need longitude and latitude coordinates in real time.”

Rather than try to rewrite the locust-tracking software for newer tablets, Mr. Cressman thought it would be more efficient to create a simple smartphone app that would allow anyone to collect data like an expert. He reached out to Dr. Hughes, who had already created a similar mobile tool with the Food and Agriculture Organization to track a devastating crop pest, the fall armyworm, through PlantVillage, which he founded.

PlantVillage’s app uses artificial intelligence and machine learning to help farmers in 60 countries, primarily in Africa, diagnose problems in their fields. Borrowing from this blueprint, Dr. Hughes and his colleagues completed the new app, eLocust3m, in just a month.

Unlike the previous tablet-based program, anyone with a smartphone can use eLocust3m. The app presents photos of locusts at different stages of their life cycles, which helps users diagnose what they see in the field. GPS coordinates are automatically recorded and algorithms double check photos submitted with each entry. Garmin International also helped with another program that worked on satellite-transmitting devices.

“The app is really easy to use,” said Ms. Jeptoo of PlantVillage. Last year, she recruited and trained locust trackers in four hard-hit Kenyan regions. “We had scouts who were 40- to 50-year-old elders, and even they were able to use it.”

In the last year, more than 240,000 locust records have poured in from East Africa, collected by PlantVillage scouts, government-trained personnel and citizens. But that was only the first step. Countries next needed to act on the data in a systematic way to quash locusts. In the first few months, however, officials were strategizing “on the back of envelopes,” Mr. Cressman said, and the entire region had just four planes for spraying pesticides.

When Batian Craig, director of 51 Degrees, a security and logistics company focused on protecting wildlife, saw Mr. Cressman quoted in a news story about locusts, he realized he could help.

Mr. Craig and his colleagues, who are headquartered at Lewa Wildlife Conservancy in Central Kenya, conduct regular anti-poaching aerial surveys that could be repurposed to seek out and destroy locust swarms. They also closely communicate with rural communities affected by the insects.

Additionally, 51 Degrees uses a free program called EarthRanger. Created by Vulcan, a Seattle-based philanthropic company originally co-founded by Paul Allen of Microsoft and his sister Jody Allen, EarthRanger compiles and analyzes geographic data ranging from rhino and ranger locations to sensor data and remote imagery.

Engineers at Vulcan agreed to customize a version of EarthRanger for locusts, integrating data from the eLocust programs and the computer loggers on aerial pesticide sprayers.

Lewa Conservancy quickly became the headquarters for aerial survey and control across the region. By June 2020, these efforts were paying off. Locusts were prevented from spilling into Africa’s Sahel region and west to Senegal.

“If we didn’t stop them, . . .

Continue reading. There’s more, including a good collage of large photos.
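To make the data-collection step concrete: a crowdsourced sighting of the kind eLocust3m gathers boils down to a small structured record. Here is a hypothetical Python sketch — the field names, life-cycle stages, and validation rules are my own illustrative guesses, not the app’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical locust-sighting record -- illustrative only, not the
# actual eLocust3m data format.
LIFE_STAGES = {"egg", "hopper", "band", "immature adult", "adult", "swarm"}

@dataclass
class LocustReport:
    latitude: float     # decimal degrees, south negative
    longitude: float    # decimal degrees, west negative
    life_stage: str     # what the scout matched against the app's photos
    photo_id: str       # reference to the uploaded photo
    reported_at: datetime

    def validate(self) -> None:
        # The app records GPS automatically; server-side checks still help.
        if not -90 <= self.latitude <= 90:
            raise ValueError("latitude out of range")
        if not -180 <= self.longitude <= 180:
            raise ValueError("longitude out of range")
        if self.life_stage not in LIFE_STAGES:
            raise ValueError(f"unknown life stage: {self.life_stage!r}")

report = LocustReport(
    latitude=2.33, longitude=37.99,  # northern Kenya
    life_stage="hopper",
    photo_id="img_0421",
    reported_at=datetime.now(timezone.utc),
)
report.validate()  # raises if the record is malformed
```

The design point is that the scout supplies only what a phone cannot infer — the photo match for life stage — while coordinates and timestamps are captured automatically, which is presumably what made first-time users in their 40s and 50s viable.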

Written by Leisureguy

13 April 2021 at 10:54 am

“After Working at Google, I’ll Never Let Myself Love a Job Again”

leave a comment »

Emi Nietfeld, a software engineer, learns that corporations, as persons, are sociopaths. She writes in the NY Times:

I used to be a Google engineer. That often feels like the defining fact about my life. When I joined the company after college in 2015, it was at the start of a multiyear reign atop Forbes’s list of best workplaces.

I bought into the Google dream completely. In high school, I spent time homeless and in foster care, and was often ostracized for being nerdy. I longed for the prestige of a blue-chip job, the security it would bring and a collegial environment where I would work alongside people as driven as I was.

What I found was a surrogate family. During the week, I ate all my meals at the office. I went to the Google doctor and the Google gym. My colleagues and I piled into Airbnbs on business trips, played volleyball in Maui after a big product launch and even spent weekends together, once paying $170 and driving hours to run an obstacle course in the freezing rain.

My manager felt like the father I wished I’d had. He believed in my potential and cared about my feelings. All I wanted was to keep getting promoted so that as his star rose, we could keep working together. This gave purpose to every task, no matter how grueling or tedious.

The few people who’d worked at other companies reminded us that there was nowhere better. I believed them, even when my technical lead — not my manager, but the man in charge of my day-to-day work — addressed me as “beautiful” and “gorgeous,” even after I asked him to stop. (Finally, I agreed that he could call me “my queen.”) He used many of our one-on-one meetings to ask me to set him up with friends, then said he wanted “A blonde. A tall blonde.” Someone who looked like me.

Saying anything about his behavior meant challenging the story we told ourselves about Google being so special. The company anticipated our every need — nap pods, massage chairs, Q-Tips in the bathroom, a shuttle system to compensate for the Bay Area’s dysfunctional public transportation — until the outside world began to seem hostile. Google was the Garden of Eden; I lived in fear of being cast out.

When I talked to outsiders about the harassment, they couldn’t understand: I had one of the sexiest jobs in the world. How bad could it be? I asked myself this, too. I worried that I was taking things personally and that if anyone knew I was upset, they’d think I wasn’t tough enough to hack it in our intense environment.

So I didn’t tell my manager about my tech lead’s behavior for more than a year. Playing along felt like the price of inclusion. I spoke up only when it looked like he would become an official manager — my manager — replacing the one I adored and wielding even more power over me. At least four other women said that he’d made them uncomfortable, in addition to two senior engineers who already made it clear that they wouldn’t work with him.

As soon as my complaint with H.R. was filed, Google went from being a great workplace to being any other company: It would protect itself first. I’d structured my life around my job — exactly what they wanted me to do — but that only made the fallout worse when I learned that the workplace that I cherished considered me just an employee, one of many and disposable.

The process stretched out for nearly three months. In the meantime I had to have one-on-one meetings with my harasser and sit next to him. Every time I asked for an update on the timeline and expressed my discomfort at having to continue to work in proximity to my harasser, the investigators said that I could seek counseling, work from home or go on leave. I later learned that Google had similar responses to other employees who reported racism or sexism. Claire Stapleton, one of the 2018 walkout organizers, was encouraged to take leave, and Timnit Gebru, a lead researcher on Google’s Ethical AI team, was encouraged to seek mental health care before being forced out.

I resisted. How would being alone by myself all day, apart from my colleagues, friends and support system, possibly help? And I feared that if I stepped away, the company wouldn’t continue the investigation.

Eventually, the investigators corroborated my claims and found my tech lead violated the Code of Conduct and the policy against harassment. My harasser still sat next to me. My manager told me H.R. wouldn’t even make him change his desk, let alone work from home or go on leave. He also told me that my harasser received a consequence that was severe and that I would feel better if I could know what it was, but it sure seemed like nothing happened.

The aftermath of speaking up had broken me down. It dredged up the betrayals of my past that I’d gone into tech trying to overcome. I’d made myself vulnerable to my manager and the investigators but felt I got nothing solid in return. I was constantly on edge from seeing my harasser in the hallways and at the cafes. When people came up behind my desk, I startled more and more easily, my scream echoing across the open-floor-plan office. I worried I’d get a poor performance review, ruining my upward trajectory and setting my career back even further.

I went weeks without sleeping through the night.

I decided to take three months of paid leave. I feared that going on leave would set me back for promotion in a place where almost everyone’s progress is public and seen as a measure of an engineer’s worth and expertise. Like most of my colleagues, I’d built my life around the company. It could so easily be taken away. People on leave weren’t supposed to enter the office — where I went to the gym and had my entire social life.

Fortunately, I still had a job when I got back. If anything, I was more eager than ever to excel, to make up for lost time. I was able to earn a very high performance rating — my second in a row. But it seemed clear I would not be a candidate for promotion. After my leave, the manager I loved started treating me as fragile. He tried to analyze me, suggesting that I drank too much caffeine, didn’t sleep enough or needed more cardiovascular exercise. Speaking out irreparably damaged one of my most treasured relationships. Six months after my return, when I broached the subject of promotion, he told me, “People in wood houses shouldn’t light matches.”

When I didn’t get a promotion, some of my stock grants ran out and so I effectively took a big pay cut. Nevertheless, I wanted to stay at Google. I still believed, despite everything, that Google was the best company in the world. Now I see that my judgment was clouded, but after years of idolizing my workplace, I couldn’t imagine life beyond its walls.

So I interviewed with and got offers from two other top tech companies, hoping that Google would match. In response, . . .

Continue reading. There’s much more.

Written by Leisureguy

11 April 2021 at 12:43 pm

The History of the Pivot Table, The Spreadsheet’s Most Powerful Tool

leave a comment »

Dan Knopf writes in Quartz:

Pivot tables are the quickest and most powerful way for the average person to analyze large datasets. No coding skills or mathematical brilliance are necessary—just the ability to point and click your mouse.

But don’t take our word for it. Pivot tables had a superfan in none other than Apple founder Steve Jobs, who immediately saw their genius.

In 1985, Jobs was forced out of his role as chairman of the board at Apple after failing to beat IBM in the business computer market. Fortunately, he was a stubborn man. Jobs immediately started the company NeXT, with the idea of taking on IBM once again.

As he developed the NeXT computer, which would launch in 1988, Jobs was looking for killer software programs to create demand for the product. From his experience at Apple, he knew that a good spreadsheet program could drive sales. Jobs credited VisiCalc, the first widely used spreadsheet software, for the huge success of the Apple II computer, released in 1979.

In his search for that need-to-have product, Jobs met with software company Lotus. The organization had already developed Lotus 1-2-3, a popular spreadsheet program that ran on IBM computers. It was in these meetings that Jobs would first stumble upon the “pivot table.”

Software developer Pito Salas was at the time working in research and development for Lotus, looking into how people typically utilize spreadsheets. Salas saw that users would often use spreadsheets to try to calculate summary statistics by categories (often referred to as crosstabs). For example, a company selling bicycles might want to examine their data to find unit sales by month or revenue by country. The way people did that at the time was cumbersome and error-prone because it involved writing complicated formulas.

Salas decided the world needed software that would make those calculations simple. Rather than enter formulas, users would be able to point and click to get those summary statistics. The Lotus team called this tool “flexible views,” but today similar tools are called “pivot tables” in both Microsoft Excel and Google Sheets.

The Lotus team showed Jobs an early prototype. “Steve Jobs thought it was the coolest thing ever,” Salas, now a professor at Brandeis University, tells Quartz. Jobs then convinced Lotus to develop the pivot table software exclusively for the NeXT computer. The software came out as Lotus Improv, and though the NeXT computer was a commercial failure, Lotus Improv would be hugely influential. The “flexible views” aspect of Improv would be built into both Lotus 1-2-3 and Excel (the latter was the first to actually use the term “pivot table”).

Bill Jelen, Excel evangelist and co-author of Pivot Table Data Crunching, credits Salas as the “father of pivot tables.” Salas says his contribution to pivot tables is one of his life’s most gratifying accomplishments, though he believes he was just building on the foundations of many others.

Today, pivot tables are among the most important and commonly used tools in the spreadsheet wizard’s toolbox. “A pivot table lets you create a one-page summary report from hundreds of thousands of rows of data, often in four, five, or six clicks,” says Jelen. “It is the fastest way to get answers from large datasets.”

It’s hard to know exactly how . . .

Continue reading.
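For those who would rather read the operation than click through it: the bicycle-company example from the excerpt maps directly onto the pivot_table method in pandas, a descendant of the same idea. The data below is invented for illustration:

```python
import pandas as pd

# Made-up bicycle-sales data, echoing the example above: a pivot table
# is a group-and-aggregate, here summing revenue by country and month.
sales = pd.DataFrame({
    "country": ["US", "US", "DE", "DE", "US", "DE"],
    "month":   ["Jan", "Feb", "Jan", "Feb", "Jan", "Jan"],
    "units":   [3, 5, 2, 4, 1, 6],
    "revenue": [1500, 2500, 900, 1800, 500, 2700],
})

summary = sales.pivot_table(
    index="country",   # one row per country
    columns="month",   # one column per month
    values="revenue",
    aggfunc="sum",     # the summary statistic
)
print(summary)
```

One group-and-aggregate call replaces the “complicated formulas” Salas watched users wrestle with — exactly the simplification Lotus Improv pioneered.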

Written by Leisureguy

29 March 2021 at 11:44 am

What is this? The case for continually questioning our online experience

leave a comment »

Dan Nixon has an interesting piece in Perspectiva. He notes:

What is this? is part of the Digital Ego project.

Read & download the essay as a PDF here: What is this? The case for continually questioning our online experience

At the link in the first line you will also find an audio file of his reading the piece, along with the piece itself, which begins:

It is all too easy to take what we see for granted. Even the most basic act of perception can encompass so much more than at first seems to be the case. ‘Seeing is more than a physiological phenomenon’, the American photographer Dorothea Lange once remarked. ‘We see not only with our eyes but with all that we are and all that our culture is’. We might even say that our human being is, to a large extent, a matter of human perceiving; as the philosopher Pierre Teilhard de Chardin put it, the totality of life lies in the verb “seeing.”

This ‘frame’, life as seeing, is well suited for efforts to understand the various ways in which our digital technologies are shaping our lives at the deepest levels. Checking in with so many feeds and updates throughout the day, our everyday experience has become increasingly fragmented; in what can feel like a digital ‘hall of mirrors’, it is ever harder to see things in an integrated way. Meanwhile, our social fabric is increasingly tugged by divisive forces that split us apart and encourage us to see past each other entirely.

Underlying both sets of issues lies the particular logic of a digital media ecosystem through which everything comes to be viewed, at some level, in terms of data. As the philosopher Luciano Floridi notes, digital technologies ‘make us think about the world informationally and make the world we experience informational’.

It is within this context that Perspectiva has launched the Digital Ego project, with the aim of exploring what it means to grow and flourish as humans against this digital background to our lives. As Tom Chatfield, my co-lead for the project sets out here, this inquiry includes starting a dialogue around the ‘virtues for the virtual’ that we collectively need to cultivate. Capacities such as critical thinking, kindness, and humility seem especially important here, as does our ability to see things from multiple perspectives, to adopt a more integrated worldview, and to be okay with not knowing.

Yet underpinning all of the above, and amidst the swirl of urgent issues we find ourselves caught up in at the current time – the pandemic, taut political climates, our precarious environmental position, to name but a few – I argue here that what we need most of all is to cultivate a spirit of questioning towards our actual, lived experience in the digital sphere of our lives.

Not so much cerebral efforts to pin things down in order to get fixed answers, but an ongoing, open-ended questioning towards what’s happening in our actual experience. It’s the practice of coming back to the simple question, ‘what is this?’ over and over again, in relation to all that we encounter with and through our digital technologies.

Using this simple method, what follows is an invitation to question our actual experience at all levels: from our most mundane day-to-day experiences using our technologies, through to the less visible forces and contexts shaping those experiences. We will consider: what is the quality of the exchanges we are having online? How does a particular ‘currency of ideas’ shape how we see ourselves and others on social media platforms, and what might we experiment with here? How do our egos come to take centre-stage in our online spaces? What options do we have, amidst the algorithms and incentives underpinning our media ecosystem, for getting a more expansive view of what’s really going on?

We will end with some of the deeper questions that emerge from this inquiry, reflecting on what is problematic about the tech mindset of ‘solutionism’ and why an open-ended spirit of questioning can serve as the ideal response. Why should we be vigilant about making room for the inherent mysteriousness of our everyday experience? Why, finally, is it crucial that we consider what silence and stillness and ‘intermundane space’ look like in a digitally-mediated world?

Before exploring these different levels of questioning, let me briefly outline the general approach a little further.

Questioning as a spiritual and philosophical practice for the digital age

For over twenty years, the Zen meditation teachers Martine and Stephen Batchelor have taught the practice of continually coming back to the simple question, ‘what is this?’ in relation to one’s actual, lived experience. Through questioning, they suggest, we can learn to undercut our habitual tendency to fixate on things – to identify with some sense that ‘I am like this’ or ‘This is like that’.

This chimes with the value placed on curiosity in the West, although the form of questioning undertaken in the Zen tradition is quite distinctive. Recounting his years spent living in a monastery in Korea, Stephen Batchelor describes how ‘we would all sit in a darkened room and ask ourselves ‘What is this?’. And rest with that question. Nothing else’. In What is this? Ancient questions for modern minds, written with Stephen, Martine elaborates:

The practice is about questioning; it’s not a practice of answering… [it’s about] trying to cultivate a sensation of questioning in the whole body and mind. The anchor is the question, and we come back to the question again and again.

The practice that the Batchelors describe is a spiritual one, but a similar spirit of questioning runs through the philosophical tradition of phenomenology. Beginning with the work of Edmund Husserl around the turn of the 20th Century, phenomenologists emphasise the need to . . .

Continue reading. There’s much more.

Written by Leisureguy

24 March 2021 at 2:14 pm
