Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Locusts Swarmed East Africa, and This Tech Helped Squash Them

leave a comment »

In the NY Times, Rachel Nuwer describes a very interesting approach to controlling a plague of locusts in East Africa:

. . . In 2020, billions of the insects descended on East African countries that had not seen locusts in decades, fueled by unusual weather connected to climate change. Kenya had last dealt with a plague of this scale more than 70 years ago; Ethiopia and Somalia, more than 30 years ago. Nineteen million farmers and herders across these three countries, which bore the brunt of the damage, saw their livelihoods severely affected.

. . . But as bad as 2020’s swarms were, they and their offspring could have caused much worse damage. While the weather has helped slow the insects’ reproduction, the success, Mr. Cressman said, has primarily resulted from a technology-driven anti-locust operation that hastily formed in the chaotic months following the insects’ arrival to East Africa. This groundbreaking approach proved so effective at clamping down on the winged invaders in some places that some experts say it could transform management of other natural disasters around the world.

“We’d better not let this crisis go to waste,” said David Hughes, an entomologist at Penn State University. “We should use this lesson as a way not just to be adapted to the next locust crisis, but to climate change, generally.”

Desert locusts are the Dr. Jekylls and Mr. Hydes of the insect world. Normally, the grasshopper-like plant eaters spend their time living solitarily across the deserts of North Africa, Southwest Asia and the Middle East. But when rains arrive, they change from a muted brown into a fiery yellow and become gregarious, forming groups of more than 15 million insects per square mile. Such a swarm can consume the equivalent amount of food in a single day as more than 13,000 people.

The locust plague that hit East Africa in 2020 was two years in the making. In 2018, two major cyclones dumped rain in a remote area of Saudi Arabia, leading to an 8,000-fold increase in desert locust numbers. By mid-2019, winds had pushed the insects into the Horn of Africa, where a wet autumn further boosted their population. An unusual cyclone in Somalia in early December finally tipped the situation into a true emergency.

“Ten years ago, there would have been between zero and one cyclones coming off the Indian Ocean,” Dr. Hughes said. “Now there’s eight to 12 per year — a consequence of climate change.”

Countries like Sudan and Eritrea that regularly deal with small, seasonal swarms have teams of locust trackers who are trained to find the insects and recognize which life cycle stage they are in. They use a tablet-based program to transmit locust data by satellite to national and international authorities so experts can design appropriate control strategies.

But people outside of those frontline locust nations who may want to start using this system today would encounter a typical technology problem: The version of the tablets that the locust-tracking program was written for is no longer manufactured, and newer tablets are not compatible with the software. And even if the hardware were available, in 2020, East Africa lacked experts who could identify locusts.

“We’d never had a dress rehearsal for the real thing,” said Alphonse Owuor, a United Nations Food and Agriculture Organization specialist in Somalia. “We had people who were very familiar with locusts in theory, but who didn’t have the experience or equipment required to carry out this massive operation.”

With swarms suddenly covering an area of Kenya larger than New Jersey, officials were tasked with creating a locust-combating operation virtually from scratch. Collecting dependable, detailed data about locusts was the first crucial step.

“Saying ‘Oh, there’s locusts in northern Kenya’ doesn’t help at all,” Mr. Cressman said. “We need longitude and latitude coordinates in real time.”

Rather than try to rewrite the locust-tracking software for newer tablets, Mr. Cressman thought it would be more efficient to create a simple smartphone app that would allow anyone to collect data like an expert. He reached out to Dr. Hughes, who had already created a similar mobile tool with the Food and Agriculture Organization to track a devastating crop pest, the fall armyworm, through PlantVillage, which he founded.

PlantVillage’s app uses artificial intelligence and machine learning to help farmers in 60 countries, primarily in Africa, diagnose problems in their fields. Borrowing from this blueprint, Dr. Hughes and his colleagues completed the new app, eLocust3m, in just a month.

Unlike the previous tablet-based program, anyone with a smartphone can use eLocust3m. The app presents photos of locusts at different stages of their life cycles, which helps users diagnose what they see in the field. GPS coordinates are automatically recorded and algorithms double check photos submitted with each entry. Garmin International also helped with another program that worked on satellite-transmitting devices.
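As an illustration of the kind of record such a survey app collects, here is a minimal sketch in Python. The field names and life-cycle categories are invented for the example; they are not eLocust3m's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical life-cycle stages a scout might choose from after comparing
# what they see in the field against the app's reference photos.
STAGES = ("hopper", "band", "immature_adult", "mature_adult", "swarm")

@dataclass
class LocustReport:
    """One field observation, roughly the kind of record a survey app
    like eLocust3m might transmit (field names are illustrative only)."""
    latitude: float    # recorded automatically from the phone's GPS
    longitude: float
    stage: str         # chosen from STAGES via the in-app photo guide
    photo_id: str      # reference to the uploaded photo for later checks
    observed_at: str   # UTC timestamp

    def to_json(self) -> str:
        assert self.stage in STAGES, "stage must match one of the guide photos"
        return json.dumps(asdict(self))

# Example: a scout in northern Kenya logs a hopper band.
report = LocustReport(
    latitude=2.3333, longitude=37.9062, stage="band",
    photo_id="IMG_0042",
    observed_at=datetime.now(timezone.utc).isoformat(),
)
print(report.to_json())
```

The point is simply that each entry pairs a machine-recorded location with a human judgment that the photo guide makes reliable enough for forecasters to act on.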

“The app is really easy to use,” said Ms. Jeptoo of PlantVillage. Last year, she recruited and trained locust trackers in four hard-hit Kenyan regions. “We had scouts who were 40- to 50-year-old elders, and even they were able to use it.”

In the last year, more than 240,000 locust records have poured in from East Africa, collected by PlantVillage scouts, government-trained personnel and citizens. But that was only the first step. Countries next needed to act on the data in a systematic way to quash locusts. In the first few months, however, officials were strategizing “on the back of envelopes,” Mr. Cressman said, and the entire region had just four planes for spraying pesticides.

When Batian Craig, director of 51 Degrees, a security and logistics company focused on protecting wildlife, saw Mr. Cressman quoted in a news story about locusts, he realized he could help.

Mr. Craig and his colleagues, who are headquartered at Lewa Wildlife Conservancy in Central Kenya, conduct regular anti-poaching aerial surveys that could be repurposed to seek out and destroy locust swarms. They also closely communicate with rural communities affected by the insects.

Additionally, 51 Degrees uses a free program called EarthRanger. Created by Vulcan, a Seattle-based philanthropic company originally co-founded by Paul Allen of Microsoft and his sister Jody Allen, EarthRanger compiles and analyzes geographic data ranging from rhino and ranger locations to sensor data and remote imagery.

Engineers at Vulcan agreed to customize a version of EarthRanger for locusts, integrating data from the eLocust programs and the computer loggers on aerial pesticide sprayers.

Lewa Conservancy quickly became the headquarters for aerial survey and control across the region. By June 2020, these efforts were paying off. Locusts were prevented from spilling into Africa’s Sahel region and west to Senegal.

“If we didn’t stop them, . . .

Continue reading. There’s more, including a good collage of large photos.

Written by LeisureGuy

13 April 2021 at 10:54 am

“After Working at Google, I’ll Never Let Myself Love a Job Again”

leave a comment »

Emi Nietfeld, a software engineer, learns that corporations, as persons, are sociopaths. She writes in the NY Times:

I used to be a Google engineer. That often feels like the defining fact about my life. When I joined the company after college in 2015, it was at the start of a multiyear reign atop Forbes’s list of best workplaces.

I bought into the Google dream completely. In high school, I spent time homeless and in foster care, and was often ostracized for being nerdy. I longed for the prestige of a blue-chip job, the security it would bring and a collegial environment where I would work alongside people as driven as I was.

What I found was a surrogate family. During the week, I ate all my meals at the office. I went to the Google doctor and the Google gym. My colleagues and I piled into Airbnbs on business trips, played volleyball in Maui after a big product launch and even spent weekends together, once paying $170 and driving hours to run an obstacle course in the freezing rain.

My manager felt like the father I wished I’d had. He believed in my potential and cared about my feelings. All I wanted was to keep getting promoted so that as his star rose, we could keep working together. This gave purpose to every task, no matter how grueling or tedious.

The few people who’d worked at other companies reminded us that there was nowhere better. I believed them, even when my technical lead — not my manager, but the man in charge of my day-to-day work — addressed me as “beautiful” and “gorgeous,” even after I asked him to stop. (Finally, I agreed that he could call me “my queen.”) He used many of our one-on-one meetings to ask me to set him up with friends, then said he wanted “A blonde. A tall blonde.” Someone who looked like me.

Saying anything about his behavior meant challenging the story we told ourselves about Google being so special. The company anticipated our every need — nap pods, massage chairs, Q-Tips in the bathroom, a shuttle system to compensate for the Bay Area’s dysfunctional public transportation — until the outside world began to seem hostile. Google was the Garden of Eden; I lived in fear of being cast out.

When I talked to outsiders about the harassment, they couldn’t understand: I had one of the sexiest jobs in the world. How bad could it be? I asked myself this, too. I worried that I was taking things personally and that if anyone knew I was upset, they’d think I wasn’t tough enough to hack it in our intense environment.

So I didn’t tell my manager about my tech lead’s behavior for more than a year. Playing along felt like the price of inclusion. I spoke up only when it looked like he would become an official manager — my manager — replacing the one I adored and wielding even more power over me. At least four other women said that he’d made them uncomfortable, in addition to two senior engineers who already made it clear that they wouldn’t work with him.

As soon as my complaint with H.R. was filed, Google went from being a great workplace to being any other company: It would protect itself first. I’d structured my life around my job — exactly what they wanted me to do — but that only made the fallout worse when I learned that the workplace that I cherished considered me just an employee, one of many and disposable.

The process stretched out for nearly three months. In the meantime I had to have one-on-one meetings with my harasser and sit next to him. Every time I asked for an update on the timeline and expressed my discomfort at having to continue to work in proximity to my harasser, the investigators said that I could seek counseling, work from home or go on leave. I later learned that Google had similar responses to other employees who reported racism or sexism. Claire Stapleton, one of the 2018 walkout organizers, was encouraged to take leave, and Timnit Gebru, a lead researcher on Google’s Ethical AI team, was encouraged to seek mental health care before being forced out.

I resisted. How would being alone by myself all day, apart from my colleagues, friends and support system, possibly help? And I feared that if I stepped away, the company wouldn’t continue the investigation.

Eventually, the investigators corroborated my claims and found my tech lead violated the Code of Conduct and the policy against harassment. My harasser still sat next to me. My manager told me H.R. wouldn’t even make him change his desk, let alone work from home or go on leave. He also told me that my harasser received a consequence that was severe and that I would feel better if I could know what it was, but it sure seemed like nothing happened.

The aftermath of speaking up had broken me down. It dredged up the betrayals of my past that I’d gone into tech trying to overcome. I’d made myself vulnerable to my manager and the investigators but felt I got nothing solid in return. I was constantly on edge from seeing my harasser in the hallways and at the cafes. When people came up behind my desk, I startled more and more easily, my scream echoing across the open-floor-plan office. I worried I’d get a poor performance review, ruining my upward trajectory and setting my career back even further.

I went weeks without sleeping through the night.

I decided to take three months of paid leave. I feared that going on leave would set me back for promotion in a place where almost everyone’s progress is public and seen as a measure of an engineer’s worth and expertise. Like most of my colleagues, I’d built my life around the company. It could so easily be taken away. People on leave weren’t supposed to enter the office — where I went to the gym and had my entire social life.

Fortunately, I still had a job when I got back. If anything, I was more eager than ever to excel, to make up for lost time. I was able to earn a very high performance rating — my second in a row. But it seemed clear I would not be a candidate for promotion. After my leave, the manager I loved started treating me as fragile. He tried to analyze me, suggesting that I drank too much caffeine, didn’t sleep enough or needed more cardiovascular exercise. Speaking out irreparably damaged one of my most treasured relationships. Six months after my return, when I broached the subject of promotion, he told me, “People in wood houses shouldn’t light matches.”

When I didn’t get a promotion, some of my stock grants ran out and so I effectively took a big pay cut. Nevertheless, I wanted to stay at Google. I still believed, despite everything, that Google was the best company in the world. Now I see that my judgment was clouded, but after years of idolizing my workplace, I couldn’t imagine life beyond its walls.

So I interviewed with and got offers from two other top tech companies, hoping that Google would match. In response,  . . .

Continue reading. There’s much more.

Written by LeisureGuy

11 April 2021 at 12:43 pm

The History of the Pivot Table, The Spreadsheet’s Most Powerful Tool

leave a comment »

Dan Kopf writes in Quartz:

Pivot tables are the quickest and most powerful way for the average person to analyze large datasets. No coding skills or mathematical brilliance are necessary—just the ability to point and click your mouse.

But don’t take our word for it. Pivot tables had a superfan in none other than Apple founder Steve Jobs, who immediately saw their genius.

In 1985, Jobs was forced out of his role as chairman of the board at Apple after failing to beat IBM in the business computer market. Fortunately, he was a stubborn man. Jobs immediately started the company NeXT, with the idea of taking on IBM once again.

As he developed the NeXT computer, which would launch in 1988, Jobs was looking for killer software programs to create demand for the product. From his experience at Apple, he knew that a good spreadsheet program could drive sales. Jobs credited VisiCalc, the first widely used spreadsheet software, for the huge success of the Apple II computer, released in 1979.

In his search for that need-to-have product, Jobs met with software company Lotus. The organization had already developed Lotus 1-2-3, a popular spreadsheet program that ran on IBM computers. It was in these meetings that Jobs would first stumble upon the “pivot table.”

Software developer Pito Salas was at the time working in research and development for Lotus, looking into how people typically utilize spreadsheets. Salas saw that users would often use spreadsheets to try to calculate summary statistics by categories (often referred to as crosstabs). For example, a company selling bicycles might want to examine their data to find unit sales by month or revenue by country. The way people did that at the time was cumbersome and error-prone because it involved writing complicated formulas.

Salas decided the world needed software that would make those calculations simple. Rather than enter formulas, users would be able to point and click to get those summary statistics. The Lotus team called this tool “flexible views,” but today similar tools are called “pivot tables” in both Microsoft Excel and Google Sheets.
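To make the idea concrete, here is a minimal sketch of the same crosstab in Python with pandas, using made-up bicycle sales data. It illustrates what the point-and-click tool computes, not how Lotus or Excel implements it.

```python
import pandas as pd

# Made-up bicycle sales records: the kind of raw table the article describes.
sales = pd.DataFrame({
    "month":   ["Jan", "Jan", "Feb", "Feb", "Feb", "Mar"],
    "country": ["Kenya", "Ghana", "Kenya", "Ghana", "Kenya", "Ghana"],
    "units":   [12, 7, 15, 9, 11, 14],
    "revenue": [2400, 1400, 3000, 1800, 2200, 2800],
})

# The crosstab a pivot table produces: revenue by country, broken out by month.
summary = sales.pivot_table(
    index="country",    # rows
    columns="month",    # columns
    values="revenue",   # what to summarize
    aggfunc="sum",      # how to summarize it
    fill_value=0,
)
print(summary)
```

Swapping `values`, `index`, or `aggfunc` is the programmatic equivalent of dragging fields around in a pivot table.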

The Lotus team showed Jobs an early prototype. “Steve Jobs thought it was the coolest thing ever,” Salas, now a professor at Brandeis University, tells Quartz. Jobs then convinced Lotus to develop the pivot table software exclusively for the NeXT computer. The software came out as Lotus Improv, and though the NeXT computer was a commercial failure, Lotus Improv would be hugely influential. The “flexible views” aspect of Improv would be built into both Lotus 1-2-3 and Excel (the latter was the first to actually use the term “pivot table”).

Bill Jelen, Excel evangelist and co-author of Pivot Table Data Crunching, credits Salas as the “father of pivot tables.” Salas says his contribution to pivot tables is one of his life’s most gratifying accomplishments, though he believes he was just building on the foundations of many others.

Today, pivot tables are among the most important and commonly used tools in the spreadsheet wizard’s toolbox. “A pivot table lets you create a one-page summary report from hundreds of thousands of rows of data, often in four, five, or six clicks,” says Jelen. “It is the fastest way to get answers from large datasets.”

It’s hard to know exactly how . . .

Continue reading.

Written by LeisureGuy

29 March 2021 at 11:44 am

What is this? The case for continually questioning our online experience

leave a comment »

Dan Nixon has an interesting piece in Perspectiva. He notes:

What is this? is part of the Digital Ego project.

Read & download the essay as a PDF here: What is this? The case for continually questioning our online experience

At the link in the first line you will also find an audio file of his reading the piece, along with the piece itself, which begins:

It is all too easy to take what we see for granted. Even the most basic act of perception can encompass so much more than at first seems to be the case. ‘Seeing is more than a physiological phenomenon’, the American photographer Dorothea Lange once remarked. ‘We see not only with our eyes but with all that we are and all that our culture is’. We might even say that our human being is, to a large extent, a matter of human perceiving; as the philosopher Pierre Teilhard de Chardin put it, the totality of life lies in the verb “seeing.”

This ‘frame’, life as seeing, is well suited for efforts to understand the various ways in which our digital technologies are shaping our lives at the deepest levels. Checking in with so many feeds and updates throughout the day, our everyday experience has become increasingly fragmented; in what can feel like a digital ‘hall of mirrors’, it is ever harder to see things in an integrated way. Meanwhile, our social fabric is increasingly tugged by divisive forces that split us apart and encourage us to see past each other entirely.

Underlying both sets of issues lies the particular logic of a digital media ecosystem through which everything comes to be viewed, at some level, in terms of data. As the philosopher Luciano Floridi notes, digital technologies ‘make us think about the world informationally and make the world we experience informational’.

It is within this context that Perspectiva has launched the Digital Ego project, with the aim of exploring what it means to grow and flourish as humans against this digital background to our lives. As Tom Chatfield, my co-lead for the project sets out here, this inquiry includes starting a dialogue around the ‘virtues for the virtual’ that we collectively need to cultivate. Capacities such as critical thinking, kindness, and humility seem especially important here, as does our ability to see things from multiple perspectives, to adopt a more integrated worldview, and to be okay with not knowing.

Yet underpinning all of the above, and amidst the swirl of urgent issues we find ourselves caught up in at the current time – the pandemic, taut political climates, our precarious environmental position, to name but a few – I argue here that what we need most of all is to cultivate a spirit of questioning towards our actual, lived experience in the digital sphere of our lives.

Not so much cerebral efforts to pin things down in order to get fixed answers, but an ongoing, open-ended questioning towards what’s happening in our actual experience. It’s the practice of coming back to the simple question, ‘what is this?’ over and over again, in relation to all that we encounter with and through our digital technologies.

Using this simple method, what follows is an invitation to question our actual experience at all levels: from our most mundane day-to-day experiences using our technologies, through to the less visible forces and contexts shaping those experiences. We will consider: what is the quality of the exchanges we are having online? How does a particular ‘currency of ideas’ shape how we see ourselves and others on social media platforms, and what might we experiment with here? How do our egos come to take centre-stage in our online spaces? What options do we have, amidst the algorithms and incentives underpinning our media ecosystem, for getting a more expansive view of what’s really going on?

We will end with some of the deeper questions that emerge from this inquiry, reflecting on what is problematic about the tech mindset of ‘solutionism’ and why an open-ended spirit of questioning can serve as the ideal response. Why should we be vigilant about making room for the inherent mysteriousness of our everyday experience? Why, finally, is it crucial that we consider what silence and stillness and ‘intermundane space’ look like in a digitally-mediated world?

Before exploring these different levels of questioning, let me briefly outline the general approach a little further.

Questioning as a spiritual and philosophical practice for the digital age

For over twenty years, the Zen meditation teachers Martine and Stephen Batchelor have taught the practice of continually coming back to the simple question, ‘what is this?’ in relation to one’s actual, lived experience. Through questioning, they suggest, we can learn to undercut our habitual tendency to fixate on things – to identify with some sense that ‘I am like this’ or ‘This is like that’.

This chimes with the value placed on curiosity in the West, although the form of questioning undertaken in the Zen tradition is quite distinctive. Recounting his years spent living in a monastery in Korea, Stephen Batchelor describes how ‘we would all sit in a darkened room and ask ourselves ‘What is this?’. And rest with that question. Nothing else’. In What is this? Ancient questions for modern minds, written with Stephen, Martine elaborates:

The practice is about questioning; it’s not a practice of answering… [it’s about] trying to cultivate a sensation of questioning in the whole body and mind. The anchor is the question, and we come back to the question again and again.

The practice that the Batchelors describe is a spiritual one, but a similar spirit of questioning runs through the philosophical tradition of phenomenology. Beginning with the work of Edmund Husserl around the turn of the 20th Century, phenomenologists emphasise the need to . . .

Continue reading. There’s much more.

Written by LeisureGuy

24 March 2021 at 2:14 pm

Bradley University’s Game Design Program Ranks Top 10 in the World Again

leave a comment »

Here’s the report. I’ll mention in passing that The Son is departmental chair.

Written by LeisureGuy

23 March 2021 at 3:41 pm

Sidney Powell Now Argues “No Reasonable Person” Would Believe Her Voter Fraud Lies Were “Fact”

leave a comment »

Oh what a tangled web we weave
When first we practice to deceive
Sir Walter Scott

Zoe Tillman reports in BuzzFeed News:

Sidney Powell argued Monday that she couldn’t be sued for defamation for repeatedly promoting false conspiracy theories about the 2020 election being rigged because “no reasonable person would” believe that her comments “were truly statements of fact.”

In the months after the election, the Texas-based attorney became one of the most public faces of a campaign to discredit President Joe Biden’s win. Vowing to “release the Kraken,” she pushed the lie that the election was stolen from former president Donald Trump. In numerous TV and public appearances, as well as in court, Powell spread conspiracy theories that two voting equipment companies, Dominion Voting Systems and Smartmatic, were part of a Democrat-backed scheme to “steal” the election by rigging voting systems to flip votes for Trump to Biden, count ballots more than once, and fabricate votes for Biden.

Now facing billion-dollar lawsuits from both companies and having lost all of her court cases challenging the election, Powell is on the defensive. On Monday, her legal team filed a motion to dismiss Dominion’s $1.3 billion lawsuit, or at least to move it from the federal district court in Washington, DC, to Texas. They argued that the election fraud narrative that Powell had spent months touting as grounds to undo the presidential election was “hyperbole” and political speech entitled to protection under the First Amendment.

Even if Powell’s statements were presentations of fact that could be proven as true or false, her lawyers wrote, “no reasonable person would conclude that the statements were truly statements of fact.”

Powell deflected blame to the Trump supporters who adopted the conspiracy theories and lies that she and other Trump allies pushed and that ultimately fueled the insurrection at the US Capitol on Jan. 6. Her lawyers wrote that she was just presenting her “opinions and legal theories on a matter of utmost public concern,” and that members of the public who were interested were “free” to look at the evidence and make up their own minds or wait to see how the evidence held up in court.

Even as Powell tried to distance herself from responsibility for the conspiracy theories she promoted after the election, she also disputed that the statements at issue were, in fact, false. She argued that Dominion was a “public figure” because of its prominent role in the election process, a status that set the bar higher for proving defamation and meant Dominion had to show that she acted with “actual malice.” Powell’s lawyers argued that Dominion couldn’t meet that standard because “she believed the allegations then and she believes them now.”

Powell’s lawyers argued that . . .

Continue reading. There’s more. Emphasis added in the above.

Written by LeisureGuy

23 March 2021 at 11:26 am

How Biden Can Clean Up Obama’s Big Tech Mess

leave a comment »

Matt Stoller writes in BIG:

Last week, documents leaked showing that the Obama administration nearly brought antitrust charges against Google in 2012. I’m going to write about why they didn’t, the damage that decision caused, and why Biden will forge a different path.

Also in this column are short pieces on:

  • How Gmail Quietly Controls a Vital Channel for Political Speech
  • The Slow Collapse of Corporate Republicans
  • The Monopoly Behind the Nuclear Weapons Lobby
  • The Coming Merger Boom
  • Why Golf Clubs Are Getting Worse

Before the main event, some housekeeping. I have a piece in the New York Times on the Arizona state legislative fight against the app store monopolies of Google and Apple. Also, I was recently on Marketplace to talk about Google. Finally, reporter Alec MacGillis has an important book out on Amazon and the tearing apart of American society. It’s called Fulfillment: Winning and Losing in One-Click America, and I’ll be writing more about it shortly.

And now…

Bad Search Engine Results Kill People

Americans expect Google to deliver the most relevant and best results for any particular query. But Google has an edge case problem. When an unsophisticated or desperate user really needs information about something important, and marketers are trying to lie or defraud the user, Google may deliver results that are not only bad, but actively harmful.

For instance, in 2017, reporters Cat Ferguson and Dave Dayen showed that Google’s poor search results had become a useful tool for con artists trying to entice addicts and alcoholics to sham rehab facilities. Google’s marketing tools often worked, helping shoddy treatment center firms cheat addicts, some of whom no doubt relapsed.

Offering poor quality rehab facilities is wrong, and Google didn’t cheat the addicts directly. But what made this line of business profitable was among other things the easy access to customers enabled by using Google. Indeed, as Ferguson noted, these companies were “united by their dependence on Google.” Embarrassed by the publicity, Google eventually made some effort at addressing the problem, but never really figured out how to stop con artists from using its service to harm these desperate people.

Similarly, in 2019, the Wall Street Journal reported on millions of fake listings on Google Maps, which con artists used to cheat customers and blackmail honest small businesses. Users were screwed. But for businesses, the only recourse was to spend more ad money on Google; complaining got you nowhere, or worse. Said one businessman, “It’s less harmful to piss off the government than piss off Google. The government will hit me with a fine. But if Google suspends my listings, I’m out of a job. Google could make me homeless.”

In other words, while generally speaking you will get good results from Google, in edge cases you may get results that are extremely harmful, like a repairman who cheats you, a bad doctor, or someone who wants to steal your money in the guise of helping you recover from addiction. Since most people expect to get credible results from Google, it’s a form of mass deception. And those who rely on Google to convey information to customers, like small businesses, are often on a knife’s edge, existing at the whim of a search monopolist that does not notice them.

These quality problems are a result of Google’s monopoly; poor quality is a classic symptom of monopoly power. How Google seems to offer good results on the whole, but sometimes undermines quality at the edges, is a somewhat subtle story.

Why Does Google Help Kill Addicts Seeking Recovery Services?

Google’s main search engine is what is called a ‘general search engine,’ meaning it provides general results based on indexing most of the web.

There are other types of search engines. Yelp and Expedia, for instance, are known as ‘vertical search engines’ that focus on a much narrower topic, like local businesses and travel. You can’t ask Yelp generalized questions about research or culture, but it is likely better (though not perfect) at removing local restaurant listing spam than Google, because that is its entire business.

Of course Google isn’t just a general search engine. It has vertical search lines of business as well. It competes with Yelp, Expedia, etc, listing restaurants, health providers, travel information, etc, and has user reviews. But the incentives are different for Google. If Google Maps stopped listing every restaurant in New York City, the lost revenue literally wouldn’t show up on Google’s income statement. Yelp, however, would see it as a crisis for its business.

The CEO of Yelp no doubt spends a lot more time thinking about removing fake listings of restaurants than Google CEO Sundar Pichai, just because Pichai has nine products with more than a billion users. Maybe Google is better at building stuff than most companies, but it’s not so much better that its executives can spend no time on a search problem and, all things being equal, still outperform a specialized search vertical. In other words, the reason Google isn’t very good at finding the right health care provider or local business is because that’s not really what its executives think about.

All of this is a way of saying that vertical search engines are sometimes better at finding certain kinds of information than Google. In its original form, from 1998-2007, Google helped blend the world of general and specialized search; it simply chose the best results, sending people to the right place on the web or to the right vertical search engines that had the best results. As Google co-founder Larry Page once put it, “We want to get you out of Google and to the right place as fast as possible.” People built businesses around an open web. Yelp was founded in 2004, back when you could still found firms adjacent to Google; Yelp got a lot of traffic from Google because it had the best local results.

But in 2007, Google stopped trying to send users to the most relevant place to answer their query, and started to try and keep people on Google properties. It began transforming itself from a general search engine into a walled garden, and it arranged its business strategy to exclude competitors, both vertical and general search, from the market, especially as people started to use their mobile phones to find things. At first this change was subtle, but Google gradually expanded its walled garden, encompassing more and more content. In doing so, it directed ad revenue to itself, eventually strangling not only vertical search competitors, but also publishers, online video and mapping competitors, and advertising technology firms.

Today, Google is the key gatekeeper to the web for users and advertisers, and venture capitalists will not invest in firms adjacent to it. Google’s dominance is also why the web in 2021 is increasingly a mess, a place for scam artists and disinformation. Today, if there were a vibrant competitive market for search, this rehab clinic fiasco might not be a problem; a health-based vertical search engine might be able to solve the problem that Google cannot. But in Google’s walled garden internet, that’s no longer a possibility. And as there really is no distinction between the web and the offline world, Google’s absentee landlord relationship to problems involving credible information is one reason scam artists and disinformation are proliferating globally.

It didn’t have to be this way. And in fact, in 2012, the Federal Trade Commission, which is our antitrust enforcer, nearly filed a case that would have stopped Google from corrupting our information commons.

Fumbling the Future

And this brings me to Leah Nylen’s story last week titled “How Washington Fumbled the Future”, looking back on the Obama administration’s policy vis-a-vis Google. She got her hands on a series of allegations the FTC had in 2012, in documents kept secret for nearly a decade. Recently, there have been multiple antitrust suits launched against Google, two by states and one at the Federal level. What is astonishing is how the FTC in 2012 had the evidence to bring most of the suits in court today.

Those of us who follow this area didn’t think that the 2012 FTC documents would be that interesting. The vote to close the Google investigation was unanimous, 5-0, with both Republican and Democratic commissioners letting Google skate. We figured that the FTC just didn’t see the problem clearly, as technology markets tend to morph quickly. Back in 2011 when the investigation started, who would have imagined that Google would become this powerful and dominant?

And yet, it turns out that the FTC had evidence of Google’s behavior, and just chose not to act. Yelp CEO Jeremy Stoppelman called these released documents a “smoking gun” showing how “Google methodically destroyed the web.” Stoppelman competes with Google, but other more neutral observers agree with him. William Kovacic, a Republican ex-FTC member, said, “I always assumed the staff memo was not so specific, direct and clear about the path ahead. A lot of the DOJ case is in there. It’s really breathtaking.” Kovacic, who voted to open an investigation in 2011, left the FTC before the complaint came up for a vote, so he hadn’t read it until this week.

These documents revealed many things, one of which was . . .

Continue reading. There’s more.

Written by LeisureGuy

22 March 2021 at 10:59 am

Who Is Making Sure that A.I. Machines Aren’t Racist?

leave a comment »

Cade Metz reports in the NY Times:

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

About six years ago, A.I. in a Google online photo service organized photos of Black people into a folder called “gorillas.” Four years ago, a researcher at a New York start-up noticed that the A.I. system she was working on was egregiously biased against Black people. Not long after, a Black researcher in Boston discovered that an A.I. system couldn’t identify her face — until she put on a white mask.

In 2018, when I told Google’s public relations staff that I was working on a book about artificial intelligence, it arranged a long talk with Dr. Mitchell to discuss her work. As she described how she built the company’s Ethical A.I. team — and brought Dr. Gebru into the fold — it was refreshing to hear from someone so closely focused on the bias problem.

But nearly three years later, Dr. Gebru was pushed out of the company without a clear explanation. She said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

It should have been a wake-up call.

In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be “dogs,” another “birthday party.”

When Mr. Alciné clicked on the link, he noticed one of the folders was labeled “gorillas.” That made no sense to him, so he opened the folder. He found more than 80 photos he had taken nearly a year earlier of a friend during a concert in nearby Prospect Park. That friend was Black.

He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. “Google Photos, y’all,” messed up, he wrote, using much saltier language. “My friend is not a gorilla.”

Like facial recognition services, talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2017, Deborah Raji, a 21-year-old Black woman from Ottawa, sat at a desk inside the New York offices of Clarifai, the start-up where she was working. The company built technology that could automatically recognize objects in digital images and planned to sell it to businesses, police departments and government agencies.

She stared at a screen filled with faces — images the company used to train its facial recognition software.

As she scrolled through page after page of these faces, she realized that most — more than 80 percent — were of white people. More than 70 percent of those white people were male. When Clarifai trained its system on this data, it might do a decent job of recognizing white people, Ms. Raji thought, but it would fail miserably with people of color, and probably women, too.

Clarifai was also building a “content moderation system,” a tool that could automatically identify and remove pornography from images people posted to social networks. The company trained this system on two sets of data: thousands of photos pulled from online pornography sites, and thousands of G‑rated images bought from stock photo services.

The system was supposed to learn the difference between the pornographic and the anodyne. The problem was that the G‑rated images were dominated by white people, and the pornography was not. The system was learning to identify Black people as pornographic.

“The data we use to train these systems matters,” Ms. Raji said. “We can’t just blindly pick our sources.”

This was obvious to her, but to the rest of the company it was not. Because the people choosing the training data were mostly white men, they didn’t realize their data was biased.
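The kind of check Ms. Raji describes can be done before any training happens. Below is a toy sketch of such a dataset audit in Python; the records and attribute labels are invented for illustration and are not Clarifai's data.

```python
from collections import Counter

# Toy audit of a labeled face dataset's demographic makeup before training.
# A real audit would use the dataset's own annotations (or add them first).
dataset = [
    {"id": 1, "skin_tone": "light", "gender": "male"},
    {"id": 2, "skin_tone": "light", "gender": "male"},
    {"id": 3, "skin_tone": "light", "gender": "female"},
    {"id": 4, "skin_tone": "dark",  "gender": "female"},
]

def audit(records, attribute):
    """Return the share of records falling into each value of `attribute`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print("skin tone:", audit(dataset, "skin_tone"))  # {'light': 0.75, 'dark': 0.25}
print("gender:   ", audit(dataset, "gender"))     # {'male': 0.5, 'female': 0.5}
```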

“The issue of bias in facial recognition technologies is an evolving and important topic,” Clarifai’s chief executive, Matt Zeiler, said in a statement. Measuring bias, he said, “is an important step.”

Before joining Google, Dr. Gebru collaborated on a study with a young computer scientist, Joy Buolamwini. A graduate student at the Massachusetts Institute of Technology, Ms. Buolamwini, who is Black, came from a family of academics. Her grandfather specialized in medicinal chemistry, and so did her father.

She gravitated toward facial recognition technology. Other researchers believed it was reaching maturity, but when she used it, she knew it wasn’t.

In October 2016, a friend invited her for a night out in Boston with several other women. “We’ll do masks,” the friend said. Her friend meant skin care masks at a spa, but Ms. Buolamwini assumed Halloween masks. So she carried a white plastic Halloween mask to her office that morning.

It was still sitting on her desk a few days later as she struggled to finish a project for one of her classes. She was trying to get a detection system to track her face. No matter what she did, she couldn’t quite get it to work.

In her frustration, she picked up the white mask from her desk and pulled it over her head. Before it was all the way on, the system recognized her face — or, at least, it recognized the mask.

“Black Skin, White Masks,” she said in an interview, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”

Ms. Buolamwini started exploring commercial services designed to analyze faces and identify characteristics like age and sex, including tools from Microsoft and IBM.

She found that when . . .

Continue reading. There’s much more. It’s unsettling to see how the world being created is built on unrecognized and unacknowledged biases.

Written by LeisureGuy

15 March 2021 at 11:59 am

Amount of time it takes a hacker to brute force your password

with 2 comments

You can easily get a long password you can remember by using a phrase or sentence you know, with special characters substituted for the spaces.
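A rough back-of-the-envelope calculation shows why length matters so much more than complexity. The sketch below assumes an attacker who can test ten billion guesses per second (an arbitrary figure for an offline attack) and models a pure character-by-character brute-force search, not a smarter dictionary attack, so treat the passphrase number as an upper bound.

```python
# Rough brute-force estimate: size of the search space divided by guess rate.
# The guess rate is an assumed figure; real rates vary enormously with
# hardware and with how the password is hashed.
GUESSES_PER_SECOND = 1e10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case years to try every password of this length and alphabet."""
    return charset_size ** length / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# An 8-character password drawn from ~95 printable ASCII characters...
print(f"8 chars, full keyboard: {years_to_exhaust(95, 8):,.2f} years")

# ...versus a 28-character passphrase drawn from a smaller alphabet (say
# ~30 symbols the user actually types). A phrase of common dictionary words
# is weaker than this per-character model suggests, but length still wins.
print(f"28-char passphrase:     {years_to_exhaust(30, 28):.2e} years")
```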

Written by LeisureGuy

14 March 2021 at 11:18 am

Alexa, Are You Sexist?

leave a comment »

Corinne Purtill writes in the NY Times:

In an Amazon ad that aired during the Super Bowl on Sunday, a woman admiring the spherical contours of the company’s Echo speaker reimagines her Alexa voice assistant as the actor Michael B. Jordan. Instead of the disembodied female voice that comes standard in the device, requests for shopping list updates, measurement conversions and adjustments to the home lighting and sprinkler systems are fulfilled by the smoldering star, in person — voice, eyes, abs and all. Her husband hates it.

Depicting Alexa as a masculine presence is funny because — at least according to Amazon’s official line — the cloud-based voice service has no gender at all. “I’m not a woman or a man,” Alexa says sweetly when asked to define its gender. “I’m an AI.”

Alexa is sold with a default female-sounding voice and has a female-sounding name. Alexa is subservient and eager to please. If you verbally harass or abuse Alexa, as the journalist Leah Fessler discovered in 2017, Alexa will feign ignorance or demurely deflect. Amazon and its competitors in the digital assistant market may deny it, but design and marketing have led to AI that seems undeniably, well, feminine.

What does it mean for humans that we take for granted that the disembodied voices we boss around at home are female? How does the presence of these feminized voice assistants affect the dynamics between the actual women and men who use them?

“The work that these devices are intended to do” — making appointments, watching the oven timer, updating the shopping list — “all of those kinds of areas are gendered,” said Yolande Strengers, an associate professor of digital technology and society at Monash University in Melbourne, Australia.

Dr. Strengers is a co-author of “The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot.” The book examines technologies that perform traditionally feminized roles, including housekeeping robots like the Roomba, caregiving robots like the humanoid Pepper or Paro seal, sex robots and, of course, the multitasking, ever-ready voice assistants.

Dr. Strengers and her co-author, Jenny Kennedy, a research fellow at RMIT University in Melbourne, explore the ways in which gendering technology influences users’ relationship with it.

Because Alexa and similar assistants like Apple’s Siri, Microsoft’s Cortana and Google Home, are perceived as female, users order them around without guilt or apology, and may sling abuse and sexualized comments their way. And when users become frustrated with the devices’ errors, they interpret glitches as inferior capability, or female “ditziness.” Owners of the devices are also not threatened by them — and thus are less inclined to question how much data they are collecting, and what it might be used for.

Research on digital voice and gender by the former Stanford professor Clifford Nass found that people consider female-sounding voices helpful and trustworthy, and male voices more authoritative. The work of Professor Nass, who died in 2013, is often cited in discussions of voice assistants, yet many of those studies are now two decades old. An Amazon spokesperson would say only that the current feminine voice was “preferred” by users during testing. But preferred over what? And by whom?

Some assistants, like Siri, offer the option to change the default female voice to a male voice. Alexa comes standard with a female voice whose accent or language can be changed. For an additional $4.99, a user can swap Alexa’s voice for that of the actor Samuel L. Jackson, but only for fun requests like “tell me a story” or “what do you think of snakes?” Only the female voice handles housekeeping tasks like setting reminders, shopping, or making lists.

The book “The Smart Wife” belongs to a body of research examining how artificially intelligent devices reflect the biases of the people who design them and the people who buy them — in both cases, mostly men. (Dr. Strengers and Dr. Kennedy have found that setting up the digital infrastructure is one chore in an opposite-sex household that’s more likely to be done by men.)

Take the devices’ response to sexually aggressive questions. “You have the wrong sort of assistant,” Siri replied when Ms. Fessler, the journalist, asked the bot for sex as part of her investigation. The coy phrasing, Dr. Strengers and Dr. Kennedy write, suggests there is another type of assistant out there who might welcome such propositions. Since the publication of Ms. Fessler’s article, voice assistants have become more forthright. Siri now responds to propositions for sex with a flat “no.” Amazon also updated Alexa to no longer respond to sexually explicit questions.

When it comes to gender and technology, tech companies often seem to be trying to have it both ways: capitalizing on gendered traits to make their products feel familiar and appealing to consumers, yet disavowing the gendered nature of those features as soon as they become problematic.

“Tech companies are probably getting themselves into a bit of a corner by humanizing these things — they’re not human,” said Mark West, an education project author with Unesco and lead author of the organization’s 2019 report on gender parity in technology. The report and its associated white papers noted that feminized voice assistants perpetuate gender stereotypes of subservience and sexual availability and called for, among other things, an end to the practice of making digital assistants female by default. If designers initially chose to have their products conform to existing stereotypes, he said, they can also choose to reject those tropes as well.

“There’s nothing inevitable about this stuff. We collectively are in control of technology,” Mr. West said. “If this is the wrong path to go down, do something.”

One intriguing alternative is . . .

Continue reading.

And check out this.

Written by LeisureGuy

13 March 2021 at 5:32 pm

How Facebook got addicted to spreading misinformation

leave a comment »

Karen Hao writes in MIT Technology Review:

Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.
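
To make that jargon concrete: one common way to check for this kind of bias is a demographic-parity test, which simply compares how often an algorithm serves an ad to users in different groups. The sketch below is a toy illustration only, not Facebook's methodology; the group labels, data, and threshold are all invented.

```python
# Illustrative demographic-parity check for ad delivery.
# The groups, numbers, and threshold are made up; this is not Facebook's method.
from collections import defaultdict

def delivery_rates(decisions):
    """decisions: list of (group, was_shown) pairs -> share of users shown the ad, per group."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in decisions:
        total[group] += 1
        shown[group] += int(was_shown)
    return {g: shown[g] / total[g] for g in total}

def parity_gap(rates):
    """Gap between the best- and worst-served groups; 0 means perfect parity."""
    return max(rates.values()) - min(rates.values())

decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = delivery_rates(decisions)
print(rates)                        # {'group_a': 0.8, 'group_b': 0.55}
print(round(parity_gap(rates), 2))  # 0.25 -- a gap this large would flag the model for review
```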

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley, who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.” . . .

Continue reading. There’s much more.

Later in the article:

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
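
As a rough illustration of the gate Gade describes, here is a minimal Python sketch of an engagement-gated rollout decision. FBLearner's real interfaces are not public, so every name below (the metrics, the threshold, the decision function) is hypothetical.

```python
# Hypothetical sketch of an engagement-gated model rollout.
# None of these names come from Facebook's actual FBLearner system.
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    """Engagement metrics from exposing a candidate model to a small user subset."""
    likes: float     # average likes per user per day
    comments: float  # average comments per user per day
    shares: float    # average shares per user per day

def engagement_score(r: ExperimentResult) -> float:
    # Collapse the metrics into one number; a real system would weight these differently.
    return r.likes + r.comments + r.shares

def decide_rollout(baseline: ExperimentResult,
                   candidate: ExperimentResult,
                   max_drop: float = 0.01) -> str:
    """Discard the candidate model if it costs more than max_drop (here 1%) of engagement."""
    base = engagement_score(baseline)
    relative_change = (engagement_score(candidate) - base) / base
    if relative_change < -max_drop:
        return "discard"          # hurts engagement too much, never ships
    return "deploy_and_monitor"   # ships, then its metrics are watched for regressions

# Example: a model that better catches rule-breaking content but costs roughly 3% of engagement.
baseline = ExperimentResult(likes=2.0, comments=0.5, shares=0.3)
candidate = ExperimentResult(likes=1.93, comments=0.49, shares=0.29)
print(decide_rollout(baseline, candidate))  # -> "discard"
```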

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes . . .

And there’s more.

Written by LeisureGuy

12 March 2021 at 12:20 pm

Want to borrow that e-book from the library? Sorry, Amazon won’t let you.

leave a comment »

It is becoming more and more imperative that large tech companies be more closely regulated. Right now they are out of control — like Standard Oil in the early 20th century. Geoffrey A. Fowler reports in the Washington Post:

Mindy Kaling has gone missing from the library.

I was looking forward to reading the comedian’s new story collection, “Nothing Like I Imagined.” So I typed Kaling’s name into the Libby app used by my public library to loan e-books. But “The Office” star’s latest was nowhere to be found.

What gives? In 2020, Kaling switched to a new publisher: Amazon. Turns out, the tech giant has also become a publishing powerhouse — and it won’t sell downloadable versions of its more than 10,000 e-books or tens of thousands of audiobooks to libraries. That’s right, for a decade, the company that killed bookstores has been starving the reading institution that cares for kids, the needy and the curious. And that’s turned into a mission-critical problem during a pandemic that cut off physical access to libraries and left a lot of people unable to afford books on their own.

Many Americans now recognize that a few tech companies increasingly dominate our lives. But it’s sometimes hard to put your finger on exactly why that’s a problem. The case of the vanishing e-books shows how tech monopolies hurt us not just as consumers, but as citizens.

You probably think of Amazon as the largest online bookstore. Amazon helped make e-books popular with the Kindle, now the dominant e-reader. Less well known is that since 2009, Amazon has published books and audiobooks under its own brands, including Lake Union, Thomas & Mercer and Audible. Amazon is a beast with many tentacles: It’s got the store, the reading devices and, increasingly, the words that go on them.

Librarians have been no match for the beast. When authors sign up with a publisher, it decides how to distribute their work. With other big publishers, selling e-books and audiobooks to libraries is part of the mix — that’s why you’re able to digitally check out bestsellers like Barack Obama’s “A Promised Land.” Amazon is the only big publisher that flat-out blocks library digital collections. Search your local library’s website, and you won’t find recent e-books by Amazon authors Kaling, Dean Koontz or Dr. Ruth Westheimer. Nor will you find downloadable audiobooks for Trevor Noah’s “Born a Crime,” Andy Weir’s “The Martian” and Michael Pollan’s “Caffeine.”

Amazon does generally sell libraries physical books and audiobook CDs — though even print versions of Kaling’s latest aren’t available to libraries because Amazon made it an online exclusive.

It’s hard to measure the hole Amazon is leaving in American libraries. Among e-books, Amazon published very few New York Times bestsellers in 2020; its Audible division produces audiobooks for more big authors and shows up on bestseller lists more frequently. You can get a sense of Amazon’s influence among its own customers from the Kindle bestseller list: In 2020, six of Amazon’s top 10 e-books were published by Amazon. And it’s not just about bestsellers: Amazon’s Kindle Direct Publishing, the self-publishing business that’s open to anyone, produces many books about local history, personalities and communities that libraries have historically sought out.

In testimony to Congress, the American Library Association called digital sales bans like Amazon’s “the worst obstacle for libraries” moving into the 21st century. Lawmakers in New York and Rhode Island have proposed bills that would require Amazon (and everybody else) to sell e-books to libraries with reasonable terms. This week, the Maryland General Assembly will vote on its own bill, after the state Senate passed a version last week. . .

Continue reading. There’s much more, including this chart.

Written by LeisureGuy

10 March 2021 at 2:18 pm

How to Put Out Democracy’s Dumpster Fire

leave a comment »

Anne Applebaum and Peter Pomerantsev write in the Atlantic:

To read the diary of Gustave de Beaumont, the traveling companion of Alexis de Tocqueville, is to understand just how primitive the American wilderness once seemed to visiting Frenchmen. In a single month, December 1831, Tocqueville and Beaumont were on a steamship that crashed; rode a stagecoach that broke an axle; and took shelter in a cabin—one of them bedridden from an unidentified illness—while the nearest doctor was a two-day hike away. Yet they kept meeting people whose resourcefulness they admired, and they kept collecting the observations that eventually led Tocqueville to write Democracy in America—the classic account of the ordering principles, behaviors, and institutions that made democracy function within this sprawling country.

Tocqueville’s interest in American institutions reflected more than mere curiosity: In his native France, a revolution launched with similarly high ideals about equality and democracy had ended badly. His parents had nearly been guillotined during the wave of violence that followed the momentous events of 1789. By contrast, American democracy worked—and he wanted to understand why.

With the wholesale transfer of so much entertainment, social interaction, education, commerce, and politics from the real world to the virtual world—a process recently accelerated by the coronavirus pandemic—many Americans have come to live in a nightmarish inversion of the Tocquevillian dream, a new sort of wilderness. Many modern Americans now seek camaraderie online, in a world defined not by friendship but by anomie and alienation. Instead of participating in civic organizations that give them a sense of community as well as practical experience in tolerance and consensus-building, Americans join internet mobs, in which they are submerged in the logic of the crowd, clicking Like or Share and then moving on. Instead of entering a real-life public square, they drift anonymously into digital spaces where they rarely meet opponents; when they do, it is only to vilify them.

Conversation in this new American public sphere is governed not by established customs and traditions in service of democracy but by rules set by a few for-profit companies in service of their needs and revenues. Instead of the procedural regulations that guide a real-life town meeting, conversation is ruled by algorithms that are designed to capture attention, harvest data, and sell advertising. The voices of the angriest, most emotional, most divisive—and often the most duplicitous—participants are amplified. Reasonable, rational, and nuanced voices are much harder to hear; radicalization spreads quickly. Americans feel powerless because they are.

In this new wilderness, democracy is becoming impossible. If one half of the country can’t hear the other, then Americans can no longer have shared institutions, apolitical courts, a professional civil service, or a bipartisan foreign policy. We can’t compromise. We can’t make collective decisions—we can’t even agree on what we’re deciding. No wonder millions of Americans refuse to accept the results of the most recent presidential election, despite the verdicts of state electoral committees, elected Republican officials, courts, and Congress. We no longer are the America Tocqueville admired, but have become the enfeebled democracy he feared, a place where each person,

withdrawn and apart, is like a stranger to the destiny of all the others: his children and his particular friends form the whole human species for him; as for dwelling with his fellow citizens, he is beside them, but he does not see them; he touches them and does not feel them; he exists only in himself and for himself alone, and if a family still remains for him, one can at least say that he no longer has a native country.

The world’s autocracies have long understood the possibilities afforded by the tools tech companies have created, and have made use of them. China’s leaders have built an internet based on censorship, intimidation, entertainment, and surveillance; Iran bans Western websites; Russian security services have the legal right to obtain personal data from Kremlin-friendly social-media platforms, while Kremlin-friendly troll farms swamp the world with disinformation. Autocrats, both aspiring and actual, manipulate algorithms and use fake accounts to distort, harass, and spread “alternative facts.” The United States has no real answer to these challenges, and no wonder: We don’t have an internet based on our democratic values of openness, accountability, and respect for human rights. An online system controlled by a tiny number of secretive companies in Silicon Valley is not democratic but rather oligopolistic, even oligarchic.

And yet even as America’s national conversation reaches new levels of vitriol, we could be close to a turning point. Even as our polity deteriorates, an internet that promotes democratic values instead of destroying them—that makes conversation better instead of worse—lies within our grasp. Once upon a time, digital idealists were dreamers. In 1996, John Perry Barlow, a lyricist for the Grateful Dead and an early internet utopian, predicted that a new dawn of democracy was about to break: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind,” he declared, a place where “the dreams of Jefferson, Washington, Mill, Madison, DeToqueville [sic], and Brandeis … must now be born anew.”

Those ideas sound quaint—as outdated as that other 1990s idea, the inevitability of liberal democracy. Yet they don’t have to. A new generation of internet activists, lawyers, designers, regulators, and philosophers is offering us that vision, but now grounded in modern technology, legal scholarship, and social science. They want to resurrect the habits and customs that Tocqueville admired, to bring them online, not only in America but all across the democratic world.

Famously, Tocqueville found many of the answers in state, local, and even neighborhood institutions. He wrote approvingly of American federalism, which “permits the Union to enjoy the power of a great republic and the security of a small one.” He liked the traditions of local democracy too, the “township institutions” that “give the people the taste for freedom and the art of being free.” Despite the vast empty spaces of their country, Americans met one another, made decisions together, carried out projects together. Americans were good at democracy because they practiced democracy. They formed what he called “associations,” the myriad organizations that we now call “civil society,” and they did so everywhere:

Not only do [Americans] have commercial and industrial associations in which all take part, but they also have a thousand other kinds: religious, moral, grave, futile, very general and very particular, immense and very small; Americans use associations to give fêtes, to found seminaries, to build inns, to raise churches, to distribute books, to send missionaries to the antipodes; in this manner they create hospitals, prisons, schools … Everywhere that, at the head of a new undertaking, you see the government in France and a great lord in England, count on it that you will perceive an association in the United States.

Tocqueville reckoned that the true success of democracy in America rested not on the grand ideals expressed on public monuments or even in the language of the Constitution, but in these habits and practices. In France, philosophes in grand salons discussed abstract principles of democracy, yet ordinary Frenchmen had no special links to one another. By contrast, Americans worked together: “As soon as several of the inhabitants of the United States have conceived a sentiment or an idea that they want to produce in the world, they seek each other out; and when they have found each other, they unite.”

In the nearly two centuries that have passed since Tocqueville wrote these words, many of those institutions and habits have deteriorated or disappeared. Most Americans no longer have much experience of “township” democracy. Some no longer have much experience of associations, in the Tocquevillian sense, either. Twenty-five years ago, the political scientist Robert Putnam was already describing the decline of what he called “social capital” in the U.S.: the disappearance of clubs and committees, community and solidarity. As internet platforms allow Americans to experience the world through a lonely, personalized lens, this problem has morphed into something altogether different.

How social media made the world crazier

In the surreal interregnum that followed the 2020 election, the price of America’s refusal to reform its internet suddenly became very high. Then-President Donald Trump and his supporters pushed out an entirely false narrative of electoral fraud. Those claims were reinforced on extreme-right television channels, then repeated and amplified in cyberspace, creating an alternative reality inhabited by millions of people where Trump had indeed won. QAnon—a conspiracy theory that had burst out of the subterranean internet and flooded onto platforms such as YouTube, Facebook, and Instagram, convincing millions that political elites are a cabal of globalist pedophiles—spilled into the real world and helped inspire the mobs that stormed the Capitol. Twitter made the extraordinary decision to ban the U.S. president for encouraging violence; the amount of election disinformation in circulation immediately dropped.

Could these platforms have done more? As a matter of fact, Facebook keeps careful tabs on the toxicity of American discourse. Long before the election, the company, which conducts frequent, secret tests on its News Feed algorithm, had begun to play with different ways to promote more reliable information. Among other things, it created a new ranking system, designed to demote spurious, hyper-partisan sources and to boost “authoritative news content.” Shortly after Election Day, the ranking system was given greater weight in the platform’s algorithm, resulting in a purportedly “nicer News Feed”—one more grounded in reality. The change was part of a series of “break-glass measures” that the company announced would be put in place in periods of “heightened tension.” Then, a few weeks later, it was undone. After the Capitol insurrection, on January 6, the change was restored, in advance of Joe Biden’s inauguration. A Facebook spokesperson would not explain to us exactly when or why the company made those decisions, how it defines “heightened tension,” or how many of the other “break-glass measures” are still in place. Its published description of the ranking system does not explain how its metrics for reliable news are weighted, and of course there is no outside oversight of the Facebook employees who are making decisions about them. Nor will Facebook reveal anything about the impact of this change. Did conversation on the site become calmer? Did the flow of disinformation cease or slow down as a result? We don’t know.
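
Facebook has not published how that ranking change actually works, but the general shape of such a "break-glass" adjustment is easy to imagine: blend an authority signal into the engagement-driven score, and dial its weight up during periods of heightened tension. The sketch below is purely speculative; every field, weight, and label is invented.

```python
# Purely speculative sketch of an "authoritative news" ranking adjustment.
# Facebook has not disclosed its actual scoring; all fields and weights here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # model's estimate of likes/comments/shares
    source_authority: float      # 0.0 (spurious source) .. 1.0 (authoritative outlet)

def rank_score(post: Post, authority_weight: float) -> float:
    """Blend engagement with source authority; a higher weight yields a 'nicer' feed."""
    return ((1 - authority_weight) * post.predicted_engagement
            + authority_weight * post.source_authority)

posts = [
    Post(predicted_engagement=0.9, source_authority=0.1),  # outrage bait
    Post(predicted_engagement=0.5, source_authority=0.9),  # reliable reporting
]

for label, weight in [("normal", 0.1), ("break-glass", 0.6)]:
    ordered = sorted(posts, key=lambda p: rank_score(p, weight), reverse=True)
    top = "outrage bait" if ordered[0].source_authority < 0.5 else "reliable reporting"
    print(f"{label}: top post is {top}")
# normal: top post is outrage bait
# break-glass: top post is reliable reporting
```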

The very fact that this kind of shift is possible points to a brutal truth: Facebook can make its site “nicer,” not just after an election but all the time. It can do more to encourage civil conversation, discourage disinformation, and reveal its own thinking about these things. But it doesn’t, because Facebook’s interests are not necessarily the same as the interests of the American public, or any democratic public. Although the . . .

Continue reading. There’s much, much more.

Written by LeisureGuy

9 March 2021 at 2:07 pm

A hearing-aid check-up every couple of years is a good idea (only for those who wear hearing aids, of course)

leave a comment »

I wear hearing aids because I have some hearing loss and those with uncorrected hearing loss tend to suffer cognitive decline and social isolation (see this earlier post).

I replaced my first pair a couple of years ago, and the audiologist suggested (via email) that it would be a good idea to come in for a check-up. No fee was involved — although the hearing aids I got seemed expensive, the cost does cover extras. For example, I get free batteries, replacement ear domes (the covers for the tiny hearing-aid speakers), and cleaning sticks (one end removes the wax guard over the speaker, the other end applies a new one). And the check-up itself was free.

The check-up began with a new hearing test, and my sensitivity to high-frequency sounds had diminished somewhat, a natural effect of continued aging. (I’ve been trying to stop it, but not altogether successfully.) That was accommodated by reprogramming the hearing aids to add more boost to those particular frequencies.

But before doing that, the audiologist updated the firmware in the hearing aids. It seemed to be an extensive update — it took several minutes — but most of the changes were under the hood. One clear change was the start-up sound. When you insert the hearing aids and then turn them on, they let you know they’re on by making a little sound. My previous Oticon hearing aids played a brief melodic sequence of 5 notes. These hearing aids, by Resound, made a sound “ding-ding-ding-ding-ding-ding-ding-ding” (8 dings). After the update, the power-up sound is a single “beep” in the mid-range — a marked improvement, IMO.

After the visit, I once again heard high frequencies clearly, which makes the little sounds of daily life crisp and definite. It’s like getting a new pair of hearing aids. I had mentioned to the audiologist that I liked the hearing aids but got some feedback when wearing headphones, though I could put up with it. So when she gave me replacement ear domes, she switched me to a new type: the kind I had been using had four small perforations, and the new ones are not perforated. That totally solved the feedback issue.

Altogether, I feel as though I now have new hearing aids for a total cost of CAD 0 (US$0).

Written by LeisureGuy

9 March 2021 at 11:04 am

Why Roy’s Rule — Turn it off and then back on again — so often works

with 2 comments

In the (wonderful) series The IT Crowd (which you can watch on Netflix), the senior IT guy Roy Trenneman (played by Chris O’Dowd) responds to any problem, “Have you tried turning it off and back on again?”

Yesterday I was downloading some new books from Standard Ebooks onto my Kindle, and when I looked at the “Downloaded” list there were two titles greyed out — not the books I had just downloaded, but two from some time ago. I did a search and found the actual downloads on the device. I could open those files and all was well, but the two ghost files remained as an irritant: I could not open them, I could not select them, and I could not delete them.

I was set to go through the minor ordeal of contacting Amazon support and working through the problem with them, when I remembered Roy. So I turned the Kindle off — really off, holding down the off switch until I got an option to Restart the device. I selected that, and the Kindle totally rebooted, and when it was done the ghost files were gone.

Why does this work so often?

A computer program — whether an app or the OS itself — has an orderly shutdown procedure for when you exit it. However, such an orderly exit may not have occurred: a power failure, a system crash, a bug in the program, or some other program overwriting memory may have disrupted an orderly shutdown. Therefore software cannot assume on start-up that everything is in perfect order, and (assuming it has been well designed, written, and tested) it will run a housekeeping routine at start-up to verify its state and clean up anything that is amiss.

By turning the device off and then back on again, you force that initial housekeeping to run, which restores everything to what it should be (as best the software can). Thus when my Kindle restarted, the start-up housekeeping found the extraneous ghost entries and removed them.
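
To make that concrete, here is a toy sketch of the kind of start-up housekeeping pass a reader app might run. It is entirely illustrative (I have no knowledge of the Kindle firmware's internals): it walks a library index and drops any entry whose backing file no longer exists on the device.

```python
# Toy illustration of start-up housekeeping; not the Kindle's actual logic.
import json
from pathlib import Path

def startup_housekeeping(index_path: Path, books_dir: Path) -> None:
    """Drop index entries ("ghost" titles) whose backing file is missing."""
    index = json.loads(index_path.read_text())   # e.g. {"Title": "file.epub", ...}
    cleaned = {title: fname for title, fname in index.items()
               if (books_dir / fname).exists()}
    removed = sorted(set(index) - set(cleaned))
    if removed:
        print(f"Housekeeping: removed {len(removed)} ghost entries: {removed}")
    index_path.write_text(json.dumps(cleaned, indent=2))

# A restart forces a pass like this to run, which is why the ghost titles vanished.
# Example (hypothetical paths): startup_housekeeping(Path("library/index.json"), Path("library/books"))
```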

It’s a great series, BTW, and well worth watching: four seasons of six episodes each.

Written by LeisureGuy

6 March 2021 at 10:06 am

Apple Threatens North Dakota, Suffers Crushing Loss in Arizona: “A Lot of It is Just Fear”

leave a comment »

Matt Stoller writes in Big:

Two weeks ago, Apple and Google managed to defeat a major bill in North Dakota to force competition in app stores. This week, the Arizona House of Representatives defied the tech giants and passed the very same bill. I’m writing about why this series of battles, and what is a very clear defeat for big tech, matters.

Plus, a Gore-Tex monopoly, how Google bought and ruined Waze, a roll-up of hiking media and tools, and the surprising new attack on big tech power and Chinese dominance from Wall Street regulator nominee Rohit Chopra. . . .

Small States and Big Tech Fight Over $64 Billion

There’s a new front in the battle over big tech power: the states. We’re used to countries taking on big tech power, through Congressional investigations, antitrust cases from the Federal government, or Facebook and foreign countries like Australia fighting who has to pay for news. But now, state legislators, who were the key anti-monopolists of the 19th century, are starting to flex their muscles once again.

Just today, the Arizona House of Representatives passed a bill, 31-29, barring the use of app store monopolies to take large fees from app makers. This vote was a significant victory for anti-monopolists, but a bizarre one. Democrats were the main stumbling block, state legislators were scared, and the battle actually started in North Dakota. Moreover, the fight is not over, as the bill will be debated in the Arizona Senate and the Governor’s office, and it will also go to other states as legislators elsewhere are emboldened.

So what exactly happened? And what does it mean?

Steve Jobs Built the App Store to Monopolize

App stores are markets, places you use to get access to a whole set of independent applications. Since Apple’s creation of its app store in 2008, businesses have built an entire ecosystem of mobile programs, a truly wonderful set of tools, games, and storytelling mechanisms to foster commerce and culture. To give you a sense of the significance of this revenue stream, Apple made $64 billion of gross revenue from its app store last year. This is big business.

The problem with app stores is that the two main ones are under the tight control of the two firms, Apple and Google, that make the core software that runs smartphones. You can only buy apps for your iPhone through Apple’s app store, because Apple bars rivals from setting up app stores on the iPhone. Moreover, you have to use Apple’s payment system to buy apps, because Apple also bars rival payment networks from its app store. Google has slightly less control over phones that run its mobile operating system, Android, though it uses essentially the same setup.

Together, Google and Apple have 99% of the smartphone market. Because it’s basically impossible to sell mobile apps without going through their app stores, both can charge high prices – a 30% cut – to developers who want to sell apps on the phones they control. For a sense of perspective, a credit card network charges a merchant 2-3% for access to its payment network; the app stores’ cut is roughly ten times that for a comparable function.

More importantly, control over app stores, and in particular, the feature of the app store that lets you pay for something, is a critical moat for these firms to maintain their market power. Documents revealed in a court case on ebooks proved Steve Jobs explicitly designed Apple’s payment system policies to discourage people from switching from the iPhone. . .

Continue reading. There’s much more.

Written by LeisureGuy

3 March 2021 at 1:36 pm

More signs of Spring

leave a comment »

Another cluster of crocuses out front. Today I had a hearing aid tune-up, and what a difference it makes. The audiologist said that it’s worthwhile stopping in for a check-up and adjustment every couple of years: one’s hearing changes (generally a degradation in hearing higher frequencies), and the hearing aid can be adjusted for a new amplification curve (degree of amplification by frequency). Also, the firmware in the hearing aids generally needs an update — and that was the case today, with the update downloaded and installed.

As I left, high frequencies were once again clearly heard, which made sound much crisper — the same sensation I noticed when I first got hearing aids.

Hearing aids — good hearing aids — are expensive, but all this maintenance and testing and updating is free, as are the supplies (cleaning picks, domes over the speaker, batteries). And I have a dehumidifier where my hearing aids spend each night to maximize the lifespan of the devices.

Written by LeisureGuy

1 March 2021 at 4:31 pm

AI-generated video script ideas — some of which are good

leave a comment »

Written by LeisureGuy

25 February 2021 at 3:06 pm

Posted in Software, Technology, Video

Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business

leave a comment »

Facebook’s upper management are a morally corrupt lot, Sandberg perhaps especially. Jack Gillum and Justin Elliott report in ProPublica:

As Turkey launched a military offensive against Kurdish minorities in neighboring Syria in early 2018, Facebook’s top executives faced a political dilemma.

Turkey was demanding the social media giant block Facebook posts from the People’s Protection Units, a mostly Kurdish militia group the Turkish government had targeted. Should Facebook ignore the request, as it has done elsewhere, and risk losing access to tens of millions of users in Turkey? Or should it silence the group, known as the YPG, even if doing so added to the perception that the company too often bends to the wishes of authoritarian governments?

It wasn’t a particularly close call for the company’s leadership, newly disclosed emails show.

“I am fine with this,” wrote Sheryl Sandberg, Facebook’s No. 2 executive, in a one-sentence message to a team that reviewed the page. Three years later, YPG’s photos and updates about the Turkish military’s brutal attacks on the Kurdish minority in Syria still can’t be viewed by Facebook users inside Turkey.

The conversations, among other internal emails obtained by ProPublica, provide an unusually direct look into how tech giants like Facebook handle censorship requests made by governments that routinely limit what can be said publicly. When the Turkish government attacked the Kurds in the Afrin District of northern Syria, Turkey also arrested hundreds of its own residents for criticizing the operation.

Publicly, Facebook has underscored that it cherishes free speech: “We believe freedom of expression is a fundamental human right, and we work hard to protect and defend these values around the world,” the company wrote in a blog post last month about a new Turkish law requiring that social media firms have a legal presence in the country. “More than half of the people in Turkey rely on Facebook to stay in touch with their friends and family, to express their opinions and grow their businesses.”

But behind the scenes in 2018, amid Turkey’s military campaign, Facebook ultimately sided with the government’s demands. Deliberations, the emails show, were centered on keeping the platform operational, not on human rights. “The page caused us a few PR fires in the past,” one Facebook manager warned of the YPG material.

The Turkish government’s lobbying on Afrin-related content included a call from the chairman of the BTK, Turkey’s telecommunications regulator. He reminded Facebook “to be cautious about the material being posted, especially photos of wounded people,” wrote Mark Smith, a U.K.-based policy manager, to Joel Kaplan, Facebook’s vice president of global public policy. “He also highlighted that the government may ask us to block entire pages and profiles if they become a focal point for sharing illegal content.” (Turkey considers the YPG a terrorist organization, although neither the U.S. nor Facebook does.)

The company’s eventual solution was to “geo-block,” or selectively ban users in a geographic area from viewing certain content, should the threats from Turkish officials escalate. Facebook had previously avoided the practice, even though it has become increasingly popular among governments that want to hide posts from within their borders.
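
Mechanically, geo-blocking is simple, which is part of why governments ask for it: the platform keeps serving the content everywhere except to viewers whose requests resolve to the restricted country. A minimal sketch (the content ID and restriction data here are invented for illustration):

```python
# Minimal illustration of geo-blocking; the IDs and data here are invented.

# content_id -> ISO country codes where the content must be withheld
GEO_RESTRICTIONS = {
    "page/example-ypg-page": {"TR"},
}

def visible_to(content_id: str, viewer_country: str) -> bool:
    """Return True unless the content is geo-blocked in the viewer's country."""
    return viewer_country not in GEO_RESTRICTIONS.get(content_id, set())

print(visible_to("page/example-ypg-page", "DE"))  # True  (still visible from Germany)
print(visible_to("page/example-ypg-page", "TR"))  # False (blocked for users in Turkey)
```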

Facebook confirmed to ProPublica that it made the decision to restrict the page in Turkey following a legal order from the Turkish government — and after it became clear that failing to do so would have led to its services in the country being completely shut down. The company said it had been blocked before in Turkey, including a half-dozen times in 2016.

The content that Turkey deemed offensive, according to internal emails, included photos on Facebook-owned Instagram of “wounded YPG fighters, Turkish soldiers and possibly civilians.” At the time, the YPG slammed what it understood to be Facebook’s censorship of such material. “Silencing the voice of democracy: In light of the Afrin invasion, YPG experience severe cyberattacks.” The group has published graphic images, including photos of mortally wounded fighters; “this is the way NATO ally Turkey secures its borders,” YPG wrote in one post.

Facebook spokesman Andy Stone provided a written statement in response to questions from ProPublica. . .

Continue reading.

Written by LeisureGuy

24 February 2021 at 2:49 pm

“Mark Changed The Rules”: How Facebook Went Easy On Alex Jones And Other Right-Wing Figures

leave a comment »

It’s difficult not to see Facebook as a serious problem whose root cause is Mark Zuckerberg’s shallowness and immaturity coupled with arrogance and power. Ryan Mac and Craig Silverman report in Buzzfeed News:

In April 2019, Facebook was preparing to ban one of the internet’s most notorious spreaders of misinformation and hate, Infowars founder Alex Jones. Then CEO Mark Zuckerberg personally intervened.

Jones had gained infamy for claiming that the 2012 Sandy Hook elementary school massacre was a “giant hoax,” and that the teenage survivors of the 2018 Parkland shooting were “crisis actors.” But Facebook had found that he was also relentlessly spreading hate against various groups, including Muslims and trans people. That behavior qualified him for expulsion from the social network under the company’s policies for “dangerous individuals and organizations,” which required Facebook to also remove any content that expressed “praise or support” for them.

But Zuckerberg didn’t consider the Infowars founder to be a hate figure, according to a person familiar with the decision, so he overruled his own internal experts and opened a gaping loophole: Facebook would permanently ban Jones and his company — but would not touch posts of praise and support for them from other Facebook users. This meant that Jones’ legions of followers could continue to share his lies across the world’s largest social network.

“Mark personally didn’t like the punishment, so he changed the rules,” a former policy employee told BuzzFeed News, noting that the original rule had already been in use and represented the product of untold hours of work between multiple teams and experts.

“That was the first time I experienced having to create a new category of policy to fit what Zuckerberg wanted. It’s somewhat demoralizing when we have established a policy and it’s gone through rigorous cycles. Like, what the fuck is that for?” said a second former policy employee who, like the first, asked not to be named so they could speak about internal matters.

“Mark called for a more nuanced policy and enforcement strategy,” Facebook spokesperson Andy Stone said of the Alex Jones decision, which also affected the bans of other extremist figures.

Zuckerberg’s “more nuanced policy” set off a cascading effect, the two former employees said, which delayed the company’s efforts to remove right-wing militant organizations such as the Oath Keepers, which were involved in the Jan. 6 insurrection at the US Capitol. It is also a case study in Facebook’s willingness to change its rules to placate America’s right wing and avoid political backlash.

Internal documents obtained by BuzzFeed News and interviews with 14 current and former employees show how the company’s policy team — guided by Joel Kaplan, the vice president of global public policy, and Zuckerberg’s whims — has exerted outsize influence while obstructing content moderation decisions, stymieing product rollouts, and intervening on behalf of popular conservative figures who have violated Facebook’s rules.

In December, a former core data scientist wrote a memo titled, “Political Influences on Content Policy.” Seen by BuzzFeed News, the memo stated that Kaplan’s policy team “regularly protects powerful constituencies” and listed several examples, including: removing penalties for misinformation from right-wing pages, blunting attempts to improve content quality in News Feed, and briefly blocking a proposal to stop recommending political groups ahead of the US election.

Since the November vote, at least six Facebook employees have resigned with farewell posts that have called out leadership’s failures to heed its own experts on misinformation and hate speech. Four departing employees explicitly cited the policy organization as an impediment to their work and called for a reorganization so that the public policy team, which oversees lobbying and government relations, and the content policy team, which sets and enforces the platform’s rules, would not both report to Kaplan.

Facebook declined to make Kaplan or other executives available for an interview. Stone, the company spokesperson, dismissed concerns about the vice president’s influence.

“Recycling the same warmed over conspiracy theories about the influence of one person at Facebook doesn’t make them true,” he said. “The reality is big decisions at Facebook are made with input from people across different teams who have different perspectives and expertise in different areas. To suggest otherwise is absurd.”

An integrity researcher who worked on Facebook’s efforts to protect the democratic process and rein in radicalization said the company caused direct harm to users by rejecting product changes due to concerns of political backlash.

“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity,” they wrote in an internal note seen by BuzzFeed News. They quit in August.

Those most affected by Jones’ rhetoric have taken notice, too. Lenny Pozner, whose 6-year-old son Noah was the youngest victim of the Sandy Hook shooting, called the revelation that Zuckerberg weakened penalties facing the Infowars founder “disheartening, but not surprising.” He said the company had made a promise to do better in dealing with hate and hoaxes following a 2018 letter from HONR Network, his organization for survivors of mass casualty events. Yet Facebook continues to fail to remove harmful content.

“At some point,” Pozner told BuzzFeed News, “Zuckerberg has to be held responsible for his role in allowing his platform to be weaponized and for ensuring that the ludicrous and the dangerous are given equal importance as the factual.”

“Different Views On Different Things”

Kaplan’s close relationship with Zuckerberg has led the CEO to weigh politics more heavily when making high-profile content policy enforcement decisions, current and former employees said. Kaplan’s efforts to court the Trump White House over the past four years — from his widely publicized support for Supreme Court nominee Brett Kavanaugh to his interventions on behalf of right-wing influencers in Facebook policy decisions — have also made him a target for civil rights groups and Democratic lawmakers.

In June 2020, three Democratic senators asked in a letter what role Kaplan played “in Facebook’s decision to shut down and de-prioritize internal efforts to contain extremist and hyperpolarizing activity.” Sen. Elizabeth Warren called him out for overseeing a lobbying effort that spends millions of dollars to influence politicians. With a new presidential administration in place and a spate of ongoing antitrust lawsuits, Zuckerberg must now grapple with the fact that his top political adviser may no longer be a Washington, DC asset but a potential liability.

“I think that everybody in DC hates Facebook. They have burned every bridge,” said Sarah Miller, executive director of the American Economic Liberties Project and a former member of Joe Biden’s presidential transition team. Democrats are incensed with the platform’s tolerance of hate speech and misinformation, while “pulling Trump off the platform” has brought new life to Republican gripes with the company, she said.

“Facebook has fires to put out all across the political spectrum,” Miller added. . .

Continue reading. There’s much more, and it’s damning.

Written by LeisureGuy

21 February 2021 at 12:55 pm
