Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Represent is a new ProPublica service that informs you about Congress

From an email I received from ProPublica:

My name is Derek Willis, and I’ve spent a lot of time over the past 18 years thinking about Congress, which could not agree on a spending plan on Friday, shutting down large parts of the federal government. Thinking about Congress is part of my job at ProPublica, where I use data to build websites to help people understand politics and elections.

Sometimes my friends and family ask me to explain what’s going on in Congress, or what they should expect to happen with a particular policy. I’m not an expert on everything Congress does or a policy wonk, but it’s nice to be asked. I’m hoping that Represent, our website about Congress, can be an even better translator for you.

A little more than a third of American adults can name their representative in the House, according to a 2017 survey by Haven Insights. If we don’t know who our elected representatives are, keeping track of them is a pretty big ask.

So, here’s what Represent can do for you:

I live in Montgomery County, Maryland, just outside Washington, D.C. If I want to check in on my representatives, I can plug my address into Represent. It will tell me that Jamie Raskin is my congressman and my senators are Chris Van Hollen and Ben Cardin.

I can go further. For each of them, if I want to know what bills they’ve introduced or cosponsored, Represent has that information. There’s also a feed of activity—votes, press releases and news stories—that helps me keep up with what each of them is doing and saying. I can compare their voting records to other members of the House and Senate to see how they fit into the larger political context, too.

If I’m interested in a topic, like DACA, or opioids, I can search Represent’s collection of press releases (here’s what Congress is saying about the shutdown) or database of bills to see what lawmakers are doing. And I can even look at or search lobbying registrations to see who is trying to influence lawmakers on issues I care about, like homelessness.

Being able to see not only who represents me but what they are doing in the job helps me make a more informed decision when I vote. We built Represent so that it can help you, too.

So please check it out. And if you see anything interesting, let us know.

Written by LeisureGuy

20 January 2018 at 10:10 am

Post-work: the radical idea of a world without jobs

I think the idea is that there will be a lot fewer jobs, not that there will be no jobs at all. But many jobs will vanish, and Andy Beckett in the Guardian takes a look at what might result:

Work is the master of the modern world. For most people, it is impossible to imagine society without it. It dominates and pervades everyday life – especially in Britain and the US – more completely than at any time in recent history. An obsession with employability runs through education. Even severely disabled welfare claimants are required to be work-seekers. Corporate superstars show off their epic work schedules. “Hard-working families” are idealised by politicians. Friends pitch each other business ideas. Tech companies persuade their employees that round-the-clock work is play. Gig economy companies claim that round-the-clock work is freedom. Workers commute further, strike less, retire later. Digital technology lets work invade leisure.

In all these mutually reinforcing ways, work increasingly forms our routines and psyches, and squeezes out other influences. As Joanna Biggs put it in her quietly disturbing 2015 book All Day Long: A Portrait of Britain at Work, “Work is … how we give our lives meaning when religion, party politics and community fall away.”

And yet work is not working, for ever more people, in ever more ways. We resist acknowledging these as more than isolated problems – such is work’s centrality to our belief systems – but the evidence of its failures is all around us.

As a source of subsistence, let alone prosperity, work is now insufficient for whole social classes. In the UK, almost two-thirds of those in poverty – around 8 million people – are in working households. In the US, the average wage has stagnated for half a century.

As a source of social mobility and self-worth, work increasingly fails even the most educated people – supposedly the system’s winners. In 2017, half of recent UK graduates were officially classified as “working in a non-graduate role”. In the US, “belief in work is crumbling among people in their 20s and 30s”, says Benjamin Hunnicutt, a leading historian of work. “They are not looking to their job for satisfaction or social advancement.” (You can sense this every time a graduate with a faraway look makes you a latte.)

Work is increasingly precarious: more zero-hours or short-term contracts; more self-employed people with erratic incomes; more corporate “restructurings” for those still with actual jobs. As a source of sustainable consumer booms and mass home-ownership – for much of the 20th century, the main successes of mainstream western economic policy – work is discredited daily by our ongoing debt and housing crises. For many people, not just the very wealthy, work has become less important financially than inheriting money or owning a home.

Whether you look at a screen all day, or sell other underpaid people goods they can’t afford, more and more work feels pointless or even socially damaging – what the American anthropologist David Graeber called “bullshit jobs” in a famous 2013 article. Among others, Graeber condemned “private equity CEOs, lobbyists, PR researchers … telemarketers, bailiffs”, and the “ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone is spending so much of their time working”.

The argument seemed subjective and crude, but economic data increasingly supports it. The growth of productivity, or the value of what is produced per hour worked, is slowing across the rich world – despite the constant measurement of employee performance and intensification of work routines that makes more and more jobs barely tolerable.

Unsurprisingly, work is increasingly regarded as bad for your health: “Stress … an overwhelming ‘to-do’ list … [and] long hours sitting at a desk,” the Cass Business School professor Peter Fleming notes in his new book, The Death of Homo Economicus, are beginning to be seen by medical authorities as akin to smoking.

Work is badly distributed. People have too much, or too little, or both in the same month. And away from our unpredictable, all-consuming workplaces, vital human activities are increasingly neglected. Workers lack the time or energy to raise children attentively, or to look after elderly relations. “The crisis of work is also a crisis of home,” declared the social theorists Helen Hester and Nick Srnicek in a paper last year. This neglect will only get worse as the population grows and ages.

And finally, beyond all these dysfunctions, loom the most-discussed, most existential threats to work as we know it: automation, and the state of the environment. Some recent estimates suggest that between a third and a half of all jobs could be taken over by artificial intelligence in the next two decades. Other forecasters doubt whether work can be sustained in its current, toxic form on a warming planet.

Like an empire that has expanded too far, work may be both more powerful and more vulnerable than ever before. We know work’s multiplying problems intimately, but it feels impossible to solve them all. Is it time to start thinking of an alternative?

Our culture of work strains to cover its flaws by claiming to be unavoidable and natural. “Mankind is hardwired to work,” as the Conservative MP Nick Boles puts it in a new book, Square Deal. It is an argument most of us have long internalised.

But not quite all. The idea of a world freed from work, wholly or in part, has been intermittently expressed – and mocked and suppressed – for as long as modern capitalism has existed. Repeatedly, the promise of less work has been prominent in visions of the future. In 1845, Karl Marx wrote that in a communist society workers would be freed from the monotony of a single draining job to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner”. In 1884, the socialist William Morris proposed that in “beautiful” factories of the future, surrounded by gardens for relaxation, employees should work only “four hours a day”.

In 1930, the economist John Maynard Keynes predicted that, by the early 21st century, advances in technology would lead to an “age of leisure and abundance”, in which people might work 15 hours a week. In 1980, as robots began to depopulate factories, the French social and economic theorist André Gorz declared: “The abolition of work is a process already underway … The manner in which [it] is to be managed … constitutes the central political issue of the coming decades.”

Since the early 2010s, as the crisis of work has become increasingly unavoidable in the US and the UK, these heretical ideas have been rediscovered and developed further. Brief polemics such as Graeber’s “bullshit jobs” have been followed by more nuanced books, creating a rapidly growing literature that critiques work as an ideology – sometimes labelling it “workism” – and explores what could take its place. A new anti-work movement has taken shape. . .

Continue reading.

Written by LeisureGuy

20 January 2018 at 8:54 am

The Fall of Travis Kalanick Was a Lot Weirder and Darker Than You Thought

Eric Newcomer and Brad Stone write in Businessweek:

A year ago, before the investor lawsuits and the federal investigations, before the mass resignations, and before the connotation of the word “Uber” shifted from “world’s most valuable startup” to “world’s most dysfunctional,” Uber’s executives sat around a hotel conference room table in San Francisco, trying to convince their chief executive officer, Travis Kalanick, that the company had a major problem: him.

The executives were armed that day with something unusual for Uber Technologies Inc.: the results of a survey. Kalanick operated by gut feeling and with a stubborn sense of how people should feel, not how they did. Jeff Jones, Uber’s new president and former chief marketing officer for Target Corp., wanted more substantial insights. Conclusions drawn from the survey were printed and hanging on the walls. About half the respondents had a positive impression of Uber and its convenient ride-hailing app. But if respondents knew anything about Kalanick, an inveterate flouter of both workplace conventions and local transportation laws, they had a decidedly negative view.

As usual with Kalanick, the discussion grew contentious. Jones and his deputies argued that Uber’s riders and drivers viewed the company as made up of a bunch of greedy, self-centered jerks. And as usual, Kalanick retorted that the company had a public-relations problem, not a cultural one.

Then a top executive excused herself to answer a phone call. A minute later, she reappeared and asked Kalanick to step into the hallway. Another executive joined them. They hunched over a laptop to watch a video that had just been posted online by Bloomberg News: grainy, black-and-white dashcam footage of Kalanick in the back seat of an UberBlack on Super Bowl weekend, heatedly arguing over fares with a driver named Fawzi Kamel.

“Some people don’t like to take responsibility for their own shit!” Kalanick can be heard yelling at Kamel. “They blame everything in their life on somebody else!”

As the clip ended, the three stood in stunned silence. Kalanick seemed to understand that his behavior required some form of contrition. According to a person who was there, he literally got down on his hands and knees and began squirming on the floor. “This is bad,” he muttered. “I’m terrible.”

Then, contrition period over, he got up, called a board member, demanded a new PR strategy, and embarked on a yearlong starring role as the villain who gets his comeuppance in the most gripping startup drama since the dot-com bubble. It’s a story that, until now, has never been fully told.

The melodrama began, in a sense, with Donald Trump. On Jan. 27 the newly inaugurated president issued his executive order imposing border restrictions on people from seven Muslim countries. Outrage erupted. People took to the streets; tech workers in Silicon Valley walked out of their offices in symbolic protest. And in New York, a small union called the New York Taxi Workers Alliance declared that there would be no taxi pickups from 6 p.m. to 7 p.m. on Saturday night at John F. Kennedy International Airport.

For Uber, that would create extra demand at the airport, which meant it could charge more—but this would probably cause a backlash. That had happened before when the company let its “surge pricing” algorithms do their thing. So the New York managers decided to be good citizens and suspend surge pricing for the night.

The backlash hit anyway. After years of negative revelations—spying on passengers, dubious driverless-car experiments in San Francisco, the CEO’s bragging about sexual conquests, to name just a few—the public was already inclined to believe the worst of Uber. If the company wasn’t price gouging this time, maybe it was trying to break up the JFK strike. A new hashtag was trending on Twitter: #deleteuber. Users deleted their accounts by the thousands. Lyft Inc., the rival service that branded itself the anti-Uber, capitalized on the moment and donated $1 million to the American Civil Liberties Union.

Further stoking the flames was Kalanick’s decision to join Trump’s business advisory council. Kalanick argued that his participation in the council wasn’t an endorsement of the president; he just wanted a seat at the table, along with Elon Musk, International Business Machines Corp.’s Ginni Rometty, and Walt Disney Co.’s Bob Iger. But intentions didn’t seem to matter. Criticism from riders and drivers intensified, and Kalanick spent days talking to his executives about what to do. They considered whether he should go to the first meeting and find some pretense to object and leave; he even floated the idea of wearing a protest T-shirt to the council meeting, according to people familiar with the discussions.

Ultimately, Kalanick decided the whole thing wasn’t worth the trouble and his minders set up a call so he could politely say no to Trump. A chronic pacer, Kalanick walked away from his desk at the appointed time. The first call from the White House came—and went to Kalanick’s voicemail. Then came the second call. Trump was on the line, and Kalanick walked into a glass-walled conference room to deliver the news. The conversation apparently went as one would expect. Kalanick emerged to tell his colleagues that the president was “super un-pumped.”

In mid-February, an engineer named Susan Fowler penned a blog post, “Reflecting on One Very, Very Strange Year at Uber,” about the sexual harassment she had witnessed in her time at the company. The explosive response to that post prompted Kalanick to hire Eric Holder Jr., the former U.S. attorney general and a partner at the firm Covington & Burling LLP, to lead an investigation into Fowler’s claims. “We will leave no stones unturned,” Holder told Bloomberg News at the start of his inquiry. “This company has one opportunity to get this right.”

But the hits kept coming. The infamous dashcam footage surfaced; a few weeks later, the New York Times reported on a secret Uber technology called Greyball, which the company developed to identify and deny service to riders who had violated the company’s contractual terms. In some cities and countries, Uber managers used Greyball to avoid picking up taxi inspectors and other law enforcement officials who might want to ticket drivers or shut the service down.

A common factor in all these crises was Kalanick’s unrelenting combativeness. In meetings, he would alternately impress and alienate employees, investors, and his board. Over time, he turned a lot of friends into enemies with stories to tell. Jones, the president who had commissioned the public survey, resigned in March after only six months on the job, citing differences over “beliefs and approach to leadership.” In exit interviews with Uber board members, he was more specific, excoriating Kalanick’s shotgun management style and unwillingness to listen. Jones seemed so eager to leave the company that he declined to negotiate an exit package, potentially leaving millions of dollars behind.

Google, too, should have been an ally. . . .

Continue reading.

Written by LeisureGuy

19 January 2018 at 9:25 am

Posted in Business, Law, Technology

Human-Centered Design and the Missile False Alarm in Hawaii

Scott Robertson writes in Medium:

I wasn’t in the room where it happened, but I experienced the consequence. On Saturday, January 13, 2018, somebody pushed a button in Hawaii that sent an “Emergency Alert” text message to cell phones statewide: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”

The warning was also automatically broadcast on television and radio. It caused widespread panic. Residents and visitors huddled in closets and basements. Tourists vainly sought clarification from their hotels about what to do. People hugged their children and called their loved ones. Thirty minutes later, another alert declared a false alarm.

The visibly shaken Governor appeared on television with the equally discouraged Emergency Management Administrator and declared “This should not have happened.” He said, “An employee pushed the wrong button.” There will be an investigation and officials will be updating procedures, but let me suggest that some human-centered design guidelines and human-computer interaction (HCI) procedures might have helped avoid the whole thing in the first place.

Here’s what I know from reading press reports:

  1. The accident occurred at a shift change.
  2. During the shift change, there is some kind of routine run-through of a procedure very similar to the actual alert procedure.
  3. In an actual alert, there is a button to click on the screen that will cause the warning to be generated. One person is responsible for pressing this button.
  4. There is a confirmation dialog box that follows the button press.
  5. There is no standardized false alarm alert mechanism.

We can talk all day about hindsight, but the human-centered design process in software development is there to provide foresight, and every one of the five points above is a red flag for practitioners experienced in its application.

First, exactly what is human-centered design? It is an approach to system development that prioritizes the experiences of the people who will be using the system. It takes into account how people perceive information in all senses; what people are capable of doing physically with their hands, fingers, eyes, and whatever else they are using to interact with a computer system; how people process information, what they can remember, and what taxes their information processing capabilities or confuses them; how human feelings and emotions affect performance and attention; how the context of people’s activity influences what they think and do; and so on.

A human-centered designer is therefore someone with knowledge about the behavioral, cognitive, and physiological sciences, who also knows about the design of interactive computing systems. They should be part of any development team and play just as important a role as the best software engineer or programmer.

The Human(s) in the Loop

Now, to the points above.

  1. The shift change. Think to yourself what is happening at a shift change. The activity is simple to describe: one person is leaving and another is taking over their job. From a technical perspective, there will be a bunch of steps for transferring the activities. But think about it more deeply from the human perspective. How do you feel at the end of a shift? Anxious to get going? In a hurry? What’s on your mind? The drive? Picking up the kids? Stopping at the store? This is a classic situation where people are distracted, inattentive, and cognitively overloaded. It is a common place where errors happen.
  2. The routine run-through. A typical shift change involves following a checklist of actions. This sounds like a great idea for making sure that people complete every step, but again think deeply about how humans master routines. The human brain is an automated routine learner, and the point of learning a routine is to free up attention. When you were first learning to drive, every movement was an agonizing effort and all attention was on the placement of your hands, the pressure of your foot on a pedal, where you should be looking, and what the next step was. Once you became an expert, however, you could drive “without thinking.” Now, when you drive your attention can be elsewhere and you can sometimes complete a journey without even remembering what you are doing. But this routinization comes with a cost. Have you ever driven to the wrong place because you weren’t paying attention? Your driving was excellent, but the whole process was on automatic and the destination was wrong. And, by the way, the wrong destination was a well-practiced one like home. If the run-through at the shift change was almost exactly the same as the actual process for generating an alert, the chances of triggering an alert by mistake because of routinization were increased.
  3. A button click sends an emergency alert to the entire state. There is a mismatch here between the simplicity of the action and magnitude of the consequence. While speed is of the essence, there is a tradeoff between simplicity and outcome. There are many ways around this problem, for example just labeling the button with appropriate verbiage, including a warning icon, and using a danger color (i.e. red) might be enough. Requiring a confirmation as a second step is another approach, which was used here but was ineffective (we will see why in #4 below). Perhaps the most obvious fix here is to require a second person to verify the action.
  4. A confirmation dialog didn’t work. How many times have you seen “Are you sure? Yes, No”? Plenty, no doubt. How many times have you found yourself swearing after pressing “Yes” too hastily? This is one of the most common questions that an IT consultant asks, “Why did you press to confirm?” And the most common and frustrating answer is, “I don’t know.” Well, the reason is the same as #2 above: routinization. When you are on automatic, you don’t see and you don’t think, at least not consciously. Routinization will combine any commonly occurring sequence of actions into a single action without your awareness. That means that pressing the “Send Alert” button (or whatever it is) and pressing the “Yes” to confirm button are actually not two actions in the human mind, but really just one action. After happening together a few times, the two button presses are programmed into a single movement by the brain, and this movement cannot even be stopped once it starts. Hence the befuddlement afterwards, “I don’t know why I did that.”
  5. There is no standardized false alarm mechanism. Everyone makes mistakes. . .

Continue reading.

There’s quite a bit more, and it’s interesting.
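The fixes suggested in points 3 and 4 — a typed confirmation that differs between drill and live alerts, plus a second-person check — can be sketched in a few lines. This is purely illustrative; the article says nothing about Hawaii's actual software, and every name below is hypothetical.

```python
# A hedged sketch of an alert-confirmation flow designed to resist
# routinization. The key idea: the drill path and the live path must
# require *different* typed phrases, so the two sequences cannot be
# merged into a single motor routine, and a live alert additionally
# needs independent confirmation from a second operator.

def confirm_alert(kind: str, read_input=input) -> bool:
    """Require a typed phrase instead of an 'Are you sure? Yes/No' click.
    The phrase differs between drill and live so muscle memory for one
    cannot trigger the other."""
    phrase = "SEND LIVE ALERT" if kind == "live" else "RUN DRILL"
    typed = read_input(f'Type "{phrase}" to continue: ')
    return typed.strip() == phrase

def two_person_check(first_operator_ok: bool, second_operator_ok: bool) -> bool:
    """A live alert also requires sign-off from a second operator,
    the 'most obvious fix' mentioned in point 3."""
    return first_operator_ok and second_operator_ok
```

Typing a distinct phrase forces conscious attention at exactly the point where a routinized "Yes" click would sail through unnoticed.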

Written by LeisureGuy

17 January 2018 at 9:24 am

Posted in Software, Technology

AlphaZero and the Protected Passed Pawn

I thought this commentary was particularly good, though I realize tastes differ. Fascinating game.

Written by LeisureGuy

12 January 2018 at 11:16 pm

Harvard Study Shows Why Big Telecom Is Terrified of Community-Run Broadband

Karl Bode writes in Motherboard:

A new study out of Harvard once again makes it clear why incumbent ISPs like Comcast, Verizon and AT&T are so terrified by the idea of communities building their own broadband networks.

According to the new study by the Berkman Klein Center for Internet and Society at Harvard University, community-owned broadband networks provide consumers with significantly lower rates than their private-sector counterparts.

The study examined data collected from 40 municipal broadband providers and their private-sector counterparts throughout 2015 and 2016. Pricing data was collected predominantly by visiting carrier websites, where pricing is (quite intentionally) often hidden behind prequalification walls, since pricing varies dramatically based on regional competition.

In many markets, analysts couldn’t make direct comparisons with a private ISP, either because the ISP failed to meet the FCC’s 25 Mbps down, 3 Mbps up standard definition of broadband (a problem for countless telcos who refuse to upgrade aging DSL lines), or because the ISP prequalification website terms of service “deterred or prohibited” data collection.

But out of the 27 markets where they could make direct comparisons, researchers found that in 23 cases, the community-owned ISPs’ pricing was lower when the service costs and fees were averaged over four years.

“When considering entry-level broadband service—the least-expensive plan that provides at least 25/3 Mbps service—23 out of 27 community-owned [fiber to the home] providers we studied charged the lowest prices in their community when considering the annual average cost of service over a four-year period, taking into account installation and equipment costs and averaging any initial teaser rates with later, higher, rates,” they noted. . .

Continue reading.
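The study's methodology — averaging the annual cost of service over four years, folding in installation and equipment fees and any initial teaser rate — is straightforward to express as a calculation. The sketch below is my own illustration of that methodology, not code from the Berkman Klein study; the parameter values in the example are made up.

```python
def avg_annual_cost(install: float, equipment: float,
                    teaser_monthly: float, teaser_months: int,
                    regular_monthly: float, years: int = 4) -> float:
    """Average annual cost of a broadband plan over `years`, counting
    one-time installation and equipment fees and an initial teaser rate
    that later rises to the regular rate."""
    months = years * 12
    teaser_months = min(teaser_months, months)  # teaser can't exceed the window
    total = install + equipment
    total += teaser_monthly * teaser_months
    total += regular_monthly * (months - teaser_months)
    return total / years

# Hypothetical plan: $100 install, $40/mo teaser for the first year,
# $60/mo thereafter -- the teaser looks cheap, but the four-year
# average reveals the real annual cost.
print(avg_annual_cost(install=100, equipment=0,
                      teaser_monthly=40, teaser_months=12,
                      regular_monthly=60))
```

Averaging this way is what lets the researchers compare a municipal ISP's flat rate against a private ISP's teaser-then-hike pricing on equal footing.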

Written by LeisureGuy

12 January 2018 at 1:18 pm

How to Fix Facebook—Before It Fixes Us

Roger McNamee writes in the Washington Monthly:

In early 2006, I got a call from Chris Kelly, then the chief privacy officer at Facebook, asking if I would be willing to meet with his boss, Mark Zuckerberg. I had been a technology investor for more than two decades, but the meeting was unlike any I had ever had. Mark was only twenty-two. He was facing a difficult decision, Chris said, and wanted advice from an experienced person with no stake in the outcome.

When we met, I began by letting Mark know the perspective I was coming from. Soon, I predicted, he would get a billion-dollar offer to buy Facebook from either Microsoft or Yahoo, and everyone, from the company’s board to the executive staff to Mark’s parents, would advise him to take it. I told Mark that he should turn down any acquisition offer. He had an opportunity to create a uniquely great company if he remained true to his vision. At two years old, Facebook was still years away from its first dollar of profit. It was still mostly limited to students and lacked most of the features we take for granted today. But I was convinced that Mark had created a game-changing platform that would eventually be bigger than Google was at the time. Facebook wasn’t the first social network, but it was the first to combine true identity with scalable technology. I told Mark the market was much bigger than just young people; the real value would come when busy adults, parents and grandparents, joined the network and used it to keep in touch with people they didn’t get to see often.

My little speech only took a few minutes. What ensued was the most painful silence of my professional career. It felt like an hour. Finally, Mark revealed why he had asked to meet with me: Yahoo had made that billion-dollar offer, and everyone was telling him to take it.

It only took a few minutes to help him figure out how to get out of the deal. So began a three-year mentoring relationship. In 2007, Mark offered me a choice between investing or joining the board of Facebook. As a professional investor, I chose the former. We spoke often about a range of issues, culminating in my suggestion that he hire Sheryl Sandberg as chief operating officer, and then my help in recruiting her. (Sheryl had introduced me to Bono in 2000; a few years later, he and I formed Elevation Partners, a private equity firm.) My role as a mentor ended prior to the Facebook IPO, when board members like Marc Andreessen and Peter Thiel took on that role.

In my thirty-five-year career in technology investing, I have never made a bigger contribution to a company’s success than I made at Facebook. It was my proudest accomplishment. I admired Mark and Sheryl enormously. Not surprisingly, Facebook became my favorite app. I checked it constantly, and I became an expert in using the platform by marketing my rock band, Moonalice, through a Facebook page. As the administrator of that page, I learned to maximize the organic reach of my posts and use small amounts of advertising dollars to extend and target that reach. It required an ability to adapt, because Facebook kept changing the rules. By successfully adapting to each change, we made our page among the highest-engagement fan pages on the platform.

My familiarity with building organic engagement put me in a position to notice that something strange was going on in February 2016. The Democratic primary was getting under way in New Hampshire, and I started to notice a flood of viciously misogynistic anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I knew how to build engagement organically on Facebook. This was not organic. It appeared to be well organized, with an advertising budget. But surely the Sanders campaign wasn’t stupid enough to be pushing the memes themselves. I didn’t know what was going on, but I worried that Facebook was being used in ways that the founders did not intend.

A month later I noticed an unrelated but equally disturbing news item. A consulting firm was revealed to be scraping data about people interested in the Black Lives Matter protest movement and selling it to police departments. Only after that news came out did Facebook announce that it would cut off the company’s access to the information. That got my attention. Here was a bad actor violating Facebook’s terms of service, doing a lot of harm, and then being slapped on the wrist. Facebook wasn’t paying attention until after the damage was done. I made a note to myself to learn more.

Meanwhile, the flood of anti-Clinton memes continued all spring. I still didn’t understand what was driving it, except that the memes were viral to a degree that didn’t seem to be organic. And, as it turned out, something equally strange was happening across the Atlantic.

When citizens of the United Kingdom voted to leave the European Union in June 2016, most observers were stunned. The polls had predicted a victory for the “Remain” campaign. And common sense made it hard to believe that Britons would do something so obviously contrary to their self-interest. But neither common sense nor the polling data fully accounted for a crucial factor: the new power of social platforms to amplify negative messages.

Facebook, Google, and other social media platforms make their money from advertising. As with all ad-supported businesses, that means advertisers are the true customers, while audience members are the product. Until the past decade, media platforms were locked into a one-size-fits-all broadcast model. Success with advertisers depended on producing content that would appeal to the largest possible audience. Compelling content was essential, because audiences could choose from a variety of distribution mediums, none of which could expect to hold any individual consumer’s attention for more than a few hours. TVs weren’t mobile. Computers were mobile, but awkward. Newspapers and books were mobile and not awkward, but relatively cerebral. Movie theaters were fun, but inconvenient.

When their business was limited to personal computers, the internet platforms were at a disadvantage. Their proprietary content couldn’t compete with traditional media, and their delivery medium, the PC, was generally only usable at a desk. Their one advantage—a wealth of personal data—was not enough to overcome the disadvantage in content. As a result, web platforms had to underprice their advertising.

Smartphones changed the advertising game completely. It took only a few years for . . .

Continue reading. There’s a lot more.

Later in the article:

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance. Of course, this has always been true for media; hence the old news adage “If it bleeds, it leads.” But for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members. To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.

It took Brexit for me to begin to see the danger of this dynamic.

And later:

I realized that the problems I had been seeing couldn’t be solved simply by, say, Facebook hiring staff to monitor the content on the site. The problems were inherent in the attention-based, algorithm-driven business model. And what I suspected was Russia’s meddling in 2016 was only a prelude to what we’d see in 2018 and beyond. The level of political discourse, already in the gutter, was going to get even worse.

See also “Social Media Has Hijacked Our Brains and Threatens Global Democracy.”

Written by LeisureGuy

5 January 2018 at 11:42 am
