Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

A Good Tactic: Set Up Credit Card Alerts

They help track your spending and can catch fraudulent charges

Well before the pandemic, I stopped using cash for in-person transactions and switched to my credit card. Partly this was because I use a “dividend” card, which at the end of the year refunds a (small) portion of what I spent.

As I describe in my article on personal budgeting and money management, I do not allow credit card charges to accumulate. The reason: an unpaid credit card balance means that the amount shown as available in my checking account is a mirage. That much is not available, because some of it is already committed to pay the credit card balance. I learned through bitter experience that small charges can quickly total a lot, so I pay charges when they occur.

In effect, I use my credit card like a debit card. When I charge something, then on that same day I pay for the charge from my checking account by using online banking. Thus my credit balance stays at zero (and my checking balance shrinks as I pay the charges — and I always know exactly how much money I actually have available). I don’t use a debit card because (a) my debit card doesn’t refund any portion of what I purchase and (b) credit cards have buyer protection built into the agreement and limit the loss I might suffer.

When I’m out shopping, I save receipts for my purchases; when I get home, they remind me to pay the charges I just made. That system generally worked, but occasionally I would forget to get a receipt.

The winning tactic

To make sure I miss no charges, I started using an option my bank offers. When I sign in to the bank’s website, the main menu includes “Manage My Alerts”; clicking that shows a “transaction alerts” choice, which lists a variety of alerts I can set on transactions. I checked the box to get an email whenever a credit card charge exceeds a limit the customer specifies. I set the limit at $1, so in practice I get an email for every charge.
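The alert is nothing more than a threshold rule. Here is a minimal sketch of the logic (the function name and amounts are illustrative, not the bank’s actual implementation):

```python
# Sketch of a transaction alert: notify on any charge above a
# customer-specified limit. Setting the limit to $1 effectively
# means every charge triggers an email.

def should_alert(charge_amount: float, limit: float = 1.00) -> bool:
    """Return True when a charge exceeds the alert limit."""
    return charge_amount > limit

charges = [4.50, 0.99, 27.10]
alerts = [amt for amt in charges if should_alert(amt)]  # [4.50, 27.10]
```

With the limit at $1, only sub-dollar charges slip through unannounced, which for most people means none at all.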

When I make a purchase online, I’m at the computer, so I don’t actually need a reminder — I pay the credit card charge at once from my checking account. Emails also notify me of regular scheduled payments on my credit card (Netflix, for example), a useful reminder. But mostly the alerts serve as reminders when I return home from shopping, so I don’t forget to “reimburse” my credit card account for all charges I made while I was out.

Catching fraud when it happens

Today I realized another benefit of the alerts. I received emails for five transactions from two distant merchants I didn’t know and from whom I had bought nothing. One merchant had made two transactions, a charge and a refund in the same amount, neither of which I had initiated. The other merchant (in a different country) had made two charges for identical amounts and a refund for only one.

Note that for some merchants and organizations the name shown on the credit card charge is not the merchant’s or organization’s name. In that case, you will see a warning at the time of purchase: “Charges will show on your credit card as…” And, of course, you generally know what you charged, so you probably will recall the charge by the amount paid. However, the five transactions I was notified about were from companies and locations totally unfamiliar to me, and the amounts did not reflect any purchases I had made.
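What the alerts enabled here is essentially a comparison of incoming transactions against the merchants you actually patronize. A toy sketch of that check (merchant names and amounts are invented for illustration):

```python
# Flag alert transactions from merchants you don't recognize.
# A real check would also compare amounts against saved receipts.

known_merchants = {"GROCERY CO", "NETFLIX", "GAS STATION"}

transactions = [
    {"merchant": "GROCERY CO", "amount": 42.17},
    {"merchant": "XYZ IMPORTS LTD", "amount": 350.00},  # unfamiliar
]

suspicious = [t for t in transactions
              if t["merchant"] not in known_merchants]
```

Anything left in `suspicious` is worth a call to the bank, exactly as happened here.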

I immediately called the bank, which deactivated the card on the spot and reversed the fraudulent charges. They said a new card would be issued and would arrive in a week to ten days, and asked whether that would be all right.

I said that it would not, because — as I described above — I use the card for every purchase I make, and I would certainly need groceries sooner than that. They then said the card would arrive within two days.

Use the alerts

I highly recommend using such alerts. Look into what alerts your bank offers. Most banks offer a transaction alert that notifies you whenever your credit card is charged above some limit. Set the limit to $1 to be notified of every charge. That will help you track your spending and — more important — alert you immediately to fraudulent charges.

Written by LeisureGuy

17 January 2021 at 3:07 pm

When Tech Antitrust Failed: Books and book prices

Shira Ovide reports in the NY Times:

If you’ve wondered recently why prices for e-books seem high, let me tell you why a failure of antitrust law might be (partly) to blame.

A government antitrust lawsuit a decade ago that was intended to push down prices helped lead instead to higher ones.

The outcome suggests that the U.S. government’s lawsuits against Google and Facebook and a just-announced Connecticut antitrust investigation into Amazon’s e-book business may not have the desired effects, even if the governments win. It turns out that trying to change allegedly illegal corporate behavior can backfire.

Cast your mind back to 2012. The second “Twilight” movie was big. And the Justice Department sued Apple and five of America’s leading book publishers in the name of protecting consumers and our wallets.

Book publishers were freaked out about Amazon’s habit of pricing many popular Kindle books at $9.99 no matter what the book companies thought the price should be. Amazon was willing to lose money on e-books, but the publishers worried that this would devalue their products.

The government said that to strike back at Amazon, the book companies and Apple made a deal. Publishers could set their own e-book prices on Apple’s digital bookstore, and they essentially could block discounts by any bookseller, including Amazon.

To the government this looked like a conspiracy to eliminate competition over prices — a big no-no under antitrust laws. Eventually the book publishers settled and Apple lost in court.

Later, Amazon, Apple and other e-book sellers agreed to let publishers enforce e-book prices. The arrangements were legally kosher because they were separately negotiated between each publisher and bookseller. (I can’t answer why Amazon agreed to this.)

The government won but the publishers got what they wanted with e-books. Bookstores can choose to take a loss to heavily discount a print book, but they typically can’t with digital editions. The $10 mass-market e-book is mostly gone.

How did an antitrust case meant to lower prices instead possibly lead to higher prices? Christopher L. Sagers, a law professor at Cleveland State University who wrote a book about the e-books litigation, told me that he believes it’s a failure of corporate antitrust laws.

Professor Sagers and others believe that because a few major book publishers release most mass-market titles, they have the power to keep prices high. He laments that the antitrust laws have failed to stop industries from getting so concentrated. In other words, he thinks it’s bad for all of us that a book-publishing monopoly is trying to fight Amazon’s monopoly.

“American antitrust is basically a failure and this case was a microcosm,” he told me.

Somehow this newsletter keeps coming back to this debate. An influential view — particularly among left-leaning economists, politicians and scholars — is that U.S. antitrust laws or the way they’re applied are flawed. They believe that the government has failed to stop the increasing corporate concentration and mergers in industries like airlines, banking and technology, which has led to higher prices, worse products and income inequality. . .

Continue reading.

It’s clear that printed books incur substantial costs that do not apply to ebooks: materials, production labor, distribution and shipping, and so on. Ebook prices should not include those costs.

Written by LeisureGuy

15 January 2021 at 12:12 pm

How the Trump terrorists were so quickly identified

Written by LeisureGuy

11 January 2021 at 10:32 pm

Superspreader Down: How Trump’s Exile from Social Media Alters the Future of Politics, Security, and Public Health

Peter W. Singer writes at Defense One:

By the numbers, no person in human history has shared more conspiracy theories with a greater number of people than Donald J. Trump. Among all the momentous events of the last week, the silencing of his social-media megaphones is a “yuge” moment not just for American politics but a host of issues from public health to national security.

In researching LikeWar, Emerson Brooking’s and my book on the weaponization of social media, I actually went back and read every single @realdonaldtrump tweet, going back to his very first: a May 4, 2009, announcement of his upcoming appearance on the Letterman show. As you sift through the more than 57,000 tweets that follow, the sheer scale of the lies and insults becomes mind-numbing. (I joke about my “information warfare PTSD.”) Yet what is also notable is how many conspiracy theories Trump either started or massively elevated long before becoming president. They ranged from well-known lies like birtherism to other ones that are even more despicable in retrospect, like fueling anti-vaccine myths.

Most importantly, we found that Trump was spectacularly effective in persuading others to spread his conspiracy theories. Our research showed that, just like in public health, superspreaders are the key to virality. The path to making the internet less toxic is placing limits on these superspreaders, be they ISIS propagandists or right-wing extremists. Instead of trying to police everyone, we must focus on key nodes that affect everyone.

Banning Trump is obviously the headline event for social media, but it reflects a larger policy shift by the companies that created and run these now-essential networks. These firms are now making content moderation decisions based increasingly not just on whether a user or a post violated their rules, but what effect these might have on people off the network. This was already shifting as firms adjusted to reduce COVID-19 misinformation, but hit its culmination in Trump’s ejection.

Over the last year, and seen most explosively in the violent seizing of the Capitol, the political context changed, both on social media and in the real world. But Trump didn’t seem to understand it. Or, maybe, having never been held accountable from birth onwards, the outgoing president thought he could keep on operating the same way: crossing a line, and getting away with it. Importantly, Twitter decided he had crossed a final line. He had not just repeatedly broken the platform’s rules on election-fraud claims. Now, even after all the events at the Capitol, he had used his return to Twitter after an initial suspension to immediately break the pledge of a “peaceful transition” that he had made in a stilted video released just the night before.

What too many in media and politics are missing, but what Twitter and the other platforms couldn’t ignore, was Trump’s announcement that he would not be participating in the Inaugural events. With that, he didn’t just go back on his pledge of peaceful transition, but threw gasoline on the fire yet again. There were already a series of extremist militia events planned for Jan. 17 in various state capitals. (The storming of the U.S. Capitol was not isolated; last week saw armed pro-Trump mobs also attempt or succeed in breaking into state legislatures and governors’ homes in Georgia, Nevada, South Carolina, Utah, and Washington state.) Even more worrisome, security analysts had picked up online discussion of a “Million Militia March” set for Jan. 20 in Washington, D.C. Its purpose, at least in the chatter, is not just to disrupt Biden’s inauguration, but also to seek violent payback on police for the supposed “martyrdom” of the rioter killed in the Capitol. Twitter officials concluded that Trump’s tweets “are likely to inspire others to replicate the violent acts that took place on January 6, 2021.”

Whether Trump intended the dual dog whistle or not, it was heard that way by both the “patriots” whom he’d told he “loved” even as they rampaged and chanted “Hang Mike Pence, Hang Mike Pence,” and by the platform companies that own the networks he needed for his rabble-rousing messages. And for them, as it should be for the rest of us, Trump had lost the benefit of the doubt.

The reverberations of Trump’s deplatforming as part of this larger shift will shake out for not just the coming days, but over the long term — and in everything from terrorism to public health. The reason is that it fundamentally alters the playing field.

Everything in the social media ecosystem was once  . . .

Continue reading.

Written by LeisureGuy

11 January 2021 at 3:35 pm

What’s required to restore electronic security to the Capitol

Read this thread by @jacobian, which begins:

So much this. A physical breach is a nightmare scenario for infosec.

On the off-chance that any of my followers are involved in this — I do have some experience in scenarios like this and would be happy to help. If I can be of assistance hit me up.
Just to give folks who aren’t in the field an idea what we’re talking about:

– we must assume that foreign agents were among the rioters
– snooping devices can be implanted into anything with a power cord
– so every device in the Capitol is now a potential foreign asset
So, just for starters:

– all computers need to be inventoried, inspected inside and out, and the OS paved/rebuilt
– keyboards, mice, &c might now have implants, they probably should be tossed (see eg keelog.com/forensic-keylo… which looks like a usb cable but is in fact a logger)
Then everything with a power source needs to be audited. This means lamps. Thermostats. Those cute little portrait lights on top of photos. The vacuum cleaner in the storage closet. Even outlets — a fav trick of one Red Team I know is a fake outlet cover that hides a mic. 
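The remediation posture the thread describes boils down to one rule: every powered device is presumed compromised until it has been individually inspected and cleared. A toy sketch of tracking that inventory (device names are made up; a real response would use proper asset-management tooling):

```python
# Post-breach audit tracking: assume everything is compromised
# until an inspection proves otherwise.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    inspected: bool = False  # presumed compromised by default

inventory = [
    Device("workstation-101"),
    Device("thermostat-2f"),
    Device("usb-keyboard-17"),
]

def pending_audit(devices):
    """Names of devices still presumed compromised."""
    return [d.name for d in devices if not d.inspected]

inventory[0].inspected = True  # workstation rebuilt and cleared
```

The point of the default is the same as the thread’s: nothing gets marked safe by omission.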

Continue reading.

Written by LeisureGuy

10 January 2021 at 3:22 pm

Why Parler is doomed

A very interesting Twitter thread by David Troy, presented in a new app that renders Twitter threads in readable form. The thread begins:

THREAD: Now that @amazon @awscloud has announced they will no longer host @parler_app, many have speculated that they will just “find another host.”

Here is why that’s not so simple and what it will likely mean for the app’s future. First, let’s look at where things are… 

2/ Google and Apple have removed the app from their app stores, effectively terminating growth on mobile devices. People can still access the (not good) web UI until Amazon terminates them today. CEO Matze has said they may be down “up to a week” while they find new hosting. 
3/ Translated to English, that’s code for “we have no idea what’s going to happen next.” No US cloud provider (Microsoft, Google, IBM, Digital Ocean) is likely to touch this, as it could be seen as providing material support for sedition. No CEO or counsel wants to get near this. 
4/ They could potentially “roll their own” data center by buying servers and putting them in a co-location facility. But that’s a single point of failure, and many colo providers would be just as likely to decline their business. It would be hard and risky to pursue this. 
5/ Imagine, after they sweated like pigs to get this hardware all setup, if they get told their colo provider is booting them. That’s a lot of metal to then move somewhere else. Meanwhile, the user base is deteriorating because chaos and dying apps. They will go to other venues. 
6/ They also have . . .

Continue reading. There’s much more.

And also read his earlier thread about Parler and its intimate connections with Russia. That is a must-read.

Written by LeisureGuy

10 January 2021 at 11:00 am

Interesting recipe search engine

Type in the name (or part of the name) of a recipe and get results from a 2,000,000-recipe database.

You’ll see a long list of hits, which you can then filter by ingredient. The filter, however, does not seem to allow for excluding an ingredient — for example, you cannot get a set of cornbread recipes that do not contain flour, though you can get a set of cornbread recipes that (for example) do contain bacon. The filter seems to work with OR as the connector, not AND — for example, if you search “cornbread” and filter for both bacon and buttermilk, you get a list of recipes that contain either or both of those, not a list of only recipes that contain both. I would say the filter function requires more work.

Another limitation (naturally enough) is that the list of filter items is limited. There’s a “show more” option, but even that list is limited. If you want a cornbread recipe that contains chorizo, for example, you will not see “chorizo” listed among the filter choices. However, an easy workaround is to include the desired ingredient in the recipe search term. For example, entering “cornbread chorizo” as the search term will list cornbread recipes that include chorizo.

Similarly, if you want a soup that contains kale, don’t look to the filter (“kale” is not listed, even with “show more”). Instead, just enter “kale soup” as the search term. You can extend this: enter “soup kale chorizo” and get a list of recipes that contain both kale and chorizo.
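The gap between the filter’s apparent behavior and what a user might expect can be made concrete with sets. A small sketch (the recipes are invented; `any_of` mirrors the OR behavior described above, `all_of` the AND behavior the site’s filter lacks):

```python
# OR vs AND ingredient filtering over a toy recipe database.

recipes = {
    "classic cornbread": {"cornmeal", "flour", "buttermilk"},
    "bacon cornbread":   {"cornmeal", "bacon"},
    "loaded cornbread":  {"cornmeal", "bacon", "buttermilk"},
}

def any_of(recipes, wanted):
    """OR: recipes containing at least one wanted ingredient."""
    return {name for name, ing in recipes.items() if ing & wanted}

def all_of(recipes, wanted):
    """AND: recipes containing every wanted ingredient."""
    return {name for name, ing in recipes.items() if wanted <= ing}

wanted = {"bacon", "buttermilk"}
```

Here `any_of` matches all three recipes, while `all_of` matches only the one containing both bacon and buttermilk — the result the search-term workaround approximates.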

It’s a quick way to sift through 2,000,000 recipes. Here it is.

Written by LeisureGuy

10 January 2021 at 5:58 am

Facebook fails Georgia

Judd Legum writes in Popular Information:

Over the last two weeks, Facebook has repeatedly allowed a top Republican Super PAC, American Crossroads, to run dishonest attacks against Democratic Senate candidate Raphael Warnock — in violation of Facebook’s own misinformation rules. As a result, hundreds of thousands of Georgians have been exposed to misinformation about Warnock on Facebook in the critical days leading up to the January 5 run-off election.

Internal Facebook communications concerning the American Crossroads ads, obtained by Popular Information, reveal dysfunction and confusion about Facebook’s advertising policies, even among executives purportedly in charge of such matters.

Beginning on Election Day, November 3, Facebook banned all political ads on the platform. But it partially lifted the ban on December 16 to allow ads about the Georgia runoffs targeting Facebook users in Georgia. The announcement said that Facebook would activate its “Elections Operations Center” to “fight misinformation” about the Georgia runoffs in “real time.”

On December 17, American Crossroads, a Republican Super PAC run by Karl Rove and funded by Mitch McConnell’s political operation, began running an ad with a short snippet of Warnock saying, “God damn America.” The ad presents Warnock’s statement as an expression of his own views, saying his comments represented “anti-American hate.” This is blatantly dishonest. . .

Continue reading. There’s much more.

Written by LeisureGuy

4 January 2021 at 10:31 am

How to Get Rich Sabotaging Nuclear Weapons Facilities

Matt Stoller writes in BIG:

Happy new year. Today I’m going to write about the Russian hack of American nuclear facilities, and why a billionaire private equity executive just profiled in the Wall Street Journal as a dealmaker extraordinaire is responsible. Plus some short blurbs on:

  • The Problem with Amazon competitor Shopify
  • Ticketmaster’s Grotesque Settlement with the Department of Justice
  • Economists’ Non-Surprising but Important Findings about Debt-Fueled Private Equity and Covid
  • Big Tech and Diversity
  • Appliance Parts Monopolization?

Happy New Year! The password is 12345

My Password Is “Password”

Roughly a month ago, the premier cybersecurity firm FireEye warned authorities that it had been penetrated by Russian hackers, who made off with critical tools it used to secure the facilities of corporations and governments around the world.

The victims are the most important institutional power centers in America, from the FBI to the Department of Treasury to the Department of Commerce, as well as private sector giants Cisco Systems, Intel, Nvidia, accounting giant Deloitte, California hospitals, and thousands of others. As more information comes out about what happened, the situation looks worse and worse. Russians got access to Microsoft’s source code and into the Federal agency overseeing America’s nuclear stockpile. They may have inserted code into the American electrical grid, or acquired sensitive tax information or important technical and political secrets.

Cybersecurity is a very weird area, mostly out of sight yet potentially very deadly. Anonymous groups can turn off power plants, telecom grids, or disrupt weapons labs, as Israel did when it used a cyber-weapon to cripple Iranian nuclear facilities in 2010. Bank regulators have to now consult with top military leaders about whether deposit insurance covers incidents where hackers destroy all bank records, and what that would mean operationally. It’s not obvious whether this stuff is war or run-of-the-mill espionage, but everyone knows that the next war will be chock full of new tactics based on hacking the systems of one’s adversary, perhaps using code placed in those systems during peacetime.

And that makes this hack quite scary, even if we don’t see the effect right now. Mark Warner, one of the smarter Democratic Senators and the top Democrat on the Intelligence Committee, said “This is looking much, much worse than I first feared,” also noting “The size of it keeps expanding.” Political leaders are considering reprisals against Russia, though it’s likely they will not engage in much retaliation we can see on the surface. It’s the biggest hack since 2016, when an unidentified group stole the National Security Agency’s “crown jewels” spy tools. It is, as Wired put it, a “historic mess.”

There is a lot of finger-pointing going on in D.C. and in cybersecurity circles about what happened and why. There are all of the standard questions that military and cyber lawyers love, like whether this hack is war, espionage, or something legally ambiguous. Policymakers are revisiting the longstanding policy of having the National Security Agency focus on offensive hacking instead of securing defensive capacity.

The most interesting part of the cybersecurity problem is that it isn’t purely about government capacity at all; private sector corporations maintain critical infrastructure that is in the “battle space.” Private firms like Microsoft are being heavily scrutinized; I had one guest-post from last January on why the firm doesn’t manage its security problems particularly well, and another on how it is using its market power to monopolize the cybersecurity market with subpar products. And yet these companies have no actual public obligations, or at least, nothing formal. They are for-profit entities with little liability for the choices they make that might impose costs onto others.

Indeed, cybersecurity risk is akin to pollution, a cost that the business itself doesn’t fully bear, but that the rest of society does. The private role in cybersecurity is now brushing up against the libertarian assumptions of much of the policymaking world; national security in a world where private software companies handle national defense simply cannot long co-exist with our monopoly and financier-dominated corporate apparatus.

All of which brings me to what I think is the most compelling part of this story. The point of entry for this major hack was not Microsoft, but a private equity-owned IT software firm called SolarWinds. This company’s products are dominant in their niche; 425 of the Fortune 500 use SolarWinds. As Reuters reported about the last investor call in October, the CEO told analysts that “there was not a database or an IT deployment model out there to which [they] did not provide some level of monitoring or management.” While there is competition in this market, SolarWinds does have market power. IT systems are hard to migrate from, and this lock-in effect means that customers will tolerate price hikes or quality degradation rather than change providers. And it does have a large market share; as the CEO put it, “We manage everyone’s network gear.”

SolarWinds sells a network management package called Orion, and it was through Orion that the Russians invaded these systems, putting malware into updates that the company sent to clients. Now, Russian hackers are extremely sophisticated sleuths, but it didn’t take a genius to hack this company. It’s not just that criminals traded information about how to hack SolarWinds systems; one security researcher alerted the company last year that “anyone could access SolarWinds’ update server by using the password “solarwinds123.’”
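The attack worked by poisoning the vendor’s own update channel, which is worth pausing on. A client that verifies an update against a checksum published out of band would catch tampering in transit, though not malware signed into the build itself — which is what made this hack so hard to catch. A minimal sketch of that checksum check (all names and payloads here are illustrative, not SolarWinds’ actual mechanism):

```python
# Verify an update file against a hash published out of band
# before installing it.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_update(data: bytes, expected_hash: str) -> bool:
    """Install only if the update matches the published hash."""
    return sha256_of(data) == expected_hash

update = b"hotfix-payload"          # hypothetical update bytes
good_hash = sha256_of(update)       # value the vendor would publish
```

The limitation is the crux of supply-chain attacks: when the build pipeline itself is compromised, the malicious update hashes (and signs) correctly, so the check passes.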

Using a password ripped from the movie Spaceballs is one thing, but it appears that lax security practice at the company was common, systemic, and longstanding. The company puts its engineering in the hands of cheaper Eastern European coders, where it’s easier for Russian engineers to penetrate its product development. SolarWinds didn’t bother to hire a senior official to focus on security until 2017, and then only after it was forced to do so by European regulations. Even then, SolarWinds’ CEO, Kevin Thompson, ignored the risk. As the New York Times noted, one security adviser at SolarWinds said he warned management that year that unless it took a more proactive approach to its internal security, a cybersecurity episode would be “catastrophic.” The executive in charge of security quit in frustration. Even after the hack, the company continued screwing up; SolarWinds didn’t even stop offering the compromised software until several days after the breach was discovered.

This level of idiocy seems off-the-charts, but it’s not that the CEO is stupid. Far from it. “Employees say that under Mr. Thompson,” the Times continued, “an accountant by training and a former chief financial officer, every part of the business was examined for cost savings and common security practices were eschewed because of their expense.” The company’s profit tripled from 2010 to 2019. Thompson calculated that his business could run more profitably if it chose to open its clients to hacking risk, and he was right.

And yet, not every software firm operates like SolarWinds. Most seek to make money, but few do so with such a combination of malevolence, greed, and idiocy. What makes SolarWinds different? The answer is the specific financial model that has invaded the software industry over the last fifteen years, a particularly virulent strain of recklessness typically called private equity.

I’ve written a lot about private equity. By ‘private equity,’ I mean financial engineers: financiers who raise large amounts of money and borrow even more to buy firms and loot them. These private equity barons aren’t specialists who help finance useful products and services; they do cookie-cutter deals targeting firms they believe have the market power to raise prices, can lay off workers or sell assets, and/or enjoy some sort of legal loophole advantage. Often they will destroy the underlying business. The giants of the industry, from Blackstone to Apollo, are the children of 1980s junk-bond king and fraudster Michael Milken. They are essentially super-sized mobsters who burn down businesses for the insurance money.

In private equity takeovers of software, the gist is the same, with the players a bit different. It’s not Apollo and Blackstone, it’s Vista Equity Partners, Thoma Bravo, and Silver Lake, but it’s the same cookie-cutter deal flow, the same financing arrangements, and the same business model risks. But in this case, the private equity owner of SolarWinds burned down far more than just the firm.

Arson for Profit

In October, the Wall Street Journal profiled the man who owns SolarWinds, a Puerto Rican-born billionaire named Orlando Bravo of Thoma Bravo. Bravo’s PR game is solid; he was photographed beautifully, a slightly greying fit man in a blue shirt and off-white rugged pants in front of modern art, a giant vase and fireplace in the background of what is obviously a fantastically expensive apartment. Though it was mostly a puff piece about a silver-fox billionaire, the article did describe Bravo’s business model.

Thoma Bravo identifies software companies with a loyal customer base but middling profits and transforms them into moneymaking engines by retooling pricing, shutting down unprofitable business lines and adding employees in cheaper labor markets.

The firm then guides its companies to use the profits they generate to do add-on acquisitions, snapping up smaller rivals with offerings that they could spend months and millions of dollars trying to replicate.

As I put it at the time, Bravo’s business model is to buy niche software companies, combine them with competitors, offshore work, cut any cost he can, and raise prices. The investment thesis is clear: power. Software companies have immense pricing power over their customers, which means they can raise prices to locked-in customers, or degrade quality (which is the same thing in terms of the economics of the firm). As Robert Smith, one of his competitors in the software PE game, put it, “Software contracts are better than first-lien debt. You realize a company will not pay the interest payment on their first lien until after they pay their software maintenance or subscription fee. We get paid our money first. Who has the better credit? He can’t run his business without our software.”

SolarWinds represents this thesis perfectly. The company was . . .

Continue reading.

Written by LeisureGuy

3 January 2021 at 11:20 am

A Canadian ‘Buy Local’ Effort Fights Amazon on Its Own Turf: Not-Amazon.ca

Geneva Abdul reports in the NY Times about an initiative that could grow rapidly.

The snow was falling outside Ali Haberstroh’s apartment in late November when the idea came to her.

At the time, Canada was nearing a second lockdown to curb rising coronavirus cases. In anticipation, the owner of a vintage clothing store in Toronto who is a friend of Ms. Haberstroh’s had put together a list of other local vintage shops offering curbside pickup and deliveries in lieu of being able to open their doors.

“It was a wake-up call,” Ms. Haberstroh, 27, said of the list, which reminded her how enormous retailers like Walmart, Costco and Amazon had thrived during the pandemic while many smaller, local businesses had been shut. “I thought if there is one tiny thing I can do to help, then I should get on it.”

Inspired to build a more comprehensive list, Ms. Haberstroh promptly created an Instagram post, tagging independent businesses and shopkeepers across Toronto. Included was a new website, Not-Amazon.ca — a URL that she had bought for $2.99.

Introduced as a local list to help keep small businesses alive, Not Amazon was created “so you don’t have to give any money to Amazon this year!” the post read.

What began as a Google spreadsheet with more than 160 businesses collated initially from Ms. Haberstroh’s memory and research became a directory of hundreds that have a website and a high-quality photo and offer nationwide shipping, curbside pickup or delivery.

So far, the website has garnered more than half a million page views and grown to include 4,000 businesses across Toronto, Calgary, Halifax and Vancouver. The site is now submission-based, and thousands of businesses are awaiting Ms. Haberstroh’s approval.

“In a big city like Toronto, where it feels like most businesses are local, I think it’s so easy to think these things will be here forever,” said Ms. Haberstroh, who works as a social media manager at a marketing firm and plans to expand Not Amazon to even more cities. “You don’t think that they’re going to go anywhere.”

Small and medium-size businesses contribute more than 50 percent to Canada’s gross domestic product. But since the pandemic lockdowns, 40 percent of small businesses have reported layoffs while 20 percent have deferred rent payments, according to government data.

At the same time, Amazon and big-box retailers with more robust e-commerce platforms have far outpaced small competitors, turning online shopping from a convenience into a necessity for consumers worldwide.

Ms. Haberstroh’s attempt to even the playing field has been welcomed by small-business owners like Tannis and Mara Bundi, twin sisters who opened the Green Jar in Toronto last December. The store specializes in bulk items, like soap and honey, that customers buy to refill their own containers, reducing single-use plastics and household waste.

When the pandemic took hold in March, the sisters swiftly focused on their online operations and offered pickup and delivery, but even as restrictions eased, business remained touch and go. Since being on the Not Amazon site, the Green Jar has seen online orders rise 500 percent and has been “incredibly busy,” Tannis Bundi said.

“This type of initiative really gave an opportunity for small businesses to be seen and appreciated,” she said. “Large corporations, like Amazon, they’re making millions and millions of dollars, and there’s a disconnect and a detachment. As a small business I have a much smaller carbon footprint, I have a vested interest in my community, and I’m more likely to invest back into my community through charity and hiring locally.”

Amazon declined to comment for this article.

Local campaigns by independent sellers have also . . .

Continue reading.

Written by LeisureGuy

2 January 2021 at 3:38 pm

Facebook managers trash their own ad targeting

leave a comment »

Sam Biddle reports in the Intercept:

FACEBOOK IS CURRENTLY waging a PR campaign purporting to show that Apple is seriously injuring American small businesses through its iOS privacy features. But at the same time, according to allegations in recently unsealed court documents, Facebook has been selling them ad targeting that is unreliable to the point of being fraudulent.

The documents feature internal Facebook communications in which managers appear to admit to major flaws in ad targeting capabilities, including that ads reached the intended audience less than half of the time and that data behind a targeting criterion was “all crap.” Facebook says the material is presented out of context.

The documents emerged from a suit currently seeking class-action certification in federal court. The suit was filed by the owner of Investor Village, a small business that operates a message board on financial topics. Investor Village said in court filings that it decided to buy narrowly targeted Facebook ads because it hoped to reach “highly compensated and educated investors” but “had limited resources to spend on advertising.” But nearly 40 percent of the people who saw Investor Village’s ad either lacked a college degree, did not make $250,000 per year, or both, the company claims. In fact, not a single Facebook user it surveyed met all the targeting criteria it had set for Facebook ads, it says.

The complaint features Facebook documents indicating that the company knew its advertising capabilities were overhyped and underperformed.

A “February 2016 internal memorandum” sent from an unnamed Facebook manager to Andrew Bosworth, a Zuckerberg confidant and powerful company executive who oversaw ad efforts at the time, reads, “[I]nterest precision in the US is only 41%—that means that more than half the time we’re showing ads to someone other than the advertisers’ intended audience. And it is even worse internationally. … We don’t feel we’re meeting advertisers’ interest accuracy expectations today.”

The lawsuit goes on to quote unnamed “employees on Facebook’s ad team” discussing their targeting capabilities circa June 2016:

One engineer celebrated that detailed targeting accounted for “18% of total ads revenue,” and $14.8 million on June 17th alone. Using a smiley emoticon, an engineering manager responded, “Love this chart! Although if the most popular option is to combine interest and behavior, and we know for a fact our behavior is almost all crap, does this mean we are misleading advertiser [sic] a bit? :)” That manager proceeded to suggest further examination of top targeting criteria to “see if we are giving advertiser [sic] false hope.”

“Interest” and “behavior” are two key facets of the data dossiers Facebook compiles on us for advertisers; according to the company, the former includes things you like, “from organic food to action movies,” while the latter consists of “behaviors such as prior purchases and device usage.”

The complaint also cites unspecified internal communications in which “[p]rivately, Facebook managers described important targeting data as ‘crap’ and admitted accuracy was ‘abysmal.’”

Facebook has said in its court filings that . . .

Continue reading. There’s more.

Written by LeisureGuy

2 January 2021 at 3:29 pm

Recommended free book(s)

leave a comment »

Some time back I blogged about how truly excellent E. Nesbit’s The Enchanted Castle is, and how I was able to download a copy from the Gutenberg Project and use Calibre to convert it to Kindle format and move it to the device.

Today I saw that StandardEbooks.org has that title (and that’s where the link above takes you), along with more of Edith Nesbit’s books. Indeed, their library of free, well-edited books grows apace. I don’t make many donations, but I did donate to this effort.

Since Virginia Woolf’s Mrs. Dalloway comes into the public domain this year, I’m hoping we’ll see that as well. (They already offer two Woolf novels.)

I’ve already suggested Scaramouche, so I’ll mention it again. It’s free and fun. Could you ask for more?

Well, of course you could. How about a nice serving of Charles Dickens? Or Jane Austen? Or Anthony Trollope? Or Rudyard Kipling? And there are a great many more.

As I’ve previously noted, the (free) program Calibre is extremely useful in managing your ebook library. I wouldn’t be without it.

Written by LeisureGuy

1 January 2021 at 2:29 pm

Posted in Books, Software, Technology

The way we train AI is fundamentally flawed

leave a comment »

Will Douglas Heaven writes in MIT Technology Review:

It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an AI trained to spot signs of disease in high-quality medical images will struggle with blurry or cropped images captured by a cheap camera in a busy clinic.

Now a group of 40 researchers across seven different teams at Google have identified another major cause for the common failure of machine-learning models. Called “underspecification,” it could be an even bigger problem than data shift. “We are asking more of machine-learning models than we are able to guarantee with our current approach,” says Alex D’Amour, who led the study.

Underspecification is a known issue in statistics, where observed effects can have many possible causes. D’Amour, who has a background in causal reasoning, wanted to know why his own machine-learning models often failed in practice. He wondered if underspecification might be the problem here too. D’Amour soon realized that many of his colleagues were noticing the same problem in their own models. “It’s actually a phenomenon that happens all over the place,” he says.

D’Amour’s initial investigation snowballed and dozens of Google researchers ended up looking at a range of different AI applications, from image recognition to natural language processing (NLP) to disease prediction. They found that underspecification was to blame for poor performance in all of them. The problem lies in the way that machine-learning models are trained and tested, and there’s no easy fix.

The paper is a “wrecking ball,” says Brandon Rohrer, a machine-learning engineer at iRobot, who previously worked at Facebook and Microsoft and was not involved in the work.

Same but different

To understand exactly what’s going on, we need to back up a bit. Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you’re done.

What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don’t affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world.

In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.

This is not the same as data shift, where training fails to produce a good model because the training data does not match real-world examples. Underspecification means something different: even if a training process can produce a good model, it could still spit out a bad one because it won’t know the difference. Neither would we.

The researchers looked at the impact of underspecification on a number of different applications. In each case they used the same training processes to produce multiple machine-learning models and then ran those models through stress tests designed to highlight specific differences in their performance.

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs were the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test—suggesting that they were equally accurate—their performance varied wildly in the stress test.
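The underspecification effect can be seen in miniature. The sketch below is my own toy illustration, not the paper’s experiment: when the training data cannot distinguish between candidate models (here, because one feature exactly duplicates another), many models fit the training set identically yet disagree wildly once that coincidence breaks in deployment.

```python
import numpy as np

# Toy underspecification demo (my own sketch, not the Google paper's setup).
# In training, feature x2 is an exact copy of x1, so every weight vector
# with w1 + w2 = 2 fits the data perfectly -- training cannot tell them apart.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
X_train = np.column_stack([x1, x1])   # x2 duplicates x1
y_train = 2.0 * x1                    # true signal depends only on x1

candidates = [(2.0, 0.0), (1.0, 1.0), (0.0, 2.0)]  # all satisfy w1 + w2 = 2
for w in candidates:
    assert np.allclose(X_train @ np.array(w), y_train)  # identical training fit

# "Stress test": in deployment the spurious feature decouples from x1.
X_stress = np.array([[3.0, 0.0]])     # x1 = 3, but x2 no longer mirrors it
preds = [float(X_stress @ np.array(w)) for w in candidates]
print(preds)  # [6.0, 3.0, 0.0] -- equally "accurate" models now disagree
```

The random seeds in the ImageNet experiment play the role of the arbitrary choice among the equally good weight vectors here: the test set never forces the choice, but the real world does.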

The stress test used . . .

Continue reading. There’s more.

Later in the article:

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

We might need to rethink how we evaluate neural networks, says Rohrer. “It pokes some significant holes in the fundamental assumptions we’ve been making.”

Written by LeisureGuy

28 December 2020 at 10:04 am

Beyond AlphaZero: New AI capabilities in MuZero

leave a comment »

DeepMind’s blog reports:

In 2016, we introduced AlphaGo, the first artificial intelligence (AI) program to defeat humans at the ancient game of Go. Two years later, its successor – AlphaZero – learned from scratch to master Go, chess and shogi. Now, in a paper in the journal Nature, we describe MuZero, a significant step forward in the pursuit of general-purpose algorithms. MuZero masters Go, chess, shogi and Atari without needing to be told the rules, thanks to its ability to plan winning strategies in unknown environments.

For many years, researchers have sought methods that can both learn a model that explains their environment, and can then use that model to plan the best course of action. Until now, most approaches have struggled to plan effectively in domains, such as Atari, where the rules or dynamics are typically unknown and complex.

MuZero, first introduced in a preliminary paper in 2019, solves this problem by learning a model that focuses only on the most important aspects of the environment for planning. By combining this model with AlphaZero’s powerful lookahead tree search, MuZero set a new state of the art result on the Atari benchmark, while simultaneously matching the performance of AlphaZero in the classic planning challenges of Go, chess and shogi. In doing so, MuZero demonstrates a significant leap forward in the capabilities of reinforcement learning algorithms.

Generalising to unknown models

The ability to plan is an important part of human intelligence, allowing us to solve problems and make decisions about the future. For example, if we see dark clouds forming, we might predict it will rain and decide to take an umbrella with us before we venture out. Humans learn this ability quickly and can generalise to new scenarios, a trait we would also like our algorithms to have.

Researchers have tried to tackle this major challenge in AI by using two main approaches: lookahead search or model-based planning.

Systems that use lookahead search, such as AlphaZero, have achieved remarkable success in classic games such as checkers, chess and poker, but rely on being given knowledge of their environment’s dynamics, such as the rules of the game or an accurate simulator. This makes it difficult to apply them to messy real world problems, which are typically complex and hard to distill into simple rules.

Model-based systems aim to address this issue by learning an accurate model of an environment’s dynamics, and then using it to plan. However, the complexity of modelling every aspect of an environment has meant these algorithms are unable to compete in visually rich domains, such as Atari. Until now, the best results on Atari have come from model-free systems, such as DQN, R2D2, and Agent57. As the name suggests, model-free algorithms do not use a learned model and instead estimate the best action to take next.

MuZero uses a different approach to overcome the limitations of previous approaches. Instead of trying to model the entire environment, MuZero just models aspects that are important to the agent’s decision-making process. After all, knowing an umbrella will keep you dry is more useful to know than modelling the pattern of raindrops in the air.

Specifically, MuZero models three elements of the environment that are critical to planning:

  • The value: how good is the current position?
  • The policy: which action is the best to take?
  • The reward: how good was the last action?

These are all learned using a  . . .

Continue reading. There’s more.

Julian Schrittwieser has a blog post that delves further into this development:

To celebrate the publication of our MuZero paper in Nature, I’ve written a high level description of the MuZero algorithm. My focus here is to give you an intuitive understanding and general overview of the algorithm; for the full details please read the paper. Please also see our official DeepMind blog post, it has great animated versions of the figures!

MuZero is a very exciting step forward – it requires no special knowledge of game rules or environment dynamics, instead learning a model of the environment for itself and using this model to plan. Even though it uses such a learned model, MuZero preserves the full planning performance of AlphaZero – opening the door to applying it to many real world problems!

It’s all just statistics

MuZero is a machine learning algorithm, so naturally the first thing to understand is how it uses neural networks. From AlphaGo and AlphaZero, it inherited the use of policy and value networks: . . .

Both the policy and the value have a very intuitive meaning:

  • The policy, written p(s, a), is a probability distribution over all actions a that can be taken in state s. It estimates which action is likely to be the optimal action. The policy is similar to the first guess for a good move that a human player has when quickly glancing at a game.

  • The value v(s) estimates the probability of winning from the current state s: averaging over all possible future possibilities, weighted by how likely they are, in what fraction of them would the current player win?

Each of these networks on their own is already very powerful: If you only have a policy network, you could simply always play the move it predicts as most likely and end up with a very decent player. Similarly, given only a value network, you could always choose the move with the highest value. However, combining both estimates leads to even better results.
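One common way to combine the two estimates during search is a PUCT-style selection rule, in the spirit of what AlphaZero and MuZero use. The sketch below is my own illustration, not DeepMind’s code: the function name `puct_score`, the constant, and the toy numbers are all assumptions made up for the example.

```python
import math

# Toy PUCT-style selection (a sketch, not DeepMind's implementation):
# the value term exploits what the search has already learned, while the
# prior/visit term steers exploration toward moves the policy network
# likes but the search has rarely visited.
def puct_score(prior, value, visits, parent_visits, c=1.25):
    return value + c * prior * math.sqrt(parent_visits) / (1 + visits)

# Hypothetical node with 3 actions: (policy prior, mean value, visit count).
children = {"a": (0.6, 0.40, 10), "b": (0.3, 0.55, 3), "c": (0.1, 0.20, 1)}
parent_visits = sum(v for _, _, v in children.values())

best = max(children, key=lambda a: puct_score(*children[a], parent_visits))
print(best)  # "b": decent value, still under-explored relative to its prior
```

Note how action "a" has the highest prior and "b" the highest value; the combined score picks "b" because its exploration bonus is still large at only 3 visits.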

Planning to Win

Similar to AlphaGo and AlphaZero before it, MuZero uses Monte Carlo Tree Search (MCTS for short) to aggregate neural network predictions and choose actions to apply to the environment.

MCTS is an iterative, best-first tree search procedure. Best-first means expansion of the search tree is guided by the value estimates in the search tree. Compared to classic methods such as breadth-first (expand the entire tree up to a fixed depth before searching deeper) or depth-first (consecutively expand each possible path until the end of the game before trying the next), best-first search can take advantage of heuristic estimates (such as neural networks) to find promising solutions even in very large search spaces.

MCTS has three main phases: simulation, expansion and backpropagation. By repeatedly executing these phases, MCTS incrementally builds a search tree over future action sequences one node at a time. In this tree, each node is a future state, while the edges between nodes represent actions leading from one state to the next.
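The three phases can be sketched in a few lines. This is a bare-bones toy, not MuZero’s actual search: the `Node` class, `run_iteration`, and the `evaluate` callback are names I’ve invented for illustration (in MuZero, the leaf evaluation would come from the learned networks).

```python
# Minimal sketch of one MCTS iteration: simulation (walk down the tree),
# expansion (add children at the leaf), backpropagation (push the leaf's
# value estimate back up the visited path).
class Node:
    def __init__(self):
        self.children = {}      # action -> Node
        self.visits = 0
        self.value_sum = 0.0

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def run_iteration(root, actions, evaluate):
    path, node = [root], root
    while node.children:        # 1. simulation: descend best-first
        _, node = max(node.children.items(),
                      key=lambda kv: kv[1].mean_value())
        path.append(node)
    for a in actions:           # 2. expansion: add child nodes at the leaf
        node.children[a] = Node()
    value = evaluate(node)      # leaf estimate (a network in MuZero)
    for n in path:              # 3. backpropagation: update the path
        n.visits += 1
        n.value_sum += value

root = Node()
run_iteration(root, ["left", "right"], lambda n: 1.0)
print(root.visits, root.mean_value())  # 1 1.0
```

Repeating `run_iteration` grows the tree one node at a time, exactly as described above: each node is a future state, each edge an action.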

Before we dive into the details, let me introduce a schematic representation of such a search tree, including the neural network predictions made by MuZero: . . .

Continue reading. Again, there’s more.

Written by LeisureGuy

27 December 2020 at 6:37 am

Posted in Business, Daily life, Software, Technology

Tagged with

Backstory to Apple’s new M1 System on a Chip: How an obscure British PC maker invented ARM and changed the world

leave a comment »

Image by Jason Torchinsky

Jason Torchinsky has a fascinating article in Ars Technica that includes videos. His article begins:

Let’s be honest: 2020 sucks. So much of this year has been a relentless slog of bad news and miserable events that it’s been hard to keep up. Yet most of us have kept up, and the way most of us do so is with the small handheld computers we carry with us at all times. At least in America, we still call these by the hilariously reductive name “phones.”

We can all use a feel-good underdog story right now, and luckily our doomscrolling 2020 selves don’t have to look very far. That’s because those same phones, and so much of our digital existence, run on the same thing: the ARM family of CPUs. And with Apple’s release of a whole new line of Macs based on their new M1 CPU—an ARM-based processor—and with those machines getting fantastic reviews, it’s a good time to remind everyone of the strange and unlikely source these world-controlling chips came from.

If you were writing reality as a screenplay, and, for some baffling reason, you had to specify what the most common central processing unit used in most phones, game consoles, ATMs, and other innumerable devices was, you’d likely pick one from one of the major manufacturers, like Intel. That state of affairs would make sense and fit in with the world as people understand it; the market dominance of some industry stalwart would raise no eyebrows or any other bits of hair on anyone.

But what if, instead, you decided to make those CPUs all hail from a barely-known company from a country usually not the first to come to mind as a global leader in high-tech innovations (well, not since, say, the 1800s)? And what if that CPU owed its existence, at least indirectly, to an educational TV show? Chances are the producers would tell you to dial this script back a bit; come on, take this seriously, already.

And yet, somehow, that’s how reality actually is.

In the beginning, there was TV

The ARM processor, the bit of silicon that controls over 130 billion devices all over the world and without which modernity would effectively come to a crashing halt, has a really strange origin story. Its journey is peppered with bits of seemingly bad luck that ended up providing crucial opportunities, unexpected technical benefits that would prove absolutely pivotal, and a start in some devices that would be considered abject failures.

But everything truly did sort of get set in motion by a TV show—a 1982 BBC program called The Computer Programme. This was an attempt by the BBC to educate Britons about just what the hell all these new fancy machines that looked like crappy typewriters connected to your telly were all about.

The show was part of a larger Computer Literacy Project started by the British government and the BBC as a response to fears that the UK was deeply and alarmingly unprepared for the new revolution in personal computing that was happening in America. Unlike most TV shows, the BBC wanted to feature a computer on the show that would be used to explain fundamental computing concepts and teach a bit of BASIC programming. The concepts included graphics and sound, the ability to connect to teletext networks, speech synthesis, and even some rudimentary AI. As a result, the computer needed for the show would have to be pretty good—in fact, the producers’ demands were initially so high that nothing on the market really satisfied the BBC’s aspirations.

So, the BBC put out a call to the UK’s young computer industry, which was then dominated by Sinclair, a company that made its fortune in calculators and tiny televisions. Ultimately, it was a much smaller upstart company that ended up getting the lucrative contract: Acorn Computers.

An Acorn blooms

Acorn was a Cambridge-based firm that started in 1979 after developing computer systems originally designed to run fruit machines—we call them slot machines—then turning them into small hobbyist computer systems based on 6502 processors. That was the same CPU family used in the Apple II, Atari 2600, and Commodore 64 computers, among many others. This CPU’s design will become important later, so, you know, don’t forget about it.

Acorn had developed a home computer called the Atom, and when the BBC opportunity arose, they started plans for the Atom’s successor to be developed into what would become the BBC Micro.

The BBC’s demanding list of features ensured the resulting machine would be quite powerful for the era, though not quite as powerful as Acorn’s original Atom-successor design. That Atom successor would have featured two CPUs, a tried-and-true 6502 and an as-yet undecided 16-bit CPU.

Acorn later dropped that CPU but kept an interface system, called the Tube, that would allow for additional CPUs to be connected to the machine. (This too will become more important later.)

The engineering of the BBC Micro really pushed Acorn’s limits, as it was a pretty state-of-the-art machine for the era. This resulted in some fascinatingly half-ass but workable engineering decisions, like having to replicate the placement of an engineer’s finger on the motherboard with a resistor pack in order to get the machine to work.

Nobody ever really figured out why the machine only worked when a finger was placed on a certain point on the motherboard, but once they were able to emulate the finger touch with resistors, they were just satisfied it worked, and moved on.

Here, listen to one of the key engineers tell you himself: . . .

Continue reading. There’s much more, and it’s fascinating (to me, at any rate).

Written by LeisureGuy

24 December 2020 at 1:16 pm

Trump moved cyber security budget to pay for his wall before major hacking assault

leave a comment »

Stuti Mishra reports in the Independent:

A former FBI deputy has alleged that President Trump has been diverting money from cybersecurity resources to build a wall at a time when the “nation is under attack”.

Speaking to MSNBC on Thursday about a report published in Politico that revealed that hackers accessed systems at the National Nuclear Security Administration, Frank Figliuzzi, a former FBI deputy director for counterintelligence, said that the reason such attacks are occurring is that the budget for cybersecurity under the Trump administration had been squeezed in order to prioritise other things.

“Make no mistake, our nation is under attack and it appears to be ongoing,” said Mr Figliuzzi. “How does something like this happen of this magnitude? Where 300,000 clients of a private company are potentially impacted, including the most sensitive agencies in our government, it is because the Russians were able to find a single point of failure in our supply chain.”

“Meaning this product that comes from SolarWinds is a network management product used by too many, quite frankly, of all government agencies and too many of our top telecommunications companies. Ten of which were compromised as far as we know — so far. So, it is a larger issue, Nicolle, of supply chain management.”

He also said that it’s more than merely an intelligence failure but rather “it’s a national defense failure.”

“This is the defence of our nation and systems and failure to oversee our supply chain in a form of allowing one company to service so many of our government agencies,” he said.

“The Russians found that weakness and exploited it, and we’re still learning the extent of the damage. Natasha reported last hour that our nuclear components have been impacted, and one of the words that jumped out in the reporting is ‘damaged.’”

On Thursday, Politico reported that the Energy Department and National Nuclear Security Administration . . .

Continue reading.

Written by LeisureGuy

23 December 2020 at 8:24 am

Blob Opera: Watch, Listen, and Experiment

leave a comment »

Google engineers having fun. Go to this page, then click on everything to see what happens and click-and-drag the blobs. More entertaining than you might expect.

More experiments here.

Written by LeisureGuy

21 December 2020 at 10:32 am

“The Young Visiters,” a wonderful novel by Daisy Ashford, 9 years old at the time

leave a comment »

I have just begun reading The Young Visiters. You can buy a Kindle version from the Amazon bookstore for $5.85, but I preferred to download it from the Gutenberg Project for free. Note, though, that what Gutenberg calls “Kindle format” is MOBI. That was once the format Kindle used, but Kindle switched to AZW3 quite some time ago. No problem, though: I used Calibre (free) to convert the book from MOBI to AZW3 (and also got a better cover by editing the metadata), and then used Calibre to load it onto my Kindle.

The book was published in 1919, so it seems appropriate, I think, to read it in 2020. The opening seems full of verve, dash, and charm:

Mr Salteena was an elderly man of 42 and was fond of asking peaple to stay with him. He had quite a young girl staying with him of 17 named Ethel Monticue. Mr Salteena had dark short hair and mustache and wiskers which were very black and twisty. He was middle sized and he had very pale blue eyes. He had a pale brown suit but on Sundays he had a black one and he had a topper every day as he thorght it more becoming. Ethel Monticue had fair hair done on the top and blue eyes. She had a blue velvit frock which had grown rarther short in the sleeves. She had a black straw hat and kid gloves.

One morning Mr Salteena came down to brekfast and found Ethel had come down first which was strange. Is the tea made Ethel he said rubbing his hands. Yes said Ethel and such a quear shaped parcel has come for you Yes indeed it was a quear shape parcel it was a hat box tied down very tight and a letter stuffed between the string. Well well said Mr Salteena parcels do turn quear I will read the letter first and so saying he tore open the letter and this is what it said

Written by LeisureGuy

20 December 2020 at 12:33 pm

Posted in Books, Software, Technology

Crime Shouldn’t Pay: Why Big Tech Executives Should Face Jail

leave a comment »

Matt Stoller reports in BIG:

The Antitrust Bonanza Against Google and Facebook

It’s hard not to be excited about the multiple antitrust suits filed this week against Google. This past Wednesday, Texas Attorney General Ken Paxton, along with 10 other state AGs, accused Google of monopolizing online advertising, arguing it used coercive tactics to seize control of the plumbing that underpins all ad-financed internet content, and illegally divided up the online ad market with Facebook. The next day, Colorado’s Phil Weiser and Nebraska’s Doug Peterson led 38 states in accusing Google of manipulating its search results to disfavor specialized competitors like Yelp, as well as blocking competitors who seek to enter new search markets like those of voice assistants or internet-enabled cars.

The suits themselves are stunning.

The Texas case reveals new details about how online advertising markets function, drawing from Dina Srinivasan’s critical research on how advertising sales has been transformed into a complex financial market run by Google. While the complaint alleges that Google has engaged in monopolization, it also alleges a different violation, that Facebook and Google are in a cartel to violate user privacy and fix prices in advertising markets. The complaint reveals that after Facebook bought WhatsApp, which pledged to its users (and the FTC) strict privacy controls, “Facebook signed an exclusive agreement with Google, granting Google access to millions of Americans’ end-to-end encrypted WhatsApp messages, photos, videos, and audio files.” If true, that’s a remarkable set of illegal acts, by both Google and Facebook, as well as a betrayal of their users.

The complaint also asserts Google divided the ad market with Facebook, offering Facebook advantages in buying and selling ads through Google services if Facebook withdrew from head-to-head competition in other markets. This collusion is meaningful from a legal perspective. The Sherman Act has two parts. Section Two prohibits monopolization, but monopolization cases are very hard to bring and quite expensive, and require elaborate models. Section One prohibits cartels and price-fixing as conspiracies. Cartel cases are much easier – just show an agreement to collaborate on fixing prices, and you’re done. In fact cartels are so much easier to prosecute that price-fixing is the only area that enforcers actually bring criminal charges. And worrisome for Google, Texas is alleging cartel behavior.

The Colorado-led suit, while not unearthing anything earth-shattering, is also quite useful. Enforcers there are addressing not just today’s search markets on desktop and mobile platforms, but where search is heading in the future, platforms like internet-enabled cars and voice assistants. It’s a smart way to ensure that antitrust enforcement blocks monopolization at the creation of new markets, which is when it’s easiest to generate competition.

Since October, enforcers have brought four strong suits against Google and Facebook, two of the largest corporations in the world. And the demanded remedies for these civil violations are tough. Enforcers are asking for injunctive relief to stop the bad behavior, break-ups of these companies to end the structural conflicts, as well as monetary damages and civil fines. These few months represent perhaps the toughest spate of antitrust action since the post-World War II era, when Harry Truman restarted antitrust cases after their suspension during the war.

It’s not just these suits; Apple is facing a major attack from Epic Games and a broad coalition that seeks to destroy its app store monopoly, and Congress is gearing up to smash monopolies through legislative efforts. As Steven Pearlstein noted in the Washington Post, this effort is not just an attack on Google and Facebook but a “legal shot across the bow of dominant firms in other highly concentrated industries — pharmaceuticals, telecommunications, financial services — who are now on notice that their nonstop acquisitions and hardball business practices could invite similar challenge.”

Even Europeans are getting more aggressive: Member of the European Parliament Paul Tang won a vote to ban personalized advertising in the EU, and there is growing criticism of EU competition enforcer Margrethe Vestager for lagging behind the tougher approach in the United States.

It’s hard to see the sharp turn in competition policy without recognizing that something has gone very, very right in policy circles on the question of monopoly.

And yet, there’s a fly in the ointment as we move from the theoretical idea of antitrust against dominant corporations to the actual implementation: the suits are going to take a very long time.

Crime Pays

Judge Amit Mehta, who is hearing the case filed in October, said that the DOJ and Google will likely go to trial in late 2023. That’s nearly three years from now. The reason for the delay makes sense; both sides must gather documents, do research, file and debate procedural motions, interview executives and stakeholders, and build complex economic models for the trial. Still, three years is three years, and that’s a minimum. The trial itself could stretch out, with a remedy phase, and then there will be appeals. When all is said and done, it could be five years before there’s a remedy, or even longer.

One problem with such a lengthy period is that the longer monopolistic behavior goes on, the more damage it does — in this case to publishers whose ad revenue is being stolen, and to small and medium-sized businesses whose property is being appropriated. We can quantify the additional damages, somewhat. Google has $170 billion in revenue this year and is growing on average at 10-20% a year. If we take the lower end of 10%, Google will add another $100 billion to its annual revenue by 2025. That’s a lot of money. Facebook is at $80 billion of revenue this year, but it is growing faster, so its net increase in revenue is a roughly similar amount. In other words, if the claims of the government are credible, then the lengthy case, while perhaps necessary, is also enabling these monopolists to steal an additional $100 billion apiece.
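The back-of-the-envelope compounding behind those figures can be checked with a few lines of Python. The $170 billion and $80 billion starting points and the 10-20% growth range come from the paragraph above; the five-year horizon to 2025 and the 18% rate used for Facebook (a point within the stated range) are assumptions for illustration:

```python
def added_annual_revenue(base_billions: float, annual_growth: float, years: int) -> float:
    """Increase in annual revenue after compounding at a constant growth
    rate for the given number of years, versus the base year."""
    return base_billions * ((1 + annual_growth) ** years - 1)

# Google: ~$170B revenue, lower-end 10% growth, 2020 -> 2025.
google_added = added_annual_revenue(170, 0.10, 5)
print(f"Google adds roughly ${google_added:.0f}B/year")   # ~ $104B

# Facebook: ~$80B revenue, assumed 18% growth (within the 10-20% band).
facebook_added = added_annual_revenue(80, 0.18, 5)
print(f"Facebook adds roughly ${facebook_added:.0f}B/year")  # ~ $103B
```

Either way, the delta lands near the $100 billion figure the article cites for each company.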

Monopolization isn’t just illegal, it is in fact a crime, an appropriation of the rights and property of others by a dominant actor. The lengthy trial is essentially akin to saying that bank robbers get to keep robbing banks until they are convicted, and can probably keep the additional loot. There are ways for a judge to issue preliminary orders to stop bad behavior in the interim, so it’s not inevitable that these corporations get to continue what they are doing. However, judges tend not to like issuing such orders, though hopefully enforcers will ask for them and Mehta will make an exception.

But the monetary cost is not the most dangerous part of the delay. What’s more frightening is the political corruption that Google and Facebook are enabling. Thousands of newspapers have fallen apart over the past ten years, and over the next three, thousands more will collapse. Aside from killing pro-social institutions like newspapers, these platforms have been inducing significant harms society-wide, from enabling ethnic cleansing abroad and divisiveness in Western democracies, to undermining our economy writ large. The end state is frightening. Indeed, here’s what the Texas complaint alleges is Google’s long-term goal.

Google’s current dominance is merely a preview of its future plans. Google has an appetite for total dominance, and its latest ambition is to transform the free and open architecture of the internet. Google’s plan is to create a walled garden around the internet in which it controls websites and mobile applications. Google calls its emerging venture the [redacted], a world in which publisher content is operated by Google…

Google’s documented plan is to capture online publishers on the open internet and transform them into content creators generating revenue for Google on a completely closed platform—like YouTube content creators.

Google has total power over YouTube creators, the ability to demonetize them, to censor them, promote them or not. And that’s Google’s goal for all speakers and businesses online, to turn us all into serfs working – and speaking – at Google’s pleasure. It’s hard to argue that waiting five years for a remedy is sufficient to address this incredible threat to our wallets and more importantly our liberties.

Now, this is not to say these cases won’t have an effect until they are concluded. I’ve pointed out that the DOJ suit is already having an effect, in that Google is now facing potential competition and changing its behavior to stop the most egregious exclusionary behavior. As scandal after scandal emerges around Google and Facebook, the political need for these companies to mitigate their behavior will increase.

Legal scholar Tim Wu calls such a phenomenon the Policeman at the Elbow theory of antitrust enforcement, and there’s a lot of precedent. In the 1990s and early 2000s, during and after the antitrust case against Microsoft, Microsoft executives became far more cautious in product development, seeking legal advice to ensure they were not behaving in anti-competitive ways. Bill Gates recently noted that the reason people use iPhones and Android phones is that Microsoft was distracted by the case and so lost out on making Windows Mobile the standard. There’s a similar story with IBM, which unbundled software in the 1960s as a result of an antitrust case, thus enabling the creation of the modern software industry.

Because of the policeman at the elbow watching their every move, Google and Facebook are going to have to incorporate legal advice into their product development patterns, and doing so is likely to increase opportunities for competitors.

Still, there’s a difference between 2020 and earlier periods. Bill Gates, as hard as it is to believe, was more law-abiding than Zuckerberg and Pichai, because the rule of law was much stronger decades ago. Given legal uncertainty around the point of antitrust laws, as well as the political defiance of these leaders, it’s likely that Mark Zuckerberg and Sundar Pichai are going to work extremely hard to corrupt antitrust enforcement. They will do everything they can to *not* change their bad behavior, whether that’s ensuring products continue to exclude others, continuing a merger spree with more acquisitions (as Facebook just did), or even attempting to use their platforms to manipulate elections (as Uber and Lyft did with Prop 22 in California). We are in an era of elite lawlessness, so the normal effects of antitrust may not work until powerful leaders start to be afraid of getting caught breaking the law.

And that’s what brings me to why we need to start talking about crime and punishment.

The Problem of Elite Lawlessness

In the late 1930s, we had a similar period of elite lawlessness, a battle over the very legitimacy of the public’s right to vote for political change in the form of the New Deal. In the election of 1936, the banks, newspapers, and big business leaders combined through a variety of well-funded interest groups in an attempt to thwart FDR’s reelection. They lost. Two years later, in the midst of a recession, these leaders sought again to roll back New Deal rules as unconstitutional seizures of power.

Enter Thurman Arnold, the greatest antitrust enforcer in American history. . .

Continue reading. There’s much more at the link and no paywall.

Written by LeisureGuy

20 December 2020 at 9:49 am

“I started crying”: Inside Timnit Gebru’s last days at Google—and what happens next

leave a comment »

Karen Hao writes in MIT Technology Review:

By now, we’ve all heard some version of the story. On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

The move has since sparked a debate about growing corporate influence over AI, the long-standing lack of diversity in tech, and what it means to do meaningful AI ethics research. As of December 15, over 2,600 Google employees and 4,300 others in academia, industry, and civil society had signed a petition denouncing the dismissal of Gebru, calling it “unprecedented research censorship” and “an act of retaliation.”

Gebru is known for foundational work in revealing AI discrimination, developing methods for documenting and auditing AI models, and advocating for greater diversity in research. In 2016, she cofounded the nonprofit Black in AI, which has become a central resource for civil rights activists, labor organizers, and leading AI ethics researchers, cultivating and highlighting Black AI research talent.

Losing her job didn’t slow Gebru down. The following week, she took part in several workshops at NeurIPS, the largest annual AI research conference, which over 20,000 people attended this year. It was “therapeutic,” she says, to see how the community she’d helped build showed up and supported one another. Now, another week later, she’s just winding down and catching her breath—and trying to make sense of it all.

On Monday, December 14, I caught up with Gebru via Zoom. She recounted what happened during her time at Google, reflected on what it meant for the field and AI ethics research, and gave parting words of advice to those who want to keep holding tech companies accountable. You can also listen to a special episode of our podcast, In Machines We Trust, for highlights from the interview. (Google declined a request for comment on the contents of this interview.)

The following has been edited and condensed.

I wanted to first check in with how you’re doing.

I feel like I haven’t really had the time to process everything that happened and its repercussions emotionally. I’m just sort of going and going and going. So I feel like I’ll probably fall apart at some point when there’s a little bit of a lull. But right now I’m just highly concerned about my team and the people supporting me, and the types of risks they’re taking, and making sure that they’re not retaliated against.

There have been so many accounts of what has happened. I wanted to start from a much earlier point in this story. What made you originally choose to work at Google, and what was Google like back then?

I think Samy [Bengio, a director at Google AI] and Jeff [Dean, the SVP of Google Research] were at the Black in AI workshop [at NeurIPS in 2017]. They were asking me what I did, and they said, “Oh yeah, you should come work at Google.” I wasn’t planning on it. I was doing my postdoc at the time at Microsoft Research [MSR]. I hadn’t figured out what I was going to do next. But I knew I wanted to go back to the Bay Area, and they were creating an office in Accra, Ghana. I thought it would be good for me to help with that.

I had a lot of reservations. I was in New York City at MSR, and there were a lot of vocal women there—Danah Boyd, Hannah Wallach, Jen [Chayes], Kate Crawford, Mary Gray. There weren’t really women of color. The only Black women I know out of all of Microsoft Research are Danielle Bellgrave in the UK and Shawndra Hill in New York. But still, even the men were very supportive. I was very hesitant to go to an environment where I knew Google Research was not well known for its advocacy for women. There were a number of issues that I had heard through my whisper networks. In fact, when I said I was going to go to Google Research, a number of people actually sat me down. So I was just already dreading it, like “Oh, man, okay, what am I going into?”

They did not disappoint. It was just constant fighting. I was trying to approach it as talking to people, trying to educate them, trying to get them to see a certain point of view. I kept on thinking that they could do better, you know? With Samy, he has become such a huge advocate. People were complaining that this organization [Google Research] hired just 14% women. Samy, my manager, hired 39% women. It wasn’t like he had any incentive to do that whatsoever. He was the only reason I feel like this didn’t happen to me before. It’s probably because he was protecting us. And by protecting us, he would get in trouble himself. If other leaders are tone-policing you, and you’re too loud, you’re like a troublemaker—we all know that’s what happens to people like me—then if someone defends you, they’re obviously going to also be a problem for the other leaders.

So that was my two years at Google. I actually thought that maybe we were making progress until the last incident, because our team grew. It went from almost disintegrating—two months into my time at Google, my co-lead, Meg Mitchell, was going to quit. But then we expanded our team, and we are now, like, 12 people. So I thought that we were inching forward.

There was so much talk about diversity and inclusion, but so much hypocrisy. I’m one of 1.6% Black women at Google. In [Google] Research, it’s not 1.6%—it’s way lower. I was definitely the first Black woman to be a research scientist at Google. After me, we got two more Black women. That’s, like, out of so many research scientists. Hundreds and hundreds. Three out of God knows how many.

So at some point I was just like, you know what? I don’t even want to talk about diversity. It’s just exhausting. They want to have meetings with you, they don’t listen to you, and then they want to have meetings with you again. I’ve written a million documents about a million diversity-related things—about racial literacy and machine learning [ML], ML fairness initiatives, about retention of women, and the issues. So many documents and so many emails.

So it’s just been one thing after another. There’s not been a single vacation I took inside Google where I wasn’t in the middle of some issue or another. It’s just never been peace of mind. Imagine somebody’s shooting at you with a gun and you’re screaming. And instead of trying to stop the person who’s shooting at you with a gun, they’re trying to stop you from screaming. That’s how it felt. It was just so painful to be in that position over and over and over again.

You successfully built one of the most diverse teams in the AI industry. What did it actually take to do that?

We had to battle all sorts of stuff. I had to be a manager, and then people did not want me to be a manager. I was like, “Okay, I’ve started a very well-known nonprofit. Why do you have ‘concerns’ about me being a manager?” Samy didn’t say this to me, but he had to deliver this message: “Does she know that she can get fired for things? Does she know that if she becomes a manager, then she’s going to have to be a representative of Google?” Then people raised concerns about me seeming unhappy at Google. It’s not like, “Oh, there’s a toxic culture that’s making people like her unhappy. So let’s fix that culture.” No, that was not the conversation. The conversation was “She seems to be unhappy, so let’s not make her a manager.”

I was livid at that time. I was so angry. I was asking every other person who became a manager at my level what their experience was. I’m like, “This person became a manager and nobody ever asked them if they knew they were going to be fired for X, Y, and Z. This other person became a manager. Nobody had to talk to them. There was no discussion whatsoever.” For me it wasn’t like that. . .

Continue reading. There’s a lot more — and there’s audio at the link.

Written by LeisureGuy

19 December 2020 at 10:34 pm
