Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

ShadowDragon: Inside the Social Media Surveillance Software That Can Watch Your Every Move


Michael Kwet reports in the Intercept:

A MICHIGAN STATE POLICE CONTRACT, obtained by The Intercept, sheds new light on the growing use of little-known surveillance software that helps law enforcement agencies and corporations watch people’s social media and other website activity.

The software, put out by a Wyoming company called ShadowDragon, allows police to suck in data from social media and other internet sources, including Amazon, dating apps, and the dark web, so they can identify persons of interest and map out their networks during investigations. By providing powerful searches of more than 120 different online platforms and a decade’s worth of archives, the company claims to speed up profiling work from months to minutes. ShadowDragon even claims its software can automatically adjust its monitoring and help predict violence and unrest. Michigan police acquired the software through a contract with another obscure online policing company named Kaseware for an “MSP Enterprise Criminal Intelligence System.”

The inner workings of the product are generally not known to the public. The contract, and materials published by the companies online, allow a deeper explanation of how this surveillance works, provided below.

ShadowDragon has kept a low profile but has law enforcement customers well beyond Michigan. It was purchased twice by the U.S. Immigration and Customs Enforcement agency in the last two years, documents show, and was reportedly acquired by the Massachusetts State Police and other police departments within the state.

Michigan officials appear to be keeping their contract and the identities of ShadowDragon and Microsoft from the public. The website does not make the contract available; it instead offers an email address at which to request the document “due to the sensitive nature of this contract.” And the contract it eventually provides has been heavily redacted: The copy given to David Goldberg, a professor at Wayne State University in Detroit, had all mentions of ShadowDragon software and Microsoft Azure blacked out. What’s more, Goldberg had to file a Freedom of Information Act request to obtain the contract. When the state website did offer the contract, it was unredacted, and I downloaded it before it was withdrawn.

Last year, The Intercept published several articles detailing how a social media analytics firm called Dataminr relayed tweets about the George Floyd and Black Lives Matter protests to police. The same year, I detailed at The Intercept how Kaseware’s partner Microsoft helps police surveil and patrol communities through its own offerings and a network of partnerships.

This new revelation about the Michigan contract raises questions about what digital surveillance capabilities other police departments and law enforcement agencies in the U.S. might be quietly acquiring. And it comes at a time when previously known government social media surveillance is under fire from civil rights and liberties advocates like MediaJustice and the American Civil Liberties Union. It also raises the specter of further abuses in Michigan, where the FBI has been profiling Muslim communities and so-called Black Identity Extremists. In 2015, it was revealed that for years, the state police agency was using cell site simulators to spy on mobile phones without disclosing it to the public.

“Social media surveillance technologies, such as the software acquired by Michigan State Police, are often introduced under the false premise that they are public safety and accountability tools. In reality, they endanger Black and marginalized communities,” Arisha Hatch, vice president and chief of campaigns at civil rights nonprofit Color of Change, wrote in an email.

Michigan State Police spokesperson Shanon Banner said in an email that “the investigative tools available to us as part of this contract are only used in conjunction with criminal investigations, following all state and federal laws.” The founder of ShadowDragon, Daniel Clemens, wrote that the company provides only information that is publicly available and does not “build products with predictive capabilities.”

A Shadowy Industry

Kaseware and ShadowDragon are part of a shadowy industry of software firms that exploit what they call “open source intelligence,” or OSINT: the trails of information that people leave on the internet. Clients include intelligence agencies, government, police, corporations, and even schools.

Kaseware, which is partnered with ShadowDragon and Microsoft, provides a platform for activities that support OSINT and other elements of digital policing, like data storage, management, and analysis. Its capabilities range from storing evidence to predictive policing. By contrast, the two ShadowDragon products acquired by the Michigan State Police are more narrowly tailored for the surveillance of people using social media, apps, and websites. They run on the Kaseware platform.

To understand how Kaseware and ShadowDragon work together, let us consider each in turn, starting with ShadowDragon. . .

Continue reading.

Written by Leisureguy

23 September 2021 at 8:14 pm

File Not Found; or, Ignorance Not Always Bliss.


Monica Chin writes in the Verge:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Professors have varied recollections of when they first saw the disconnect. But their estimates (even the most tentative ones) are surprisingly similar. It’s been an issue for four years or so, starting — for many educators — around the fall of 2017.

That’s approximately when Lincoln Colling, a lecturer in the psychology department at the University of Sussex, told a class full of research students to pull a file out of a specific directory and was met with blank stares. It was the same semester that Nicolás Guarín-Zapata, an applied physicist and lecturer at Colombia’s Universidad EAFIT, noticed that students in his classes were having trouble finding their documents. It’s the same year that posts began to pop up on STEM-educator forums asking for help explaining the concept of a file.

Guarín-Zapata is an organizer. He has an intricate hierarchy of file folders on his computer, and he sorts the photos on his smartphone by category. He was in college in the very early 2000s — he grew up needing to keep papers organized. Now, he thinks of his hard drives like filing cabinets. “I open a drawer, and inside that drawer, I have another cabinet with more drawers,” he told The Verge. “Like a nested structure. At the very end, I have a folder or a piece of paper I can access.”

Guarín-Zapata’s mental model is commonly known as directory structure, the hierarchical system of folders that modern computer operating systems use to arrange files. It’s the idea that a modern computer doesn’t just save a file in an infinite expanse; it saves it in the “Downloads” folder, the “Desktop” folder, or the “Documents” folder, all of which live within “This PC,” and each of which might have folders nested within them, too. It’s an idea that’s likely intuitive to any computer user who remembers the floppy disk.

More broadly, directory structure connotes physical placement — the idea that a file stored on a computer is located somewhere on that computer, in a specific and discrete location. That’s a concept that’s always felt obvious to Garland but seems completely alien to her students. “I tend to think an item lives in a particular folder. It lives in one place, and I have to go to that folder to find it,” Garland says. “They see it like one bucket, and everything’s in the bucket.”

That tracks with how Joshua Drossman, a senior at Princeton, has understood computer systems for as long as he can remember. “The most intuitive thing would be the laundry basket where you have everything kind of together, and you’re just kind of pulling out what you need at any given time,” he says, attempting to describe his mental model.

As an operations research and financial engineering major, Drossman knows how to program — he’s been trained to navigate directories and folders throughout his undergraduate years, and he understands their importance in his field. But it’s still not entirely natural, and he sometimes slips. About halfway through a recent nine-month research project, he’d built up so many files that he gave up on keeping them all structured. “I try to be organized, but there’s a certain point where there are so many files that it kind of just became a hot mess,” Drossman says. Many of his items ended up in one massive folder.

Peter Plavchan, an associate professor of physics and astronomy at George Mason University, has seen similar behavior from his students and can’t quite wrap his head around it. “Students have had these computers in my lab; they’ll have a thousand files on their desktop completely unorganized,” he told The Verge, somewhat incredulously. “I’m kind of an obsessive organizer … but they have no problem having 1,000 files in the same directory. And I think that is fundamentally because of a shift in how we access files.”

Aubrey Vogel, a journalism major at Texas A&M, has had similar experiences to Drossman. She’s encountered directory structure before: as a child, she shared a computer with her grandfather, who showed her how to save items in folders. But as she’s grown up, she’s moved away from that system — she now keeps one massive directory for schoolwork and one for her job. Documents she’s not sure about go in a third folder called “Sort.”

“As much as I want them to be organized and try for them to be organized, it’s just a big hot mess,” Vogel says of her files. She adds, “My family always gives me a hard time when they see my computer screen, and it has like 50 thousand icons.”

Why have mental models changed? . . .

Continue reading. There’s more.

I am surprised that this is a problem. I had no idea that people using computers would not understand directories, folders, and files. That seems so weird, but (as one student pointed out above) these are people who keep all their clothes in one big pile and rummage through it to find socks, underwear, shirts, and so on: no organization at all. I wonder whether their minds work the same way: disorganized and muddled.

My own mental model might be: a file is a book; the shelf on which it rests is a folder, and that is contained in another folder (the bookcase), which holds multiple folders (its various shelves). There’s a bigger folder — the room — which contains multiple bookcases (each a folder).

A shelf might contain a single book, or several books, or many books. And so on.
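That nested mental model maps directly onto how an operating system actually exposes files. Here is a minimal sketch using Python’s standard pathlib module, with the folder names invented purely for illustration:

```python
import tempfile
from pathlib import Path

# Build a small nested hierarchy: room -> bookcase -> shelf -> book,
# mirroring the filing-cabinet model described above.
root = Path(tempfile.mkdtemp())          # the "room"
shelf = root / "bookcase_a" / "shelf_1"  # nested "folders"
shelf.mkdir(parents=True)

book = shelf / "book.txt"                # the "file" at the very end
book.write_text("contents of the book")

# Every file has exactly one place in the tree; to reach it, you walk
# the path from the root down through each enclosing folder.
print(book.relative_to(root).as_posix())  # bookcase_a/shelf_1/book.txt
```

Each file sits at exactly one path from the root, and knowing that path is what lets you (or a program) find it again, which is precisely the intuition the students in the article lack.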

Written by Leisureguy

23 September 2021 at 11:42 am

Posted in Daily life, Education, Software


My new default search engine: Duck Duck Go


I’ve mentioned that I’ve switched to Vivaldi as my default browser — it’s not only terrific, it’s free — and just recently I used Preferences to make Duck Duck Go my default search engine. I’ve been a Google guy for a long time, but in recent years the proportion of search results that are paid ads — and often ads that don’t turn out to have what I’m looking for — has increased to the point of frustration. Duck Duck Go just delivers the goods, with no ads. That’s a benefit, quite apart from DDG’s strong privacy policy (and in fact is a side-effect of that policy).

On smartphones DDG also acts as a privacy-oriented browser (blocking trackers), but Vivaldi does that already — and I use my MacBook Air M1 for browsing, not my phone.

There is another privacy-first search engine, Neeva, but it is not yet available here, and after three free months it costs US$60 per year. DDG is free, which suits me better. I will make a contribution to DDG from time to time, but not US$60 per year — more like CAD 20 per year.

Written by Leisureguy

17 September 2021 at 12:53 pm

Answer to U.S. labor shortage? ‘Hidden’ workforce


Interesting point on how algorithmic sorting of job applications hides good potential hires. In the Harvard Gazette, Christina Pazzanese interviews Joseph B. Fuller about a new report that found that businesses could plug a critical labor shortage by tapping into 27 million workers who are “hidden” from corporate hiring processes. The article begins:

Since business has picked up with the COVID vaccine rollout, record numbers of employers have struggled to find workers. In August, half of U.S. small business owners had jobs they wanted to fill, a historic high, according to a trade group survey; 91 percent said there were few or no qualified applicants. The reasons for this labor-employment mismatch are complex and not fully understood, economists say.

A new report says there is a “hidden” workforce of 27 million people in the U.S. who would gladly, and capably, fill those jobs — if given the chance. But because of hiring practices, the applications of this diverse group usually go straight to the rejection pile.

Co-author Joseph B. Fuller ’79, M.B.A. ’81, co-chair of the Managing the Future of Work project at Harvard Business School, says corporate leaders could solve many of their labor problems if they gave these workers a closer look, gaining a real advantage over competitors unwilling to do so and improving workplace diversity. The interview has been edited for clarity and length.

GAZETTE: What was the impetus for this report?

FULLER: The vast majority of academic research on labor markets is from the supply side. It doesn’t look at the employer as an animated object that makes decisions based on a rationale that may or may not be sound. Before I was a professor at HBS, I was in industry, and it always struck me that there were these anomalies. Communities with lots of people looking for work and employers bemoaning the lack of candidates, but employers essentially acting as if a [qualified] candidate is supposed to present her or himself [for] the job they have on offer for the terms they’re offering. And if that didn’t happen, there was something quote “wrong.” They weren’t very active in addressing it themselves. Why was that?

The second thing is, if you look at the government data, it’s not actionable. [It doesn’t delineate] “this is how many long-term unemployed there are; this is how many discouraged workers there are; this is how many underemployed workers there are.” Huge numbers of people, but very little nuance in explaining why. So, I wanted to understand what’s behind these numbers.

GAZETTE: Many screened out of the application process early are people with felony convictions and people without a college degree. Who else makes up this “hidden” workforce?

FULLER: Veterans tend to be hidden because their skills, and the way those skills are described, don’t match with the skill descriptions employers are seeking. If someone’s looking for a salesperson, they’re looking for sales experience. So, they’re looking for those kinds of keywords in your résumé description of yourself. If they’re not there, you don’t get considered.

People who’ve had gaps in their work history: Half the companies in the United States have a filter to exclude applicants who have not been employed in the last six months or if there’s a gap in their work history of more than six months.

The biggest category is called NEET: Not in Employment, Education or Training. That’s a person who doesn’t have a job, doesn’t have a degree, is not in school. [Automated screening systems don’t] know what to do with them.

A big part of this research effort is to take that number [of 27 million] and break it down into identifiable chunks and give both employers and policymakers some insight into what does it take to get this part of the population into the workforce.

GAZETTE: About 99 percent of Fortune 500 companies use artificial intelligence tracking systems to screen applicants and then winnow them down to a manageable number before starting the interview process. Those systems determine who makes the cut based on specific parameters or keywords. Why such an all or nothing approach? . . .

Continue reading.
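The hard filters Fuller describes (required keywords, a maximum employment gap) are easy to picture in code. This is a hypothetical sketch, not the logic of any real applicant-tracking system; the field names and thresholds are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Resume:
    name: str
    keywords: set[str] = field(default_factory=set)
    gap_months: int = 0  # longest gap in the work history

# Hypothetical hard screen like those described in the interview:
# every required keyword must appear, and the gap must be short.
def passes_screen(r: Resume, required: set[str], max_gap: int = 6) -> bool:
    return required <= r.keywords and r.gap_months <= max_gap

candidates = [
    Resume("A", {"sales", "crm"}, gap_months=2),
    Resume("B", {"sales"}, gap_months=0),          # missing one keyword
    Resume("C", {"sales", "crm"}, gap_months=12),  # e.g. a caregiving gap
]
screened = [c.name for c in candidates if passes_screen(c, {"sales", "crm"})]
print(screened)  # only "A" survives; B and C become "hidden" workers
```

Loosening a single threshold, say passing max_gap=24, lets candidate C back into the pool, which is the kind of closer look the report argues employers should take.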

Written by Leisureguy

17 September 2021 at 10:28 am

An update on the Vivaldi browser


The more I use Vivaldi, the more I like it, and I have now made it my default browser. When you install it, get the full package, with Notes and Calendar and the rest. It’s feature-rich, so don’t expect to learn it all at once; bit by bit, as you learn more about how to use it, it gets better and better.

Also, important point: it’s free.

Written by Leisureguy

14 September 2021 at 9:11 pm

Posted in Daily life, Software

Google Translate doesn’t know Latin


A newsletter from Antigone:

The biggest news this week was that the silicon-addled wonks over at Google Translate had finally improved the algorithmical, alchemical wizardry that powered their jaw-droppingly inaccurate ‘Latin translation’ tool. Time-pressed students, brow-mopping professors and tattoo-hungry footballers rushed over en masse to see whether Google had indeed managed to do the seemingly impossible – master the automated translation of Latin.

It’s not for us at Antigone to tell you what to make of the results. Instead, we paste below five random phrases that bubbled up in our minds, followed by the magic that Google Translate wrought upon them. Faced with results such as these, we’ll be surprised if you can keep both eyebrows unraised.

Phrase One: Our favourite dog has run off on a wild goose chase!
Google Translate: nostri ventus canem fugit in fera anser!
Google Translate Translated: Our wind flees the dog – a goose inside the beast!
A haunting image.

Phrase Two: Get your act together, we could be just fine.
Google Translate: adepto vestri actus simul essemus esse sicut bysso.
Google Translate Translated: Had someone acquired your role, we would be together to be just like with cotton.
Hear, hear.

Phrase Three: Are these the best tales I can spin? A boy waiting to begin – a man of no memoirs?
Google Translate: tales sunt optimae<.> nere possum[?] puerum exspectans incipere – quis non commentariis?
Google Translate Translated: Such women are the best. Can I weave while waiting to start on a boy? Who can’t with notebooks?
They make all the difference.

Phrase Four: I enjoy buffets – I wouldn’t say love buffets – but it’s a very reasonable way to eat out.
Google Translate: plaga fruor – colaphos non dicam amores – sed edendi ratio admodum est.
Google Translate Translated: I enjoy a blow – I wouldn’t call fisticuffs my ‘darling’ – but it’s very much a method of eating.
[Appraisal redacted.]

Phrase Five: Back to the drawing board, I reckon.
Google Translate: in tabula extractionem revolvo.
Google Translate Translated: I roll back the extraction on a tablet.
Fair play.

Written by Leisureguy

12 September 2021 at 1:09 pm

The Problem With “Doing Your Own Research”


Tim Wise writes on Medium:

The internet is a wonderful thing, and also the absolute worst thing ever.

On the one hand, it allows people to access information at the push of a button and then connect with others worldwide, even sharing that information if they’d like to do so.

On the other hand, it allows people to access information at the push of a button and then connect with others worldwide, even sharing that information if they’d like to do so.

Yes, the relative democratization of communication — compared to the days when gatekeepers more tightly limited the voices to which we might be exposed — is a welcome step in the direction of a more open society.

But at the same time, with more information also comes more noise. And with the ability to spread noise like never in human history, cacophony becomes the default position.

It seems wistful to remember the days of antiquity (also known as the 1990s), when getting your opinion heard required writing a letter to the editor of this thing called a newspaper and then waiting several days to see if it would be published. Or perhaps, if you were really ambitious, sending an entire essay or article to a magazine and then waiting for several weeks to discover the same.

As much as we complained about the difficulty of breaking through these mainstream media filters, I’m not sure if what replaced them is better.

Perhaps it would be fine had we even the most rudimentary skills at discerning truth from falsehood. But humans are not much on critical thinking, Americans least of all. We are a nation of image-crazed consumers and wanna-be “influencers,” actively hostile to critical thought and allergic to teaching such skills in school, lest we usurp the authority of parents to brainwash our children the way we see fit.

And so instead of developing the media literacy necessary to separate the factual wheat from the fictional chaff, millions just “do their own research,” by which they mean to tell you they:

1. Own a Google machine;
2. Have a lot of extra time on their hands; and,
3. Don’t actually know what research is.

Pro tip: research is not just a matter of looking stuff up.

It is not what you’re doing when conversing with anonymous people on Reddit, soaking in whatever StarShine77 has decided to offer up that morning.

It is not what you’re doing when scrolling through YouTube videos fed to you by an algorithm that is intentionally programmed to show you more of the same shit you were already watching and absolutely nothing that might contradict it.

It’s not what you’re doing when you pass around memes, with citations at the bottom like “U.S. Research Center,” which is not a real thing, and even if it were, that’s not a fucking citation, Grandpa.

But sadly, this is part of what it means to be American in the 21st century: to confuse having a right to an opinion with having a right to be taken seriously for whatever ass-backward opinion you have.

You’ll hear it all the time: “Well, I have a right to believe whatever I want, and you do too, and I guess we’ll just agree to disagree.”

No, cousin Judy, that’s not the end of it.

You can believe whatever codswallop floats your inner-tube, to be sure, but when it’s utter and complete horseshit, we won’t simply agree to disagree.

Agreeing to disagree is what we do when we debate who was the greatest Major League pitcher of all time, and you say Bob Gibson and I say Sandy Koufax — and we both could be right.

What we’re doing now, Mr. “The COVID vaccine will change your DNA and allow the government to track you,” is not that. It’s me, buying a calming shade of yellow interior wall paint with which to coat your bedroom and Googling “doctors near you that specialize in helping people with delusions.”

The idea that your opinion on a subject is equal to someone else’s, when that someone else has spent years studying and researching it (using more complex methods than refreshing their Facebook feed), is ridiculous.

Expertise is, in fact, a thing.

And yes, I know, sometimes experts disagree. Even physicians sometimes have different takes on the proper course of treatment for a given condition.

That’s why, when faced with such decisions, it’s good to get a second opinion.

But guess what? When you get that second opinion, from whom do you get it?

Another gotdamn doctor who went to a gotdamn medical school.

You do not get that second opinion about whether you need open-heart surgery to address your arterial blockage from KaleMomma420. Or rather, if you do, you deserve whatever happens to you.

Best of all is when . . .

Continue reading.

Written by Leisureguy

11 September 2021 at 6:43 pm

Why Silicon Valley’s Optimization Mindset Sets Us Up for Failure


Rob Reich, Mehran Sahami, and Jeremy M. Weinstein wrote the book System Error: Where Big Tech Went Wrong and How We Can Reboot, and TIME has a column adapted from Chapter 1 of the book.

About the authors:

Reich directs Stanford University’s Center for Ethics in Society and is associate director of its new Institute for Human-Centered Artificial Intelligence. Sahami is a computer science professor at Stanford and helped redesign the undergraduate computer science curriculum. Weinstein launched President Obama’s Open Government Partnership and returned to Stanford in 2015 as a professor of political science, where he now leads Stanford Impact Labs.

The column begins:

In 2013 a Silicon Valley software engineer decided that food is an inconvenience—a pain point in a busy life. Buying food, preparing it, and cleaning up afterwards struck him as an inefficient way to feed himself. And so was born the idea of Soylent, Rob Rhinehart’s meal replacement powder, described on its website as an International Complete Nutrition Platform. Soylent is the logical result of an engineer’s approach to the “problem” of feeding oneself with food: there must be a more optimal solution.

It’s not hard to sense the trouble with this crushingly instrumental approach to nutrition.

Soylent may optimize meeting one’s daily nutritional needs with minimal cost and time investment. But for most people, food is not just a delivery mechanism for one’s nutritional requirements. It brings gustatory pleasure. It provides for social connection. It sustains and transmits cultural identity. A world in which Soylent spells the end of food also spells the degradation of these values.

Maybe you don’t care about Soylent; it’s just another product in the marketplace that no one is required to buy. If tech workers want to economize on time spent grocery shopping or a busy person faces the choice between grabbing an unhealthy meal at a fast-food joint or bringing along some Soylent, why should anyone complain? In fact, it’s a welcome alternative for some people.

But the story of Soylent is powerful because it reveals the optimization mindset of the technologist. And problems arise when this mindset begins to dominate—when the technologies begin to scale and become universal and unavoidable.

That mindset is inculcated early in the training of technologists. When developing an algorithm, computer science courses often define the goal as providing an optimal solution to a computationally-specified problem. And when you look at the world through this mindset, it’s not just computational inefficiencies that annoy. Eventually, it becomes a defining orientation to life as well. As one of our colleagues at Stanford tells students, everything in life is an optimization problem.

The desire to optimize can favor some values over others. And the choice of which values to favor, and which to sacrifice, are made by the optimizers who then impose those values on the rest of us when their creations reach great scale. For example, consider that Facebook’s decisions about how content gets moderated or who loses their accounts are the rules of expression for more than three billion people on the platform; Google’s choices about what web pages to index determine what information most users of the internet get in response to searches. The small and anomalous group of human beings at these companies create, tweak, and optimize technology based on their notions of how it ought to be better. Their vision and their values about technology are . . .

Continue reading.

The concluding paragraphs:

Several years ago, one of us received an invitation to a small dinner. Founders, venture capitalists, researchers at a secretive tech lab, and two professors assembled in the private dining room of a four-star hotel in Silicon Valley. The host—one of the most prominent names in technology—thanked everyone for coming and reminded us of the topic we’d been invited to discuss: “What if a new state were created to maximize science and tech progress powered by commercial models—what would that run like? Utopia? Dystopia?”

The conversation progressed, with enthusiasm around the table for the establishment of a small nation-state dedicated to optimizing the progress of science and technology. Rob raised his hand to speak. “I’m just wondering, would this state be a democracy? What’s the governance structure here?” The response was quick: “Democracy? No. To optimize for science, we need a beneficent technocrat in charge. Democracy is too slow, and it holds science back.”
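The optimization mindset the authors describe can be made concrete with the classic “diet problem,” the kind of computationally specified problem Soylent implicitly solves: meet nutritional minimums at minimal cost. Here is a small brute-force sketch in Python; the foods and numbers are invented for illustration:

```python
from itertools import product

# Hypothetical foods: (name, cost per serving, protein g, calories).
foods = [("oats", 2.0, 5, 150), ("eggs", 3.5, 12, 140), ("rice", 1.0, 3, 200)]

def cheapest_diet(min_protein=50, min_calories=2000, max_servings=15):
    """Brute-force the diet problem over integer serving counts."""
    best = None
    for servings in product(range(max_servings + 1), repeat=len(foods)):
        cost = sum(n * f[1] for n, f in zip(servings, foods))
        protein = sum(n * f[2] for n, f in zip(servings, foods))
        calories = sum(n * f[3] for n, f in zip(servings, foods))
        # Keep the cheapest combination that meets every constraint.
        if protein >= min_protein and calories >= min_calories:
            if best is None or cost < best[0]:
                best = (cost, servings)
    return best

print(cheapest_diet())  # → (16.0, (0, 2, 9)): 2 eggs and 9 rice
```

The solver satisfies exactly the constraints it was given and nothing more; taste, social connection, and cultural identity, being unencoded, carry zero weight. That is the trouble the authors are pointing at.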

Written by Leisureguy

11 September 2021 at 6:17 pm

System Error: An interesting discussion about tradeoffs in technology and society


You can watch this discussion (which appeared in Browser) or read it below.

Uri: Hello. I’m delighted to be here today with three Stanford professors – philosopher Rob Reich, political scientist Jeremy Weinstein and computer scientist Mehran Sahami – who are authors of the new book System Error: Where Big Tech Went Wrong and How We Can Reboot. Thank you all so much for being here today.

We’re going to play a very simple game we call The Last Word, where we ask you to answer difficult questions in a very specific number of words. Rob, we’ll start with you. Could you please tell us what this book is all about in exactly ten words?

Rob: [smiles] Alright: [counts on fingers] Reenergizing democratic institutions through the sensible regulation of Big Tech.

Uri: That was fantastic

Jeremy: Wow

Uri: Obviously the relationship between Big Tech and the democratic process, and our values as a society, is a very prominent topic on everyone’s minds these days, though often with more sound than light. I was wondering if you can tell us about the three perspectives you’re bringing to it, and what you hope to achieve with the book.

Jeremy: So let me start by building on Rob’s ten-word answer: in this moment, many people around the United States and around the world feel that the effects of technology are washing over them — that it’s a wave they have no agency in shaping or influencing. And our view is that we need to pivot that discussion and recognise that people have profound agency — as technologists who design technology, as users of technology, as citizens in a democratic society — and that ultimately the effects of technology are something we can impact: by ensuring that our values are reflected in technology as it’s designed, and by shaping the way that government mitigates the harms of the technology that is all around us.

Mehran: I think part of the message of the book as well is thinking not only in the big picture but also understanding what are the details of the technology and how they’re impacting people’s lives. So things like automated decision-making that are now using AI techniques to make consequential decisions in people’s lives; what happens with the future of work as AI scales; issues around privacy, as information about us is being gathered online and aggregated; and ultimately something many people are familiar with, the misinformation and disinformation that flows through social networks. So being able to disaggregate those technologies and understand the forces that are at play creates a greater urgency about why we need to do something about them.

Rob: The spirit of the book is that after four years of teaching a class together at Stanford – in the belly of the beast of Silicon Valley, as it were – we wanted to expand the conversation, trying to reach really talented undergraduates using a technological lens, a policy lens, and a philosophy lens.

And as Jeremy described, the book has answers of a certain kind to the dilemmas or problems of Big Tech, but they’re not a policy blueprint – “if only Congress would take our answers, things would miraculously get much better” – rather, it’s a way of shaping a new conversation and a new framework for thinking about the trade-offs that are encoded in the various products that Silicon Valley and Big Tech have brought to the world, and ensuring that the decisions that get made in the corporate boardrooms and product development lifecycles of the big tech companies are not the ones that are imposed upon the rest of us, because we haven’t exercised our own agency in trying to shape a technological future worth having.

Uri: I have to say that the book was very uncomfortable for me, as a young person who went through a similar university and had that feeling that these questions of values didn’t come up as much, and that we did all feel a little powerless, like we were a part of a bigger system that shaped us and which was out of our control. Which I think a lot of people feel, and I think that’s something really great about the way you’ve approached this and made us aware of how we’ve been shaped so far, but also an empowering story about what we can do, which I really appreciated.

Rob: Let me just add to that, if I can, Uri – I’m a long-time Browser reader and subscriber, so I have some sense of the community of people who are likely to be listening. And there’s a sense in which, of course, it’s important that technological and scientific progress have delivered extraordinary benefits to societies and to individuals. And the question is not, as it were, a values conversation where the philosopher or the policy maker shows up and says: stop, we need to slow it all down and make sure that we have a broader conversation that effectively brings a halt to technological progress.

To the contrary, the idea is that the interesting aspects of an ethics conversation and a policy conversation are really not about right and wrong, or true and false choices about technology or science, but rather about better and worse social outcomes. So many of the technological advances of the past hundred or two hundred years, when they are brought to market – typically by private companies – and the market then consolidates, exercise an extraordinary effect on society. And it’s the task of all of us to harness the enormous benefits and then to try to mitigate some of the harms. And that’s a task that goes far beyond the decision-making of people in companies alone.

This is why at the end of the day, I think ethics is an energising way of thinking about technology, not “the moral police have shown up to the technologists and told them when to stop.”

Uri: Absolutely. And well, on that note, Jeremy you are, I believe a philosopher who has spent time in government. I don’t know if that’s a rare beast.

Jeremy: Not a philosopher. I’m a political scientist who spent time in government, which is also a relatively rare beast.

Uri: So I was wondering if you could tell us in exactly five words, what you think are the main challenges in the ways that social values get stymied, or challenged, or fail to be implemented through the process of government?

Jeremy: [thinks] Building consensus around shared goals.

Uri: You are all so good at this, I’m absolutely gobsmacked.

Jeremy: Now can I add two sentences beyond that?

Uri: Please do, please do.

Jeremy: So in the book we write about democracy as a technology. Democracy is the technology that our society and many other societies have chosen to help us navigate really difficult value trade-offs, that as a collective of human beings living together where we can’t have everything we want, not everyone can get the outcomes they want, we have to make some choices.

And you can think about lots of different ways of making those choices. You could think about those choices being made by a single individual, like a king or the Pope, which was one way that societies used to organise themselves. You could think about leaving those decisions to companies, and that’s been a bit of the mode that we’ve been in with Big Tech. And this book is an argument about the role of our democratic institutions in making those choices. And the reason it’s hard to make those choices, and why I chose the words that I did, is that people want different things and they want them very enthusiastically, and they’re very unhappy when they don’t get the things that they want.

So this process of deliberation, and negotiation, and contestation, that’s what politics is all about. And right now we’re at a moment of a tremendous lack of faith in our democratic institutions and an inability to bridge the partisan divides in the United States. But it doesn’t mean that there’s some alternative way to accomplish that underlying task, that is the task of our democracy.

Rob: There’s a mistake that I think I perceive that technologists make sometimes – and we discussed this in the book some – the important part for any reader to understand if they’re trying to figure out what’s going on in Big Tech: you don’t need to . . .

Continue reading. There’s more.

Written by Leisureguy

8 September 2021 at 2:56 pm

If AT&T Had Managed the Phone Business like Google

leave a comment »

A Bell telephone from the 1920s

Ted Gioia writes at The Honest Broker:

A hundred years ago, 15 million telephones were in use in the United States—but that number would more than double by the end of the decade. Almost the entire network fell under the control of a single corporation, the American Telephone and Telegraph Company (or AT&T), which was somehow allowed to maintain its monopoly until the Department of Justice forced a breakup of the business in the 1980s.

But for most of its history, AT&T had almost total control of telecommunications in the US. As far back as 1907, the president of the company had made his strategy clear when he announced the motto of “one policy, one system, universal service.” The company’s dominance was so extreme that even the phones in people’s homes were owned by AT&T, and merely leased or lent to users. On some phones you could even see the words molded into the equipment: “BELL SYSTEM PROPERTY—NOT FOR SALE.”

The folks at AT&T thought they were smart. But Silicon Valley folks would laugh at their naïve approach. Today’s tech titans would manage a monopoly of that scale very differently.

So just imagine a time traveling venture capitalist going back one hundred years to present a “Google” type strategy to AT&T’s senior management. Let’s call this visitor from the future “Mister Google.”

Setting: A boardroom in a 1920s style of corporate opulence—with wood paneling, leather chairs, and an imposing mahogany table taking up most of the length of the room. Around it are seated a dozen senior managers in the business attire of that era. At a podium at the head of the table stands Mr. Google, wearing the casual attire of the 21st century. Facing him at the far end of the table is Harry Bates Thayer, who served as President of the American Telephone and Telegraph Company in the early 1920s, dressed impeccably in a finely tailored suit.

MR. GOOGLE: I appreciate your willingness to meet with me, but you’ll be well rewarded for your time. I come with wisdom from the future. And that wisdom is pretty simple: You folks have been doing everything all wrong.

[Hems and haws from the audience.]

Only 30% of the public uses your telephones. We need to get that up to 80% penetration within the next 12 months.

[Sounds of laughter from the room, until AT&T’s vice president of marketing pipes up.]

VP MARKETING: That’s hardly a credible plan, Mister Google. By the way, are you related to Barney Google? [More laughter at this.] How do you propose we get tens of millions of people to install phones in their homes during the course of a single year?

MR. GOOGLE: It’s easy, you’re going to give away the phone for free.

[The laughter has now turned to gasps of shock and amazement.]

CHIEF FINANCIAL OFFICER: Free? Did I hear you say free?

MR. GOOGLE: You heard correctly. You have to give the phones away for free.

CHIEF FINANCIAL OFFICER: There’s some catch or trick, no? We give away the phones, but we charge more for the monthly fee? Or we raise rates on long distance calls? Or. . . .

MR. GOOGLE: No, no, no. You don’t understand. Everything is free—the phone, the connection to the network, all the calls. . . .

[Total pandemonium breaks out in the boardroom—some are laughing, others are jeering, a few actually shouting out rancorous words of abuse. It takes a couple minutes before President Thayer can quiet things down. He then speaks for the first time.]

PRESIDENT: My dear Mister Google, this is quite absurd. You asked to speak to our management team with some vague promise of wisdom from the future, like a character in an H.G. Wells story—and you’re now wasting our time with a plan to turn AT&T into a charity, offering free communications as a philanthropic endeavor. Frankly I was expecting more from you. I believe this meeting has come to an end. I’ll ask you to leave promptly and never. . . .

MR. GOOGLE: No, no, no—you damned fools. You will make more money with my plan. A whole boatload of money. You idiots are managing your platform all wrong.

VP ENGINEERING: What’s a platform?

MR. GOOGLE: [Ignoring the question] We’re going to charge a lot of money—more than you’ve ever charged before. Only it won’t be the users who pay.

CHIEF FINANCIAL OFFICER: If the users don’t pay, who’s going to pay for them?

MR. GOOGLE: A lot of folks will be happy to pay. Let’s start with the advertisers.

CHIEF FINANCIAL OFFICER: Advertisers? Do you even understand how a phone works, Mister Google? There’s no advertising on a phone call.

MR. GOOGLE: Not now, but there will be once we’ve established the new rules of the game. I’m thinking of a YouTube strategy—with maybe ten or twenty seconds of commercials before the phone conversation starts. Perhaps more ads later if the users keep on jabbering.

PRESIDENT: This is just getting stranger and stranger. I’m not sure what a ‘you too’ strategy is, but it sounds more like voodoo to me. You can’t insert ads in a phone call.

MR. GOOGLE: Oh yes you can—if you’re letting people make phone calls for free. They have no choice in the matter, do they? But that’s only the start. We will match the advertising to what customers are discussing on their calls. So if mom is complaining about her back pains, we pitch a healing ointment or some other medicinal product. If dad is calling about his car breaking down, we tell him about the latest Ford Model T.

VP ENGINEERING: But that’s impossible. How do we even know what people are talking about on their phone calls?

MR. GOOGLE: That’s a great question, and it makes clear how little you have done to exploit your platform. You need to monitor every call, and compile a file of information on every customer.

PRESIDENT: [Clearly alarmed] Monitor every call? Are you joking? That’s invading people’s privacy! That’s spying! That’s surveillance!

MR. GOOGLE: Not in the least. It’s called . . .

Continue reading.

Written by Leisureguy

8 September 2021 at 12:52 pm

How to block Facebook from snooping on you

leave a comment »

Geoffrey Fowler reports in the Washington Post:

If you ever get that eerie feeling that Facebook or Instagram is listening to you, you’re not entirely hallucinating.

Facebook says it’s not literally activating the microphones on our smartphones, but it is tracking what we do in other apps, websites and even real-world stores. That much data can make ads feel as on-point as if the company were in the room. In a recent column, I investigated everything Facebook can passively learn about you, even when you stop using it for long stretches.

Don’t be fooled by the kinder, gentler image of Instagram, either: It’s owned by Facebook and does the same kind of tracking as Facebook.

So what can you do about it? If you’re very committed — or a bit techie — there are some steps you can take to try to hide from Facebook’s personal data vacuum.

Help Desk: Ask our tech columnist a question

I polled some of the smartest privacy experts about evasive maneuvers they recommend, including the Electronic Frontier Foundation’s Bennett Cyphers, Disconnect’s Patrick Jackson, former chief technologist for the Federal Trade Commission Ashkan Soltani and Jumbo Privacy’s Pierre Valade. Stopping the snooping entirely would be really difficult, so I focused this advice on steps that could make the biggest impact.

Just remember: These changes only impact what Facebook and Instagram can learn about you outside of their apps. Everything you and your friends do inside the apps — from tapping the “Like” button to posting status updates and profile information — will still feed the company personal information. (And anything you make public can be seen by people and companies alike.)

Here are seven steps to stop Facebook tracking, starting with the nuclear option.

1. Quit Facebook and Instagram

They’ll beg you to stay, and encourage you to just temporarily “deactivate” your account for a while. But if you do fully delete your accounts on both services, Facebook will no longer build out a profile with your activities to target ads.

To completely delete your Facebook account:

  • Click on this link in a browser where you’re logged in to Facebook.
  • Select Permanently Delete Account, then click on Continue to Account Deletion.
  • Click Delete Account, enter your password and continue and say goodbye forever.

Before you do this, you might want to download a copy of the data from your Facebook account. Use this link.

To quit Instagram, it’s a similar process:

  • Click on this link in a web browser where you’re logged in to Instagram.
  • Pick a reason, such as privacy concerns.
  • Tap Delete.

There is one privacy downside to quitting Facebook: The company still receives and collects data about people who don’t have accounts. The only way you can actually see what it knows about you is to maintain an account.

2. Change these Facebook privacy settings

Facebook has lots of bad default settings you should change. But the most important one to combat tracking is called Off-Facebook Activity. (Read a column I wrote about it here.)

Your Off-Facebook Activity settings are easiest to access on the Web by clicking this link.

  • You’ll see a page that shows you the apps, websites and other businesses where Facebook has been tracking you.
  • Tap More Options, then Manage Future Activity, then toggle Future Off-Facebook Activity to off.

While you’re at it, I also recommend changing a setting that gives Facebook permission to connect into other apps and websites. Just know that adjusting this setting would keep you from logging into apps where you used Facebook to set up your account.

  • Access your apps and websites setting page with this link.
  • Tap Turn Off next to apps, websites and games. . . .

Continue reading. (No paywall on this one, thanks to NextDraft.) There are seven steps in all.

Written by Leisureguy

30 August 2021 at 3:47 pm

Why Facebook Won’t Stop Pushing Propaganda: It’s their business model.

leave a comment »

“I ran because our kids needed to see you don’t have to be white and you don’t have to be a man to run for office in our town.” 
Lynsey Weatherspoon

Monica Bauerlein and Clara Jeffery write in Mother Jones:

Joyce Jones’ Facebook page is almost an archetype of what the social network is supposed to look like: Pictures of her kids, her kids’ friends, her sports teams, her kids’ friends’ sports teams. Videos of her husband’s sermons at New Mount Moriah Baptist Church. Memes celebrating achievement and solidarity, holiday greetings, public health messages. It’s what Mark Zuckerberg extols when he talks about how his company is all about “bringing people together.”

So when Jones decided to run for mayor in her Alabama town last year, it seemed obvious that she’d try to bring people together on Facebook. Her bid to be Montevallo’s first Black mayor, challenging a 12-year City Council incumbent, drew an enthusiastic, diverse crew of volunteers. They put up a campaign page, One Montevallo, and started posting cheery endorsements alongside recycling updates and plugs for drive-in movies.

It was a historic moment for Montevallo, whose population (7,000) is two-thirds white and which sits in Shelby County, the infamous plaintiff in the Supreme Court case that gutted the Voting Rights Act in 2013. It was also a turning point for Jones, who grew up in the shotgun house her father had built on a dirt road far from the neighborhood where her grandmother cleaned houses. “My cousins and I would come with her,” the 45-year-old recalls. “We would do yardwork in the houses that she worked in. We never ever thought that living here was an option.”

“Now I’ve been living here for 17 years. We have a wonderful home. We have raised four wonderful children. And part of what I was being challenged with was: It’s not okay for me to make it out. I have to do something to make sure that other people have every opportunity. I ran because our kids needed to see you don’t have to be white and you don’t have to be a man to run for office in our town.”

But getting her campaign message out was tough. “We’re in a pandemic, so we couldn’t go to churches and meet people,” Jones told me. Montevallo does not have a news outlet of its own, and the Shelby County Reporter, based in nearby Columbiana, has a single staff reporter for the 14 communities it covers. “For us, the fastest way to get news is through social media,” she says.

Jones is not quite sure how the rumors started, but she remembers how fast they spread. Facebook accounts popped up and shared posts to Montevallo community groups, implying she wanted to defund police (she does not). Someone made up a report of a burglary at her home, referencing her landlord’s name—to highlight that she was renting, she believes. Another account dredged up a bounced check she’d written for groceries as her family struggled during the 2008 recession.

“The algorithm, how fast the messages were shared and how quickly people saw them, that was just eye-opening to me,” Jones says. Her campaign would put up posts debunking the rumors, but the corrections were seen far fewer times than the attack posts. “It was so much more vitriolic, and it would get so many hits. It was just lightning fast.”

Soon, Jones noticed a chill around her. “I’d be going to the grocery store and people who would normally speak to you and be nice to you would avoid you. I’d go to a football game and people would avoid me. I was baffled by all that. It’s one thing to not know me, but it’s another to know me my whole life and treat me like the plague.”

One night her then 16-year-old son, who had been hanging out at the park with a group of families he’d grown up with, called to ask her to pick him up. The adults had been talking about her, not realizing he was within earshot. When Jones came to get him, he told her, “For the first time, I felt like the Black kid.”

“What happens on Facebook doesn’t just stay on Facebook,” Jones says. “It comes off social media. You have to live with that.”

There’s a direct connection between Jones’ ordeal, last November’s election, the January 6 insurrection, and the attacks on American democracy that have played out every day since then. That connection is Facebook: specifically, the toxic feedback loop by which the platform amplifies falsehoods and misinformation. That loop won’t end with the belated bans on Donald Trump and others, because the fundamental problem is not that there are people who post violent, racist, antidemocratic, and conspiratorial material. It’s that Facebook and other social platforms actively push that content into the feeds of tens of millions of people, making lies viral while truth languishes.

The technical term for this is algorithmic amplification, and it means just that: What you see on Facebook has been amplified, and pushed into your feed, by the company’s proprietary algorithm. When you (or Mother Jones, or Trump) create a post, it’s visible to no one except those who deliberately seek out your page. But within instants, the algorithm analyzes your post, factoring in who you and your connections are, what you’ve looked at or shared before, and myriad other data points. Then it decides whether to show that post in someone else’s News Feed, the primary page you see when you log on. Think of it as a speed-reading robot that curates everything you see.
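
The amplification mechanism described above can be sketched as a score-and-sort loop. This is a deliberately tiny toy model: every feature name, weight, and example post below is hypothetical, and Facebook’s real ranking system is proprietary and vastly more complex. The point is only to show how such a “speed-reading robot” decides what each person sees.

```python
# Toy sketch of algorithmic amplification (hypothetical features and weights,
# not Facebook's actual algorithm).

def score(post, viewer):
    """Predict how likely `viewer` is to engage with `post`."""
    s = 0.0
    if post["author"] in viewer["friends"]:
        s += 2.0                                            # social closeness
    s += 1.5 * len(post["topics"] & viewer["interests"])    # topical match
    s += 0.01 * post["shares"]                              # existing virality
    return s

def build_feed(candidates, viewer, k=2):
    """Only the top-k posts are ever shown; the rest stay effectively invisible."""
    return sorted(candidates, key=lambda p: score(p, viewer), reverse=True)[:k]

viewer = {"friends": {"ann"}, "interests": {"sports", "recipes"}}
posts = [
    {"author": "ann", "topics": {"recipes"}, "shares": 3},     # a friend's post
    {"author": "bob", "topics": {"politics"}, "shares": 900},  # a viral rumor
    {"author": "cal", "topics": {"gardening"}, "shares": 1},
]
feed = build_feed(posts, viewer)
# the stranger's viral rumor outranks the friend's post
```

The share-count term is what closes the feedback loop: a post that is already spreading scores higher, lands in more feeds, and therefore spreads further, with no term anywhere that rewards being true.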

The way social media companies tell it, their robots are benevolent, serving only your best interests. You’ve clicked on your cousin’s recipes but not your friend’s fitness bragging? Here is more pasta and less Chloe Ting. You’ve shown an interest in Trump and also fanciful pottery? Here are some MAGA garden gnomes. The founding narrative of social media companies is that they merely provide a space for you, dear user, to do and see what you want.

In reality, as the people who work at these companies know quite well, technology reflects the biases of those who make it. And when those who make it are corporations, it reflects corporate imperatives. In Facebook’s case, those imperatives—chief among them, to grow faster than anyone else—have played out with especially high stakes, making the company one of the world’s most significant threats to democracy, human rights, and decency.

Facebook has been proved to be a vehicle for election disinformation in many countries (see: Brexit, Trump, Duterte). It has been an organizing space and megaphone for violent extremism and genocidal hate (see: Kenosha, Myanmar, Sri Lanka, and Afghanistan). Its power is so far-reaching, it shapes elections in small-town Alabama and helps launch mobs into the Capitol. It reaches you whether or not you are on social media, because, as Jones says, what happens on Facebook doesn’t stay on Facebook.

That’s why one of the most significant battles of the coming years is over whether and how government should regulate social media. So far, . . .

Continue reading. There’s much much more.

Written by Leisureguy

30 August 2021 at 2:41 pm

The Beauty of Bézier Curves

leave a comment »

Written by Leisureguy

25 August 2021 at 12:48 pm

Posted in Games, Math, Software, Technology

The Singularity approacheth

leave a comment »

As I’ve mentioned previously, the Singularity in science fiction is when an AI gets good enough to design a better AI; recursion then leads to exponential improvement, and the story’s off and running.

Well, it’s here. Matthew Hutson mentions it in an article in the New Yorker, which begins:

Deep learning, the artificial-intelligence technology that powers voice assistants, autonomous cars, and Go champions, relies on complicated “neural network” software arranged in layers. A deep-learning system can live on a single computer, but the biggest ones are spread over thousands of machines wired together into “clusters,” which sometimes live at large data centers, like those operated by Google. In a big cluster, as many as forty-eight pizza-box-size servers slide into a rack as tall as a person; these racks stand in rows, filling buildings the size of warehouses. The neural networks in such systems can tackle daunting problems, but they also face clear challenges. A network spread across a cluster is like a brain that’s been scattered around a room and wired together. Electrons move fast, but, even so, cross-chip communication is slow, and uses extravagant amounts of energy.

Eric Vishria, a general partner at Benchmark, a venture-capital firm in San Francisco, first came to understand this problem in the spring of 2016, while listening to a presentation from a new computer-chip company called Cerebras Systems. Benchmark is known for having made early investments in companies such as Twitter, Uber, and eBay—that is, in software, not hardware. The firm looks at about two hundred startup pitches a year, and invests in maybe one. “We’re in this kissing-a-thousand-frogs kind of game,” Vishria told me. As the presentation started, he had already decided to toss the frog back. “I’m, like, Why did I agree to this? We’re not gonna do a hardware investment,” he recalled thinking. “This is so dumb.”

Andrew Feldman, Cerebras’s co-founder, began his slide deck with a cover slide, then a team slide, catching Vishria’s attention: the talent was impressive. Then Feldman compared two kinds of computer chips. First, he looked at graphics-processing units, or G.P.U.s—chips designed for creating 3-D images. For a variety of reasons, today’s machine-learning systems depend on these graphics chips. Next, he looked at central processing units, or C.P.U.s—the general-purpose chips that do most of the work on a typical computer. “Slide 3 was something along the lines of, ‘G.P.U.s actually suck for deep learning—they just happen to be a hundred times better than C.P.U.s,’ ” Vishria recalled. “And, as soon as he said it, I was, like, facepalm. Of course! Of course!” Cerebras was proposing a new kind of chip—one built not for graphics but for A.I. specifically.

Vishria had grown used to hearing pitches from companies that planned to use deep learning for cybersecurity, medical imaging, chatbots, and other applications. After the Cerebras presentation, he talked with engineers at some of the companies that Benchmark had helped fund, including Zillow, Uber, and Stitch Fix; they told him that they were struggling with A.I. because “training” the neural networks took too long. Google had begun using super-fast “tensor-processing units,” or T.P.U.s—special chips it had designed for artificial intelligence. Vishria knew that a gold rush was under way, and that someone had to build the picks and shovels.

That year, Benchmark and Foundation Capital, another venture-capital company, led a twenty-seven-million-dollar round of investment in Cerebras, which has since raised close to half a billion dollars. Other companies are also making so-called A.I. accelerators; Cerebras’s competitors—Groq, Graphcore, and SambaNova—have raised more than two billion dollars in capital combined. But Cerebras’s approach is unique. Instead of making chips in the usual way—by printing dozens of them onto a large wafer of silicon, cutting them out of the wafer, and then wiring them to one another—the company has made one giant “wafer-scale” chip. A typical computer chip is the size of a fingernail. Cerebras’s is the size of a dinner plate. It is the largest computer chip in the world.

Even competitors find this feat impressive. “It’s all new science,” Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. “It’s an incredible piece of engineering—a tour de force.” At the same time, another engineer I spoke with described it, somewhat defensively, as a science project—bigness for bigness’s sake. Companies have tried to build mega-chips in the past and failed; Cerebras’s plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. “To be totally honest with you, for me, ignorance was an advantage,” Vishria said. “I don’t know that, if I’d understood how difficult it was going to be to do what they did, I would have had the guts to invest.”

Computers get faster and faster—a remarkable fact that’s easy to take for granted. It’s often explained by means of Moore’s Law: the pattern identified in 1965 by the semiconductor pioneer Gordon Moore, according to which the number of transistors on a chip doubles every year or two. Moore’s Law, of course, isn’t really a law. Engineers work tirelessly to shrink transistors—the on-off switches through which chips function—while also refining each chip’s “architecture,” creating more efficient and powerful designs. . .

. . .  Cerebras’s first task was to address the manufacturing difficulties that bedevil bigger chips. A chip begins as a cylindrical ingot of crystallized silicon, about a foot across; the ingot gets sliced into circular wafers a fraction of a millimetre thick. Circuits are then “printed” onto the wafer, through a process called photolithography. Chemicals sensitive to ultraviolet light are carefully deposited on the surface in layers; U.V. beams are then projected through detailed stencils called reticles, and the chemicals react, forming circuits.

Typically, the light projected through the reticle covers an area that will become one chip. The wafer then moves over and the light is projected again. After dozens or hundreds of chips are printed, they’re laser-cut from the wafer. “The simplest way to think about it is, your mom rolls out a round sheet of cookie dough,” Feldman, who is an avid cook, said. “She’s got a cookie cutter, and she carefully stamps out cookies.” It’s impossible, because of the laws of physics and optics, to build a bigger cookie cutter. So, Feldman said, “We invented a technique such that you could communicate across that little bit of cookie dough between the two cookies.”

In Cerebras’s printing system—developed in partnership with T.S.M.C., the company that manufactures its chips—the cookies overlap at their edges, so that their wiring lines up. The result is a single, “wafer-scale” chip, copper-colored and square, which is twenty-one centimetres on a side. (The largest G.P.U. is a little less than three centimetres across.) Cerebras produced its first chip, the Wafer-Scale Engine 1, in 2019. The WSE-2, introduced this year, uses denser circuitry, and contains 2.6 trillion transistors collected into eight hundred and fifty thousand processing units, or “cores.” (The top G.P.U.s have a few thousand cores, and most C.P.U.s have fewer than ten.)
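
A quick back-of-envelope pass over the figures quoted above makes the scale concrete (the GPU dimension below is my approximation of “a little less than three centimetres”):

```python
# Arithmetic from the numbers in the article.
transistors = 2.6e12   # WSE-2: 2.6 trillion transistors
cores = 850_000        # ...collected into 850,000 cores

per_core = transistors / cores
# roughly 3 million transistors per core

wafer_side_cm = 21.0   # WSE-2 is twenty-one centimetres on a side
gpu_side_cm = 2.9      # largest GPU (approximation of "a little less than three")

area_ratio = wafer_side_cm**2 / gpu_side_cm**2
# the wafer-scale chip covers roughly 50x the area of the largest GPU
```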

Aart de Geus, the chairman and co-C.E.O. of the company Synopsys, asked me, “2.6 trillion transistors is astounding, right?” Synopsys provides some of the software that Cerebras and other chipmakers use to make and verify their chip designs. In designing a chip, de Geus said, an engineer starts with two central questions: “Where does the data come in? Where is it being processed?” When chips were simpler, designers could answer these questions at drafting tables, with pencils in hand; working on today’s far more complex chips, they type code that describes the architecture they want to create, then move on to using visual and coding tools. “Think of seeing a house from the top,” de Geus said. “Is the garage close to the kitchen? Or is it close to the bedroom? You want it close to the kitchen—otherwise, you will have to carry groceries all through the house.” He explained that, having designed the floor plan, “you might describe what happens inside a room using equations.”

Chip designs are mind-bogglingly intricate. “There’s multiple layers,” de Geus said, with circuits crisscrossing and running on top of one another, like major expressway interchanges. For Cerebras’s engineers, working at wafer scale, that complexity was heightened. Synopsys’s software offered assistance in the form of artificial intelligence: pattern-matching algorithms recognized frequent problems and suggested solutions; optimization routines nudged rooms into faster, more efficient arrangements. If too many lanes of traffic try to squeeze between two blocks of buildings, the software allows engineers to play Robert Moses, shifting the blocks.

In the end, Feldman said, the mega-chip design offers several advantages. Cores communicate faster when they’re on the same chip: instead of being spread around a room, the computer’s brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that’s ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home. In describing the efficiencies of the wafer-scale chip, Feldman offered an analogy: he asked me to imagine groups of roommates (the cores) in a dormitory (a chip) who want to watch a football game (do computing work). To watch the game, Feldman said, the roommates need beer stored in a fridge (data stored in memory); Cerebras puts a fridge in every room, so that the roommates don’t have to venture to the dorm’s common kitchen or the Safeway. This has the added advantage of allowing each core to work more quickly on different data. “So in my dorm room I can have Bud,” Feldman said. “And in your dorm room you can have Schlitz.”

Finally, Cerebras had to surmount the problem of yield. The firm’s engineers use Trilogy’s trick: redundancy. But here they have an advantage over their predecessors. Trilogy was trying to make a general-purpose chip, with many varied components, and so wiring around a single failed element could require connecting to a distant substitute. On Cerebras’s chip, all the cores are identical. If one cookie comes out wrong, the ones surrounding it are just as good.
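The redundancy idea lends itself to a toy illustration. The sketch below is purely hypothetical (Cerebras has not published its routing scheme): because every core on the wafer is identical, a logical core index can simply be remapped to the next good physical core, skipping any that failed testing.

```python
# Hypothetical sketch of redundancy on a wafer of identical cores:
# a defective core is "wired around" by remapping logical core IDs
# to the surviving physical cores in fabrication order.

def remap_cores(grid, defective):
    """Map each logical core index to a working physical core.

    grid: iterable of physical core IDs in fabrication order.
    defective: set of physical IDs that failed wafer testing.
    """
    good = [c for c in grid if c not in defective]
    return {logical: physical for logical, physical in enumerate(good)}

# A 9-core toy wafer where physical core 4 came out bad:
mapping = remap_cores(range(9), defective={4})
# Logical core 4 now points at physical core 5, its neighbor.
```

Because the substitute is an identical neighbor rather than a distant spare, the detour is short, which is the advantage the article contrasts with Trilogy's varied-component design.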

In June, in a paper published in Nature, Google developers reported that, for the first time, they’d fully automated a process called “chip floorplanning.” A typical chip can contain thousands of memory blocks, tens of millions of logic gates, and tens of kilometres of microscopic wiring. Using the same techniques that their DeepMind colleagues had used to teach a neural network to win at Go, they’d trained an A.I. to floorplan a tensor-processing unit, arranging these elements while preventing data congestion; when they tested the A.I.’s T.P.U. against one that a team of experts had spent several months creating, they found that the computer’s design, drawn up in a matter of hours, matched or exceeded the humans’ in efficient use of area, power, and wire length. Google is currently using the algorithm to design its next T.P.U.

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Designing and manufacturing the chip turned out to be just half of the challenge. Brains use . . .

Continue reading. There’s more.

Emphasis added.

Written by Leisureguy

20 August 2021 at 6:23 pm

Vivaldi is the browser for me

leave a comment »

I was having Firefox problems after the most recent update, so I went browsing for browsers and happened upon an article in Wired that proclaimed the greatness of Vivaldi 4.0. Nothing loath, I downloaded and installed it, and after using it for a couple of days, I have to agree: it’s a wonderful browser.

When you are installing it, you are asked to customize it somewhat. It has a side panel that opens and closes with a click, and this side panel can host any of several functions (selected by clicking the appropriate icon in the left margin):

Notes — I often want to make a note during browsing, so this is a godsend.
Calendar — It’s very convenient to be able to check a date without leaving the browser.
Window — You can open another URL in the side panel so that, with the panel open, you have a side-by-side view of two sites.

There are other functions, but be sure to get Notes and Calendar when you customize. (You can add them later — which is what I had to do when I realized I was opening another app to make a note and opening Calendar to check a date).

It will import all your bookmarks etc. from your current browser, so you can hit the ground running.

The more I use it, the more I like it. Worth the switch effort, IMO.

Written by Leisureguy

20 August 2021 at 10:02 am

Scientists rename human genes to stop Microsoft Excel from misreading them as dates

leave a comment »

James Vincent reports in The Verge:

There are tens of thousands of genes in the human genome: minuscule twists of DNA and RNA that combine to express all of the traits and characteristics that make each of us unique. Each gene is given a name and alphanumeric code, known as a symbol, which scientists use to coordinate research. But over the past year or so, some 27 human genes have been renamed, all because Microsoft Excel kept misreading their symbols as dates.

The problem isn’t as unexpected as it first sounds. Excel is a behemoth in the spreadsheet world and is regularly used by scientists to track their work and even conduct clinical trials. But its default settings were designed with more mundane applications in mind, so when a user inputs a gene’s alphanumeric symbol into a spreadsheet, like MARCH1 — short for “Membrane Associated Ring-CH-Type Finger 1” — Excel converts that into a date: 1-Mar.

This is extremely frustrating, even dangerous, corrupting data that scientists have to sort through by hand to restore. It’s also surprisingly widespread and affects even peer-reviewed scientific work. One study from 2016 examined genetic data shared alongside 3,597 published papers and found that roughly one-fifth had been affected by Excel errors.

“It’s really, really annoying,” Dezső Módos, a systems biologist at the Quadram Institute in the UK, told The Verge. Módos, whose job involves analyzing freshly sequenced genetic data, says Excel errors happen all the time, simply because the software is often the first thing to hand when scientists process numerical data. “It’s a widespread tool and if you are a bit computationally illiterate you will use it,” he says. “During my PhD studies I did as well!”

There’s no easy fix, either. Excel doesn’t offer the option to turn off this auto-formatting, and the only way to avoid it is to change the data type for individual columns. Even then, a scientist might fix their data but export it as a CSV file without saving the formatting. Or, another scientist might load the data without the correct formatting, changing gene symbols back into dates. The end result is that while knowledgeable Excel users can avoid this problem, it’s easy for mistakes to be introduced.
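Since Excel itself offers no off switch, labs that handle gene lists programmatically sometimes screen for at-risk symbols before a spreadsheet ever sees them. Here is a minimal Python sketch of that idea; the month list and pattern are my own simplification for illustration, not HGNC's official rule:

```python
import re

# Symbols of the form <MONTH><digits> are the ones Excel reinterprets
# as dates (MARCH1 -> 1-Mar, SEPT1 -> 1-Sep). This month list is a
# hypothetical simplification, not an official vocabulary.
MONTHS = ("JAN", "FEB", "MAR", "MARCH", "APR", "APRIL", "MAY", "JUN",
          "JUNE", "JUL", "JULY", "AUG", "SEP", "SEPT", "OCT", "NOV", "DEC")
DATE_LIKE = re.compile(r"^(%s)\d+$" % "|".join(MONTHS))

def excel_vulnerable(symbols):
    """Return the gene symbols Excel is likely to turn into dates."""
    return [s for s in symbols if DATE_LIKE.match(s.upper())]

print(excel_vulnerable(["MARCH1", "SEPT1", "MARCHF1", "SEPTIN1", "TP53"]))
# → ['MARCH1', 'SEPT1']
```

Note that the renamed symbols (MARCHF1, SEPTIN1) pass the check, which is exactly the point of the HGNC's change described below: the new names no longer look like dates to any parser.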

Help has arrived, though, in the form of the scientific body in charge of standardizing the names of genes, the HUGO Gene Nomenclature Committee, or HGNC. This week, the HGNC published new guidelines for gene naming, including for “symbols that affect data handling and retrieval.” From now on, they say, human genes and the proteins they express will be named with one eye on Excel’s auto-formatting. That means the symbol MARCH1 has now become MARCHF1, while SEPT1 has become SEPTIN1, and so on. A record of old symbols and names will be stored by HGNC to avoid confusion in the future.

So far, the names of some 27 genes have been changed like this over the past year, Elspeth Bruford, the coordinator of HGNC, tells The Verge, but the guidelines themselves weren’t formally announced until this week. . .

Continue reading.

Written by Leisureguy

18 August 2021 at 5:41 pm

How the government can support a free press and cut disinformation

leave a comment »

Martha Minow of the Harvard Law School is interviewed by Christina Pazzanese in The Harvard Gazette:

The mainstream news industry has been in sharp decline since the 1990s, owing to a series of financial and cultural changes brought by the rise of the internet. Amid the closing or shrinking of newspapers, magazines, and other legacy news outlets, Americans have increasingly turned to social media and heavily partisan websites and cable networks as their main sources of news and information, which has led to a proliferation of disinformation and misinformation and fueled polarization.

Given the vital role a free and responsible press plays in American democracy and the unique protections the Constitution provides for it under the First Amendment, is it time for the government to get involved? Is it government’s place to do so? And how could that happen without infringing on that freedom?

In a new book, Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech (Oxford University Press, 2021), Martha Minow, 300th Anniversary University Professor at Harvard Law School, says the First Amendment not only does not preclude the federal government from protecting a free press in jeopardy, it requires that it do so. Minow spoke with the Gazette about some of the ways to potentially clean up social media and bankroll local news, and why arguing on Twitter isn’t a First Amendment right.

Q&A with Martha Minow

GAZETTE: There seems to be broad misunderstanding about what speech is protected by the First Amendment and what is not. Underlying “cancel culture” and complaints about “deplatforming” is a belief that people should not be penalized for saying things online that others find objectionable or that are inaccurate or even false because of their right to freely express themselves. Can you clarify how the First Amendment applies and doesn’t apply to social media platforms, like Twitter or Facebook, and online generally?

MINOW: I wrote a book to examine the challenges and decline of the news industry during a time of exploding misinformation and disinformation, a global pandemic, and great challenges to democracies in the United States and elsewhere. Certainly, one big dimension of this context is [what] some people are calling [an] infodemic: the flood of information that is enabled by the internet, and particularly social media. But it is not just social media. It’s conventional media, particularly cable news, but also some broadcast news.

Most of the sources of communications are private, and private communications are not governed by the First Amendment. Private companies are entitled to edit, elevate, suppress, remove [speech], whether it’s in broadcast, cable, or on a social media platform. Indeed, private companies have First Amendment freedoms against any government intervention. We in America are very fond of rights, and rights maybe are what hold us together more certainly than shared traditions, shared identities. And one of the ways that’s really evolved is how we talk about rights as if it’s a cultural phenomenon or it’s part of our identities. But that kind of informal conversation about “I have First Amendment freedom” may be a metaphor on a social media platform, but it is not a legal right. We sign terms-of-service agreements with platform companies. They’re the ones that control what is communicated and what’s not. That’s much less edited than broadcast or cable or print media. So, we’re living in an unprecedented time of lowered barriers to communicating to mass audiences — almost anybody can have access to a mass audience. But that’s all enabled by private providers and the private providers are not restricted by the First Amendment in what they remove or amplify.

GAZETTE: What are a few of the measures that could effectively hold tech firms to account for what is published and shared on their platforms?

MINOW: When it comes to holding the platform companies responsible for conveying, amplifying, even escalating hateful communications, misinformation, [and] disinformation, there are some techniques, but we have to be careful because if the government is involved, then the First Amendment is front and center. The techniques include eliminating or reducing the immunity currently granted under the [1996] Communications Decency Act, which has a section, Section 230, that treats platform companies differently from any other media and specifically immunizes them from liabilities that apply to all these other entities. They include liabilities for fraud, for defamation, for violating contract terms. [But] even Section 230 does not immunize the platforms from criminal responsibility or from violations of intellectual property rights. So, one very direct step to hold companies responsible would be to either eliminate this immunity or make it conditional. I actually prefer that alternative.

Companies adopt and should adhere to standards of moderation, content moderation rules. They can develop their own, but the idea would be they’d have to announce standards; they’d have to report on them; and they’d have to have processes to act on anyone calling them out for violating their own standards. That’s pretty direct, and it would put them on the same par as all the other media entities that exist in the country.

Another possibility would be to take intellectual property seriously and make the platforms pay when they take or steal or promote information from other news sources. They don’t put the revenues that they gain, particularly from advertising, back into investment in news. It’s not a punishment; it’s simply the idea of holding them responsible like [the] grown-up companies that they are.

You know, the fact of the matter is, the big disinformation source is as much broadcast and cable [television as it is online] and on those, there is a basis for government regulation. The FCC could take that seriously and withhold licenses, remove them, terminate them, for companies that are misleading people, that are labeling as news something that’s entirely opinion. Cable is largely a monopoly. Local communities grant franchises to cable companies; local communities could hold them more responsible. I don’t look forward to a day, I hope we never see it, that the government, at any level, is deciding the content. But when there is scarce opportunity to amplify communications given to private companies, it’s only fair that they should have standards that they then deliver on [by] providing some quality control of what they amplify. There is no right to have your message sent to everybody in the world anywhere. What there is, is a right to be free from government restrictions on your speech. So, one very specific suggestion that I have is that when we deal with digital communications, there could be a delay, and there could be speed bumps. Before people can spread messages to large numbers of people, there could be a delay, they could even use artificial intelligence to monitor it before it can be spread beyond a handful of people.

GAZETTE: The era of self-policing hasn’t worked very well so far, but you say there are things companies can and should be doing right now to act more responsibly and to help support the news. What are a few of those?

MINOW: . . .

Continue reading.

Written by Leisureguy

18 August 2021 at 5:34 pm

He predicted the dark side of the Internet 30 years ago. Why did no one listen?

leave a comment »

Reed Albergotti reports in the Washington Post:

In 1994 — before most Americans had an email address or Internet access or even a personal computer — Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director for the Center for AI and Digital Policy.

Charlotte Lee, who studied under Agre as a graduate student at UCLA, and is now a professor of human-centered design and engineering at the University of Washington, said she is still studying his work and learning from it today. She said she wishes he were around to help her understand it even better.

But Agre isn’t available. In 2009, he simply dropped off the face of the earth, abandoning his position at UCLA. When friends reported Agre missing, police located him and confirmed that he was OK, but Agre never returned to the public debate. His closest friends declined to further discuss details of his disappearance, citing respect for Agre’s privacy.

Instead, many of the ideas and conclusions that Agre explored in his academic research and his writing are only recently cropping up at think tanks and nonprofits focused on holding technology companies accountable.

“I’m seeing things Phil wrote about in the ’90s being said today as though they’re new ideas,” said Christine Borgman, a professor of information studies at UCLA who helped recruit Agre for his professorship at the school.

The Washington Post sent a message to Agre’s last known email address. It bounced back. Attempts to contact his sister and other family members were unsuccessful. A dozen former colleagues and friends had no idea where Agre is living today. Some said that, as of a few years ago, he was living somewhere around Los Angeles.

Agre was a child math prodigy who became a popular blogger and contributor to Wired. Now he has been all but forgotten in mainstream technology circles. But his work is still regularly cited by technology researchers in academia and is considered foundational reading in the field of social informatics, or the study of the effects of computers on society.

Agre earned his doctorate at MIT in 1989, the same year the World Wide Web was invented. At that time, even among Silicon Valley venture capitalists betting on the rise of computers, few people foresaw just how deeply and quickly the computerization of everything would change life, economics or even politics.

A small group of academics, Agre included, observed that computer scientists viewed their work in a vacuum largely disconnected from the world around it. At the same time, people outside that world lacked a deep enough understanding of technology or how it was about to change their lives.

By the early 1990s, Agre came to believe the field of artificial intelligence had gone astray, and that a lack of criticism of the profession was one of the main reasons. In those early days of artificial intelligence, most people in AI were focused on complex math problems aimed at automating human tasks, with limited success. Yet the industry described the code they were writing as “intelligent,” giving it human attributes that didn’t actually exist.

His landmark 1997 paper called “Lessons Learned in Trying to Reform AI” is still largely considered a classic, said Geoffrey Bowker, professor emeritus of informatics at University of California, Irvine. Agre noticed that those building artificial intelligence ignored critiques of the technology from outsiders. But Agre argued criticism should be part of the process of building AI. “The conclusion is quite brilliant and has taken us as a field many years to understand. One foot planted in the craftwork in design and the other foot planted in a critique,” Bowker said.

Nevertheless, AI has barreled ahead unencumbered, weaving itself into even “low tech” industries and affecting the lives of most people who use the Internet. It guides people on what to watch and read on YouTube and Facebook, it determines sentences for convicted criminals, allows companies to automate and eliminate jobs, and allows authoritarian regimes to monitor citizens with greater efficiency and thwart attempts at democracy.

Today’s AI, which has largely abandoned the type of work Agre and others were doing in the ’80s and ’90s, is focused on ingesting massive sums of data and analyzing it with the world’s most powerful computers. But as the new form of AI has progressed, it has created problems — ranging from discrimination to filter bubbles to the spread of disinformation — and some academics say that is in part because it suffers from the same lack of self-criticism that Agre identified 30 years ago.

In December, Google’s firing of AI research scientist Timnit Gebru after she wrote a paper on the ethical issues facing Google’s AI efforts, highlighted the continued tension over the ethics of artificial intelligence and the industry’s aversion to criticism.

“It’s such a homogenous field and people in that field don’t see that maybe what they’re doing could be criticized,” said Sofian Audry, a professor of computational media at University of Quebec in Montreal who began as an artificial intelligence researcher. “What Agre says is that it is worthwhile and necessary that the people who develop these technologies are critical,” Audry said.

Agre grew up in Maryland, where he said he was . . .

Continue reading. There’s more.

Written by Leisureguy

12 August 2021 at 5:46 pm

How Facebook Failed to Stem Racist Abuse of England’s Soccer Players

leave a comment »

Another example of Facebook failure. Mark Zuckerberg is not interested in fixing this because he makes money from it. Ryan Mac and Tariq Panja report in the NY Times:

In May 2019, Facebook asked the organizing bodies of English soccer to its London offices off Regent’s Park. On the agenda: what to do about the growing racist abuse on the social network against Black soccer players.

At the meeting, Facebook gave representatives from four of England’s main soccer organizations — the Football Association, the Premier League, the English Football League and the Professional Footballers’ Association — what they felt was a brushoff, two people with knowledge of the conversation said. Company executives told the group that they had many issues to deal with, including content about terrorism and child sex abuse.

A few months later, Facebook provided soccer representatives with an athlete safety guide, including directions on how players could shield themselves from bigotry using its tools. The message was clear: It was up to the players and the clubs to protect themselves online.

The interactions were the start of what became a more than two-year campaign by English soccer to pressure Facebook and other social media companies to rein in online hate speech against their players. Soccer officials have since met numerous times with the platforms, sent an open letter calling for change and organized social media boycotts. Facebook’s employees have joined in, demanding that it do more to stop the harassment.

The pressure intensified after the European Championship last month, when three of England’s Black players were subjected to torrents of racial epithets on social media for missing penalty kicks in the final game’s decisive shootout. Prince William condemned the hate, and the British prime minister, Boris Johnson, threatened regulation and fines for companies that continued to permit racist abuse. Inside Facebook, the incident was escalated to a “Site Event 1,” the equivalent of a companywide five-alarm fire.

Yet as the Premier League, England’s top division, opens its season on Friday, soccer officials said that the social media companies — especially Facebook, the largest — hadn’t taken the issue seriously enough and that players were again steeling themselves for online hate.

“Football is a growing global market that includes clubs, brands, sponsors and fans who are all tired of the obvious lack of desire from the tech giants to develop in-platform solutions for the issues we are dealing with daily,” said Simone Pound, head of equality, diversity and inclusion for the Professional Footballers’ Association, the players’ union.

The impasse with English soccer is another instance of Facebook’s failing to solve speech problems on its platform, even after it was made aware of the level of abuse. While Facebook has introduced some measures to mitigate the harassment, soccer officials said they were insufficient.

Social media companies aren’t doing enough “because the pain hasn’t become enough for them,” said Sanjay Bhandari, the chair of Kick It Out, an organization that supports equality in soccer.

This season, Facebook is trying again. Its Instagram photo-sharing app rolled out new features on Wednesday to make racist material harder to view, according to a blog post. Among them, one will let users hide potentially harassing comments and messages from accounts that either don’t follow or recently followed them.

“The unfortunate reality is that tackling racism on social media, much like tackling racism in society, is complex,” Karina Newton, Instagram’s global head of public policy, said in a statement. “We’ve made important strides, many of which have been driven by our discussions with groups being targeted with abuse, like the U.K. football community.”

But Facebook executives also privately acknowledge that racist speech against English soccer players is likely to continue. “No one thing will fix this challenge overnight,” Steve Hatch, Facebook’s director for Britain and Ireland, wrote last month in an internal note that The Times reviewed.

Some players appear resigned to the abuse. Four days after the European Championship final, Bukayo Saka, 19, one of the Black players who missed penalty kicks for England, posted on Twitter and Instagram that the “powerful platforms are not doing enough to stop these messages” and called it a “sad reality.”

Continue reading.

Written by Leisureguy

11 August 2021 at 7:46 pm

New test detects COVID-19 variants in your spit

leave a comment »

An accompanying smartphone app analyzes the fluorescent readout on miSHERLOCK, giving users a clear “Positive” or “Negative” result.
Images courtesy of Wyss Institute at Harvard University

Lindsay Brownell of the Wyss Institute writes:

With the delta variant wreaking havoc on unvaccinated populations, the ability to quickly diagnose and track emerging variants of the virus is crucial. Researchers have now created a simple, inexpensive, CRISPR-based diagnostic test that allows users to test themselves for multiple variants of the SARS-CoV-2 virus at home, using just a sample of their saliva.

Developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University, the Massachusetts Institute of Technology, and several Boston-area hospitals, the diagnostic device, called Minimally Instrumented SHERLOCK (miSHERLOCK), is easy to use and provides results that can be read and verified by an accompanying smartphone app within one hour.

miSHERLOCK successfully distinguished between three different variants of SARS-CoV-2 in experiments, and can be rapidly reconfigured to detect additional variants like delta. The device can be assembled using a 3D printer and commonly available components for about $15, and re-using the hardware brings the cost of individual assays down to $6 each.

“miSHERLOCK eliminates the need to transport patient samples to a centralized testing location and greatly simplifies the sample preparation steps, giving patients and doctors a faster, more accurate picture of individual and community health, which is critical during an evolving pandemic,” said co-first author Helena de Puig, a postdoctoral fellow at the Wyss Institute and MIT.

The diagnostic device is described in a paper published Friday in Science Advances.

From supply chain to SHERLOCK

As an instructor in pediatrics at Boston Children’s Hospital with a specialization in infectious diseases, co-first author Rose Lee has been working on the front lines of the COVID-19 pandemic for over a year. Her experiences in the clinic provided inspiration for the project that would ultimately become miSHERLOCK. . .

Continue reading. There’s more.

Memetic evolution is fast!

Written by Leisureguy

11 August 2021 at 2:08 pm
