Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Facebook Enabled Advertisers to Reach ‘Jew Haters’


Julia Angwin, Madeleine Varner, and Ariana Tobin report in ProPublica:

Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.

Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.

“There are times where content is surfaced on our platform that violates our standards,” said Rob Leathern, product management director at Facebook. “In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically based both on what users explicitly share with Facebook and what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the Internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.

But Facebook apparently did not intensify its scrutiny of its ad buying platform. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories.

Here is a screenshot of our ad buying process on the company’s advertising portal: . . .

Continue reading.

Written by LeisureGuy

14 September 2017 at 2:40 pm

Josh Marshall: “The Growing Backlash Against Big Tech”


Worth reading.

Written by LeisureGuy

12 September 2017 at 4:50 pm

A New Way to Learn Economics


John Maynard Keynes famously wrote:

Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority [not referring specifically to Donald Trump – LG], who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.

Thus it behooves us all to learn the rudiments of economics. John Cassidy writes in the New Yorker:

With the new school year starting, there is good news for incoming students of economics—and anybody else who wants to learn about issues like inequality, globalization, and the most efficient ways to tackle climate change. A group of economists from both sides of the Atlantic, part of a project called core Econ, has put together a new introductory economics curriculum, one that is modern, comprehensive, and freely available online.

In this country, many colleges encourage Econ 101 students to buy (or rent) expensive textbooks, which can cost up to three hundred dollars, or even more for some hardcover editions. The core curriculum includes a lengthy e-book titled “The Economy,” lecture slides, and quizzes to test understanding. Some of the material has already been used successfully at colleges like University College London and Sciences Po, in Paris.

The project is a collaborative effort that emerged after the world financial crisis of 2008–9, and the ensuing Great Recession, when many students (and teachers) complained that existing textbooks didn’t do a good job of explaining what was happening. In many countries, groups of students demanded an overhaul in how economics was taught, with less emphasis on free-market doctrines and more emphasis on real-world problems.

Traditional, wallet-busting introductory textbooks do cover topics like pollution, rising inequality, and speculative busts. But in many cases this material comes after lengthy explanations of more traditional topics: supply-and-demand curves, consumer preferences, the theory of the firm, gains from trade, and the efficiency properties of atomized, competitive markets. In his highly popular “Principles of Economics,” Harvard’s N. Gregory Mankiw begins by listing a set of ten basic principles, which include “Rational people think at the margin,” “Trade can make everybody better off,” and “Markets are usually a good way to organize economic activity.”

The core approach isn’t particularly radical. (Students looking for expositions of Marxian economics or Modern Monetary Theory will have to look elsewhere.) But it treats perfectly competitive markets as special cases rather than the norm, trying to incorporate from the very beginning the progress economists have made during the past forty years or so in analyzing more complex situations: when firms have some monopoly power; people aren’t fully rational; a lot of key information is privately held; and the gains generated by trade, innovation, and finance are distributed very unevenly. The core curriculum also takes economic history seriously.

The e-book begins with a discussion of inequality. One of the first things students learn is that, in 2014, the “90/10 ratio”—the average income of the richest ten per cent of households divided by the average income of the poorest ten per cent—was 5.4 in Norway, sixteen in the United States, and a hundred and forty-five in Botswana. Then comes a discussion of how to measure standards of living, and a section on the famous “hockey stick” graph, which shows how these standards have risen exponentially since the industrial revolution.
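The 90/10 ratio described above is straightforward to compute. A minimal sketch — the function name and the sample incomes are made up for illustration, not drawn from the article’s data:

```python
def ratio_90_10(incomes):
    """Average income of the richest decile divided by that of the poorest decile."""
    incomes = sorted(incomes)
    k = max(1, len(incomes) // 10)  # number of households in each decile
    poorest = sum(incomes[:k]) / k
    richest = sum(incomes[-k:]) / k
    return richest / poorest

# Ten hypothetical household incomes (in thousands): here the richest
# household earns 16x the poorest, roughly the US figure quoted in the text.
print(ratio_90_10([10, 14, 18, 24, 30, 38, 50, 70, 100, 160]))  # → 16.0
```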

The text stresses that technical progress is the primary force driving economic growth. Citing the Yale economist William Nordhaus’s famous study of the development of electric lighting, it illustrates how standard economic statistics, such as the gross domestic product, sometimes fail to fully account for this progress. Befitting a twenty-first-century text, sections devoted to the causes and consequences of technological innovation recur throughout the e-book, and the information economy receives its own chapter. So do globalization, the environment, and economic cataclysms, such as the Depression and the global financial crisis.

Given the breadth of its coverage, the core curriculum may be challenging to some students, but it takes advantage of being a native online product. . .

Continue reading.

Later in the column:

Unlike most textbooks, the core e-book was produced by a large team of collaborators. More than twenty economists from both sides of the Atlantic and from India, Colombia, Chile, and Turkey contributed to it. (Two of them, Suresh Naidu and Rajiv Sethi, teach at Columbia and Barnard, respectively.) The coördinators of the project were Wendy Carlin, of University College London, Sam Bowles, of the Santa Fe Institute, and Margaret Stevens, of Oxford University. The Institute for New Economic Thinking provided some funding to help get things off the ground.

Written by LeisureGuy

12 September 2017 at 10:39 am

Block chain: Is the GDPR out of date already?


This is a bit Euro-centric, but it illustrates how decentralization is proceeding apace (as a comment, too, on my preceding post on the rise of the city-state). In general, decentralized systems (e.g., the internet) are more robust than centralized systems.

Kingsley Napley writes at Lexology.com:

The General Data Protection Regulation (“GDPR”) amounts to a significant overhaul of existing data protection regulation and is designed to be ‘technology neutral’. However, how the GDPR will cope with emerging block chain technology and a move towards the decentralisation of data storage remains to be seen.

What is a block chain?

Block chain is the underlying technology behind platforms such as Bitcoin and Ethereum. Whilst block chains are best known for their use in the field of ‘crypto currencies’, they have a broad range of potential applications such as storing medical data, supply chain management or social networking.

The term ‘block chain’ has no single definition but it is generally used to refer to a way of recording transactions across a network of computers. Transactions sent to the network are grouped into ‘blocks’ which are time stamped and linked to the previous block. Linking each block to the previous block confirms the integrity of the chain all the way back to the first block. Information on the block is encrypted and protected through cryptography.

The block chain is stored on a network and no centralised ‘official copy’ exists. The process of adding transactions to the chain is performed by mining ‘nodes’. Mining is essentially a record keeping service whereby miners compete to collect in and verify transactions.
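The hash-linked structure the excerpt describes — each block time-stamped and tied to its predecessor, so that the integrity of the whole chain can be confirmed back to the first block — can be sketched in a few lines. This is an illustrative toy, not Bitcoin’s actual block format; the field names are assumptions:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, including the link to the previous block."""
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Group transactions into a time-stamped block linked to the previous one."""
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Confirm integrity all the way back to the first block: every stored hash
    must match the block's contents, and every link must point at the previous
    block's hash."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return all(block["hash"] == block_hash(block) for block in chain)

genesis = make_block(["reward to miner A"], prev_hash="0" * 64)
chain = [genesis, make_block(["alice -> bob: 5"], genesis["hash"])]
assert verify_chain(chain)

# Tampering with an earlier block is detected, because its hash no longer
# matches its contents and every later link depends on that hash.
chain[0]["transactions"] = ["reward to miner B"]
assert not verify_chain(chain)
```

Because each block’s hash covers the previous block’s hash, rewriting history anywhere breaks every subsequent link — which is why no centralised “official copy” is needed.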

Who are the data controllers?

The GDPR continues to use the existing concepts of data controllers (who determine the purposes for which and the manner in which any personal data are to be processed) and data processors. In addition to introducing penalties for data processors, it imposes even more stringent obligations on the controller of personal data and drastically increases the potential penalties for non-compliance.

In a decentralised system where there is no individual entity in control of the data, it is difficult to identify who the obligations are placed upon and, even once the controller has been identified, enforcement does not seem feasible. For example, in the case of Bitcoin, the miners who verify transactions and build the block chain may be deemed to be the data controllers. Identifying each of these individuals (a recent study found that there are likely to be over 100,000) and then taking action against them is clearly not possible.

What laws apply to a data controller or data processor?

The GDPR seeks to extend the territorial reach of EU data protection law. The Regulation will apply to EU-based controllers and processors or entities processing an EU resident’s personal data in connection with goods or services offered to them or tracking the behaviour of individuals in the EU.

Applications of this technology are broad and in many cases it is simply not possible to ascertain the identity or the location of the data controller, data processor or even the data subject. In such a situation, determining the appropriate choice of law may not be straightforward and regulators may struggle to argue that they have the jurisdiction to take enforcement action.

How does this fit in with the right to be forgotten? . . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:32 am

Return of the city-state: Nation-states were late arrivals and may be gone by 2100


Jamie Bartlett, director of the Centre for the Analysis of Social Media at the think-tank Demos in London, writes in Aeon:

If you’d been born 1,500 years ago in southern Europe, you’d have been convinced that the Roman empire would last forever. It had, after all, been around for 1,000 years. And yet, following a period of economic and military decline, it fell apart. By 476 CE it was gone. To the people living under the mighty empire, these events must have been unthinkable. Just as they must have been for those living through the collapse of the Pharaoh’s rule or Christendom or the Ancien Régime.

We are just as deluded that our model of living in ‘countries’ is inevitable and eternal. Yes, there are dictatorships and democracies, but the whole world is made up of nation-states. This means a blend of ‘nation’ (people with common attributes and characteristics) and ‘state’ (an organised political system with sovereignty over a defined space, with borders agreed by other nation-states). Try to imagine a world without countries – you can’t. Our sense of who we are, our loyalties, our rights and obligations, are bound up in them.

Which is all rather odd, since they’re not really that old. Until the mid-19th century, most of the world was a sprawl of empires, unclaimed land, city-states and principalities, which travellers crossed without checks or passports. As industrialisation made societies more complex, large centralised bureaucracies grew up to manage them. Those governments best able to unify their regions, store records, and coordinate action (especially war) grew more powerful vis-à-vis their neighbours. Revolutions – especially in the United States (1776) and France (1789) – helped to create the idea of a commonly defined ‘national interest’, while improved communications unified language, culture and identity. Imperialistic expansion spread the nation-state model worldwide, and by the middle of the 20th century it was the only game in town. There are now 193 nation-states ruling the world.

But the nation-state with its borders, centralised governments, common people and sovereign authority is increasingly out of step with the world. And as Karl Marx observed, if you change the dominant mode of production that underpins a society, the social and political structure will change too.

The case against the nation-state is hardly new. Twenty years ago, many were prophesying its imminent demise. Globalisation, said the futurists, was chipping away at nation-states’ power to enforce change. Businesses, finance and people could up sticks and leave. The exciting, new internet seemed to herald a borderless, free, identity-less future. And climate change, internet governance and international crime all seemed beyond the nation-state’s abilities. It seemed too small to handle international challenges; and too lumbering to tinker with local problems. Voters were quick to spot all this and stopped bothering to vote, making matters worse. In 1995, two books both titled The End of the Nation State – one by the former French diplomat Jean-Marie Guéhenno, the other by the Japanese organisational theorist Kenichi Ohmae – prophesied that power would head up to multinational bodies such as the European Union or the United Nations, or down to regions and cities.

Reports of its death were greatly exaggerated, and the end-of-the-nation-state theory itself died at the turn of the millennium. But now it’s back, and this time it might be right.

There were only tens of millions of people online in 1995 when the nation-state was last declared dead. In 2015, that number had grown to around 3 billion; by 2020, it will be more than 4 billion. (And more than 20 billion internet-connected devices.) Digital technology doesn’t really like the nation-state. John Perry Barlow’s ‘Declaration of the Independence of Cyberspace’ (1996) sums it up well: the internet is a technology built on libertarian principles. Censorship-free, decentralised and borderless. And now ubiquitous.

This is an enormous pain for the nation-state in all sorts of ways. It’s now possible for the British National Health Service to be targeted by ransomware launched in North Korea, and there are few ways to stop it or bring perpetrators to justice. App technology such as Uber and Deliveroo has helped to produce a sudden surge in the gig economy, which is reckoned to cost the government £3.5 billion a year by 2020-1. There are already millions of people using bitcoin and blockchain technologies, explicitly designed to wrestle control of the money supply from central banks and governments, and their number will continue to grow. It’s also infusing us with new values, ones that are not always national in nature: a growing number of people see themselves as ‘global’ citizens. . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:15 am

The Seven Deadly Sins of Predicting the Future of AI


Rodney Brooks writes:

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs.

As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.

Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence where he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the chicken little “the sky is falling” calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better.

Today, there is a story in MarketWatch that robots will take half of today’s jobs in 10 to 20 years. It even has a graphic to prove the numbers.

[big graphic here – LG]

The claims are ludicrous. [I try to maintain professional language, but sometimes…] For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.

Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others, need to spend time pushing back on them?

Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many of the predictions about our AI future. I am going to first list the four general topic areas of such predictions that I notice, along with a brief assessment of where I think they currently stand.

A. Artificial General Intelligence. Research on AGI is an attempt to distinguish a thinking entity from current day AI technology such as Machine Learning. Here the idea is that we will build autonomous agents that operate much like beings in the world. This has always been my own motivation for working in robotics and AI, but the recent successes of AI are not at all like this.

Some people think that all AI is an instance of AGI, but as the word “general” would imply, AGI aspires to be much more general than current AI. Interpreting current AI as an instance of AGI makes it seem much more advanced and all-encompassing than it really is.

Modern day AGI research is not doing at all well on being either general or getting to an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least fifty years. Alternate areas such as Artificial Life, and Simulation of Adaptive Behavior did make some progress in getting full creatures in the eighties and nineties (these two areas and communities were where I spent my time during those years), but they have stalled.

My own opinion is that of course this is possible in principle. I would never have started working on Artificial Intelligence if I did not believe that. However perhaps we humans are just not smart enough to figure out how to do this–see my remarks on humility in my post on the current state of Artificial Intelligence suitable for deployment in robotics. Even if it is possible I personally think we are far, far further away from understanding how to build AGI than many other pundits might say.

[Some people refer to “an AI”, as though all AI is about being an autonomous agent. I think that is confusing, and just as the natives of San Francisco do not refer to their city as “Frisco”, no serious researchers in AI refer to “an AI”.]

B. The Singularity. This refers to the idea that eventually an AI based intelligent entity, with goals and purposes, will be better at AI research than us humans are. Then, with an unending Moore’s law mixed in making computers faster and faster, Artificial Intelligence will take off by itself, and, as in speculative physics going through the singularity of a black hole, we have no idea what things will be like on the other side.

People who “believe” in the Singularity are happy to give post-Singularity AI incredible power, as what will happen afterwards is quite unpredictable. I put the word believe in scare quotes as belief in the singularity can often seem like a religious belief. For some it comes with an additional benefit of being able to upload their minds to an intelligent computer, and so get eternal life without the inconvenience of having to believe in a standard sort of supernatural God. The ever powerful technologically based AI is the new God for them. Techno religion!

Some people have very specific ideas about when the day of salvation will come–followers of one particular Singularity prophet believe that it will happen in the year 2029, as it has been written.

This particular error of prediction is very much driven by exponentialism, and I will address that as one of the seven common mistakes that people make.

Even if there is a lot of computer power around it does not mean we are close to having programs that can do research in Artificial Intelligence, and rewrite their own code to get better and better.

Here is where we are on programs that can understand computer code. We currently have no programs that can understand a one page program as well as a new student in computer science can understand such a program after just one month of taking their very first class in programming. That is a long way from AI systems being better at writing AI systems than humans are.

Here is where we are on simulating brains at the neural level, the other methodology that Singularity worshipers often refer to. For about thirty years we have known the full “wiring diagram” of the 302 neurons in the worm C. elegans, along with the 7,000 connections between them. This has been incredibly useful for understanding how behavior and neurons are linked. But it has been a thirty-year study with hundreds of people involved, all trying to understand just 302 neurons. And according to the OpenWorm project trying to simulate C. elegans bottom up, they are not yet halfway there. To simulate a human brain with 100 billion neurons and a vast number of connections is quite a way off. So if you are going to rely on the Singularity to upload yourself to a brain simulation I would try to hold off on dying for another couple of centuries.

Just in case I have not made my own position on the Singularity clear, I refer you to my comments in a regularly scheduled look at the event by the magazine IEEE Spectrum. Here is the 2008 version, and in particular a chart of where the players stand and what they say. Here is the 2017 version, and in particular a set of boxes of where the players stand and what they say. And yes, I do admit to being a little snarky in 2017…

C. Misaligned Values.  . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:08 am

How the Netherlands became an agricultural giant


Frank Viviano reports in the National Geographic:

In a potato field near the Netherlands’ border with Belgium, Dutch farmer Jacob van den Borne is seated in the cabin of an immense harvester before an instrument panel worthy of the starship Enterprise.

From his perch 10 feet above the ground, he’s monitoring two drones—a driverless tractor roaming the fields and a quadcopter in the air—that provide detailed readings on soil chemistry, water content, nutrients, and growth, measuring the progress of every plant down to the individual potato. Van den Borne’s production numbers testify to the power of this “precision farming,” as it’s known. The global average yield of potatoes per acre is about nine tons. Van den Borne’s fields reliably produce more than 20.

That copious output is made all the more remarkable by the other side of the balance sheet: inputs. Almost two decades ago, the Dutch made a national commitment to sustainable agriculture under the rallying cry “Twice as much food using half as many resources.” Since 2000, van den Borne and many of his fellow farmers have reduced dependence on water for key crops by as much as 90 percent. They’ve almost completely eliminated the use of chemical pesticides on plants in greenhouses, and since 2009 Dutch poultry and livestock producers have cut their use of antibiotics by as much as 60 percent.

One more reason to marvel: The Netherlands is a small, densely populated country, with more than 1,300 inhabitants per square mile. It’s bereft of almost every resource long thought to be necessary for large-scale agriculture. Yet it’s the globe’s number two exporter of food as measured by value, second only to the United States, which has 270 times its landmass. How on Earth have the Dutch done it?

Seen from the air, the Netherlands resembles no other major food producer—a fragmented patchwork of intensely cultivated fields, most of them tiny by agribusiness standards, punctuated by bustling cities and suburbs. In the country’s principal farming regions, there’s almost no potato patch, no greenhouse, no hog barn that’s out of sight of skyscrapers, manufacturing plants, or urban sprawl. More than half the nation’s land area is used for agriculture and horticulture.

Banks of what appear to be gargantuan mirrors stretch across the countryside, glinting when the sun shines and glowing with eerie interior light when night falls. They are Holland’s extraordinary greenhouse complexes, some of them covering 175 acres.

These climate-controlled farms enable a country located a scant thousand miles from the Arctic Circle to be a global leader in exports of a fair-weather fruit: the tomato. The Dutch are also the world’s top exporter of potatoes and onions and the second largest exporter of vegetables overall in terms of value. More than a third of all global trade in vegetable seeds originates in the Netherlands. . .

Continue reading. There’s lots more.

The photos at the link are amazing.

Written by LeisureGuy

8 September 2017 at 4:34 pm
