Later On

A blog written for those whose interests more or less match mine.

Archive for September 10th, 2017

New Study—Like Several Old Studies—Favors Fat Over Carbs


Nicholas Bakalar reports in the NY Times:

High carbohydrate intake is associated with a higher risk of mortality, and high fat intake with a lower risk, researchers report.

An international team of scientists studied diet and mortality in 135,335 people between 35 and 70 years old in 18 countries, following them for an average of more than seven years. Diet information depended on self-reports, and the scientists controlled for factors including age, sex, smoking, physical activity and body mass index. The study is in The Lancet.

Compared with people who ate the lowest 20 percent of carbohydrates, those who ate the highest 20 percent had a 28 percent increased risk of death. But high carbohydrate intake was not associated with cardiovascular death.

People with the highest 20 percent in total fat intake — an average of 35.3 percent of calories from fat — had about a 23 percent reduced risk of death compared with the lowest 20 percent (an average of 10.6 percent of calories from fat). Higher intakes of saturated fat, polyunsaturated fat and monounsaturated fat were all associated with lower mortality. Higher fat diets were also associated with a lower risk of stroke.

“Guidelines recommend low saturated fat, and some recommend really low amounts,” said a co-author, Andrew Mente, an epidemiologist at McMaster University in Ontario. “Our study, which captures intake at the lowest levels, shows that this may be harmful.” . . .

Continue reading.

See also: “A Decades-Old Study, Rediscovered, Challenges Advice on Saturated Fat.”

And I always recommend Nina Teicholz’s book The Big Fat Surprise.

Written by LeisureGuy

10 September 2017 at 8:33 pm

Saying goodbyes when you move


We’ve said goodbye to Passionfish, and I’ve said goodbye to Ocean Sushi and Tillie Gorst’s. Goodbyes will be frequent this coming week, the final week before the move, but let me note that tonight I said goodbye to the bottle of Hillrock Single Malt Whisky (Barrel 4, master distiller Dave Pickerell) that my son gave me for a birthday some years back. The final dram was timed perfectly.

But not so perfectly as the 100-sheet pack of cut parchment paper from King Arthur Flour. I bought it more than a year ago, and this week I used the final sheet on the final batch of breakfast bites. It came out even! That’s a very good omen. As, come to think of it, is the bird that flew full-speed into the glass balcony door and knocked itself senseless (so much so that The Wife thought it dead), but then shook itself off, got to its feet, and ventured forth once more.

It’s a good move, I say. An adventure.

We’re having Nigella Lawson’s Tarragon Chicken again, and even on this second try I find I know a lot more about how to go about it. Chop the tarragon leaves, for example.

It’s in the oven now, and from last time I know it takes 40 minutes.

Written by LeisureGuy

10 September 2017 at 5:01 pm

Posted in Daily life

The Myth of Deep Throat


Raw ambition more than selfless duty, according to this Politico report by Max Holland:

Columnists, talking heads and op-ed writers are holding open auditions for a role that presumably needs to be filled if we are ever going to get to the bottom of what seems fated to be dubbed, for better or worse, Russiagate: a new Deep Throat.

I get it. In the years since Watergate, the Washington Post’s famous golden source—later revealed to be former FBI No. 2 executive W. Mark Felt—has become practically synonymous with the ideal of the noble leaker. The original Deep Throat “was instrumental in thwarting the conspiracy and bringing [President Richard] Nixon down,” Harry Litman, a former deputy assistant attorney general, approvingly wrote in the Los Angeles Times in May. “Was it wrong for Deep Throat, as FBI official Mark Felt was then known, to guide the investigation?” Washington Post columnist Margaret Sullivan asked in June, in the midst of a column praising leaks and anonymous sources, and inviting more. New York magazine columnist Frank Rich has gone a step further and already announced his casting choice: James Comey is today’s Deep Throat.

The unarticulated presumption, which Sullivan, Litman and Rich are not alone in making, is that Felt—the FBI’s deputy director in June 1972, and subsequently the parking-garage interlocutor who steered Bob Woodward and Carl Bernstein to reportorial heights—was an honorable, selfless whistleblower intent on exposing the lawlessness rampant in the Nixon White House. Or, as David Remnick spelled out in the New Yorker—echoing Deep Throat’s original hagiographers, Woodward and Bernstein—Felt “believed that the Nixon administration was corrupt, paranoid and trying to infringe on the independence of the bureau.” The president and his top aides ran, Felt believed, “a criminal operation out of the White House, and [Felt] risked everything to guide” the Post reporters. A new biopic about Felt, starring Liam Neeson, is due out on September 29 and shows every sign of continuing to portray Deep Throat as a profound patriot and dedicated FBI lifer.

But here’s a heretical thought: Mark Felt was no hero. Getting rid of Nixon was the last thing Felt ever wanted to accomplish; indeed, he was banking on Nixon’s continuation in office to achieve his one and only aim: to reach the top of the FBI pyramid and become director. Felt didn’t help the media for the good of the country, he used the media in service of his own ambition. Things just didn’t turn out anywhere close to the way he wanted.

Only recently, more than four decades after Nixon’s downfall, has it become possible to reconstruct Felt’s design and what really happened during those fateful six months following the Watergate break-in. Doing so requires burrowing through a great number of primary documents and government records against the backdrop of a vast secondary literature. Nixon’s surreptitious tape recordings rank first in importance, but only mark the starting point. One has to also research documents from the FBI’s vast Watergate investigation; the bureau’s subsequent internal leak investigation; records from the Watergate Special Prosecution Force; documents from Felt’s own FBI file; and lastly, two unintentionally rewarding books: Mark Felt’s original 1979 memoir, The FBI Pyramid, and the slightly reworked version published in 2006, A G-Man’s Life.

What you’ll end up with is the real story of Deep Throat. And you might be left with this realization:  . . .

Continue reading.


Written by LeisureGuy

10 September 2017 at 3:32 pm

Most of America’s landscape is rural. But journalists don’t go there very often.


Margaret Sullivan has some interesting observations in the Washington Post:

Washington journalists may be obsessed with President Trump’s Russia connections. Silicon Valley reporters may be focused on the next big tech merger. And New York media types may be hyperventilating about Vanity Fair magazine’s editor stepping down.

But while most American journalists stay inside their urban bubble, the Texas Observer’s staff is laboring in the fields.

Poisonous crop-dusting is on their minds. So are the wildfires that result when agricultural land goes fallow. And so are the many communities that don’t have a hospital within several hours’ drive — or even a nearby doctor.

“There are such fantastic stories to be found,” said Forrest Wilder, editor of the Austin-based Observer, which recently became a mostly digital operation, still publishing in print six times a year.

Now, with funding from the Emerson Collective, the nonprofit group founded by Laurene Powell Jobs, the Observer has added a full-time rural reporter, Christopher Collins, to its newsroom staff of about a dozen. The Observer will supplement his work with a network of freelancers.

“We’re going to really look for stories in far-flung, underreported — or unreported — areas,” Wilder said.

There are plenty of opportunities: 3.8 million of some 25 million Texans live in rural areas.

The underreporting in rural areas is a nationwide phenomenon, with an increasing number of journalists clustered in New York, Washington and on the West Coast.

And as metro dailies shrink their staffs, rural bureaus are often one of the first casualties, said Al Cross, who runs the Institute for Rural Journalism and Community Issues at the University of Kentucky.

“Larger regional papers used to do this as a public service even though there was no advertising base for it,” Cross told me.

Now, with less revenue coming in as a result of print advertising’s sharp decline, and with year after year of newsroom buyouts taking their toll, rural reporting has taken a hit.

“It’s triage,” Cross said of newsroom decisions about what to cover. Rural reporting rarely is seen as the most critical mission.

But that leaves huge swaths of the United States without coverage. And the buying-up of small papers by chains, more beholden to stockholders than to local concerns, has hollowed out the journalism even more.

The term “news deserts” aptly describes the results: In many communities, there’s no one to cover government meetings, hold officials accountable, or report on events, large or small.

That’s a real problem. Consider the succinct question-and-answer on the Kentucky institute’s website:

“Why is rural journalism important? Because 16 percent of Americans, some 63 million people, are rural, and so is three-fourths of the national landscape.” . . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 2:44 pm

Posted in Daily life, Media

Block chain: Is the GDPR out of date already?


This is a bit Euro-centric, but it illustrates how decentralization is proceeding apace (as a comment, too, on my preceding post on the rise of the city-state). In general, decentralized systems (e.g., the internet) are more robust than centralized systems.

Kingsley Napley writes at Lexology.com:

The General Data Protection Regulation (“GDPR”) amounts to a significant overhaul of existing data protection regulation and is designed to be ‘technology neutral’. However, how the GDPR will cope with emerging block chain technology and a move towards the decentralisation of data storage remains to be seen.

What is a block chain?

Block chain is the underlying technology behind platforms such as Bitcoin and Ethereum. Whilst block chains are best known for their use in the field of ‘crypto currencies’, they have a broad range of potential applications such as storing medical data, supply chain management or social networking.

The term ‘block chain’ has no single definition but it is generally used to refer to a way of recording transactions across a network of computers. Transactions sent to the network are grouped into ‘blocks’ which are time stamped and linked to the previous block. Linking each block to the previous block confirms the integrity of the chain all the way back to the first block. Information on the block is encrypted and protected through cryptography.

The block chain is stored on a network and no centralised ‘official copy’ exists. The process of adding transactions to the chain is performed by mining ‘nodes’. Mining is essentially a record keeping service whereby miners compete to collect in and verify transactions.
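[To make the hash-linking concrete, here is a minimal Python sketch, my own illustration rather than anything from the article: each block is time-stamped, carries the hash of the previous block, and tampering with any block breaks every link after it. – LG]

```python
import hashlib
import json
import time

def block_hash(contents: dict) -> str:
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Group transactions into a time-stamped block linked to its predecessor."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = block_hash(block)  # computed before the hash field is added
    return block

def chain_is_valid(chain: list) -> bool:
    """Re-derive every hash and check every link back to the first block."""
    for i, block in enumerate(chain):
        contents = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(contents):
            return False  # this block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

# A tiny three-block chain: a "genesis" block plus two blocks of transactions.
chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["alice pays bob 5"], prev_hash=chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 2"], prev_hash=chain[-1]["hash"]))
print(chain_is_valid(chain))  # True; change any block and it becomes False
```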

Who are the data controllers?

The GDPR continues to use the existing concepts of data controllers (who determine the purposes for which and the manner in which any personal data are to be processed) and data processors. In addition to introducing penalties for data processors, it imposes even more stringent obligations on the controller of personal data and drastically increases the potential penalties for non-compliance.

In a decentralised system where there is no individual entity in control of the data, it is difficult to identify who the obligations are placed upon and, even once the controller has been identified, enforcement does not seem feasible. For example, in the case of Bitcoin, the miners who verify transactions and build the block chain may be deemed to be the data controllers. Identifying each of these individuals (a recent study found that there are likely to be over 100,000) and then taking action against them is clearly not possible.

What laws apply to a data controller or data processor?

The GDPR seeks to extend the territorial reach of EU data protection law. The Regulation will apply to EU-based controllers and processors or entities processing an EU resident’s personal data in connection with goods or services offered to them or tracking the behaviour of individuals in the EU.

Applications of this technology are broad and in many cases it is simply not possible to ascertain the identity or the location of the data controller, data processor or even the data subject. In such a situation, determining the appropriate choice of law may not be straightforward and regulators may struggle to argue that they have the jurisdiction to take enforcement action.

How does this fit in with the right to be forgotten? . . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:32 am

Return of the city-state: Nation-states were late arrivals and may be gone by 2100


Jamie Bartlett, director of the Centre for the Analysis of Social Media at the think-tank Demos in London, writes in Aeon:

If you’d been born 1,500 years ago in southern Europe, you’d have been convinced that the Roman empire would last forever. It had, after all, been around for 1,000 years. And yet, following a period of economic and military decline, it fell apart. By 476 CE it was gone. To the people living under the mighty empire, these events must have been unthinkable. Just as they must have been for those living through the collapse of the Pharaoh’s rule or Christendom or the Ancien Régime.

We are just as deluded that our model of living in ‘countries’ is inevitable and eternal. Yes, there are dictatorships and democracies, but the whole world is made up of nation-states. This means a blend of ‘nation’ (people with common attributes and characteristics) and ‘state’ (an organised political system with sovereignty over a defined space, with borders agreed by other nation-states). Try to imagine a world without countries – you can’t. Our sense of who we are, our loyalties, our rights and obligations, are bound up in them.

Which is all rather odd, since they’re not really that old. Until the mid-19th century, most of the world was a sprawl of empires, unclaimed land, city-states and principalities, which travellers crossed without checks or passports. As industrialisation made societies more complex, large centralised bureaucracies grew up to manage them. Those governments best able to unify their regions, store records, and coordinate action (especially war) grew more powerful vis-à-vis their neighbours. Revolutions – especially in the United States (1776) and France (1789) – helped to create the idea of a commonly defined ‘national interest’, while improved communications unified language, culture and identity. Imperialistic expansion spread the nation-state model worldwide, and by the middle of the 20th century it was the only game in town. There are now 193 nation-states ruling the world.

But the nation-state with its borders, centralised governments, common people and sovereign authority is increasingly out of step with the world. And as Karl Marx observed, if you change the dominant mode of production that underpins a society, the social and political structure will change too.

The case against the nation-state is hardly new. Twenty years ago, many were prophesying its imminent demise. Globalisation, said the futurists, was chipping away at nation-states’ power to enforce change. Businesses, finance and people could up sticks and leave. The exciting, new internet seemed to herald a borderless, free, identity-less future. And climate change, internet governance and international crime all seemed beyond the nation-state’s abilities. It seemed too small to handle international challenges, and too lumbering to tinker with local problems. Voters were quick to spot all this and stopped bothering to vote, making matters worse. In 1995, two books both titled The End of the Nation State – one by the former French diplomat Jean-Marie Guéhenno, the other by the Japanese organisational theorist Kenichi Ohmae – prophesied that power would head up to multinational bodies such as the European Union or the United Nations, or down to regions and cities.

Reports of its death were greatly exaggerated, and the end-of-the-nation-state theory itself died at the turn of the millennium. But now it’s back, and this time it might be right.

There were only tens of millions of people online in 1995 when the nation-state was last declared dead. In 2015, that number had grown to around 3 billion; by 2020, it will be more than 4 billion. (And more than 20 billion internet-connected devices.) Digital technology doesn’t really like the nation-state. John Perry Barlow’s ‘Declaration of the Independence of Cyberspace’ (1996) sums it up well: the internet is a technology built on libertarian principles. Censorship-free, decentralised and borderless. And now ubiquitous.

This is an enormous pain for the nation-state in all sorts of ways. It’s now possible for the British National Health Service to be targeted by ransomware launched in North Korea, and there are few ways to stop it or bring perpetrators to justice. App technology such as Uber and Deliveroo has helped to produce a sudden surge in the gig economy, which is reckoned to cost the government £3.5 billion a year by 2020-1. There are already millions of people using bitcoin and blockchain technologies, explicitly designed to wrestle control of the money supply from central banks and governments, and their number will continue to grow. It’s also infusing us with new values, ones that are not always national in nature: a growing number of people see themselves as ‘global’ citizens. . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:15 am

The Seven Deadly Sins of Predicting the Future of AI


Rodney Brooks writes:

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs.

As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.

Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence where he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the chicken little “the sky is falling” calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better.

Today, there is a story in MarketWatch that robots will take half of today’s jobs in 10 to 20 years. It even has a graphic to prove the numbers.

[big graphic here – LG]

The claims are ludicrous. [I try to maintain professional language, but sometimes…] For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.

Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others, need to spend time pushing back on them?

Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many of the predictions about our AI future. I am going to first list the four general topic areas of such predictions that I notice, along with a brief assessment of where I think they currently stand.

A. Artificial General Intelligence. Research on AGI is an attempt to distinguish a thinking entity from current day AI technology such as Machine Learning. Here the idea is that we will build autonomous agents that operate much like beings in the world. This has always been my own motivation for working in robotics and AI, but the recent successes of AI are not at all like this.

Some people think that all AI is an instance of AGI, but as the word “general” would imply, AGI aspires to be much more general than current AI. Interpreting current AI as an instance of AGI makes it seem much more advanced and all-encompassing than it really is.

Modern day AGI research is not doing at all well on being either general or getting to an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least fifty years. Alternate areas such as Artificial Life, and Simulation of Adaptive Behavior did make some progress in getting full creatures in the eighties and nineties (these two areas and communities were where I spent my time during those years), but they have stalled.

My own opinion is that of course this is possible in principle. I would never have started working on Artificial Intelligence if I did not believe that. However perhaps we humans are just not smart enough to figure out how to do this–see my remarks on humility in my post on the current state of Artificial Intelligence suitable for deployment in robotics. Even if it is possible I personally think we are far, far further away from understanding how to build AGI than many other pundits might say.

[Some people refer to “an AI”, as though all AI is about being an autonomous agent. I think that is confusing, and just as the natives of San Francisco do not refer to their city as “Frisco”, no serious researchers in AI refer to “an AI”.]

B. The Singularity. This refers to the idea that eventually an AI based intelligent entity, with goals and purposes, will be better at AI research than us humans are. Then, with an unending Moore’s law mixed in making computers faster and faster, Artificial Intelligence will take off by itself, and, as in speculative physics going through the singularity of a black hole, we have no idea what things will be like on the other side.

People who “believe” in the Singularity are happy to give post-Singularity AI incredible power, as what will happen afterwards is quite unpredictable. I put the word believe in scare quotes as belief in the singularity can often seem like a religious belief. For some it comes with an additional benefit of being able to upload their minds to an intelligent computer, and so get eternal life without the inconvenience of having to believe in a standard sort of supernatural God. The ever powerful technologically based AI is the new God for them. Techno religion!

Some people have very specific ideas about when the day of salvation will come–followers of one particular Singularity prophet believe that it will happen in the year 2029, as it has been written.

This particular error of prediction is very much driven by exponentialism, and I will address that as one of the seven common mistakes that people make.
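[A back-of-the-envelope illustration of what exponentialism assumes, my numbers rather than Brooks’s: if compute doubles every 18 months without end, a 30-year extrapolation multiplies it by 2^20, roughly a millionfold. A few lines of Python make the compounding explicit. – LG]

```python
# What "unending Moore's law" assumes: compute doubling every 18 months.
doubling_period_years = 1.5
for years in (3, 15, 30):
    doublings = years / doubling_period_years
    factor = 2 ** doublings
    print(f"after {years:2d} years: {factor:,.0f}x the compute")
# after  3 years: 4x the compute
# after 15 years: 1,024x the compute
# after 30 years: 1,048,576x the compute
```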

Even if there is a lot of computer power around it does not mean we are close to having programs that can do research in Artificial Intelligence, and rewrite their own code to get better and better.

Here is where we are on programs that can understand computer code. We currently have no programs that can understand a one page program as well as a new student in computer science can understand such a program after just one month of taking their very first class in programming. That is a long way from AI systems being better at writing AI systems than humans are.

Here is where we are on simulating brains at the neural level, the other methodology that Singularity worshipers often refer to. For about thirty years we have known the full “wiring diagram” of the 302 neurons in the worm C. elegans, along with the 7,000 connections between them. This has been incredibly useful for understanding how behavior and neurons are linked. But it has been a thirty-year study with hundreds of people involved, all trying to understand just 302 neurons. And according to the OpenWorm project trying to simulate C. elegans bottom up, they are not yet halfway there. To simulate a human brain with 100 billion neurons and a vast number of connections is quite a way off. So if you are going to rely on the Singularity to upload yourself to a brain simulation, I would try to hold off on dying for another couple of centuries.

Just in case I have not made my own position on the Singularity clear, I refer you to my comments in a regularly scheduled look at the event by the magazine IEEE Spectrum. Here is the 2008 version, and in particular a chart of where the players stand and what they say. Here is the 2017 version, and in particular a set of boxes of where the players stand and what they say. And yes, I do admit to being a little snarky in 2017…

C. Misaligned Values.  . .

Continue reading.

Written by LeisureGuy

10 September 2017 at 11:08 am
