Later On

A blog written for those whose interests more or less match mine.

Archive for November 2013

Interesting take on what the 2012 election reveals about US government

leave a comment »

Elias Isquith has a very interesting column in Salon. From that column:

. . . While most people at least intuitively understand that big-time political campaigns are financed largely by the very wealthy, Ferguson and his co-authors’ paper reveals the degree to which these national operations are funded by a vanishingly small number of people. “We really are dealing with a system that is of, by, and for the one percent — or the one-and-a-half percent,” Ferguson told Salon in a recent interview. And the numbers bear him out. Assuming that contributions over $500 come largely from the one percent, the paper finds that no less than 59 percent of Obama’s funding, and 79 percent of Romney’s, emanates from that small sliver of society. This contrasts rather jarringly with the popular image of the 2012 campaign as one pitting Obama’s middle-class constituency against Romney’s plutocratic backers. It was more of a plutocrat vs. plutocrat affair.

Even on that score, however, the lines of demarcation are fuzzy at best. It’s undeniable that Romney was more popular among big business than Obama, but the differences between the two were smaller than you’d imagine. In fact, the authors “suspect” that “the president probably enjoyed substantially higher levels of support within big business than most other modern Democratic presidential candidates, even those running for reelection.” Obama got walloped when it comes to what you could call Koch brother industries — oil, gas, plastics, etc. — but he did OK with Wall Street and, especially, the telecom and tech industries.

It’s that last point — Obama’s popularity among the industries that make up the surveillance state — that forms the most surprising and relevant takeaway of the paper. In the wake of the ongoing revelations from Edward Snowden of a national security state-turned-surveillance behemoth, the level of financial support the president enjoys from the industries working with the government to spy on Americans starts to make sense. But to compare this synchronicity to Obama’s 2008 campaign, and its pledges to rein in and civilize the Bush-Cheney post-9/11 national security leviathan, is to risk vertigo.

The distressing conclusion to be drawn from all this is that those interested in truly curtailing the surveillance state will find few friends within the two-party system. Democrats, after all, were supposed to be the ones who were more cautious, pragmatic, and civil liberties-minded when it came to surveillance. Voicing a sentiment that’s no doubt still held by many, if not most, Ferguson told Salon that, prior to his research, he “thought there was more distance between the Democrats and the Republicans on the National Security State.” That distance, if it ever was significant, is certainly gone now. . .

Read the whole thing.

Written by LeisureGuy

30 November 2013 at 1:14 pm

Posted in Business, Election, Politics

More complex than it needs to be: Why?

leave a comment »

I was thinking about the two previous math articles, and the awe-inspiring vistas of mathematics, and realized that I had a kind of math-nerd aspect to my personality, which led me to contemplate the complexity of a person’s personality. I know that people have many similarities, and thus we can talk about, e.g., how “psychology” in general works. But still: each person has a complex personality, and each personality (and psychology) is unique. That’s easy to see: how many of you have met an identical personality in two bodies: the same “person”, just different bodies? No one? Right.

So: why all this complexity? Certainly anything that complex comes with a cost—I suppose the cost of pushing uphill against entropy. To create and maintain such a complexity has to cost something, and evolutionary bookkeeping suggests that there must be a benefit that repays the cost: perhaps all that personal complexity is just the human version of the bowerbird: they build displays, we build personalities, and both activities are to attract mates and reproduce.

I can see how that could drive movement toward complexity in Darwin’s algorithm: our personalities are like the peacock’s tail-feather display: both complex, and both serving mainly to attract mates.

So I guess in an evolutionary context, it’s not “more complex than it needs to be”: it has to be that complex to compete.

Written by LeisureGuy

30 November 2013 at 12:23 pm

Posted in Daily life

Crowd-sourcing shrinking the prime gap

leave a comment »

I blogged earlier about the initial discovery, but lots has happened since, reported in Quanta magazine by Erica Klarreich:

On May 13, an obscure mathematician — one whose talents had gone so unrecognized that he had worked at a Subway restaurant to make ends meet — garnered worldwide attention and accolades from the mathematics community for settling a long-standing open question about prime numbers, those numbers divisible by only one and themselves. Yitang Zhang, a lecturer at the University of New Hampshire, showed that even though primes get increasingly rare as you go further out along the number line, you will never stop finding pairs of primes separated by at most 70 million. His finding was the first time anyone had managed to put a finite bound on the gaps between prime numbers, representing a major leap toward proving the centuries-old twin primes conjecture, which posits that there are infinitely many pairs of primes separated by only two (such as 11 and 13).
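A quick aside for readers who like to tinker: here’s a minimal Python sketch (my own illustration, using the sympy library; it has nothing to do with Zhang’s actual proof techniques) that hunts along the number line for pairs of consecutive primes separated by at most a given bound:

    from sympy import nextprime

    def close_prime_pairs(bound, start, count):
        """Yield up to `count` pairs of consecutive primes (p, q)
        with q - p <= bound, searching upward from `start`."""
        found = 0
        p = nextprime(start)
        while found < count:
            q = nextprime(p)
            if q - p <= bound:
                yield (p, q)
                found += 1
            p = q

    # Zhang's theorem guarantees this hunt never comes up empty when
    # bound = 70,000,000, no matter how large `start` is; the twin
    # primes conjecture claims the same for bound = 2.
    print(list(close_prime_pairs(2, 10**6, 5)))
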

In the months that followed, Zhang found himself caught up in a whirlwind of activity and excitement: He has lectured on his work at many of the nation’s preeminent universities, has received offers of jobs from top institutions in China and Taiwan and a visiting position at the Institute for Advanced Study in Princeton, N.J., and has been told that he will be promoted to full professor at the University of New Hampshire.


Meanwhile, Zhang’s work raised a question: Why 70 million? There is nothing magical about that number — it served Zhang’s purposes and simplified his proof. Other mathematicians quickly realized that it should be possible to push this separation bound quite a bit lower, although not all the way down to two.

By the end of May, mathematicians had uncovered simple tweaks to Zhang’s argument that brought the bound below 60 million. A May 30 blog post by Scott Morrison of the Australian National University in Canberra ignited a firestorm of activity, as mathematicians vied to improve on this number, setting one record after another. By June 4, Terence Tao of the University of California, Los Angeles, a winner of the Fields Medal, mathematics’ highest honor, had created a “Polymath project,” an open, online collaboration to improve the bound that attracted dozens of participants.

For weeks, the project moved forward at a breathless pace. “At times, the bound was going down every thirty minutes,” Tao recalled. By July 27, the team had succeeded in reducing the proven bound on prime gaps from 70 million to 4,680.

Now, a preprint posted to arXiv.org on November 19 by James Maynard, a postdoctoral researcher working on his own at the University of Montreal, has upped the ante. Just months after Zhang announced his result, Maynard has presented an independent proof that pushes the gap down to 600. A new Polymath project is in the planning stages, to try to combine the collaboration’s techniques with Maynard’s approach to push this bound even lower.

“The community is very excited by this new progress,” Tao said.

Maynard’s approach applies not just to pairs of primes, but to triples, quadruples and larger collections of primes. He has shown that you can find bounded clusters of any chosen number of primes infinitely often as you go out along the number line. (Tao said he independently arrived at this result at about the same time as Maynard.)
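In standard notation (my gloss, not the article’s), writing $p_n$ for the $n$-th prime, the chain of results reads:

    % Zhang (May 2013), then the Polymath project, then Maynard (Nov. 2013):
    \liminf_{n\to\infty}\,(p_{n+1} - p_n) \;\le\; 70{,}000{,}000 \;\to\; 4{,}680 \;\to\; 600

    % Maynard's generalization: for every m, clusters of m + 1 primes
    % recur infinitely often within some bounded window:
    \forall m \ge 1:\quad \liminf_{n\to\infty}\,(p_{n+m} - p_n) \;<\; \infty

    % The twin primes conjecture itself:
    \liminf_{n\to\infty}\,(p_{n+1} - p_n) \;=\; 2
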

Zhang’s work and, to a lesser degree, Maynard’s fits the archetype of the solitary mathematical genius, working for years in the proverbial garret until he is ready to dazzle the world with a great discovery. The Polymath project couldn’t be more different — fast and furious, massively collaborative, fueled by the instant gratification of setting a new world record.

For Zhang, working alone and nearly obsessively on a single hard problem brought a huge payoff. Would he recommend that approach to other mathematicians? “It’s hard to say,” he said. “I choose my own way, but it’s only my way.”

Tao actively discourages young mathematicians from heading down such a path, which he has called “a particularly dangerous occupational hazard” that has seldom worked well, except for established mathematicians with a secure career and a proven track record. However, he said in an interview, the solitary and collaborative approaches each have something to offer mathematics.

“It’s important to have people who are willing to work in isolation and buck the conventional wisdom,” Tao said. Polymath, by contrast, is “entirely groupthink.” Not every math problem would lend itself to such collaboration, but this one did.

Combing the Number Line

Zhang proved his result by going fishing for prime numbers using a mathematical tool called a k-tuple, which you can visualize as a comb with some of its teeth snapped off. If you position such a comb along the number line starting at any chosen spot, the remaining teeth will point to some collection of numbers.

Zhang focused on snapped combs whose remaining teeth satisfy a divisibility property called “admissibility.” He showed that if you go fishing for primes using any admissible comb with at least 3,500,000 teeth, there are infinitely many positions along the number line where the comb will catch at least two prime numbers. Next, he showed how to make an admissible comb with at least 3,500,000 remaining teeth by starting with a 70-million-tooth comb and snapping off all but its prime teeth. Such a comb must catch two primes again and again, he concluded, and the primes it catches are separated by at most 70 million.
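The “admissibility” condition is concrete enough to check by computer: a comb (a k-tuple of tooth positions) is admissible if, for every prime p, its teeth leave at least one residue class mod p uncovered. Only primes p ≤ k matter, since k teeth can never cover more than k residue classes. A small sketch (again my own illustration, using sympy):

    from sympy import primerange

    def is_admissible(teeth):
        """Check whether a k-tuple (comb) is admissible: for every prime
        p <= k, the teeth must miss at least one residue class mod p.
        Primes p > k need no check, since k teeth can't cover p classes."""
        k = len(teeth)
        for p in primerange(2, k + 1):
            residues = {t % p for t in teeth}
            if len(residues) == p:  # every residue class mod p is hit
                return False
        return True

    # The twin-prime comb {0, 2} is admissible; {0, 1} is not, because
    # it covers both residue classes mod 2 (one tooth is always even).
    print(is_admissible([0, 2]))  # True
    print(is_admissible([0, 1]))  # False
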

The finding is “a fantastic breakthrough,” said Andrew Granville, of the University of Montreal. “It’s a historic result.”

Zhang’s work involved three separate steps, each of which offered potential room for improvement on his 70 million bound. First, Zhang invoked some very deep mathematics to figure out where prime fish are likely to be hiding. Next, he used this result to figure out how many teeth his comb would need in order to guarantee that it would catch at least two prime fish infinitely often. Finally, he calculated how large a comb he had to start with so that enough teeth would be left after it had been snapped down to admissibility.

The fact that these three steps could be separated made improving Zhang’s bound an ideal project for a crowd-sourced collaboration, Tao said. “His proof is very modular, so we could parallelize the project, and people with different skills squeezed out what improvements they could.”

The Polymath project quickly attracted people with the right skills, perhaps more efficiently than if the project had been organized from the top down. “A Polymath project brings together people who wouldn’t have thought of coming together,” Tao said.

Prime Fishing Grounds

Of Zhang’s three steps, the first to admit improvement was the last one, in which he found an admissible comb with at least 3,500,000 teeth. Zhang had shown that a comb of length 70 million would do the trick, but he hadn’t tried particularly hard to make his comb as small as possible. There was plenty of room for improvement, and researchers who were good at computational mathematics soon started a friendly race to find small admissible combs with a given number of teeth.

Andrew Sutherland, of the Massachusetts Institute of Technology, quickly became a sort of de facto admissible-comb czar. Sutherland, who focuses on computational number theory, had been traveling during Zhang’s announcement and hadn’t paid particular attention to it. But when he checked in at a Chicago hotel and mentioned to the clerk that he was there for a mathematics conference, the clerk replied, “Wow, 70 million, huh?”

“I was floored that he knew about it,” Sutherland said. He soon discovered that there was plenty of scope for someone with his computational skills to help improve Zhang’s bound. “I had lots of plans for the summer, but they went by the wayside.”

For the mathematicians working on this step, the ground kept shifting underfoot. Their task changed every time the mathematicians working on the other two steps managed to reduce the number of teeth the comb would require. “The rules of the game were changing on a day-to-day basis,” Sutherland said. “While I was sleeping, people in Europe would post new bounds. Sometimes, I would run downstairs at 2 a.m. with an idea to post.”

The team eventually came up with the Polymath project’s record-holder — a 632-tooth comb whose width is 4,680 — using a genetic algorithm that “mates” admissible combs with each other to produce new, potentially better combs.
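The article doesn’t show the team’s code, but the flavor of such a genetic algorithm is easy to sketch. What follows is a toy version of my own construction, not the Polymath project’s actual (far more sophisticated) search; it seeds the population Zhang-style, with combs made of prime teeth:

    import random
    from sympy import primerange

    def is_admissible(teeth):
        """For every prime p <= k, the teeth must miss a residue class mod p."""
        k = len(teeth)
        for p in primerange(2, k + 1):
            if len({t % p for t in teeth}) == p:
                return False
        return True

    def seed_comb(k, span):
        """Zhang-style seed: any k primes greater than k form an admissible
        comb, since all are nonzero mod every prime p <= k (class 0 is missed)."""
        primes = list(primerange(k + 1, span))
        return sorted(random.sample(primes, k))

    def mate(a, b, k, span, mutation_rate=0.3):
        """'Mate' two combs: draw k distinct teeth from the parents' union,
        occasionally swapping one tooth for a fresh random position."""
        child = set(random.sample(sorted(set(a) | set(b)), k))
        if random.random() < mutation_rate:
            child.discard(random.choice(sorted(child)))
            while len(child) < k:
                child.add(random.randrange(span))
        return sorted(child)

    def evolve(k=10, span=200, pop_size=100, generations=2000):
        """Hunt for a narrow admissible k-tooth comb inside [0, span)."""
        population = [seed_comb(k, span) for _ in range(pop_size)]
        for _ in range(generations):
            a, b = random.sample(population, 2)
            child = mate(a, b, k, span)
            if is_admissible(child):
                population.append(child)
                population.sort(key=lambda c: c[-1] - c[0])  # narrower is fitter
                population.pop()  # drop the widest comb
        population.sort(key=lambda c: c[-1] - c[0])
        best = population[0]
        return best, best[-1] - best[0]

    comb, width = evolve()
    print(f"{len(comb)}-tooth admissible comb of width {width}: {comb}")
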

Maynard’s finding, which involves a 105-tooth comb whose width is 600, renders these giant computations obsolete. But the team’s effort was not a wasted one: . . .

Continue reading. And a very important point is made in a sidebar:

Wrong in Public

The entire Polymath project is available online for anyone who wants to see “how the sausage is made,” Tao said. The blog discussion threads offer a unique glimpse into mathematics research, which usually happens behind closed doors.

In particular, Tao said, the online posts and comments make clear how much trial and error goes into developing mathematical ideas. Polished research papers often give the impression that their authors have never made a misstep. But in truth, Tao said, “great mathematicians make stupid mistakes, and this is a process that people often hide, because it is embarrassing.”

One of the bedrock principles of the Polymath approach is that participants should throw any idea out to the crowd immediately, without stopping to ponder whether it is any good. “There’s an explicit license to be wrong in public,” Morrison said. “It goes against a lot of people’s instincts, but it makes the project much more efficient when we’re more relaxed about saying stupid things.”

Written by LeisureGuy

30 November 2013 at 10:54 am

Posted in Math

Settling mathematical problems with infinity through a new postulate

leave a comment »

Natalie Wolchover has an article that will interest the mathematically inclined, on efforts to resolve the problems infinity poses for the foundations of mathematics (cf. the continuum hypothesis):

In the course of exploring their universe, mathematicians have occasionally stumbled across holes: statements that can be neither proved nor refuted with the nine axioms, collectively called “ZFC,” that serve as the fundamental laws of mathematics. Most mathematicians simply ignore the holes, which lie in abstract realms with few practical or scientific ramifications. But for the stewards of math’s logical underpinnings, their presence raises concerns about the foundations of the entire enterprise.

“How can I stay in any field and continue to prove theorems if the fundamental notions I’m using are problematic?” asks Peter Koellner, a professor of philosophy at Harvard University who specializes in mathematical logic.

Chief among the holes is the continuum hypothesis, a 140-year-old statement about the possible sizes of infinity. As incomprehensible as it may seem, endlessness comes in many measures: For example, there are more points on the number line, collectively called the “continuum,” than there are counting numbers. Beyond the continuum lie larger infinities still — an interminable progression of evermore enormous, yet all endless, entities. The continuum hypothesis asserts that there is no infinity between the smallest kind — the set of counting numbers — and what it asserts is the second-smallest — the continuum. It “must be either true or false,” the mathematical logician Kurt Gödel wrote in 1947, “and its undecidability from the axioms as known today can only mean that these axioms do not contain a complete description of reality.”
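(In the standard notation of set theory, which is my gloss rather than Wolchover’s, Cantor’s discovery and the continuum hypothesis read as follows:)

    % Cantor: the continuum is strictly bigger than the counting numbers.
    |\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|

    % The continuum hypothesis: no infinity lies strictly between them,
    % i.e., the continuum is the very next size after \aleph_0.
    2^{\aleph_0} = \aleph_1
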

The decades-long quest for a more complete axiomatic system, one that could settle the infinity question and plug many of the other holes in mathematics at the same time, has arrived at a crossroads. During a recent meeting at Harvard organized by Koellner, scholars largely agreed upon two main contenders for additions to ZFC: forcing axioms and the inner-model axiom “V=ultimate L.”

“If forcing axioms are right, then the continuum hypothesis is false,” Koellner said. “And if the inner-model axiom is right, then the continuum hypothesis is true. You go through a whole list of issues in other fields, and the forcing axioms will answer those questions one way, and ultimate L will answer them a different way.”

According to the researchers, choosing between the candidates boils down to a question about the purpose of logical axioms and the nature of mathematics itself. Are axioms supposed to be the grains of truth that yield the most pristine mathematical universe? In that case, V=ultimate L may be most promising. Or is the point to find the most fruitful seeds of mathematical discovery, a criterion that seems to favor forcing axioms? “The two sides have a somewhat divergent view of what the goal is,” said Justin Moore, a mathematics professor at Cornell University.

Axiomatic systems like ZFC provide rules governing collections of objects called “sets,” which serve as the building blocks of the mathematical universe. Just as ZFC now arbitrates mathematical truth, adding an extra axiom to the rule book would help shape the future of the field — particularly its take on infinity. But unlike most of the ZFC axioms, the new ones “are not self-evident, or at least not self-evident at this stage of our knowledge, so we have a much more difficult task,” said Stevo Todorcevic, a mathematician at the University of Toronto and the French National Center for Scientific Research in Paris.

Proponents of V=ultimate L say that establishing an absence of infinities between the integers and the continuum promises to bring order to the chaos of infinite sets, of which there are, unfathomably, an infinite variety. But the axiom may have minimal consequences for traditional branches of mathematics.

Hugh Woodin, 58, is the leading proponent of an axiom called V=ultimate L that could help decide the fuller nature of infinity.

“Set theory is in the business of understanding infinity,” said Hugh Woodin, who is a mathematician at the University of California, Berkeley; the architect of V=ultimate L; and one of the most prominent living set theorists. The familiar numbers relevant to most mathematics, Woodin argues, “are an insignificant piece of the universe of sets.”

Meanwhile, forcing axioms, which deem the continuum hypothesis false by adding a new size of infinity, would also extend the frontiers of mathematics in other directions. They are workhorses that regular mathematicians “can actually go out and use in the field, so to speak,” Moore said. “To me, this is ultimately what foundations [of mathematics] should be doing.”

New advances in the study of V=ultimate L and newfound uses of forcing axioms, especially one called “Martin’s maximum” after the mathematician Donald Martin, have energized the debate about which axiom to adopt. And there’s a third point of view that disagrees with the debate’s very premise. According to some theorists, there are myriad mathematical universes, some in which the continuum hypothesis is true and others in which it is false — but all equally worth exploring. Meanwhile, “there are some skeptics,” Koellner said, “people who for philosophical reasons think set theory and the higher infinite doesn’t even make any sense.”

Infinite Paradoxes

Infinity has ruffled feathers in mathematics almost since the field’s beginning. The controversy arises not from the notion of potential infinity — the number line’s promise of continuing forever — but from the concept of infinity as an actual, complete, manipulable object.

“What truly infinite objects exist in the real world?” asks Stephen Simpson, a mathematician and logician at Pennsylvania State University. Taking a view originally espoused by Aristotle, Simpson argues that actual infinity doesn’t really exist and so it should not so readily be assumed to exist in the mathematical universe. He leads an effort to wean mathematics off actual infinity, by showing that the vast majority of theorems can be proved using only the notion of potential infinity. “But potential infinity is almost forgotten now,” Simpson said. “In the ZFC set theory mindset, people tend not to even remember that distinction. They just think infinity means actual infinity and that’s all there is to it.”

Infinity was boxed and sold to the mathematical community in the late 19th century by the German mathematician Georg Cantor. Cantor invented a branch of mathematics dealing with sets — collections of elements that ranged from empty (the equivalent of the number zero) to infinite. His “set theory” was such a useful language for describing mathematical objects that within decades, it became the field’s lingua franca. A nine-item list of rules called Zermelo-Fraenkel set theory with the axiom of choice, or ZFC, was established and widely adopted by the 1920s. Translated into plain English, one of the axioms says two sets are equal if they contain the same elements. Another simply asserts that infinite sets exist.

Assuming actual infinity leads to unsettling consequences. . .
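(For the curious, the two axioms just paraphrased look like this in formal notation; this is textbook ZFC, not from the article:)

    % Extensionality: sets with the same elements are equal.
    \forall x\,\forall y\,\bigl(\forall z\,(z \in x \leftrightarrow z \in y)\;\rightarrow\; x = y\bigr)

    % Infinity: a set exists that contains the empty set and is closed
    % under the successor operation x -> x \cup \{x\}.
    \exists I\,\bigl(\varnothing \in I \;\land\; \forall x\,(x \in I \rightarrow x \cup \{x\} \in I)\bigr)
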

Continue reading.

Written by LeisureGuy

30 November 2013 at 9:28 am

Posted in Math

Colleges are teaching economics backwards

leave a comment »

Mike Konczal in the Washington Post with an interesting idea:

“The world has changed, the syllabus hasn’t.” That’s the motto of the Post-Crash Economics Society, a group of students at the University of Manchester who demand reforms to the way undergraduate economics is taught in light of the worldwide economic crisis. Similar activism is occurring in other elite undergraduate institutions: There was the well-publicized Open Letter to Greg Mankiw from students in the introductory economics class at Harvard, during the height of the Occupy movement. Meanwhile, institutions like the Institute for New Economic Thinking (INET) are getting involved by launching a pilot program to revamp the undergraduate economics curriculum.

Economics professors sometimes respond to these demands for change by arguing that, though the crisis presents unique challenges, there’s still a core set of knowledge that needs to be taught. If students want, they can move on to advanced classes which give a more nuanced view of elements of economics. But in order to critique economics, either inside the discipline or outside of it, they need to know the basics.

These professors have a point. But the stakes of even basic economic education are high. The language of economics is the language of elite discourse, and revamping undergraduate economic curriculum has the potential to profoundly shift the ways the next generation understands economies and crises–for better or for worse.

So here’s one temporary fix for introductory economics: teach it backwards. Reversing the order in which introductory economic classes are taught today might be the easiest way to respond to the crisis in undergraduate education. Plus, the history of how it gets taught now is more interesting and more political than you might think.

Today, first-year undergraduate students typically start with microeconomics, or the study of individuals and individual markets. This begins with the study of abstract, decontextualized markets, where supply and demand work perfectly and individuals exist in isolation, effortlessly trading with others in isolation from society, the law, and politics. Students are often asked to imagine Robinson Crusoe, stranded on his island, making choices about how to work, eat and play. Introductory studies then proceed, at the end, to situations where markets don’t work perfectly–for instance, when environmental pollution imposes costs on others, or when someone has monopoly power to set prices.

In their second class, students begin to learn macroeconomics, or what happens when you add up all those markets. After gathering the basics of the field, they study the concept of long-run growth first. Though hard answers are often unclear even to expert economists, this course of study is meant to figure out how things change in the long run. Then, if there’s time left in the term, the class may turn to short-run issues, particularly the topics of the business cycle, recessions, and involuntary unemployment.

Notice how this orients the casual student, the non-major who will only encounter economics once in this survey course. They start off with an abstract market that always works, versus having to see the messy parts when it doesn’t. They then proceed to the long-run, and only after everything else do they get to something that might help them understand why unemployment is so high for young college graduates. Only then might they be introduced to the institutions that make markets happen, if those are discussed at all.

So, what if we just reversed all that?

What if macroeconomics came first, before the study of individual markets? If we were to reverse the typical curriculum, the first thing undergraduates would encounter wouldn’t be abstract theories about people optimizing, but instead the idea of involuntary unemployment and the idea that the economy could operate below its potential. They’d study the economy in the short-run before going to issues of long-term growth, with professors having to explain the theories on how the two are linked, bringing in crucial concepts like hysteresis.

Then, in the second class, . . .

Continue reading.

Written by LeisureGuy

30 November 2013 at 9:03 am

Posted in Business, Education

Good result from a final Scent-Off shave

with 3 comments

SOTD 30 Nov 2013

A very smooth result today. I used my Whipped Dog 24mm silvertip and will save my lather comments for the summary on Monday. But the final result was BBS: the Sodial really is a good little razor. A good splash of Floïd starts the weekend well.

Written by LeisureGuy

30 November 2013 at 8:31 am

Posted in Shaving

Perhaps the idea of police officers in all schools needs rethinking

with 21 comments

Jodie Gummow reports at AlterNet:

An Austin family is suing the Texas sheriff’s deputy and school district in federal court after their 17-year-old son was tasered last week by a police officer in the school hallway, leaving him in a coma and fighting for his life, Courthouse News reported.

Noe Nino de Rivera had successfully stepped in to break up a fight between two girls at Cedar Creek High School when school officials called in Randy McMillan, a Bastrop County sheriff’s department deputy. McMillan told the boy to step back, and the teen obliged, with his hands in the air.

Yet, in a vicious act of police brutality, McMillan tasered the boy anyway; he fell onto his face and was knocked unconscious. Rather than calling for emergency medical assistance, the cop put the comatose boy in handcuffs.

Eventually, school officials contacted emergency services and the boy was airlifted to a hospital, where he underwent brain surgery and was placed in a medically induced coma, in which he remains, still unable to communicate with his family.

Students who saw the incident say McMillan’s response was a gross overreaction, according to KXAN.

“There was a crowd watching and the kid was just trying to get the officers to listen to him. When he shot the taser, there was a crowd, and others could have been hit,” said one student.

Acosta says the school was negligent in allowing McMillan to work at the school, despite the fact that he had previously tasered another student a year ago.

The incident is under investigation. The family is now seeking damages for the police brutality.

Written by LeisureGuy

29 November 2013 at 4:46 pm

Posted in Education, Guns
