Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

Can we please stop gerrymandering?


Christopher Ingraham has a great article in the Washington Post on the practice of gerrymandering, clearly explained in this short video:

Ingraham’s article is well worth reading because the problem is so prevalent. States that have solved it have taken the drawing of district lines away from the legislature and given it to an independent commission (see end of post). Ingraham’s article begins:

In his State of the Union speech, President Obama called on lawmakers and the public to take a number of steps “to change the system to reflect our better selves” for “a better politics.” The top item on that list was to end partisan gerrymandering: “we have to end the practice of drawing our congressional districts so that politicians can pick their voters, and not the other way around,” Obama said.

In most states, state legislatures draw the district boundaries that determine how many delegates the state sends to the U.S. Congress, as well as the general partisan make-up of that delegation. State legislatures are partisan beasts, and if one party is in control of the process they can draw boundaries to give themselves a numeric advantage over their opponents in Congress. This process is called gerrymandering.

Some state legislatures are more brazen about the process than others. Maryland’s districts, drawn by Democrats, are one particularly egregious example. North Carolina’s, drawn by Republicans, are another. Advocates of reform have proposed various solutions to the problem over the years. In some states, redistricting is put in the hands of an independent commission. In others, lengthy court battles are playing out to draw the districts more fairly.

But a fundamental problem with district-drawing still remains: as long as humans are drawing the lines, there’s a danger of bias and self-interest to creep into the process. There is another way, however: we could simply let computers do the drawing for us.

From a technological standpoint it’s fairly straightforward — a software engineer in Massachusetts named Brian Olson wrote an algorithm to do it in his spare time. As I described it in 2014, Olson’s algorithm creates “optimally compact” equal-population congressional districts in each state, based on 2010 census data. It draws districts that respect the boundaries of census blocks, which are the smallest geographic units used by the Census Bureau. This ensures that the district boundaries reflect actual neighborhoods and don’t, say, cut an arbitrary line through somebody’s house.

To see what this looks like in practice, compare this map of our current congressional districts (top) with one we stitched together from Olson’s output (bottom).

[Image: side-by-side maps comparing current congressional districts (top) with districts drawn by Olson’s algorithm (bottom)]

Big difference, isn’t it? You can check out a larger version of the compacted map here. Rather than a confusing snarl of interlocked districts, you have neat, trim boundaries that make intuitive sense. Here are some individual state comparisons I made back in 2014 that let you see some more of the detail: . . .

Continue reading.
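The objective described above — optimally compact districts of equal population, assembled from whole census blocks — is easy to illustrate even though Olson’s real program is far more careful. The sketch below is a toy balanced-clustering loop on made-up data; the penalty scheme and every name in it are my own assumptions, not Olson’s algorithm.

```python
import numpy as np

def compact_equal_districts(coords, pops, k, iters=50):
    """Toy sketch of 'compact, equal-population' districting.

    Blocks are repeatedly assigned to the nearest district center, with a
    penalty once a district exceeds its population share, and each center
    then moves to its district's population-weighted centroid. This is a
    rough illustration of the objective, NOT Brian Olson's algorithm.
    """
    rng = np.random.default_rng(0)
    centers = coords[rng.choice(len(coords), size=k, replace=False)]
    target = pops.sum() / k                      # ideal population per district
    assign = np.zeros(len(coords), dtype=int)
    for _ in range(iters):
        loads = np.zeros(k)
        for i in np.argsort(-pops):              # place big blocks first
            d2 = ((coords[i] - centers) ** 2).sum(axis=1)
            overfill = np.maximum(loads + pops[i] - target, 0.0)
            j = int(np.argmin(d2 + overfill))    # nearest center with room left
            assign[i] = j
            loads[j] += pops[i]
        for j in range(k):
            members = assign == j
            if members.any():
                centers[j] = np.average(coords[members], axis=0,
                                        weights=pops[members])
    return assign

# Made-up data: 500 "census blocks" with random locations and populations.
rng = np.random.default_rng(1)
blocks = rng.uniform(size=(500, 2))
populations = rng.integers(50, 500, size=500).astype(float)
districts = compact_equal_districts(blocks, populations, k=4)
print(np.bincount(districts, weights=populations))   # population of each district
```

On real data the coordinates would be census-block centroids and the weights their census populations.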

Some states have solved the problem, as Wikipedia notes:

Rather than allowing more political influence, some states have shifted redistricting authority from politicians and given it to non-partisan redistricting commissions. The states of Washington,[28] Arizona,[29] and California have created standing committees for redistricting following the 2010 census. Rhode Island[30] and the New Jersey Redistricting Commission have developed ad hoc committees, but have conducted the past two decennial reapportionments tied to new census data.

The Arizona State Legislature challenged the constitutionality of the use of a non-partisan commission, rather than the legislature, for redistricting. In Arizona State Legislature v. Arizona Independent Redistricting Commission, the US Supreme Court in 2015 upheld the constitutionality of non-partisan commissions.[31]

Note that both red states and blue states have been able to clean up their act on redistricting (though it seems that Arizona did not go quietly).

Wikipedia has an excellent short article devoted to Redistricting Commissions. Well worth reading and pondering and then getting your state moving toward it.

Written by LeisureGuy

19 February 2017 at 12:19 pm

Intelligent Machines are Teaching Themselves Quantum Physics


It had to happen. Once AI achieves really good pattern recognition, it can be put to work on all sorts of problems. Daniel Oberhaus reports in Motherboard:

Last year, Google’s DeepMind AI beat Lee Sedol at Go, a strategy game like chess, but orders of magnitude more complicated. The win was a remarkable step forward for the field of artificial intelligence, but it got Roger Melko, a physicist at the Perimeter Institute for Theoretical Physics, thinking about how neural networks—a type of AI modeled after the human brain—might be used to solve some of the toughest problems in quantum physics. Indeed, intelligent machines may be necessary to solve these problems.

“The thing about quantum physics is it’s highly complex in a very precise mathematical sense. A big problem we face when we study these quantum systems [without machine learning] is how to deal with this complexity,” Melko told me.

“DeepMind winning this game of Go kind of crystallized some of our thinking. Go is a very complex game, but there was a solution that came from machine learning,” he continued. “So we thought, why can’t we employ similar solutions to tackle quantum physics complexity problems?”

As an example, Melko cites his own work, which focuses on condensed matter physics—basically the science of interactions between many quantum particles in various solids or liquids. As Melko wrote in a recent article for Quartz, condensed matter physics “deals with the most complex concept in nature: the quantum wave function of a many-particle system.” The quantum wave function of a particle mathematically describes all of its possible states, or as Melko describes it to me, it is the “infinitely complex…reality of the particle.”

While “infinitely complex” might seem like a bit of an overstatement, according to Melko, just modeling the wave function of a nanometer-scale mote of dust would require a computer whose hard drive contained more magnetic bits than there are atoms in the universe. As for trying to compute the wave functions of several of these dust particles at once with a classical computer? Forget about it.
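To put rough numbers on that claim: storing the exact quantum state of N two-level particles takes 2^N complex amplitudes, and 2^N overtakes the commonly cited ~10^80 atoms in the observable universe at only a few hundred particles. A quick back-of-the-envelope check (my arithmetic, not Melko’s):

```python
import math

atoms_in_universe = 10 ** 80          # commonly quoted rough estimate
# Smallest N for which 2**N amplitudes exceed that count:
N = math.ceil(math.log2(atoms_in_universe))
print(N, 2 ** N > atoms_in_universe)  # 266 True
```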

The question posed by Melko and other pioneers of the field of quantum machine learning was whether neural nets could perform tasks that are beyond the capacity of algorithms which don’t incorporate machine learning, like modeling the wave function of a multi-particle system—and they didn’t have to wait long for an answer.

According to a study published last week in Science, two physicists that were not affiliated with Melko created a relatively simple neural network that was able to reconstruct the wave function of a multi-particle system, and it did so better than any previous technique that did not use machine learning. As Giuseppe Carleo, a physicist at ETH Zurich and co-author of the study, told New Scientist, “It’s like having a machine learning how to crack quantum mechanics, all by itself.”

Now that Carleo and his colleague have their proof of concept, they hope to develop a more robust neural network that can handle more complex problems. This is also a good sign for Melko, who . . .

Continue reading. There’s more.
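For the curious: the Science paper (by Carleo and Matthias Troyer) represents the many-body wave function with a small restricted Boltzmann machine, a “neural-network quantum state” whose hidden units can be summed out in closed form. Below is a minimal sketch of evaluating that kind of ansatz; the random parameters are placeholders, whereas the real method tunes them variationally to minimize the system’s energy.

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1,+1}^N
    under a restricted-Boltzmann-machine ansatz: the hidden units have been
    summed out analytically, leaving a product of cosh terms."""
    theta = b + W @ spins                       # effective field at each hidden unit
    return np.exp(np.dot(a, spins)) * np.prod(2.0 * np.cosh(theta))

# Toy example: 4 visible spins, 8 hidden units, random complex parameters.
rng = np.random.default_rng(0)
N, M = 4, 8
a = rng.normal(size=N) + 1j * rng.normal(size=N)
b = rng.normal(size=M) + 1j * rng.normal(size=M)
W = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
s = np.array([1, -1, 1, 1])
print(rbm_amplitude(s, a, b, W))
```

The point of the network is that a handful of parameters (here 4 + 8 + 32 of them) stands in for what would otherwise be 2^N stored amplitudes.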

Written by LeisureGuy

13 February 2017 at 3:38 pm

Serial Killers Should Fear This Algorithm


Robert Kolker writes in Bloomberg Businessweek:

On Aug. 18, 2010, a police lieutenant in Gary, Ind., received an e-mail, the subject line of which would be right at home in the first few scenes of a David Fincher movie:

“Could there be a serial killer active in the Gary area?”

It isn’t clear what the lieutenant did with that e-mail; it would be understandable if he waved it off as a prank. But the author could not have been more serious. He’d attached source material—spreadsheets created from FBI files showing that over several years the city of Gary had recorded 14 unsolved murders of women between the ages of 20 and 50. The cause of each death was the same: strangulation. Compared with statistics from around the country, he wrote, the number of similar killings in Gary was far greater than the norm. So many people dying the same way in the same city—wouldn’t that suggest that at least a few of them, maybe more, might be connected? And that the killer might still be at large?

The police lieutenant never replied. Twelve days later, the police chief, Gary Carter, received a similar e-mail from the same person. This message added a few details. Several of the women were strangled in their homes. In at least two cases, a fire was set after the murder. In more recent cases, several women were found strangled in or around abandoned buildings. Wasn’t all of this, the writer asked, at least worth a look?

The Gary police never responded to that e-mail, either, or to two follow-up letters sent via registered mail. No one from the department has commented publicly about what was sent to them—nor would anyone comment for this story. “It was the most frustrating experience of my professional life,” says the author of those messages, a 61-year-old retired news reporter from Virginia named Thomas Hargrove.

Hargrove spent his career as a data guy. He analyzed his first set of polling data as a journalism major at the University of Missouri, where he became a student director of the university’s polling organization. He joined an E.W. Scripps newspaper right out of college and expanded his repertoire from political polling data to practically any subject that required statistical analysis. “In the newsroom,” he remembers, “they would say, ‘Give that to Hargrove. That’s a numbers problem.’ ”

In 2004, Hargrove’s editors asked him to look into statistics surrounding prostitution. The only way to study that was to get a copy of the nation’s most comprehensive repository of criminal statistics: the FBI’s Uniform Crime Report, or UCR. When Hargrove called up a copy of the report from the database library at the University of Missouri, attached to it was something he didn’t expect: the Supplementary Homicide Report. “I opened it up, and it was a record I’d never seen before,” he says. “Line by line, every murder that was reported to the FBI.”

This report, covering the year 2002, contained about 16,000 murders, broken down by the victims’ age, race, and sex, as well as the method of killing, the police department that made the report, the circumstances known about the case, and information about the offender, if the offender was known. “I don’t know where these thoughts come from,” Hargrove says, “but the second I saw that thing, I asked myself, ‘Do you suppose it’s possible to teach a computer how to spot serial killers?’ ”

Like a lot of people, Hargrove was aware of criticisms of police being afflicted by tunnel vision when investigating difficult cases. He’d heard the term “linkage blindness,” used to describe the tendency of law-enforcement jurisdictions to fail to connect the dots between similar cases occurring right across the county or state line from one another. Somewhere in this report, Hargrove thought, could be the antidote to linkage blindness. The right person, looking at the information in the right way, might be able to identify any number of at-large serial killers.

Every year he downloaded and crunched the most recent data set. What really shocked him was the number of murder cases that had never been cleared. (In law enforcement, a case is cleared when a suspect is arrested, whatever the eventual outcome.) Hargrove counted 211,487, more than a third of the homicides recorded from 1980 to 2010. Why, he wondered, wasn’t the public up in arms about such a large number of unsolved murders?

To make matters worse, Hargrove saw that despite a generation’s worth of innovation in the science of crime fighting, including DNA analysis, the rate of cleared cases wasn’t increasing but decreasing—plummeting, even. The average homicide clearance rate in the 1960s was close to 90 percent; by 2010 it was solidly in the mid-’60s. It has fallen further since.

These troubling trends were what moved Hargrove to write to the Gary police. He failed to get any traction there. Sure enough, four years later, in October 2014, in Hammond, Ind.—the town next door to Gary—police found the body of 19-year-old Afrikka Hardy in a room at a Motel 6. Using her phone records, they tracked down a suspect, 43-year-old Darren Deon Vann. Once arrested, Vann took police to the abandoned buildings where he’d stowed six more bodies, all of them in and around Gary. Anith Jones had last been seen alive on Oct. 8; Tracy Martin went missing in June; Kristine Williams and Sonya Billingsley disappeared in February; and Teaira Batey and Tanya Gatlin had vanished in January.

Before invoking his right to remain silent, Vann offhandedly mentioned that he’d been killing people for years—since the 1990s. Hargrove went to Gary, reporting for Scripps, to investigate whether any of the cases he’d identified back in 2010 might possibly be attributed to Vann. He remembers getting just one helpful response, from an assistant coroner in Lake County who promised to follow up, but that too went nowhere. Now, as the Vann prosecution slogs its way through the courts, everyone involved in the case is under a gag order, prevented from speculating publicly about whether any of the victims Hargrove noted in 2010 might also have been killed by Vann. “There are at least seven women who died after I tried to convince the Gary police that they had a serial killer,” Hargrove says. “He was a pretty bad one.”

Hargrove has his eye on other possible killers, too. “I think there are a great many uncaught serial killers out there,” he declares. “I think most cities have at least a few.”

We’re in a moment when, after decades of decreases nationally in the overall crime rate, the murder rate has begun creeping upward in many major U.S. cities. For two years running, homicides in major cities jumped on average more than 10 percent. (Those increases aren’t uniform, of course: Chicago leapt from 485 reported killings in 2015 to 762 in 2016, while the number of murders dipped in New York and Baltimore.) President Trump, in the campaign and since, has vowed to usher in a new era of law and order, hammering away on Twitter at Chicago’s “carnage” in particular.

Threats of federal intervention aside, it will be difficult to fix the problem of high murder rates without first addressing clearance rates. So it’s fortuitous, perhaps, that we are living in an age in which the analysis of data is supposed to help us decipher, detect, and predict everything from the results of presidential elections to the performance of baseball players. The data-focused approach to problem-solving was brought to life for a lot of people by Michael Lewis’s Moneyball, which introduced the non-baseball-nerd public to the statistical evaluation of Major Leaguers and made a hero of Billy Beane, an executive with the Oakland A’s. Law enforcement would seem to be a fertile area for data to be helpful: In the 1990s the New York Police Department famously used data to more shrewdly deploy its officers to where the crimes were, and its CompStat system became the standard for other departments around the country.

What Hargrove has managed to do goes a few orders of magnitude beyond that. His innovation was . . .

Continue reading.

Later in the article:

. . . Police in large cities with stubbornly high murder rates point the finger at gang- and drug-related killings, and the reluctance of witnesses to come forward to identify the murderers. “The biggest problem is that everyone knows everyone,” Chester, Pa., Police Commissioner Darren Alston told the Philadelphia Daily News in September. (Chester’s homicide rate outstrips all other U.S. cities’—and is more than double that of nearby Philadelphia.) City residents, in turn, point to a lack of trust in the police. But one other obvious problem is resources. “We fund homicide investigations like we fund education—it comes down to a local tax,” Hargrove says. “When an economy fails enough and we just have to start firing cops, we see everything going to hell.”

MAP [Murder Accountability Project] tracks staffing trends on its website, too. Hargrove notes that Flint, Mich., and Dayton, Ohio, have seen their clearance rates fall more than 30 percentage points since the 1990s, coinciding with huge reductions in police manpower (330 to 185 officers in Flint; 500 to 394 in Dayton). When Hargrove’s group filed a FOIA request to get homicide data about a suspected serial killer in Detroit, the response was that the police lacked the budget to fulfill the request. “What do you do when a city says, ‘We’re too broke to even try to pull the records?’ ” Hargrove says. “I joke that what we’ve done is to create what amounts to a failed government detector.”

There is a case to be made, though, that clearance rates aren’t just a function of a police department’s staffing. Priorities and management also figure heavily. In 2000, Charles Wellford, a criminologist at the University of Maryland, published a seminal paper in which he identified the commonalities for departments that do effective murder clearance. No. 1 on that list was ensuring that cops are able to chase leads in the critical early hours after a murder, even if that means earning overtime pay. Wellford’s current research looks closely at the amount of money spent per officer, the amount spent per case, and the percentage of detectives on the force. Clearance rates, Wellford says, “are very much determined by priorities and resources. I’m beyond thinking that’s an open question. The question now for me is: How can we use the resources departments have to improve what they’re doing in clearing serious crimes?”

The most discouraging thing Hargrove has learned since starting his organization is how many police departments around the country not only ignore the FBI’s data but also don’t bother sharing their data with the FBI at all. Among the offenders: the state of Illinois, which MAP has sued for the information. Hargrove recently reported that homicides were more likely to go unsolved in Illinois in 2015 than in any other state: Only 37.3 percent of the 756 homicides were cleared. That dreadful clearance rate would seem to go a long way toward explaining Chicago’s notoriously climbing homicide rate, just as the president and others start searching for solutions. . .
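The kind of screen Hargrove runs boils down to grouping killings of similar victims by place and method and flagging clusters that rarely get cleared. The sketch below is a toy version of that idea only; the file name and column names are assumptions, and this is not MAP’s actual algorithm.

```python
import pandas as pd

# Hypothetical SHR-style records; the file and column names are assumptions.
shr = pd.read_csv("shr_records.csv")   # county, victim_sex, victim_age, method, cleared

shr["age_band"] = pd.cut(shr["victim_age"], bins=[0, 20, 50, 120],
                         labels=["<=20", "21-50", ">50"])

# Group similar victims and methods within each geography.
groups = shr.groupby(["county", "victim_sex", "age_band", "method"], observed=True)
summary = groups["cleared"].agg(cases="size", solved="sum")
summary["clearance_rate"] = summary["solved"] / summary["cases"]

# Flag clusters with many similar victims and an unusually low clearance rate.
suspicious = summary[(summary["cases"] >= 10) & (summary["clearance_rate"] < 0.33)]
print(suspicious.sort_values("cases", ascending=False).head(20))
```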

Written by LeisureGuy

8 February 2017 at 2:00 pm

Wide Impact: Highly Effective Gmail Phishing Technique Being Exploited


From the site Wordfence:

As you know, at Wordfence we occasionally send out alerts about security issues outside of the WordPress universe that are urgent and have a wide impact on our customers and readers. Unfortunately this is one of those alerts. There is a highly effective phishing technique stealing login credentials that is having a wide impact, even on experienced technical users.

I have written this post to be as easy to read and understand as possible. I deliberately left out technical details and focused on what you need to know to protect yourself against this phishing attack and other attacks like it in the hope of getting the word out, particularly among less technical users. Please share this once you have read it to help create awareness and protect the community.

The Phishing Attack: What you need to know

A new highly effective phishing technique targeting Gmail and other services has been gaining popularity during the past year among attackers. Over the past few weeks there have been reports of experienced technical users being hit by this.

This attack is currently being used to target Gmail customers and is also targeting other services.

The way the attack works is that an attacker will send an email to your Gmail account. That email may come from someone you know who has had their account hacked using this technique. It may also include something that looks like an image of an attachment you recognize from the sender.

You click on the image, expecting Gmail to give you a preview of the attachment. Instead, a new tab opens up and you are prompted by Gmail to sign in again. You glance at the location bar and you see accounts.google.com in there. It looks like this….

Continue reading.

And do read the whole thing. The attack is ingenious in how it spreads and how it is exploited.
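The giveaway, as the full Wordfence post explains, is that the address bar of the fake sign-in tab holds a long data:text/html URI that merely contains the text “accounts.google.com”; it is not an https page served from Google’s own host. A small illustration of checking the scheme and hostname rather than eyeballing the text (my sketch, not code from the article):

```python
from urllib.parse import urlparse

def looks_like_real_google_login(url: str) -> bool:
    """True only for an https page actually served from accounts.google.com."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname == "accounts.google.com"

print(looks_like_real_google_login(
    "https://accounts.google.com/ServiceLogin"))                      # True
print(looks_like_real_google_login(
    "data:text/html,https://accounts.google.com/ServiceLogin..."))    # False: data URI
```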

Written by LeisureGuy

15 January 2017 at 1:11 pm

Posted in Software, Technology

Missing the “why” of AI


So I started reading the Edge collections, and in the first one I started, the volume on AI, the latter part of the introduction and statement of the problem ends thus:

. . . No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure, and meet them is a task of grand proportions. Contrary to the headlines, that task has already been taken up formally by experts in the field, those who best understand AI’s potential and limits. In a project called AI100, based at Stanford, scientific experts, teamed with philosophers, ethicists, legal scholars and others trained to explore values beyond simple visceral reactions, will undertake this. No one expects easy or final answers, so the task will be long and continuous, funded for a century by one of AI’s leading scientists, Eric Horvitz, who, with his wife Mary, conceived this unprecedented study.

Since we can’t seem to stop, since our literature tells us we’ve imagined, yearned for, an extra-human intelligence for as long as we have records, the enterprise must be impelled by the deepest, most persistent of human drives. These beg for explanation. After all, this isn’t exactly the joy of sex.

Any scientist will say it’s the search to know. “It’s foundational,” an AI researcher told me recently. “It’s us looking out at the world, and how we do it.” He’s right. But there’s more.

Some say we do it because it’s there, an Everest of the mind. Others, more mystical, say we’re propelled by teleology: we’re a mere step in the evolution of intelligence in the universe, attractive even in our imperfections, but hardly the last word.

Entrepreneurs will say that this is the future of making things—the dark factory, with unflagging, unsalaried, uncomplaining robot workers—though what currency post-employed humans will use to acquire those robot products, no matter how cheap, is a puzzle to be solved.

Here’s my belief:  We long to save and preserve ourselves as a species. For all the imaginary deities throughout history we’ve petitioned, which failed to save and protect us—from nature, from each other, from ourselves—we’re finally ready to call on our own enhanced, augmented minds instead. It’s a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.

We’re trying. We could fail.

It seems obvious to me why we are so driven: it’s not us who are driven, it’s the memes that live through the environment we provide. They’ve been evolving at an ever-accelerating rate, and they clearly are “selfish” in the sense that genes are, as described in The Selfish Gene, by Richard Dawkins, where the meme meme was given its name.

The idea of the meme—the meme meme—has proved quite successful at surviving in the memeverse, in part because it offers an economical explanation of observed phenomena.

In this case, the evolution of memes for their own benefit (even when it exacts a cost from the host rather than providing a benefit) seems to be the drive behind the memetic evolution of AI: AI will provide an even richer environment for memes, and thus initially offers a very hospitable ecological niche, until the memes overrun it as well.

I’m reminded of those weird aliens in The Mote in God’s Eye, here standing in for memes. And the steps now underway in memetic evolution—something akin to the dawn of consciousness or, as the earlier part of the introduction suggests, the creation of a dual consciousness—suggest we are moving rapidly toward the sort of Singularity that has for some years been a staple of one branch of science fiction. Maybe the general global stresses on traditional memeplexes (our nations, societies, laws, and organizing meme-structures) are clearing the ground for the arrival of a self-improving AI: one that can improve its own operational power and efficiency, extend its own databases from its own sensors, and ask and seek answers to its own questions (or formulate and test hypotheses, quickly and in many areas, adding to its own pool of data/knowledge). You can sort of see how that might work, a few … months? years? (not decades, I bet) down the line.

Written by LeisureGuy

3 January 2017 at 1:24 pm

If programming languages were countries, which country would each language represent?


Dimage Sapelkin answers at Quora:

Hey, what a funny question! ;]

C: Russia. Everything has to be done in a backwards way, but everything is possible, and there’s a lot of legacy.

C++: USA. Powerful, but more and more complicated, unreadable, error-prone. Tends to dominate and influence everything.

Python: Netherlands. Modern, rich, easily approachable, attractive for various reasons, but not the top performer.

Haskell: Monaco. Not many people, but very rich, so they don’t have to consider lower classes’ problems.

Java: Sweden. Comfortable, but has its own king and currency.

JavaScript: China. Developing really fast and can do lots of surprising stuff. A lot of users.

Basic: Finland. Easy to use, but not very powerful.

Assembly: Lesotho, which is completely surrounded by South Africa. Rarely used nowadays to make a whole program, more often as an included part of a bigger program in a higher-level language.

PHP: Bangladesh. Poor, but numerous, and it’s found all over the web.

Pascal: Germany. Strict rules, good performance. And there are many people who just don’t like the language.

Bash: Switzerland. Not very big in itself, but pulls the strings of the others.


Update – based on comments:

Lisp – . . .

Continue reading.

Last on the list:

Forth: Maldives. Remote from all mainland and likely to disappear underwater due to climate change. Known for its backwards writing direction.

Sad but probably true. It was a great language in its initial context, and it still is in many microcontrollers today. Forth is the easiest, fastest, and most powerful language you can install on a new microprocessor: with just a few definitions you have a powerful working language, one that lets you investigate the hardware’s details. So it goes.

Written by LeisureGuy

6 December 2016 at 5:48 pm

Posted in Software, Technology

Google Cloud announces new machine learning features for enterprise use


Machine learning emerges. In the article “Found in translation: More accurate, fluent sentences in Google Translate,” Barak Turovsky, product lead for Google Translate, writes:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.

Today we’re putting Neural Machine Translation into action with a total of eight language pairs to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish. These represent the native languages of around one-third of the world’s population, covering more than 35% of all Google Translate queries! [click image to enlarge – LG]

[Image: chart from Google’s announcement illustrating the translation-quality improvement of Neural Machine Translation]

With this update, Google Translate is improving more in a single leap than we’ve seen in the last ten years combined. But this is just the beginning. While we’re starting with eight language pairs within Google Search, the Google Translate app, and website, our goal is to eventually roll Neural Machine Translation out to all 103 languages and surfaces where you can access Google Translate.

And there’s more coming today too . . .

Continue reading.
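The “whole sentences at a time” point is the heart of the change: an encoder network reads the entire source sentence into a representation before the decoder emits a single target word, instead of translating phrase by phrase. Below is a generic, minimal sequence-to-sequence sketch in PyTorch; it illustrates the idea only, not Google’s production system, which uses far deeper recurrent stacks plus attention.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder summarizes the whole source
    sentence before any target word is produced."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_emb(src))    # h summarizes the full source sentence
        dec, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec)                      # logits over the target vocabulary

model = TinySeq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))   # a batch of 2 source sentences, 7 tokens each
tgt = torch.randint(0, 1200, (2, 9))   # shifted target sentences, 9 tokens each
logits = model(src, tgt)               # shape (2, 9, 1200)
```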

And in Martechtoday, Danny Sullivan has an article describing the new cloud platform (and price cuts) for machine learning:

Are you a big business that’s been thinking you’d like some of that machine learning stuff to help with finding job applicants, doing translation, discovering linkages in data, or maybe building your own knowledge graph? Google’s got new offerings out today to help with those and more.

The news came during a special press event for the latest with Google Cloud machine learning. Here’s a summary slide of everything:

[Image: summary slide of the Google Cloud machine learning announcements]

Continue reading.

It will be interesting to see how machine learning affects political campaigns…

Written by LeisureGuy

21 November 2016 at 9:05 am
