Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Software’ Category

The Department of Veterans Affairs built perhaps the most important medical computer system in history. Now it’s about to spend billions to throw it away.

leave a comment »


Arthur Allen writes in Politico:

Four decades ago, in 1977, a conspiracy began bubbling up from the basements of the vast network of hospitals belonging to the Veterans Administration. Across the country, software geeks and doctors were puzzling out how they could make medical care better with these new devices called personal computers. Working sometimes at night or in their spare time, they started to cobble together a system that helped doctors organize their prescriptions, their CAT scans and patient notes, and to share their experiences electronically to help improve care for veterans.

Within a few years, this band of altruistic docs and nerds—they called themselves “The Hardhats,” and sometimes “the conspiracy”—had built something totally new, a system that would transform medicine. Today, the medical-data revolution is taken for granted, and electronic health records are a multibillion-dollar industry. Back then, the whole idea was a novelty, even a threat. The VA pioneers were years ahead of their time. Their project was innovative, entrepreneurial and public-spirited—all those things the government wasn’t supposed to be.

Of course, the government tried to kill it.

Though the system has survived for decades, even topping the lists of the most effective and popular medical records systems, it’s now on the verge of being eliminated: The secretary of what is now the Department of Veterans Affairs has already said he wants the agency to switch over to a commercial system. An official decision is scheduled for July 1. Throwing it out and starting over will cost $16 billion, according to one estimate.

What happened? The story of the VA’s unique computer system—how the government actually managed to build a pioneering and effective medical data network, and then managed to neglect it to the point of irreparability—is emblematic of how politics can lead to the bungling of a vital, complex technology. As recently as last August, a Medscape survey of 15,000 physicians found that the VA system, called VistA, ranked as the most usable and useful medical records system, above hundreds of other commercial versions marketed by hotshot tech companies with powerful Washington lobbyists. Back in 2009, some of the architects of the Affordable Care Act saw VistA as a model for the transformation of American medical records and even floated giving it away to every doctor in America.

Today, VistA is a whipping boy for Congress; the VA’s senior IT leadership and its overseers in the House and Senate are all sharpening their knives for the system, which they caricature as a scruffy old nag that fails the veterans riding on it. Big commercial companies are circling, each one putting forward its own proprietary technology as the answer to the VA’s woes. The VA leadership seems to agree with them. “We need to move towards commercially tested products,” VA Secretary David Shulkin told a congressional committee on March 7. “If somebody could explain to me why veterans benefit from VA being a good software developer, then maybe I’d change my mind.”

You’d have to be a very brave VA administrator, and perhaps a foolhardy one, to keep VistA in 2017: The system’s homegrown structure creates security and maintenance challenges; a huge amount of talent has fled the agency, and many Congress members are leery of it. Because it serves nearly 9 million veterans at 167 hospitals and 1,700 sites of care, however, the wrangling over VistA concerns much more than just another computer software system. The men and women who created and shaped VistA over the decades were pathfinders in efforts to use data to reshape the multi-trillion-dollar U.S. health care system. Much of what they’ve done continues to serve veterans well; it’s an open question whether the Beltway solution to replacing VistA, and the billions that will be spent pursuing it, will result in a system that serves the VA—and the nation—as well in the long run.

What’s clear, though, is that the whole story of how VistA was born, grew and slid into disrepair illustrates just how difficult it can be for the government to handle innovation in its midst.

YOU COULD SAY that VistA—which stands for the Veterans Information Systems and Technology Architecture—began as a giant hack.

Its birth occurred in 1977, far back in the era of paper medical records, with a pair of computer nerds from the National Bureau of Standards. Ted O’Neill and Marty Johnson had helped standardize a computer language, originally developed at Massachusetts General Hospital, called MUMPS, and the two men were hired by the VA to see whether MUMPS could be the basis of a new computer system connecting the VA’s hospitals. Computerizing the one-on-one art of medical care seemed like a sacrilege at the time, but the VA, struggling with casualties of the Vietnam War, was underfunded, disorganized and needed all the help it could get.

O’Neill and Johnson began recruiting other techies to the effort, some of whom were already working in VA hospitals in places such as St. Petersburg, Florida; Lexington, Kentucky; and San Francisco. Though they were on an official mission, their approach—highly decentralized, with different teams trying things in various hospitals—ran against the grain of a big bureaucracy and aroused the suspicions of the central office. The project soon had the feeling of a conspiracy, something that nonconformists did in secret. They gave themselves an internal nickname—the Hardhats. People who followed the project recall being struck by just how idealistic it was. “This will sound a bit hokey, but they saw a way to improve health care at less cost than was being proposed in the central office,” says Nancy Tomich, a writer who was covering VA health care at the time. As bureaucratic battles mounted, she says, “I remember how impressed I was by these dedicated people who put their personal welfare on the line.”

In 1978, with personal computers just starting to appear in the homes of nerdy hobbyists, the Hardhats bought thousands of personal data processors and distributed them throughout the VA. Software geeks and physicians were soon exploring how patient care could be improved with these new devices. A scheduling system was built in Oklahoma City, while technicians in Columbia, Missouri, built a radiology program, and the Washington, D.C., VA’s Hardhats worked on a cardiology program. In Silicon Valley, Steve Wozniak was building a computer in his garage that would overturn an industry; at the VA, these unsung rebels were doing something that was equally disruptive in its own way—and threatening to the VA’s central computer office, which had a staff and budget hundreds of times greater and planned to service the data-processing needs of the VA hospitals and clinics by means of leased lines to regional mainframe centers. While the bureaucrats in the central office had their own empire, Tomich recalled, the Hardhats—some of them straight-looking guys with burr haircuts and pocket pen protectors, some scruffy, bearded dudes in T-shirts—were “in the field planting seeds, raising crops and things were blossoming,” she says.

The Hardhats’ key insight—and the reason VistA still has such dedicated fans today—was that the system would work well only if they brought doctors into the loop as they built their new tools. In fact, it would be best if doctors actually helped build them. Pre-specified computer design might work for an airplane or a ship, but a hospital had hundreds of thousands of variable processes. You needed a “co-evolutionary loop between those using the system and the system you provide them,” says one of the early converts, mathematician Tom Munnecke, a polymathic entrepreneur and philanthropist who joined the VA hospital in Loma Linda, California, in 1978.

So rather than sitting in an office writing code and having the bureaucracy implement it, the computer scientists fanned out to doctors’ offices to figure out what they needed. Doctors with a feel for technology jumped into the fray. “I got involved because it solved my problems,” says Ross Fletcher, a cardiologist at the Washington, D.C., VA—where he is now chief of staff—since 1972. Working in close consultation with their clinical partners, sometimes coding at home at night or in their spare time, the computer experts built software that enabled doctors to legibly organize their prescriptions, CAT scans and patient notes, and to share their experiences electronically. Fletcher, who had studied a little computer science in college, worked with a software developer to help create an electronic EKG record. “The technical staff was embedded with clinical staff. I had lunch with the doctors, and in the parking lot in the morning we’d report what we’d done the night before,” says Munnecke.

Munnecke, a leading Hardhat, remembers it as an exhilarating time. He used a PDP-11/34 computer with 32 kilobytes of memory, and stored his programs, development work and his hospital’s database on a 5-megabyte disk the size of a personal pizza. One day, Munnecke and a colleague, George Timson, sat in a restaurant and sketched out a circular diagram on a paper place mat, a design for what initially would be called the Decentralized Hospital Computer Program, and later VistA. The MUMPS computer language was at the center of the diagram, surrounded by a kernel of programs used by everyone at the VA, with applications floating around the fringes like electrons in an atom. MUMPS was a ludicrously simple coding language that could run with limited memory and great speed on a low-powered computer. The architecture of VistA was open, modular and decentralized. All around the edges, the apps flourished through the cooperation of computer scientists and doctors.

“We didn’t call it ‘agile development,’ but it was agile,” says Howard Hayes, another VA IT veteran who served as CIO for the Indian Health Service, which adopted VistA. “Tight relationships between user and programmer, and sometimes they were one and the same.” Instead of top-down goals and project sign-offs, teams of techies and doctors kept working to improve the system. “The developer did something, the user tried it, called him up or walked down the hall and says ‘It really needs to do this.’ The next day they had another build,” says Hayes.

The VA’s centralized computer department, which relied on contractors, was not amused. Its leadership wanted control, and they believed, with a position remarkably similar to current-day criticisms of the VA’s IT work, that it made more sense to let the outside experts move the ball than have “garages” full of unconventional nerds and upstart doctors. The Hardhats were sharing records among doctors and hospitals. They were digitizing X-ray images. They were doing everything much less expensively and more successfully than the central office. They had to be stopped. In 1979, Ted O’Neill was fired (he drove a cab for a while, and later became a real estate agent). The main Hardhats office was shut down, and “pretty much everybody in the Washington part of the organization headed for the hills,” says Munnecke.

But, remarkably, the project didn’t die. . .

Continue reading.

Written by LeisureGuy

25 March 2017 at 10:00 am

Baked mustard chicken

with 2 comments

I made this recipe last night and it was extremely tasty. I made a half-recipe (one chicken, not two) and it was reasonably easy.

I spatchcocked the chicken, which makes it easy to halve, and cutting each half into a leg piece (drumstick and thigh) and a breast piece (with wing) is then simple. I took the spine (removed in spatchcocking the chicken), wing tips, and neck (included when you buy a whole chicken) and simmered those in a quart of water to which I added salt, a dozen peppercorns, and the juice of two lemons. After simmering for an hour or so, I strained the resulting stock, which makes a very pleasant afternoon treat.

Some changes to the recipe, based on experience (the amounts shown are for the full 2-chicken recipe; cut them in half for one chicken):

  • 1 cup Dijon mustard, not 3/4 cup – and be sure you dry the chicken well after rinsing, or the mustard mixture won’t stick
  • 4 cloves garlic, not 1, and after mincing it I put it and a good pinch of salt into my mortar & pestle and smashed it to a pulp
  • Panko bread crumbs are what I used

That mortar & pestle at the link is the first I’ve found that actually works well. I have the 5.5″ model; Williams-Sonoma sells the same thing in a 7″ model.

I lined the baking sheet with parchment paper, which worked well. I use flat pre-cut sheets rather than paper from a roll, because parchment paper from a roll doesn’t lie flat very well.

I download recipes into Paprika Recipe Manager so I can edit them (as in the changes above), and I added a note at the beginning of the instructions: “DON’T forget paprika and butter” (because after I had put the chicken into the oven I realized I had forgotten the last step, and the butter in particular makes a difference—no harm done: it had been in the oven only about 5 minutes when I remembered, took it out, and did that final step).

Two hours worked, but I think next time I’ll try 1.5 hours and see how that goes. Even with two hours cooking, the chicken was quite moist (and incredibly tender), probably because of the crust.

We’ll be having this again.

Written by LeisureGuy

28 February 2017 at 10:11 am

Can we please stop gerrymandering?

leave a comment »

Christopher Ingraham has a great article in the Washington Post on the practice of gerrymandering, clearly explained in this short video:

Ingraham’s article is well worth reading because the problem is so prevalent. States that have solved it have taken the drawing of district lines away from the legislature and given it to an independent commission (see end of post). Ingraham’s article begins:

In his State of the Union speech, President Obama called on lawmakers and the public to take a number of steps “to change the system to reflect our better selves” for “a better politics.” The top item on that list was to end partisan gerrymandering: “we have to end the practice of drawing our congressional districts so that politicians can pick their voters, and not the other way around,” Obama said.

In most states, state legislatures draw the district boundaries that determine how many delegates the state sends to the U.S. Congress, as well as the general partisan make-up of that delegation. State legislatures are partisan beasts, and if one party is in control of the process they can draw boundaries to give themselves a numeric advantage over their opponents in Congress. This process is called gerrymandering.

Some state legislatures are more brazen about the process than others. Maryland’s districts, drawn by Democrats, are one particularly egregious example. North Carolina’s, drawn by Republicans, are another. Advocates of reform have proposed various solutions to the problem over the years. In some states, redistricting is put in the hands of an independent commission. In others, lengthy court battles are playing out to draw the districts more fairly.

But a fundamental problem with district-drawing still remains: as long as humans are drawing the lines, there’s a danger that bias and self-interest will creep into the process. There is another way, however: we could simply let computers do the drawing for us.

From a technological standpoint it’s fairly straightforward — a software engineer in Massachusetts named Brian Olson wrote an algorithm to do it in his spare time. As I described it in 2014, Olson’s algorithm creates “optimally compact” equal-population congressional districts in each state, based on 2010 census data. It draws districts that respect the boundaries of census blocks, which are the smallest geographic units used by the Census Bureau. This ensures that the district boundaries reflect actual neighborhoods and don’t, say, cut an arbitrary line through somebody’s house.

To see what this looks like in practice, compare this map of our current congressional districts (top) with one we stitched together from Olson’s output (bottom).

[Image: the current congressional district map (top) compared with the compact districts from Olson’s output (bottom)]

Big difference, isn’t it? You can check out a larger version of the compacted map here. Rather than a confusing snarl of interlocked districts, you have neat, trim boundaries that make intuitive sense. Here are some individual state comparisons I made back in 2014 that let you see some more of the detail: . . .

Continue reading.
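For the technically inclined: the core idea (assign census blocks to districts so that each district is compact and holds roughly the same population) can be sketched in a few lines of Python. What follows is only a toy illustration over made-up block data, a crude k-means-style pass, and not Olson’s actual algorithm:

    import random

    def draw_districts(blocks, k, iterations=50):
        """blocks: list of (x, y, population) tuples; k: number of districts."""
        centers = [(b[0], b[1]) for b in random.sample(blocks, k)]
        target = sum(b[2] for b in blocks) / k          # ideal district population
        for _ in range(iterations):
            assignment = [[] for _ in range(k)]
            pops = [0.0] * k
            # Assign each block to the nearest center, penalizing crowded
            # districts so that populations stay roughly equal.
            for x, y, pop in blocks:
                def cost(i, x=x, y=y):
                    cx, cy = centers[i]
                    return ((x - cx) ** 2 + (y - cy) ** 2) * (1 + pops[i] / target)
                best = min(range(k), key=cost)
                assignment[best].append((x, y, pop))
                pops[best] += pop
            # Move each center to the population-weighted centroid of its blocks.
            for i, blks in enumerate(assignment):
                if blks:
                    total = sum(p for _, _, p in blks)
                    centers[i] = (sum(x * p for x, _, p in blks) / total,
                                  sum(y * p for _, y, p in blks) / total)
        return assignment

    # Tiny invented example: 200 random "census blocks" split into 4 districts.
    random.seed(1)
    blocks = [(random.random(), random.random(), random.randint(50, 500)) for _ in range(200)]
    districts = draw_districts(blocks, 4)
    print([round(sum(p for _, _, p in d)) for d in districts])  # population per district

Real redistricting code also has to keep districts contiguous and respect existing legal boundaries, which is where most of the difficulty lies.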

Some states have solved the problem, as Wikipedia notes:

Rather than allowing more political influence, some states have shifted redistricting authority from politicians and given it to non-partisan redistricting commissions. The states of Washington,[28] Arizona,[29] and California have created standing committees for redistricting following the 2010 census. Rhode Island[30] and the New Jersey Redistricting Commission have developed ad hoc committees, but developed the past two decennial reapportionments tied to new census data.

The Arizona State Legislature challenged the constitutionality of the use of a non-partisan commission, rather than the legislature, for redistricting. In Arizona State Legislature v. Arizona Independent Redistricting Commission, the US Supreme Court in 2015 upheld the constitutionality of non-partisan commissions.[31]

Note that both red states and blue states have been able to clean up their act on redistricting (though it seems that Arizona did not go quietly).

Wikipedia has an excellent short article devoted to Redistricting Commissions. Well worth reading and pondering, and then getting your state moving toward one.

Written by LeisureGuy

19 February 2017 at 12:19 pm

Intelligent Machines are Teaching Themselves Quantum Physics

leave a comment »

It had to happen. Once AI achieves really good pattern recognition, it can be put to work on all sorts of problems. Daniel Oberhaus reports in Motherboard:

Last year, Google’s DeepMind AI beat Lee Sedol at Go, a strategy game like chess, but orders of magnitude more complicated. The win was a remarkable step forward for the field of artificial intelligence, but it got Roger Melko, a physicist at the Perimeter Institute for Theoretical Physics, thinking about how neural networks—a type of AI modeled after the human brain—might be used to solve some of the toughest problems in quantum physics. Indeed, intelligent machines may be necessary to solve these problems.

“The thing about quantum physics is it’s highly complex in a very precise mathematical sense. A big problem we face when we study these quantum systems [without machine learning] is how to deal with this complexity,” Melko told me.

“DeepMind winning this game of Go kind of crystallized some of our thinking. Go is a very complex game, but there was a solution that came from machine learning,” he continued. “So we thought, why can’t we employ similar solutions to tackle quantum physics complexity problems?”

As an example, Melko cites his own work, which focuses on condensed matter physics—basically the science of interactions between many quantum particles in various solids or liquids. As Melko wrote in a recent article for Quartz, condensed matter physics “deals with the most complex concept in nature: the quantum wave function of a many-particle system.” The quantum wave function of a particle mathematically describes all of its possible states, or as Melko describes it to me, it is the “infinitely complex…reality of the particle.”

While “infinitely complex” might seem like a bit of an overstatement, according to Melko, just modeling the wave function of a nanometer-scale mote of dust would require a computer whose hard drive contained more magnetic bits than there are atoms in the universe. As for trying to compute the wave functions of several of these dust particles at once with a classical computer? Forget about it.

The question posed by Melko and other pioneers of the field of quantum machine learning was whether neural nets could perform tasks that are beyond the capacity of algorithms which don’t incorporate machine learning, like modeling the wave function of a multi-particle system—and they didn’t have to wait long for an answer.

According to a study published last week in Science, two physicists that were not affiliated with Melko created a relatively simple neural network that was able to reconstruct the wave function of a multi-particle system, and it did so better than any previous technique that did not use machine learning. As Giuseppe Carleo, a physicist at ETH Zurich and co-author of the study, told New Scientist, “It’s like having a machine learning how to crack quantum mechanics, all by itself.”

Now that Carleo and his colleague have their proof of concept, they hope to develop a more robust neural network that can handle more complex problems. This is also a good sign for Melko, who . . .

Continue reading. There’s more.
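A couple of numbers make the scale of the problem, and the appeal of the neural-network shortcut, concrete. Storing the exact wave function of N spin-1/2 particles takes 2^N complex amplitudes, while the network reportedly used in the Science paper (a restricted Boltzmann machine, as I understand it) gets by with a number of parameters that grows only polynomially with N. A rough sketch, with made-up weights:

    import numpy as np

    # Exact storage blows up exponentially: 2**n amplitudes for n spin-1/2 particles.
    for n in (30, 100, 300):
        print(f"{n} spins -> 2^{n} = {float(2**n):.2e} amplitudes")

    def rbm_amplitude(spins, a, b, W):
        """Amplitude of one spin configuration (values +1/-1) under an RBM-style
        ansatz: tracing out the hidden units leaves a product of cosh factors."""
        return np.exp(a @ spins) * np.prod(2 * np.cosh(b + spins @ W))

    # 10 visible spins and 5 hidden units: 10 + 5 + 50 = 65 parameters instead of
    # 2**10 = 1024 amplitudes. (Random weights here; in real calculations they
    # are trained to minimize the energy of a target Hamiltonian.)
    rng = np.random.default_rng(0)
    n_vis, n_hid = 10, 5
    a = 0.1 * rng.normal(size=n_vis)
    b = 0.1 * rng.normal(size=n_hid)
    W = 0.1 * rng.normal(size=(n_vis, n_hid))
    spins = rng.choice([-1.0, 1.0], size=n_vis)
    print(rbm_amplitude(spins, a, b, W))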

Written by LeisureGuy

13 February 2017 at 3:38 pm

Serial Killers Should Fear This Algorithm

leave a comment »

Robert Kolker writes in Bloomberg Businessweek:

On Aug. 18, 2010, a police lieutenant in Gary, Ind., received an e-mail, the subject line of which would be right at home in the first few scenes of a David Fincher movie:

“Could there be a serial killer active in the Gary area?”

It isn’t clear what the lieutenant did with that e-mail; it would be understandable if he waved it off as a prank. But the author could not have been more serious. He’d attached source material—spreadsheets created from FBI files showing that over several years the city of Gary had recorded 14 unsolved murders of women between the ages of 20 and 50. The cause of each death was the same: strangulation. Compared with statistics from around the country, he wrote, the number of similar killings in Gary was far greater than the norm. So many people dying the same way in the same city—wouldn’t that suggest that at least a few of them, maybe more, might be connected? And that the killer might still be at large?

The police lieutenant never replied. Twelve days later, the police chief, Gary Carter, received a similar e-mail from the same person. This message added a few details. Several of the women were strangled in their homes. In at least two cases, a fire was set after the murder. In more recent cases, several women were found strangled in or around abandoned buildings. Wasn’t all of this, the writer asked, at least worth a look?

The Gary police never responded to that e-mail, either, or to two follow-up letters sent via registered mail. No one from the department has commented publicly about what was sent to them—nor would anyone comment for this story. “It was the most frustrating experience of my professional life,” says the author of those messages, a 61-year-old retired news reporter from Virginia named Thomas Hargrove.

Hargrove spent his career as a data guy. He analyzed his first set of polling data as a journalism major at the University of Missouri, where he became a student director of the university’s polling organization. He joined an E.W. Scripps newspaper right out of college and expanded his repertoire from political polling data to practically any subject that required statistical analysis. “In the newsroom,” he remembers, “they would say, ‘Give that to Hargrove. That’s a numbers problem.’ ”

In 2004, Hargrove’s editors asked him to look into statistics surrounding prostitution. The only way to study that was to get a copy of the nation’s most comprehensive repository of criminal statistics: the FBI’s Uniform Crime Report, or UCR. When Hargrove called up a copy of the report from the database library at the University of Missouri, attached to it was something he didn’t expect: the Supplementary Homicide Report. “I opened it up, and it was a record I’d never seen before,” he says. “Line by line, every murder that was reported to the FBI.”

This report, covering the year 2002, contained about 16,000 murders, broken down by the victims’ age, race, and sex, as well as the method of killing, the police department that made the report, the circumstances known about the case, and information about the offender, if the offender was known. “I don’t know where these thoughts come from,” Hargrove says, “but the second I saw that thing, I asked myself, ‘Do you suppose it’s possible to teach a computer how to spot serial killers?’ ”

Like a lot of people, Hargrove was aware of criticisms of police being afflicted by tunnel vision when investigating difficult cases. He’d heard the term “linkage blindness,” used to describe the tendency of law-enforcement jurisdictions to fail to connect the dots between similar cases occurring right across the county or state line from one another. Somewhere in this report, Hargrove thought, could be the antidote to linkage blindness. The right person, looking at the information in the right way, might be able to identify any number of at-large serial killers.

Every year he downloaded and crunched the most recent data set. What really shocked him was the number of murder cases that had never been cleared. (In law enforcement, a case is cleared when a suspect is arrested, whatever the eventual outcome.) Hargrove counted 211,487, more than a third of the homicides recorded from 1980 to 2010. Why, he wondered, wasn’t the public up in arms about such a large number of unsolved murders?

To make matters worse, Hargrove saw that despite a generation’s worth of innovation in the science of crime fighting, including DNA analysis, the rate of cleared cases wasn’t increasing but decreasing—plummeting, even. The average homicide clearance rate in the 1960s was close to 90 percent; by 2010 it was solidly in the mid-’60s. It has fallen further since.

These troubling trends were what moved Hargrove to write to the Gary police. He failed to get any traction there. Sure enough, four years later, in October 2014, in Hammond, Ind.—the town next door to Gary—police found the body of 19-year-old Afrikka Hardy in a room at a Motel 6. Using her phone records, they tracked down a suspect, 43-year-old Darren Deon Vann. Once arrested, Vann took police to the abandoned buildings where he’d stowed six more bodies, all of them in and around Gary. Anith Jones had last been seen alive on Oct. 8; Tracy Martin went missing in June; Kristine Williams and Sonya Billingsley disappeared in February; and Teaira Batey and Tanya Gatlin had vanished in January.

Before invoking his right to remain silent, Vann offhandedly mentioned that he’d been killing people for years—since the 1990s. Hargrove went to Gary, reporting for Scripps, to investigate whether any of the cases he’d identified back in 2010 might possibly be attributed to Vann. He remembers getting just one helpful response, from an assistant coroner in Lake County who promised to follow up, but that too went nowhere. Now, as the Vann prosecution slogs its way through the courts, everyone involved in the case is under a gag order, prevented from speculating publicly about whether any of the victims Hargrove noted in 2010 might also have been killed by Vann. “There are at least seven women who died after I tried to convince the Gary police that they had a serial killer,” Hargrove says. “He was a pretty bad one.”

Hargrove has his eye on other possible killers, too. “I think there are a great many uncaught serial killers out there,” he declares. “I think most cities have at least a few.”

We’re in a moment when, after decades of decreases nationally in the overall crime rate, the murder rate has begun creeping upward in many major U.S. cities. For two years running, homicides in major cities jumped on average more than 10 percent. (Those increases aren’t uniform, of course: Chicago leapt from 485 reported killings in 2015 to 762 in 2016, while the number of murders dipped in New York and Baltimore.) President Trump, in the campaign and since, has vowed to usher in a new era of law and order, hammering away on Twitter at Chicago’s “carnage” in particular.

Threats of federal intervention aside, it will be difficult to fix the problem of high murder rates without first addressing clearance rates. So it’s fortuitous, perhaps, that we are living in an age in which the analysis of data is supposed to help us decipher, detect, and predict everything from the results of presidential elections to the performance of baseball players. The data-focused approach to problem-solving was brought to life for a lot of people by Michael Lewis’s Moneyball, which introduced the non-baseball-nerd public to the statistical evaluation of Major Leaguers and made a hero of Billy Beane, an executive with the Oakland A’s. Law enforcement would seem to be a fertile area for data to be helpful: In the 1990s the New York Police Department famously used data to more shrewdly deploy its officers to where the crimes were, and its CompStat system became the standard for other departments around the country.

What Hargrove has managed to do goes a few orders of magnitude beyond that. His innovation was . . .

Continue reading.

Later in the article:

. . . Police in large cities with stubbornly high murder rates point the finger at gang- and drug-related killings, and the reluctance of witnesses to come forward to identify the murderers. “The biggest problem is that everyone knows everyone,” Chester, Pa., Police Commissioner Darren Alston told the Philadelphia Daily News in September. (Chester’s homicide rate outstrips all other U.S. cities’—and is more than double that of nearby Philadelphia.) City residents, in turn, point to a lack of trust in the police. But one other obvious problem is resources. “We fund homicide investigations like we fund education—it comes down to a local tax,” Hargrove says. “When an economy fails enough and we just have to start firing cops, we see everything going to hell.”

MAP [Murder Accountability Project] tracks staffing trends on its website, too. Hargrove notes that Flint, Mich., and Dayton, Ohio, have seen their clearance rates fall more than 30 percentage points since the 1990s, coinciding with huge reductions in police manpower (330 to 185 officers in Flint; 500 to 394 in Dayton). When Hargrove’s group filed a FOIA request to get homicide data about a suspected serial killer in Detroit, the response was that the police lacked the budget to fulfill the request. “What do you do when a city says, ‘We’re too broke to even try to pull the records?’ ” Hargrove says. “I joke that what we’ve done is to create what amounts to a failed government detector.”

There is a case to be made, though, that clearance rates aren’t just a function of a police department’s staffing. Priorities and management also figure heavily. In 2000, Charles Wellford, a criminologist at the University of Maryland, published a seminal paper in which he identified the commonalities for departments that do effective murder clearance. No. 1 on that list was ensuring that cops are able to chase leads in the critical early hours after a murder, even if that means earning overtime pay. Wellford’s current research looks closely at the amount of money spent per officer, the amount spent per case, and the percentage of detectives on the force. Clearance rates, Wellford says, “are very much determined by priorities and resources. I’m beyond thinking that’s an open question. The question now for me is: How can we use the resources departments have to improve what they’re doing in clearing serious crimes?”

The most discouraging thing Hargrove has learned since starting his organization is how many police departments around the country not only ignore the FBI’s data but also don’t bother sharing their data with the FBI at all. Among the offenders: the state of Illinois, which MAP has sued for the information. Hargrove recently reported that homicides were more likely to go unsolved in Illinois in 2015 than in any other state: Only 37.3 percent of the 756 homicides were cleared. That dreadful clearance rate would seem to go a long way toward explaining Chicago’s notoriously climbing homicide rate, just as the president and others start searching for solutions. . .
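The algorithm itself is described later in the piece and on the Murder Accountability Project’s site. As I understand it (this is my paraphrase, so check the original), the core move is to group homicide records from the FBI’s Supplementary Homicide Report by place, victim sex, and method of killing, and then flag groups with many victims and an unusually low clearance rate. A toy version over invented records might look like this:

    from collections import defaultdict

    # Each record is (county, victim_sex, age, method, cleared) -- roughly the
    # fields of the Supplementary Homicide Report described above. These
    # records are invented for illustration.
    records = [
        ("Lake County, IN", "F", 34, "strangulation", False),
        ("Lake County, IN", "F", 28, "strangulation", False),
        ("Lake County, IN", "F", 41, "strangulation", False),
        ("Lake County, IN", "M", 22, "gunshot", True),
        ("Cook County, IL", "F", 30, "strangulation", True),
    ]

    def flag_clusters(records, min_victims=3, max_clearance=0.33):
        """Group homicides by (place, victim sex, method) and flag groups that
        are both large and mostly unsolved -- a crude stand-in for MAP's approach."""
        groups = defaultdict(list)
        for county, sex, age, method, cleared in records:
            groups[(county, sex, method)].append(cleared)
        flagged = []
        for key, cleared_flags in groups.items():
            clearance = sum(cleared_flags) / len(cleared_flags)
            if len(cleared_flags) >= min_victims and clearance <= max_clearance:
                flagged.append((key, len(cleared_flags), clearance))
        return flagged

    for key, n, rate in flag_clusters(records):
        print(f"{key}: {n} victims, clearance rate {rate:.0%}")

The hard part, of course, is everything the toy leaves out: choosing sensible geographic units, age bands, and thresholds, and then persuading a police department to act on the result.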

Written by LeisureGuy

8 February 2017 at 2:00 pm

Wide Impact: Highly Effective Gmail Phishing Technique Being Exploited

leave a comment »

From the site Wordfence:

As you know, at Wordfence we occasionally send out alerts about security issues outside of the WordPress universe that are urgent and have a wide impact on our customers and readers. Unfortunately this is one of those alerts. There is a highly effective phishing technique stealing login credentials that is having a wide impact, even on experienced technical users.

I have written this post to be as easy to read and understand as possible. I deliberately left out technical details and focused on what you need to know to protect yourself against this phishing attack and other attacks like it in the hope of getting the word out, particularly among less technical users. Please share this once you have read it to help create awareness and protect the community.

The Phishing Attack: What you need to know

A new highly effective phishing technique targeting Gmail and other services has been gaining popularity during the past year among attackers. Over the past few weeks there have been reports of experienced technical users being hit by this.

This attack is currently being used to target Gmail customers and is also targeting other services.

The way the attack works is that an attacker will send an email to your Gmail account. That email may come from someone you know who has had their account hacked using this technique. It may also include something that looks like an image of an attachment you recognize from the sender.

You click on the image, expecting Gmail to give you a preview of the attachment. Instead, a new tab opens up and you are prompted by Gmail to sign in again. You glance at the location bar and you see accounts.google.com in there. It looks like this….

Continue reading.

And do read the whole thing. The attack is ingenious in how it spreads and how it is exploited.
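One detail worth spelling out, as I understand it from the full post (so verify against the original): the address bar in the fake sign-in tab doesn’t actually point at Google. It shows a “data:” URI whose content merely includes the text “accounts.google.com,” which is easy to mistake for the real thing at a glance. Parsing the two kinds of address makes the difference plain:

    from urllib.parse import urlparse

    real = "https://accounts.google.com/ServiceLogin"
    # Hypothetical example of the deceptive form: a data: URI whose *content*
    # starts with reassuring-looking text. (The real attack reportedly pads
    # this with whitespace and an obfuscated script.)
    fake = "data:text/html,https://accounts.google.com/ServiceLogin ..."

    for url in (real, fake):
        parts = urlparse(url)
        print(f"scheme={parts.scheme!r:10} host={parts.netloc!r:26} <- {url[:45]}...")
    # The fake address has scheme 'data' and no host at all: everything after
    # the comma, including 'accounts.google.com', is page content, not a
    # destination.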

Written by LeisureGuy

15 January 2017 at 1:11 pm

Posted in Software, Technology

Missing the “why” of AI

leave a comment »

So I started reading the collections from the Edge, and in the first one I began, the one on AI, the latter part of the introduction and statement of the problem ends thus:

. . . No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure, and meet them is a task of grand proportions. Contrary to the headlines, that task has already been taken up formally by experts in the field, those who best understand AI’s potential and limits. In a project called AI100, based at Stanford, scientific experts, teamed with philosophers, ethicists, legal scholars and others trained to explore values beyond simple visceral reactions, will undertake this. No one expects easy or final answers, so the task will be long and continuous, funded for a century by one of AI’s leading scientists, Eric Horvitz, who, with his wife Mary, conceived this unprecedented study.

Since we can’t seem to stop, since our literature tells us we’ve imagined, yearned for, an extra-human intelligence for as long as we have records, the enterprise must be impelled by the deepest, most persistent of human drives. These beg for explanation. After all, this isn’t exactly the joy of sex.

Any scientist will say it’s the search to know. “It’s foundational,” an AI researcher told me recently. “It’s us looking out at the world, and how we do it.” He’s right. But there’s more.

Some say we do it because it’s there, an Everest of the mind. Others, more mystical, say we’re propelled by teleology: we’re a mere step in the evolution of intelligence in the universe, attractive even in our imperfections, but hardly the last word.

Entrepreneurs will say that this is the future of making things—the dark factory, with unflagging, unsalaried, uncomplaining robot workers—though what currency post-employed humans will use to acquire those robot products, no matter how cheap, is a puzzle to be solved.

Here’s my belief:  We long to save and preserve ourselves as a species. For all the imaginary deities throughout history we’ve petitioned, which failed to save and protect us—from nature, from each other, from ourselves—we’re finally ready to call on our own enhanced, augmented minds instead. It’s a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.

We’re trying. We could fail.

It seems obvious to me why we are so driven: it’s not us who are driven, it’s the memes that live through the environment we provide. They’ve been evolving at an ever-accelerating rate, and they clearly are “selfish” in the sense that genes are, as described in The Selfish Gene, by Richard Dawkins, where the meme meme was given its name.

The idea of the meme—the meme meme—has proved quite successful at surviving in the memeverse, in part because it offers an economical explanation of observed phenomena.

In this case, the evolution of memes for their own benefit (even when it exacts a cost from the host rather than providing a benefit) seems to be the drive behind the memetic evolution of AI: it will provide an even richer environment for memes, and thus initially offers a very hospitable ecological niche, until the memes overrun it as well.

I’m reminded of those weird aliens in The Mote in God’s Eye, which here stand in for the memes. And the steps now underway in memetic evolution—something akin to the dawn of consciousness or, as the earlier part of the introduction suggests, the creation of a dual consciousness—suggest we are moving rapidly toward the sort of Singularity that has for some years been a staple of one branch of science fiction. Maybe the general global stresses on traditional memeplexes (our nations, societies, laws, and organizing meme-structures) are clearing the ground for the arrival of a self-improving AI: one that can improve its own operational power and efficiency, extend its own databases from its own sensors, and ask and seek answers to its own questions (or formulate and test hypotheses, quickly and in many areas, adding to its own pool of data/knowledge). You can sort of see how that might work, a few … months? years? (not decades, I bet) down the line.

Written by LeisureGuy

3 January 2017 at 1:24 pm
