Archive for the ‘Technology’ Category
Update: I talked to The Wife about the video and she pointed out how regional such things are in China. In Shenzhen you will have those back-alley markets of digital electronics; in another city it will be handbags; and there is one city well known for specializing in art.
Perhaps “purpose-modified” would be a better term. Charles Fleming has the story in the LA Times:
Ford Motor Co. has unveiled the world’s first “pursuit-rated” hybrid police car and the Los Angeles Police Department may be among the first agencies to have one.
Chief Charlie Beck and LAPD officers were expected to help introduce the vehicle at a Monday morning ceremony in Los Angeles.
The Ford Police Responder Hybrid Sedan started life as a Ford Fusion before engineers gave it the law enforcement treatment by upgrading the suspension, beefing up the brakes, adding different wheels and tires, attaching a skid plate and altering the seats so they have room for police utility belts and include an “anti-stab” plate to protect officers from sharp instruments coming from the back seat.
It won its world-first “pursuit-rated” stripes in testing by Michigan State Police and the Los Angeles Sheriff’s Department, Ford said in announcing the new vehicle.
The Responder is driven by Ford’s 2.0-liter, four-cylinder gasoline engine mated to an electric motor powered by a 1.4-kilowatt-hour lithium-ion battery. That powertrain, Ford said, will get an EPA-estimated 38 miles per gallon — more than twice the fuel economy of Ford’s current police vehicle, the Interceptor.
Ford hasn’t published specifications on the new vehicle yet, but Arie Groeneveld, of the company’s police programs department, said the Responder will accelerate from zero to 60 mph at about the same rate as the Taurus-based Interceptor police vehicle.
That car scored well in 2016 law enforcement testing. In one test, the Interceptor went zero to 60 in 5.8 seconds, ahead of Dodge’s Charger police vehicle’s 6.6 seconds and Chevrolet’s police Caprice, at 6.7 seconds.
Pricing has not been announced, but hybrid vehicles typically cost more than their internal combustion engine counterparts. Ford executives said fuel savings should pay for the higher cost within one year.
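Ford’s one-year payback claim can be sanity-checked with rough arithmetic. In the sketch below, every figure except the hybrid’s 38 mpg is an assumption chosen for illustration (annual mileage, the conventional car’s mpg, the fuel price, and the hybrid’s price premium), not a number from Ford or the article:

```python
# Back-of-the-envelope payback estimate. All inputs are illustrative
# assumptions except the 38 mpg EPA estimate quoted in the article.
annual_miles = 30_000        # assumed: patrol cars rack up heavy mileage
mpg_conventional = 18        # assumed: roughly half the hybrid's figure
mpg_hybrid = 38              # EPA estimate quoted in the article
fuel_price = 2.50            # assumed dollars per gallon
hybrid_premium = 2_000       # assumed extra purchase cost of the hybrid

gallons_saved = annual_miles / mpg_conventional - annual_miles / mpg_hybrid
annual_savings = gallons_saved * fuel_price
payback_years = hybrid_premium / annual_savings

print(f"Fuel saved per year: {gallons_saved:.0f} gal")
print(f"Savings per year:    ${annual_savings:.0f}")
print(f"Payback period:      {payback_years:.1f} years")
```

With these assumed numbers the savings come to roughly $2,200 a year, which would indeed cover a $2,000 premium within a year; real fleet figures (especially idle time, which is where police hybrids save most) would move the result around.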
Ford expects the Responder to do well.
“Cities have been asking us for solutions to reduce carbon emissions and costs, and agencies have been asking for greener police cars and greener pursuit vehicles,” said Kevin Koswick, director of Ford’s lease and remarketing operations in North America. “We saw a need and we thought we could fulfill it.”
The Responder will go into production early next year. LAPD units could be driving them by late 2018. . .
Paul Rincon has some good news on BBC News:
A UK-based team of researchers has created a graphene-based sieve capable of removing salt from seawater.
The sought-after development could aid the millions of people without ready access to clean drinking water.
The promising graphene oxide sieve could be highly efficient at filtering salts, and will now be tested against existing desalination membranes.
It has previously been difficult to manufacture graphene-based barriers on an industrial scale.
Reporting their results in the journal Nature Nanotechnology, scientists from the University of Manchester, led by Dr Rahul Nair, show how they solved some of the challenges by using a chemical derivative called graphene oxide.
Isolated and characterised by a University of Manchester-led team in 2004, graphene comprises a single layer of carbon atoms arranged in a hexagonal lattice. Its unusual properties, such as extraordinary tensile strength and electrical conductivity, have earmarked it as one of the most promising materials for future applications.
But it has been difficult to produce large quantities of single-layer graphene using existing methods, such as chemical vapour deposition (CVD). Current production routes are also quite costly.
On the other hand, said Dr Nair, “graphene oxide can be produced by simple oxidation in the lab”.
He told BBC News: “As an ink or solution, we can compose it on a substrate or porous material. Then we can use it as a membrane.
“In terms of scalability and the cost of the material, graphene oxide has a potential advantage over single-layered graphene.”
Of the single-layer graphene he added: “To make it permeable, you need to drill small holes in the membrane. But if the hole size is larger than one nanometre, the salts go through that hole. You have to make a membrane with a very uniform less-than-one-nanometre hole size to make it useful for desalination. It is a really challenging job.”
Graphene oxide membranes have already proven their worth in sieving out small nanoparticles, organic molecules and even large salts. But until now, they couldn’t be used to filter out common salts, which require even smaller sieves.
Previous work had shown that . . .
Elizabeth John describes police body-camera tradeoffs in Slate:
On Wednesday, Taser International announced it is changing its name to Axon—and that it is offering every police department in the United States free body cameras, plus free software and data storage for one year. This announcement is a big deal, but not because it’s a great boon to policing. It isn’t.
Since the Ferguson protests in August 2014, lawmakers, watchdog groups, and even many police chiefs have embraced police body cameras as a tool of accountability. The Department of Justice has offered millions to local departments to purchase them. At first, this widespread enthusiasm seemed justified. A body camera on every cop would, in theory, record every controversial police encounter, and its very presence would deter misconduct.
But body cameras have not entirely lived up to that promise. Regulations about how, when, and whether to use them vary widely by jurisdiction. Many police departments have adopted the technology first, intending to figure out the details later. The absence of clear or uniform regulations has prompted concerns that body cameras are becoming surveillance tools of the police rather than an assurance of accountability to the public. In other words, when there are few limits on what can be recorded, it may be that everything and everyone will be.
Even if body cameras are surveillance tools, we should expect that the police, along with state and local governments, would be the ones to decide what these tools should do. But that hasn’t proven to be the case, either.
With body cameras, procurement is policy. The model and manufacturer a police department chooses will determine how it’s used in the field. That’s because choices about substantive issues like data production, storage, and sharing are issues of design. Who should be able to decide whether a body camera should be turned on: the officer or someone at the precinct? Should the camera feature a video or audio buffer constantly recording what are often crucial seconds before anyone hits the record button? Should the camera record the video or livestream it? These decisions are made by private companies, not public police departments.
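The pre-event buffer mentioned above is, at heart, a simple data structure. Here is a minimal, hypothetical sketch (not any vendor’s actual firmware) of how a ring buffer lets a camera retroactively include the seconds before the record button is pressed:

```python
# Hypothetical sketch of the "pre-event buffer" design choice: a bounded
# ring buffer keeps the last N seconds of frames so that pressing
# "record" can include the moments before the button was hit.
from collections import deque

class PreEventBuffer:
    def __init__(self, seconds: int, fps: int = 30):
        # deque with maxlen silently discards the oldest frames
        self.frames = deque(maxlen=seconds * fps)

    def capture(self, frame):
        """Called continuously, even before recording starts."""
        self.frames.append(frame)

    def start_recording(self):
        """Return the buffered pre-event frames to prepend to the clip."""
        return list(self.frames)

buf = PreEventBuffer(seconds=2, fps=3)   # tiny numbers for illustration
for t in range(10):
    buf.capture(f"frame-{t}")
print(buf.start_recording())  # only the last 2 * 3 = 6 frames survive
```

Whether such a buffer exists at all, and how many seconds it holds, is exactly the kind of substantive policy decision that currently gets made by the manufacturer rather than the public.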
No one is more dominant in this field than the organization formerly known as Taser International, which already controls at least three-quarters of the existing police body camera business, according to a July 2016 New York Times article. Building on its existing relationships with 17,000 of the country’s police departments established through its electric stun gun business, its Axon division has cornered the market on body camera contracts. In fact, the body cameras themselves aren’t nearly as profitable as Axon’s cloud service, which stores the massive amount of data generated by police body cameras and offers departments the software to analyze it. Axon’s cloud service subsidiary, Evidence.com, requires police to purchase yearly subscriptions. Every incentive exists to lock in law enforcement agencies early for these recurring, long-term services. Axon isn’t really a body camera company; it’s building a law enforcement platform.
In becoming the biggest vendor of police body cameras, Axon is exerting an undue influence on policing itself. Important questions about how body cameras operate and how their resulting data should be treated have been outsourced to a private company. Community oversight over policing is impossible when critical decisions about a surveillance technology have already been made by a vendor.
And Axon is eager to remain dominant in the body camera marketplace. Axon CEO Rick Smith has said that he expects to have facial recognition technology in his cameras sometime in the near future. The company recently acquired two artificial intelligence firms, which suggests that it intends to apply AI to sift through the petabytes of data in its possession. Police body cameras would be instrumental in collecting “one of the richest treasure troves you could imagine” of data for applications like predictive policing. Whether to incorporate these capabilities is a private choice, not a public one.
The usual mechanisms for police oversight don’t apply here. Want to find out what plans Axon has for future applications of its technology? Axon, like other technology vendors contracting with the police, isn’t subject to public records laws. Axon is a private company beholden to its shareholders, not the communities whose police officers adopt its cameras. Curious about the algorithm used to identify suspicious behavior in body camera video? Like other technology companies providing services to the government, Axon will likely invoke concerns about trade secrets in order to keep such information non-public. (I reached out to Axon for comment, but as of publication there had been no response.)
Body cameras require a careful balancing of interests between . . .
But what about an effort to preserve old software as software has changed, mutated, and evolved? Where can one find the grand early firsts? In the physical world, we can review how, say, buildings of worship evolved over the millennia through the Darwinian process of meme evolution, and those structures show incredible diversity: Mayan pyramids, Greek temples (somehow I can’t picture an Egyptian house of worship), Muslim mosques, the onion domes of Russian Orthodox churches, the great cathedrals of Europe, the plain simplicity of the Shakers, the steeples of New England . . . and that’s not even scratching the surface of that one meme.
But suppose you wanted to see the same sort of historical display for software, which is certainly as real as houses of worship: both are memes. One, however, is hard tissue that fossilizes readily, and the other is soft tissue that disintegrates quickly and thus is lost to fossilization. And that’s why software cannot be seen in the same way.
Jordan Pearson reports in Motherboard:
Code is heritage.
Software has shaped our lives and culture for decades, and now the United Nations will make a push to get world governments to work toward preserving it.
On Monday, UNESCO, one of the founding agencies of the UN, announced a partnership with the French Institute for Research in Computer Science and Automation (INRIA) to preserve every piece of software under the sun. Last year, INRIA kicked off its Software Heritage project with the mission of collecting software source code and ensuring it’s never forgotten. The project has so far logged 58 million projects and billions of source files.
“We live in a digital world, but we don’t know how it works—what’s behind the machine? It’s software,” said Davide Storti, a UNESCO spokesperson, over the phone. “If you know how software works, you might better understand the world you live in. So, it’s important for education, and that kids have access to this notion.”
“It’s something that governments should participate in,” he said.
Software preservation is a massive undertaking, especially outside of sought-after programs like video games, and one that requires a lot of collaboration and time. That’s why, with the current partnership, UNESCO will strike up a conversation among its 195 member states about how they can work to preserve code-as-cultural-heritage.
“Member states might agree that they want to do this, but nothing will happen,” Storti said. “So, we’re partnering with a public institution that is doing this.”
“[INRIA’s] software is real, it’s there, it contains millions of software projects, so it’s thanks to our partner that we can bring value to the conversation,” he continued.
There are a few software preservation projects out there right now, many of which are run by hobbyists, obsessives, and enthusiasts. For example, a group of hackers led by Jason Scott of the Internet Archive are currently on a mission to crack and preserve every piece of Apple II software before they’re lost forever.
“It’s important to not only discuss the preservation itself, but also how these small initiatives can talk to larger initiatives,” Storti said. “As we see it, we look forward to organizing debates on exactly this kind of subject.”
There’s clearly a lot of firming-up to be done, but UNESCO is ready to start the conversations that may lead to government-led software preservation. And with . . .
Memes stand out quite clearly in this example, and the fact that they evolve is clear.
It’s also clear that this sort of preservation is specifically a job for the government. Businesses come and go and cannot (any longer) be expected to focus on anything other than maximizing shareholder value: profits über alles is the current mindset/meme. So business won’t do it. But, as the article explains, it is worth doing. The only funded entity in sight is the government, and governments in general take on the responsibility of preserving the nation’s cultural heritage (which quite specifically consists of memes: that’s what culture is).
In the future it seems likely that people will want to know, “How did we end up in this situation?” and at those times having a record of the software family of meme evolution would be very helpful. Example: shortly after the moment our new AI overlords take over completely, we might well be curious as to how that happened, since no one was working toward that.
But that’s the nature of evolution: no one was working toward bringing about those weird courtship rituals in the previous post. Those are due purely to the process of evolution, not with a purpose in mind. Just as the random combination of letters will in time produce a masterwork, so the random interactions of meme-creators will in time produce the Great AI. Thus the emergence of the Great AI will similarly be due not to our actions’ specific intentions—that is, we are not working toward that purpose—but as an outcome of meme evolution much as the courtship rituals emerge from lifeform evolution. In particular, our minds are rooted in the evolution of lifeforms. Mind is an emergence, a phenomenon seen in various contexts (life itself being an emergence). And with minds we got memes, which evolve extremely rapidly—look at meme evolution since 1700, for example. And ultimately one would expect another emergence, which might well be the Great AI.
It will just emerge, and as a result of our own activity: building faster computers with more memory, figuring out machine learning and how to apply it, and hooking things together. As with lifeform evolution, not all mutations are beneficial, and many meme mutations are in effect pruned by natural selection. But the ones that work continue to evolve in various directions, so culturally we develop things as odd as those courtship rituals. All those working on various aspects of the Great AI are, in effect, parts of it: the cells, as it were, whose intelligence is directed toward doing things from which the Great AI will emerge: not designed, but evolved.
That does offer one possible way to increase the odds that the Great AI will be benign. (It’s clear already that memes themselves may not be benign—cf. North Korea.) Note that the environment determines which variants are adaptations that survive and flourish and which are maladaptations that weaken and go extinct. That’s true for lifeforms and for memes. As the oceans become more acidic with climate change (absorbing CO2), that affects how well different lifeforms fare. Some will go extinct; others will flourish, giving rise to many species. When the environment changes, natural selection picks winners and losers. The change can be very slow or quite sudden (e.g., the asteroid strike). The effect is a change in the direction of evolution. Evolution is like a snowflake: the patterns record the humidity and temperature of the air around the snowflake as it formed, so the snowflake is a record of those changes. Similarly, the evolution of lifeforms is a record of environmental change (with the environment in this case including other lifeforms).
For memes, we are the environment. That is, we are the hosts for cultural knowledge (memes), so we determine the survival value of various memes. Indeed, in history you can trace out how certain memes arose, flourished (or not), branched, etc. Take a man’s tie. It originated as a scarf, divided into bow ties and long ties, has various small variants in knots, and branched off into various subspecies: the Western string tie, the lanyard, and so on. So if we want the Great AI to evolve in a benign direction, we need to provide an environment that encourages that—and we are the environment.
And it’s clear that we can change the meme environment. Donald Trump has had a very strong impact on which memes flourish and which are struggling. I fear, however, that the meme environment now being developed reduces the chance that the Great AI will be benign for humans. (Again: North Korea is a bunch of memes that turn out not to be good for their hosts.)
The control we have is very indirect, but it does seem important to go in a direction different from the one in which President Trump is headed.
I hope you’re a fan of singularity sci-fi. You may be living it pretty soon.
Jack Goldsmith and Benjamin Wittes write in Lawfare:
The U.S. intelligence community is on the verge of a crisis of confidence and legitimacy it has not experienced since the 1970s. Back then, the crisis was one of the community’s own behavior. In the 1950s, 1960s, and 1970s the intelligence community used its secret powers of surveillance and other forms of government coercion—often but not always at the behest of its political superiors—to spy on and engage in operations against Americans for political ends. At that time, politicians really did use executive branch intelligence tools to seek to monitor and harm political enemies, and exposure of that reality nearly destroyed the intelligence community. The problem was Hoover’s illegal wiretaps, bugs, and break-ins, and his attempts to annihilate Martin Luther King and others; it was NSA’s and CIA’s domestic espionage and propaganda operations; it was Richard Nixon’s many dirty tricks.
The community survived because it entered a “grand bargain” with Congress and the American people in the 1970s. And it is that very grand bargain that today’s crisis now threatens.
Today’s crisis is sparked by allegations, both by President Trump and by some House Republicans, of political misuse of the intelligence community by the Obama administration. Whether the allegations are entirely false or turn out to have elements of truth, they put the intelligence community in the cross-hairs, since some of the institutions that are supposed to be key legitimators are now functioning as delegitimators. After all, entirely appropriate counterintelligence investigations can easily look like inappropriate political meddling, and if the President and the House Intelligence Committee chairman are not merely not defending the intelligence community but are actively raising questions about its integrity, the bargain itself risks unraveling.
The central elements of the grand bargain were these: the president and his intelligence bureaucracy were allowed to maintain robust surveillance and espionage capacities, including domestically. But in exchange, Congress subjected them to significant legal restrictions on how they collected, analyzed, and disseminated intelligence information; a bevy of lawyers throughout the intelligence community and, over time, in the Justice Department monitored and enforced those restrictions; domestic surveillance required a court order, including a court order from a new court, the Foreign Intelligence Surveillance Court, for foreign intelligence investigations; and two new committees, the Senate and House Intelligence committees, were to be kept “fully and currently informed” of all significant intelligence activities, and would have robust oversight authorities. The idea was that the use of these powers would be documented and watched by institutions that could be trusted to keep secrets but would act as credible surrogates for public oversight mechanisms.
These reforms proved vital. Intelligence collection, including in the homeland, is essential to our security. But it is also among the most dangerous of government powers because it is so consequential, so secret, and so easy and tempting to abuse. Its legitimacy is inherently fraught. So it is crucial not merely that the entire process be above board politically but that it be seen to be above board. If enough people believe that the intelligence community is a political instrument of those in power to be used against opponents, it actually doesn’t matter if it’s untrue. So the key function of the grand bargain was not merely keeping the intelligence community actually within the law but also validating, to a public conditioned by Watergate and COINTELPRO to believe the worst, that the intelligence community was functioning within the law.
This system did not always work perfectly, and it has every so often required strengthening. Sometimes, as in Iran-Contra, it was because of real abuse. Sometimes, it was because of perceived abuses. Sometimes, it was the result of changed technology. Sometimes, it was the result of changed threat perception.
But on the whole, the system of overlapping internal and external checks, combined with massive changes in intelligence community culture, worked well. It gave the intelligence community legitimate operating space in the midst of a political culture obsessed with movies about intelligence community plots and rogue operations. Even as Hollywood made Minority Report and Enemy of the State, the intelligence community could carry on its business. That was a huge accomplishment.
Another achievement of the grand bargain was the actual elimination of the great evil of governmental use of its vast intelligence apparatus for politically-motivated surveillance. And while it did not eliminate the perception in the mass culture that this was going on, it did give the community a powerful response to suggestions of politically motivated misconduct. The response went like this: here are the rules; here are the bodies we report to on our operations; we did not violate the rules; and our many oversight bodies, who in the round are credible actors, were kept fully informed.
This basic system survived even the Snowden revelations. Many people found Snowden’s disclosures of vast intelligence collection shocking. But though Snowden disclosed many technical legal problems with this surveillance, as well as some controversial legal judgments signed off on by the executive oversight apparatus, it also showed that the problem of politically motivated surveillance simply didn’t exist. None of the thousands of pages of NSA revelations pointed to anything like the venal activities of the 1970s and before.
Yet events of the last year have put the domestic political use of surveillance tools front and center once again. And ironically, today it’s the President of the United States and the Chairman of the House Intelligence Committee who are alleging precisely that which the Snowden revelations did not show. . .
Continue reading. And do read the whole thing. Trump is doing serious damage to our government and seriously weakening it. And the whole world sees it, including those who are hostile to our country.
Jillian York reports at Motherboard:
Every society engages in censorship. Whether from church or state or otherwise, the desire to suppress information seems a natural human impulse, albeit one that varies in its manifestations. Most of us readily accept certain kinds of censorship—think child sexual abuse imagery—but are reluctant to call it by its name.
The restriction of content we deem beyond the pale is still, in fact, censorship. The word “censorship” is not itself a value judgement, but a statement of fact, an explanation for why something that used to be, no longer is. The American Civil Liberties Union defines “censorship” as “the suppression of words, images, or ideas that are ‘offensive’, [that] happens whenever some people succeed in imposing their personal political or moral values on others.” The definition further notes that censorship can be carried out by private groups—like social media companies—as well as governments. And when carried out by unaccountable actors (be they authoritarian governments or corporations) through opaque processes, it’s important that we question it.
According to Twitter’s latest transparency report, the company suspended more than 377,000 accounts for “promoting extremism.” Twitter said that 74 percent of extremist accounts were found by “internal, proprietary spam-fighting tools”—in other words, algorithms and filters built to find spam, but employed to combat the spectre of violent extremism.
Few have openly questioned this method, which is certainly not without error. In fact, the filtering of actual spam inspired more of a debate back in the day—in 1996, residents of the town of Scunthorpe, England, were prevented from signing up for AOL accounts due to the profanity contained within their municipality’s name, leading to the broader realization that filters intended to catch spam or obscenity can have overreaching effects. The “Scunthorpe problem” has arisen time and time again when companies, acting with good intentions, have filtered legitimate names or content.
The Scunthorpe problem demonstrates that when we filter content—even for legitimate reasons or through democratic decisions—innocuous speech, videos, and images are bound to get caught in the cracks. After all, you can’t spell socialism without “Cialis”.
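The mechanics behind the Scunthorpe problem are easy to demonstrate. Below is a deliberately naive, hypothetical filter (the blocked terms are illustrative, not any company’s actual list) that matches substrings without regard to word boundaries, which is exactly how innocuous words get caught:

```python
# Illustrative sketch of a naive substring blocklist, the kind of filter
# that produces the "Scunthorpe problem". The terms are hypothetical.
BLOCKLIST = {"cialis", "sex"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked by a substring match."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Innocuous words get caught because the match ignores word boundaries:
print(naive_filter("socialism"))  # True: "so-CIALIS-m"
print(naive_filter("Essex"))      # True: "Es-SEX"
print(naive_filter("filter"))     # False
```

Matching on whole words (say, with a word-boundary regex) avoids these particular false positives, but no amount of pattern-tuning gives the filter what it actually lacks: an understanding of context.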
We know that companies, using low-wage human content moderators and algorithms, undoubtedly make mistakes in their policing of content. To err is human, and algorithms are built and implemented by humans, lest we forget. But when a company takes charge of ridding the world of extremism, with minimal to no input from society at large, there’s something more insidious going on.
Twitter’s deeming of some content—but not other content—as “extremist” is, after all, a value judgement. Although there’s little transparency beyond numbers, much of the banned content matches up neatly with the US government’s list of designated terrorist organizations. We don’t know what kinds of terms Twitter uses to weed out the accounts, but accounts expressing support for Islamic terror organizations seem to make up the bulk of takedowns. Meanwhile, neo-Nazis like Richard Spencer are rewarded with a “verified” checkmark—intended to signify a confirmed identity, but often used and seen as a marker of celebrity.
By choosing to place its focus on the faraway spectre of ISIS—rather than the neo-Nazis closer to home—Twitter is essentially saying that “extremism” is limited to those scary bearded men abroad, a position not unlike that of the American media. In fact, extremism is a part of our new, everyday reality, as elected officials opt for racist and sexist policies and as President Trump eggs on his most ardent white supremacist fans, offering tacit support for their vile views. As white supremacist hate gains ground, companies seem caught unaware, and unwilling or unprepared to “tackle” it the way they have Islamic extremism.
The question of whether to censor, of what to censor, is an important one, one that must be answered not by corporations but through democratic and inclusive processes. As a society, we may in fact find that censoring extremism on social platforms helps prevent further recruitment, or saves lives, and we may decide that it’s worth the potential cost. At that point, we could work to develop tools and systems that seek to prevent collateral damage, to avoid catching the proverbial dolphins in the tuna nets. . .