Archive for the ‘Technology’ Category
From the About page at the Jolly Roger Telephone Co.:
I have had the same two landlines for about 15 years, and I used to get a lot of telemarketer calls. When my son was old enough to answer the phone, he got a call from a very aggressive telemarketer who said “some bad words,” and I had had enough. I work on telephone systems for a living, so I said, “Hey, I’m a phone guy. I should be able to do something about this!” And I did.
I basically set up an “opt-in” whitelist of caller-IDs. If you call my house, you get a message that says “hey, if you are a real person, press any button and you’ll ring through to my house. If not, I’ll send you to voicemail and you can leave a message.” If you press a button, you ring my house, your caller-ID is whitelisted, and you’ll ring through the next time you call.
Now, I figured telemarketers would just press a button, so I also programmed it to send me an email that included links to set the behavior next time this same caller-id called. I could (1) send this caller to a disconnect message, (2) let them through to the house, or (3) challenge them again next time.
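The screening flow described above can be sketched in a few lines. This is an illustrative reconstruction, not the author's actual code: the function name, the `WHITELIST` set, and the `ACTIONS` table (standing in for the per-caller choices set via the email links) are all hypothetical.

```python
# Hypothetical sketch of the opt-in caller-ID screening described above.
# WHITELIST holds caller-IDs allowed straight through; ACTIONS holds
# per-caller overrides chosen via the links in the notification email.
WHITELIST = set()
ACTIONS = {}   # caller_id -> "disconnect" | "allow" | "challenge"

def screen_call(caller_id, pressed_button):
    """Decide how to route an incoming call."""
    action = ACTIONS.get(caller_id)
    if action == "disconnect":
        return "play_disconnect_message"   # option (1) from the email
    if caller_id in WHITELIST or action == "allow":
        return "ring_house"                # option (2): let them through
    if pressed_button:
        WHITELIST.add(caller_id)           # caller claims to be human
        return "ring_house"
    return "voicemail"                     # silent caller goes to voicemail
```

A real deployment would hang this logic off a PBX dialplan rather than a bare function, but the routing decision is the same.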
It worked perfectly. From that moment, telemarketers never rang through to the house. Not one telemarketer has rung through in the last two years.
I noticed that the telemarketer calls would hang up after about 10 seconds. This of course is the magic of “predictive dialers.” Telemarketers have machines that dial thousands of numbers and when the machine thinks a human has answered, it cuts through to an agent as quickly as it can. This is why you often have to say “hello” a couple times before someone is there. That’s the dialer figuring out if you are an answering machine or not. In my greeting that said “if you’re a human, press a button…”, the predictive dialers correctly figured they had reached a machine and hung up. So I had successfully blocked them, but I wasn’t causing them any pain or discomfort.
So I figured I’d try something. I was getting a lot of repeat calls from the same numbers; obviously the same company’s predictive dialers were calling me at various times trying to catch me at home. I thought: what if I play a sound file that says “hello?… Hello?… Hello?…” a few times? Would it fool their predictive dialer into thinking it had reached a real person? So I recorded some “hellos,” then a “hang on a sec,” and then some silence. I created a “parrot” routine and sent these obvious telemarketers to this parrot.
And it worked like a champ! So then I thought “hmmmmm, how far can I take this parrot?”
I then created a fairly sophisticated algorithm that performs a handshake (hello? hello?), then engages (yes, uh huh, right), and then, when it thinks the caller is suspicious, says something completely inane. If the caller stops talking and there’s a long silence, it goes back to the handshake. There’s a bit more to it, but that’s pretty much the general idea. As you’ll hear from the calls on this blog, it has been working like a champ!
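The handshake/engage/inane behavior is essentially a small state machine. Here is a minimal sketch of that idea; the state names, phrase lists, and the assumption that the caller's speech and suspicion are detected upstream are all mine, not the author's implementation.

```python
# Illustrative state machine for the "parrot" robot described above.
# Phrases and state transitions are assumptions based on the description.
import random

HANDSHAKE = ["Hello?", "Hello? ... Hello?"]
ENGAGED   = ["Yes.", "Uh huh.", "Right."]
INANE     = ["Hang on, there's a bee on my arm.", "Sorry, what was that again?"]

class Parrot:
    def __init__(self):
        self.state = "handshake"

    def respond(self, caller_spoke, caller_suspicious=False, long_silence=False):
        """Pick the next thing to say, updating the state first."""
        if long_silence:
            self.state = "handshake"   # caller went quiet: start over
        elif caller_suspicious:
            self.state = "inane"       # distract a suspicious caller
        elif caller_spoke:
            self.state = "engaged"     # keep the agent talking
        table = {"handshake": HANDSHAKE, "engaged": ENGAGED, "inane": INANE}
        return random.choice(table[self.state])
```

Detecting when the caller is "suspicious" (e.g., repeated questions, rising pitch, key phrases) is the hard part and is left abstract here.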
So I modified the emails to include a link to send calls to this robot. And to prevent telemarketers from getting suspicious and hitting the robot again if they called back, I added a 1 hour delay before it would answer again. If they call back within an hour, they get a disconnect message.
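The one-hour cooldown is a simple rate-limit check per caller-ID. A sketch, again with hypothetical names, might look like this:

```python
# Sketch of the one-hour answer delay described above (names hypothetical).
import time

COOLDOWN = 3600          # seconds before the robot will answer the same caller
last_robot_call = {}     # caller_id -> timestamp of the last robot session

def route_repeat_caller(caller_id, now=None):
    """Send a caller to the robot, or to a disconnect message if too soon."""
    now = time.time() if now is None else now
    last = last_robot_call.get(caller_id)
    if last is not None and now - last < COOLDOWN:
        return "disconnect_message"   # called back within the hour
    last_robot_call[caller_id] = now
    return "robot"
```

Note that the cooldown clock is only reset on a successful robot session, so a caller who keeps retrying within the hour keeps hitting the disconnect message until the original hour elapses.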
I wrapped it all up in a “pirate” theme around a Jolly Roger pirate flag and I’ve been intercepting telemarketers for over a year. I have blocked thousands of calls to my house. The telemarketer calls have been entertaining and I really should update the robot to include more entertaining chatter. But it has served me well and there are only a few callers who figure out it’s a robot.
I hope you enjoy. Please let me know how you like it!
First, the racing. This video is from an interesting Motherboard article by Rachel Pick:
Second, using drones for and against piracy. Again from Motherboard, this time an article by Jordan Pearson:
Criminals at sea are already going high tech, using GPS and bootleg submarines to plan attacks and skirt around navy ships. According to a new report from a consultant to the Canadian military, the next technological frontier for oceanic crime could be drones.
The report, written by a strategic analyst for the Canadian military named David Rudd, notes that drones could be used for “surveillance” and “possibly weapons delivery,” which would give maritime non-state actors, or MNSAs, as they’re known, a long-range advantage. These actors could be pirates, smugglers, or traffickers. Rudd quotes one academic as saying, “For the first time, non-state adversaries would have an air force.”
Rudd notes that this kind of “super-empowerment” could occur through the transfer of military tech or with commercially available gadgets. Either way, drones could spell trouble for militaries policing the open water.
But drone technology cuts both ways, and Rudd notes that the Canadian military should consider using drones to their advantage against MNSAs, as well. Canada currently uses its navy to bust drug smuggling operations at sea, and conduct anti-piracy and anti-terror operations.
For example, Rudd suggests that drones could be outfitted with sensors to detect “low-acoustic signature swimmers” like the DIY subs used by the cartels circling military ships. Adding a mission bay under the helicopter deck on Canada’s warships could allow for drones to be deployed in order to “protect the ship by identifying and prosecuting threats at long range, exposing a swarm [of pirates] to a high level of attrition before any surviving constituents could bring their own weapons to bear.” In other words, annihilate them before they can fire off a round.
The use of drone tech to keep up with the capabilities of pirates could also be a cost-effective way to make Canada’s cash-strapped navy more capable, Rudd adds. . .
The site Fakespot.com claims to spot fake Amazon reviews. You enter the URL of the Amazon product, and it gives you its evaluation of the authenticity of the reviews. I checked Leisureguy’s Guide to Gourmet Shaving the Double-Edge Way, and I was reassured:
They included a summary of the reviews:
And even a word cloud of the reviews:
It might be worth trying on products you’re undecided on.
I picked a product at random, the Cold Steel Espada with Polished G10 Handle (using this URL: http://www.amazon.com/Cold-Steel-Espada-Polished-Handle/dp/B001DZM5QS/ref=sr_1_1) and ran it, with this result:
Radley Balko has a very interesting column on the implementation of police body cameras, including some knotty privacy considerations. Oddly, he doesn’t mention the missing-video presumption, presumably because none of the pilot projects or implementations include such a presumption, but I think the missing-video presumption is essential, given the rate at which police sabotage the recording devices.
The column begins:
Outfitting police officers with body cameras seems to be the one police reform that just about everyone can support. The idea has been endorsed by police chiefs, activist groups, and local, state and federal politicians across the ideological spectrum. (The only holdout appears to be the police unions.) The problem is how to implement the cameras — determining the policies that govern how they’re used. For example, some civil libertarian groups have raised concerns about the privacy of people caught in body camera footage. There are also important questions about public access, review, storage, tampering and disciplinary action for officers who don’t use the devices properly.
In Albuquerque, a city with a long history of police brutality and corruption, police officer Jeremy Dear was fired in December 2014 after repeated incidents in which he failed to activate his body camera just before using force, including just before shooting and killing a teenaged girl. Dear also has a history of allegations of excessive force. Even after Dear was explicitly told to record all of his interactions with citizens, he sent an email asking to use his camera in “offline” mode, which would have allowed him to delete videos before they were uploaded to a cloud server. Last November, the city’s Personnel Board voted 3-2 to reinstate Dear on the police force.
In San Diego, police officials have said they (and only they) will determine which videos are released to the public and when, a policy that does little to increase transparency and public trust. A Washington Post investigation last fall found similar problems across the country. Cops accused of misconduct or excessive force were given access to footage of controversial incidents, but that footage was rarely made public.
In a hearing before a committee of the state legislature late last year, law enforcement leaders in Tennessee raised another potential barrier to the widespread use of body cameras: the cost of complying with open records requests. They raised the possibility of activist or watchdog groups requesting huge caches of video that they claimed would require weeks of labor to fulfill.
But body cameras are coming. A recent survey of the nation’s largest police departments found that 95 percent planned to implement a body camera program sometime in the near future. Body cameras are a piece of technology. And like any technology, the policies that control how they’re used will go a long way toward determining if they’re a tool for transparency and accountability, or if they’re ineffective or even counterproductive to those goals.
To that end, the Brennan Center has just completed a study of the body camera policies in the 24 police departments around the country that have so far implemented them. Of the 24, 9 programs are still in the pilot stage. For comparison, the Brennan Center also included three model programs from the ACLU, the International Association of Chiefs of Police, and the Police Executive Research Forum. The authors of the study then broke the policies down into several charts: “Recording Circumstances,” “Privacy and First Amendment Protections,” “Accountability,” “Retention and Release,” and “Security.”
So what did the Brennan Center find? . . .
Continue reading. And do read the whole thing. It’s quite interesting.
I’ve found that reading the books and then discussing them with a few people, ideally with a good discussion leader who can ask good questions, provides much greater understanding than reading without subsequent discussion, where you have to think of good questions to ask yourself about what you’re reading. (A reading journal can help in that case: write down the questions, then try to answer them.)
In the Washington Post Christopher Ingraham discusses reading lists that can be extracted from the Open Syllabus project:
If you want an Ivy League education, you could fork over $200 grand or so and go to Cornell or Harvard for four years. Alternatively, you could save a ton of cash by simply reading the same books Ivy League students are assigned.
As the group behind the project explains: There’s an “intellectual judgment embedded” in the lists of books college students are required to read. The most frequently-assigned books at the nation’s universities are essentially our canon: the body of literature that society’s leaders are expected to be familiar with. So what does that canon look like?
For starters, the Explorer lets us filter by individual schools. I tallied the most frequently assigned books at all U.S. colleges and universities and compared them to the list at seven Ivy League schools — Harvard, Yale, Princeton, Cornell, Columbia, U. Penn and Brown (Dartmouth doesn’t seem to appear in the Explorer’s database — more on that below).
Across all schools, Strunk & White’s classic writing guide “The Elements of Style” is the most common book, assigned in over 3,000 courses in the Explorer’s database. Plato’s “Republic” is the second-most popular, appearing close to 2,500 times. The 1,500-page “Campbell Biology” textbook/doorstop comes in at third place, perhaps a nod to the nation’s pre-med students.
Appearing fourth on the list, Marx and Engels’s “The Communist Manifesto” is sure to raise some eyebrows. Its popularity makes a certain amount of sense, given that it may be the most well-known critique of the capitalist system we all know and love. But that’s not likely to comfort anyone who’s convinced the nation’s universities are breeding grounds for bearded Marxist extremism.
The Ivy League list is considerably different, however. Plato’s “Republic” is the top book there, with Samuel Huntington’s “Clash of Civilizations” coming in a close second. “The Elements of Style” makes an appearance, as do poli-sci classics “Leviathan” and “The Prince.” Martin Luther King’s “Letter from a Birmingham Jail” comes in at No. 8 in the Ivy League.
Overall, the Ivy League list is heavily skewed toward political philosophy and thought — the only book on the list that doesn’t fall under this category is Strunk and White’s “Elements of Style.”
The Explorer lets you filter by subject area, too. Here’s how the Ivies compare with everywhere else in English courses.
. . .
Continue reading. There’s more, including more graphics.
Cade Metz reports in Wired:
Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.
Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.
This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.
As Bloomberg says, it was Singhal who approved the roll-out of RankBrain. And before that, he and his team may have explored other, simpler forms of machine learning. But for a time, some say, he represented a steadfast resistance to the use of machine learning inside Google Search. In the past, Google relied mostly on algorithms that followed a strict set of rules set by humans. The concern—as described by some former Google employees—was that it was more difficult to understand why neural nets behaved the way they did, and more difficult to tweak their behavior.
These concerns still hover over the world of machine learning. The truth is that even the experts don’t completely understand how neural nets work. But they do work. If you feed enough photos of a platypus into a neural net, it can learn to identify a platypus. If you show it enough computer malware code, it can learn to recognize a virus. If you give it enough raw language—words or phrases that people might type into a search engine—it can learn to understand search queries and help respond to them. In some cases, it can handle queries better than algorithmic rules hand-coded by human engineers. Artificial intelligence is the future of Google Search, and if it’s the future of Google Search, it’s the future of so much more.
Sticking to the Rules
This past fall, I sat down with a former Googler who asked that I withhold his name because he wasn’t authorized to talk about the company’s inner workings, and we discussed the role of neural networks inside the company’s search engine. At one point, he said, the Google ads team had adopted neural nets to help target ads, but the “organic search” team was reluctant to use this technology. Indeed, over the years, discussions of this dynamic have popped up every now and again on Quora, the popular question-and-answer site.
Edmond Lau, who worked on Google’s search team and is the author of the book The Effective Engineer, wrote in a Quora post that Singhal carried a philosophical bias against machine learning. With machine learning, he wrote, the trouble was that “it’s hard to explain and ascertain why a particular search result ranks more highly than another result for a given query.” And, he added: “It’s difficult to directly tweak a machine learning-based system to boost the importance of certain signals over others.” Other ex-Googlers agreed with this characterization.
Yes, Google’s search engine was always driven by algorithms that automatically generate a response to each query. But these algorithms amounted to a set of definite rules. Google engineers could readily change and refine these rules. And unlike neural nets, these algorithms didn’t learn on their own. As Lau put it: “Rule-based scoring metrics, while still complex, provide a greater opportunity for engineers to directly tweak weights in specific situations.”
But now, Google has incorporated deep learning into its search engine. And with its head of AI taking over search, the company seems to believe this is the way forward.
It’s true that with neural nets, you lose some control. But you don’t lose all of it, says Chris Nicholson, the founder of the deep learning startup Skymind. Neural networks are really just math—linear algebra—and engineers can certainly trace how the numbers behave inside these multi-layered creations. The trouble is that it’s hard to understand why a neural net classifies a photo or spoken word or snippet of natural language in a certain way.
“People understand the linear algebra behind deep learning. But the models it produces are less human-readable. They’re machine-readable,” Nicholson says. “They can retrieve very accurate results, but we can’t always explain, on an individual basis, what led them to those accurate results.”
What this means is that, in order to tweak the behavior of these neural nets, . . .
Jenna McLaughlin reports in The Intercept:
Bring technologists and members of the intelligence community together to figure out what to do about unbreakable encryption and guess what they conclude?
They conclude that they don’t really need to worry about it.
Unbreakable encryption—which prevents easy, conventional surveillance of digital communications—isn’t a big problem for law enforcement, says a new report published by Harvard’s Berkman Center for Internet and Society on Monday. The report, titled “Don’t Panic”, finds that we are probably not “headed to a future in which our ability to effectively surveil criminals and bad actors is impossible” because of companies that offer end-to-end encryption, such as Apple.
That’s because the technology isn’t universally marketable and there are so many other spying options on the table, as everything from fitness trackers to fridges is getting hooked up to the Internet and transmitting vast amounts of data about our everyday lives.
The Berkman Center convened a diverse group of technologists, cryptographers, and former and current government officials—from think tanks, universities, the NSA, FBI, ODNI, and others— to hold meetings over the course of a year to discuss encryption privately, and then publish their conclusions.
A very public debate over encryption was taking place simultaneously. FBI Director James Comey, in hearings and speeches, has repeatedly stressed the dangers of “going dark”—saying that law enforcement is losing the ability to get its hands on digital evidence because end-to-end encryption scrambles messages for everyone except for the sender and the receiver. Even the company that sends the message can’t decrypt it when served with a warrant.
The public response from scientists and privacy advocates has largely focused on the technological impossibility of creating a secure way to give law enforcement special access to those communications without tearing a hole in the protection encryption provides.
While the signers of the report (excluding government attendees, who were unable to sign on “because of their employment”) mention this cybersecurity risk, the bigger takeaway is about why end-to-end encryption, which is likely here to stay, doesn’t pose an existential threat to law enforcement investigations.
First, the signatories conclude, not every company is going to jump on the end-to-end encryption bandwagon, because it’s not going to make them money. . .