Archive for the ‘Software’ Category
I have mentioned FunBridge.com before: although you can play against other individuals on-line, you can also simply play against the computer (playing the other three hands). You bid, then you play—defense or as declarer, depending on the hand and the bid. You then get “points” by being compared to others who played the identical hand: the better your performance relative to theirs, the more points you get; the worse, the fewer.
My own points in the current series (you can always discard the record to date and start anew) range from +11 to -16 (don’t ask—and best not to play after a drink or two). My total right now is +15, and in this current series it has been as high as +25. In previous series I would discard the record after reaching -75 or -100 points.
The interesting thing is, I’m doing much better now. It’s not from having studied, though my intentions in that regard were really excellent—of the very first rank, in fact. It’s simply from playing a LOT of hands, and allowing my adaptive unconscious to use its pattern recognition engine to figure it out.
Obviously, I can still improve a lot. It’s sobering to see your ranking against others who played the same hand be, say, 86 out of 97, but it’s exhilarating to see it as 3 out of 90 or 13 of 96—the two most recent hands. And sometimes I’m NUMBER ONE!!! At least for a while.
The thing that interests me, though, is how one can improve simply by playing a lot of games and seeing the (relative) result. It’s much the way your shaving technique improves over time simply from watching what you’re doing and seeing what results: the adaptive unconscious is quite powerful.
If you’ve not read Strangers to Ourselves: Discovering the Adaptive Unconscious, by Timothy Wilson, you really should. VERY interesting book.
And if you like card games, you should try Funbridge.com.
James Fallows has an interesting update—and in particular a good example of how The Brain works, with clickable links that allow you to explore. Definitely worth a look. From the column at the link:
Jerry Michalski’s Brain. For years I’ve also loved the innovative, multi-platform program TheBrain, from TheBrain software in Los Angeles. I wrote about it in the New York Times 10 years ago, and then in the Atlantic in 2009 and 2012. It has various free or very low-cost versions; the full-strength desktop editor, for Mac or Windows, starts at $219.
The most ambitious and creative user of TheBrain has long been the tech-world figure Jerry Michalski. He has been chronicling his life and thoughts via this software for 18 years now and has posted his results on the web. Now he’s created an iOS app, called JerrysBrain. He sends some notes about what he’s doing:
My Brain has been openly available on the Web for many years and will remain so, at JerrysBrain.com. Now a Jerry’s Brain app is available for iOS and costs a buck. Here’s the direct link to it in the app store.
It’s easy for me to create permalinks to specific thoughts in my online Brain, though not to the iOS app. Here are a few useful and interesting direct links:
- 2014 (I do this every year)
- 2011 Global Protests
- Types of Governance
- Social Networking Services (SNSes)
- Challenges for Kids
- Enumerated Wisdom
I started this Brain in December 1997. It has over 257K thoughts, all put in by hand. I just ran the numbers and it’s a span of 6300 days, or 40 thoughts a day.
The top insight from 17+ years of using TheBrain is that we’re an amnesic society. We have little context or memory available. A huge causal force is the business model of the media businesses, which historically needed us to watch the ads scattered in the content, so it kept the content from us.
For further exploration, here’s a screenshot from Jerry’s Brain and then three posts and screencasts from Jerry Michalski on how and why he works this way: . . .
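Michalski’s per-day figure checks out, by the way. A quick back-of-the-envelope calculation using the numbers from his note (257K thoughts over 6,300 days):

```python
# Rough check of the figures Jerry Michalski cites above:
# ~257,000 thoughts entered by hand over ~6,300 days.
thoughts = 257_000
days = 6_300

per_day = thoughts / days
years = days / 365.25

print(f"{per_day:.1f} thoughts per day")  # about 40.8, matching his "40 thoughts a day"
print(f"{years:.1f} years")               # about 17.2, matching his "17+ years"
```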
UPDATE: Another article on Signal.
Micah Lee reports in The Intercept:
In the age of ubiquitous government surveillance, the only way citizens can protect their privacy online is through encryption. Historically, this has been extremely difficult for mere mortals; just watch the video Edward Snowden made to teach Glenn Greenwald how to encrypt his emails to see how confusing it gets. But all of this is quickly changing as high-quality, user-friendly encryption software becomes available.
App maker Open Whisper Systems took an important step in this direction today with the release of a major new version of its Signal encrypted calling app for iPhones and iPads. The new version, Signal 2.0, folds in support for encrypted text messages using a protocol called TextSecure, meaning users can communicate using voice and text while remaining confident nothing can be intercepted in transit over the internet.
That may not sound like a particularly big deal, given that other encrypted communication apps are available for iOS, but Signal 2.0 offers something tremendously useful: peace of mind.
Unlike other text messaging products, Signal’s code is open source, meaning it can be inspected by experts, and the app also supports forward secrecy, so if an attacker steals your encryption key, they cannot go back and decrypt messages they may have collected in the past.
Signal is also one special place on the iPhone where users can be confident all their communications are always fully scrambled. Other apps with encryption tend to enter insecure modes at unpredictable times — unpredictable for many users, at least. Apple’s iMessage, for example, employs strong encryption, but only when communicating between two Apple devices and only when there is a proper data connection. Otherwise, iMessage falls back on insecure SMS messaging. iMessage also lacks forward secrecy and inspectable source code.
Signal also offers the ability . . .
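The forward-secrecy property mentioned above is worth unpacking: each message is encrypted under a short-lived key derived from, and then irreversibly replacing, the previous one, so a key stolen today cannot decrypt traffic captured yesterday. Here’s a minimal sketch of that idea as a simple hash ratchet — an illustration only, not Signal’s actual TextSecure protocol, which uses a more elaborate Diffie-Hellman ratchet:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain key.

    Each step is a one-way hash, so an attacker who steals the
    current chain key cannot run the ratchet backwards to recover
    the keys that protected earlier messages.
    """
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

# Hypothetical starting secret; in a real protocol this would come
# from an authenticated key-agreement handshake.
chain = hashlib.sha256(b"shared secret from the initial handshake").digest()

keys = []
for _ in range(3):
    mk, chain = ratchet(chain)
    keys.append(mk)

# Every message key is distinct, and none of them can be derived
# from `chain`, the only state left for an attacker to steal.
assert len(set(keys)) == 3
assert chain not in keys
```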
Pretty clearly that fails the test of reciprocity. Lorenzo Franceschi-Bicchierai reports at Motherboard:
When the US demands technology companies install backdoors for law enforcement, it’s okay. But when China demands the same, it’s a whole different story.
The Chinese government is about to pass a new counter terrorism law that would require tech companies operating in the country to turn over encryption keys and include specially crafted code in their software and hardware so that Chinese authorities can defeat security measures at will.
Technologists and cryptographers have long warned that you can’t design a secure system that will enable law enforcement—and only law enforcement—to bypass the encryption. The nature of a backdoor is that it is also a vulnerability, and if discovered, hackers or foreign governments might be able to exploit it, too.
Yet, over the past few months, several US government officials, including FBI Director James Comey, outgoing US Attorney General Eric Holder, and NSA Director Mike Rogers, have all suggested that companies such as Apple and Google should give law enforcement agencies special access to their users’ encrypted data—while somehow offering strong encryption for their users at the same time.
Their fear is that cops and feds will “go dark,” an FBI term for a potential scenario where encryption makes it impossible to intercept criminals’ communications.
But in light of China’s new proposals, some think the US’ own position is a little ironic.
“You can’t have it both ways,” Trevor Timm, the co-founder and the executive director of the Freedom of the Press Foundation, told Motherboard. “If the US forces tech companies to install backdoors in encryption, then tech companies will have no choice but to go along with China when they demand the same power.”
He’s not the only one to think the US government might end up regretting its stance.
Someday US officials will look back and realize how much global damage they’ve enabled with their silly requests for key escrow.
— Matthew Green (@matthew_d_green) February 27, 2015
Matthew Green, a cryptography professor at Johns Hopkins University, tweeted that someday US officials will “realize how much damage they’ve enabled” with their “silly requests” for backdoors.
Ironically, the US government sent a letter to China expressing concern about its new law. “The Administration is aggressively working to have China walk back from these troubling regulations,” US Trade Representative Michael Froman said in a statement.
A White House spokesperson did not respond to a request for comment from Motherboard.
“It’s stunningly shortsighted for the FBI and NSA not to realize this,” Timm added. “By demanding backdoors, these US government agencies are putting everyone’s cybersecurity at risk.” . . .
Very interesting Motherboard article by Bruce Schneier, a well-known (and highly qualified) cybersecurity consultant:
The thing about infrastructure is that everyone uses it. If it’s secure, it’s secure for everyone. And if it’s insecure, it’s insecure for everyone. This forces some hard policy choices.
When I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection—basically, a technology that allows the agency to hack into computers.
Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well.
All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the internet’s defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.
Technological capabilities cannot distinguish based on morality, nationality, or legality
This isn’t the only example of once-top-secret US government attack capabilities being used against US government interests. StingRay is a particular brand of IMSI catcher, and is used to intercept cell phone calls and metadata. This technology was once the FBI’s secret, but not anymore. There are dozens of these devices scattered around Washington, DC, as well as the rest of the country, run by who-knows-what government or organization. By accepting the vulnerabilities in these devices so the FBI can use them to solve crimes, we necessarily allow foreign governments and criminals to use them against us.
Similarly, vulnerabilities in phone switches—SS7 switches, for those who like jargon—have been long used by the NSA to locate cell phones. This same technology is sold by the US company Verint and the UK company Cobham to third-world governments, and hackers have demonstrated the same capabilities at conferences. An eavesdropping capability that was built into phone switches to enable lawful intercepts was used by still-unidentified unlawful intercepters in Greece between 2004 and 2005.
These are the stories you need to keep in mind when thinking about proposals to ensure that all communications systems can be eavesdropped on by government. Both the FBI’s James Comey and UK Prime Minister David Cameron recently proposed limiting secure cryptography in favor of cryptography they can have access to.
But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.
Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.
We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance. . .
Jason Koebler reports at Motherboard:
News broke earlier this week about the NSA’s “most sophisticated” malware yet: An undetectable backdoor that can filter information to and from a hard drive, using the underlying framework of the drive itself. It surprised a lot of people, sure, but maybe it shouldn’t have. A group of ordinary security researchers warned this was possible, and in fact installed hard drive backdoors themselves, nearly a year ago.
The paper “Implementation and Implications of a Stealth Hard-Drive Backdoor,” published in March 2014 by a team of eight researchers from Eurecom in France, IBM Research in Zurich, and UCSD and Northeastern University in the US, reads almost exactly like security firm Kaspersky’s expose on the NSA malware. The full paper is absolutely worth your read if you’ve been fascinated by Kaspersky’s revelations.
The malware, developed by Travis Goodspeed and his colleagues (Goodspeed has spoken the most publicly about the exploit), can be installed remotely by people who have no physical access to the drive. In fact, the paper asserts that such an attack “is not limited to the area of government cyber warfare; rather, it is well within the reach of moderately funded criminals, botnet herders, and academic researchers.”
To install it remotely, a hacker would need to infect the operating system of the user’s computer with run-of-the-mill malware, alter the hard drive’s firmware, and then delete the original, operating system-side virus. From then on, the hacker would have complete access to everything on the person’s hard disk, the exploit would be almost completely undetectable, and it would persist until the hard drive was physically destroyed.
The exploit could also be installed by someone who had physical access to the drive.
“Once you have firmware control of a disk, you can also have it commit suicide or overwrite itself,” he explained at the 0x07 Sec-T Conference last year. “You can also have it act as a backdoor.”
That, apparently, is what the NSA was doing with its exploit. Though we just discovered the NSA was actually doing this, it seems likely that the program was going on for a while, perhaps a decade or more.
The team explains in its paper that a “catastrophic loss of security occurs when hard disks are not trustworthy.” Information can be funneled remotely from the disk and new information can be written to the disk, using remote commands sent to the exploit. An infected hard drive loses less than 1 percent of its read and write speed, so it’s essentially undetectable from a performance perspective. . .
NSA will doubtless deny it (if they comment at all), but as we know, NSA will lie like a rug: the agency simply cannot be trusted to make true statements. Andrea Peterson writes for the Washington Post:
The U.S. intelligence community has found ways to avoid even the strongest of security measures and practices, a new report from Moscow-based Kaspersky Lab suggests, demonstrating a range of technological accomplishments that place the nation’s hackers among the most sophisticated and well-resourced in the world.
Hackers who are part of what the cybersecurity researchers call “Equation Group” have been operating under the radar for at least 14 years, deploying a range of malware that could infect hard drives in a way almost impossible to remove and could hide code in USB storage devices to infiltrate networks kept separate from the Internet for security purposes.
Kaspersky’s report did not say the U.S. government was behind the group. But it did say the group was closely linked to Stuxnet — malware widely reported to have been developed by the National Security Agency and Israel that was used in an attack against Iran’s uranium enrichment program — along with other bits of data that appear to align with previous disclosures. Reuters further linked the NSA to the Kaspersky report, citing anonymous former employees of the agency who confirmed Kaspersky’s analysis.
NSA spokesperson Vanee Vines said in a statement that the agency was aware of the report, but would not comment publicly on any allegations it raises.
The Kaspersky report shows a highly sophisticated adversary that has found ways to worm itself into computers with even the strongest of security measures in place. This matches up with what we know about other NSA efforts from documents leaked by former NSA contractor Edward Snowden, which showed efforts to undermine encryption and evade the protections major tech companies used to guard user data.
But the new report paints a more detailed picture of the breadth of the agency’s reported offensive cyber arsenal. And unlike other recent revelations about U.S. government snooping, which have largely come from Snowden, the insights from Kaspersky came from examining attacks found in the digital wild. Victims were observed in more than 30 countries, with Iran, Russia, Pakistan and Afghanistan having among the highest infection rates, according to the report.
One of the most sophisticated attacks launched by the Equation Group lodged malware deep into hard drives, according to Kaspersky. It worked by reprogramming the proprietary code, called firmware, built into the hard drives themselves. That allowed for persistent storage hidden inside a target system that could survive the hard drive being reformatted or an operating system being reinstalled, the report says.
The code uncovered by Kaspersky suggests the malware was designed to work on disk drives of more than a dozen major manufacturers — including those from Seagate, Western Digital, Toshiba, IBM and Samsung. But the report also notes that this particular technique seemed to be rarely deployed, suggesting that it was used only on the most valuable victims or in unusual circumstances.
The Kaspersky report also said the group found ways to hide malicious files within a Windows operating system database on the targets’ computer known as the registry — encrypting and stashing the files so that they would be impossible to detect using antivirus software.
Equation Group also found ways to infiltrate systems that were kept off the Internet for security purposes — commonly known as “air-gapped” networks. Malware used by the hackers relied on infected USB sticks to map out such networks — or even remotely deploy code on them, according to the report.
The program would create hidden areas on an infected USB stick. If that stick was then connected to a computer that lacked Internet access, it would scoop up data about the system and save it in that hidden area. If reconnected to a computer with Internet access, it would send that information off to its controllers. Attackers could also run commands on air-gapped networks by saving code to the hidden part of the drive that would run when it was connected to a network without Internet access.
Other malware thought to be developed by the U.S. government, including Stuxnet and Flame, used similar measures to bridge air gaps — and previous reports detail even more ways the spy agency has circumvented security measures and practices.
A 2013 story from the Guardian, ProPublica and the New York Times reported that the NSA had worked secretly to break many types of encryption, successfully exploiting the technology used to protect the privacy of online communications and working with tech companies to introduce weaknesses into commercial products that consumers thought to be secure. . .
Sounds as though NSA has gone into business for itself.