Later On

A blog written for those whose interests more or less match mine.

Archive for the ‘Technology’ Category

Facial Recognition Failures Are Locking People Out of Unemployment Systems


Todd Feathers writes in Vice:

People around the country are furious after being denied their unemployment benefits due to apparent problems with facial recognition technology that claims to prevent fraud.

Unemployment recipients have been complaining for months about the identity verification service ID.me, which uses a combination of biometric information and official documents to confirm that applicants are who they claim to be. The complaints reached another crescendo this week after Axios published a “deep dive” article about the threat of unemployment fraud based on statistics provided to the outlet by ID.me.

Some unemployment applicants have said that ID.me’s facial recognition models fail to properly identify them (generally speaking, facial recognition technology is notoriously less accurate for women and people of color). And after their applications were put on hold because their identity couldn’t be verified, many should-be beneficiaries have had to wait days or weeks to reach an ID.me “trusted referee” who could confirm what the technology couldn’t.

On Twitter, there are dozens of complaints about ID.me per day, and local news articles all over the country have detailed the problem over the course of months. In California, 1.4 million unemployment beneficiary accounts were abruptly suspended on New Year’s Eve, and beneficiaries were required to re-verify their identity using ID.me, a process that many found difficult and that left them waiting for weeks to reactivate their accounts while they struggled to make ends meet.

In Colorado, benefit recipients who had no problem establishing their identity before ID.me took over were suddenly rejected and went months without receiving the payments they were eligible for.

The story is similar in Florida, North Carolina, Pennsylvania, Arizona, and many other states.

ID.me CEO Blake Hall told Motherboard that the company’s facial recognition technology does one-to-one matching—comparing one face against a picture of that same face (from a driver’s license, say)—whereas other applications of facial recognition attempt to find a match for a face in a large dataset of faces, known as one-to-many matching.
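The difference between the two modes is easy to sketch. Here is a toy version using cosine similarity over made-up three-dimensional "embeddings" — real systems use learned embeddings of hundreds of dimensions, and ID.me's actual pipeline is not public, so everything below (vectors, threshold, names) is illustrative only:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(selfie, id_photo, threshold=0.8):
    """1:1 matching: is this selfie the same person as this one document photo?"""
    return cosine(selfie, id_photo) >= threshold

def identify(probe, gallery):
    """1:N matching: which of many enrolled identities best matches the probe?"""
    return max(gallery, key=lambda name: cosine(probe, gallery[name]))

selfie = [0.9, 0.1, 0.3]
license_photo = [0.85, 0.15, 0.28]           # e.g. a driver's license scan
gallery = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.9, 0.2]}

print(verify(selfie, license_photo))         # one face against one photo
print(identify(selfie, gallery))             # one face against a dataset
```

The practical point of the distinction: a 1:1 verifier only answers "same person or not" for a claimed identity, while a 1:N search must rank everyone in a database, which is where false matches multiply.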

“The algorithms used for Face Match operate ~99.9% efficacy,” Hall wrote in an email to Motherboard. “There is in fact no relationship between skin tone and Face Match failure on a 1:1 basis” according to a regression analysis the company performed.

That doesn’t mesh with the experiences being shared on Twitter by people like Tim Weaver, a gig economy worker in Las Vegas who was suddenly cut off from his unemployment benefits in late March after ID.me failed to identify him.

Weaver told Motherboard that when he attempted to pass ID.me’s facial recognition test he held a phone in front of him in the instructed position but “it rejected it, didn’t give us a reason, just rejected it. It rejected it three times, and then it locked me out of the system.”

Weaver said he attempted to contact the company’s customer support through its chat feature, which claims to provide assistance 24 hours a day, seven days a week. He tried numerous times at all hours of the day. He tried contacting the state of Nevada for help, but the employees there directed him back to ID.me.

This went on for several weeks, Weaver said, until he tweeted a scathing criticism of the company, which then reached out and—after several more frustrating days—verified Weaver’s identity.

Weaver went for three weeks without receiving his benefit. “I couldn’t pay bills,” he said. “Luckily I had enough food saved up so I didn’t have to worry about that. It’s just ridiculous.”

In his statement to Motherboard, Hall said that facial recognition failures are not a problem with the technology but with  . . .

Continue reading. There’s more.

This is bad, and the company is taking no responsibility. Welcome to dystopia.

Written by Leisureguy

18 June 2021 at 1:13 pm

Phone Network Encryption Was Deliberately Weakened


Lorenzo Franceschi-Bicchierai writes in Vice:

A weakness in the algorithm used to encrypt cellphone data in the 1990s and 2000s allowed hackers to spy on some internet traffic, according to a new research paper.

The paper has sent shockwaves through the encryption community because of what it implies: The researchers believe that the mathematical probability of the weakness being introduced by accident is extremely low. Thus, they speculate that a weakness was intentionally put into the algorithm. After the paper was published, the group that designed the algorithm confirmed this was the case.

Researchers from several universities in Europe found that the encryption algorithm GEA-1, which was used in cellphones when the industry adopted GPRS standards in 2G networks, was intentionally designed to include a weakness that at least one cryptography expert sees as a backdoor. The researchers said they obtained two encryption algorithms, GEA-1 and GEA-2, which are proprietary and thus not public, “from a source.” They then analyzed them and realized they were vulnerable to attacks that allowed for decryption of all traffic.

When trying to reverse-engineer the algorithm, the researchers wrote that (to simplify) they tried to design a similar encryption algorithm using a random number generator often used in cryptography and never came close to creating an encryption scheme as weak as the one actually used: “In a million tries we never even got close to such a weak instance,” they wrote. “This implies that the weakness in GEA-1 is unlikely to occur by chance, indicating that the security level of 40 bits is due to export regulations.”

Researchers dubbed the attack “divide-and-conquer,” and said it was “rather straightforward.” In short, the attack allows someone who can intercept cellphone data traffic to recover the key used to encrypt the data and then decrypt all traffic. The weakness in GEA-1, the oldest algorithm developed in 1998, is that it provides only 40-bit security. That’s what allows an attacker to get the key and decrypt all traffic, according to the researchers.
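The scale of "40-bit security" is easy to work out. GEA-1 takes a 64-bit key, but the weakness shrinks the effective search space to 2^40. The key-testing rate below is an assumption for illustration, not a figure from the paper:

```python
# Effective keyspace under the GEA-1 weakness vs. the full 64-bit keyspace.
keyspace = 2 ** 40
rate = 10 ** 9                  # keys tested per second (assumed figure)
minutes = keyspace / rate / 60
print(f"{keyspace:,} keys -> ~{minutes:.0f} minutes to exhaust")
print(f"a full 64-bit search would be {2 ** (64 - 40):,}x larger")
```

In other words, anyone who could intercept the traffic could brute-force the session key on commodity hardware, which is what makes the researchers' "rather straightforward" description plausible.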

A spokesperson for the organization that designed the GEA-1 algorithm, the European Telecommunications Standards Institute (ETSI), admitted that the algorithm contained a weakness, but said it was introduced because the export regulations at the time did not allow for stronger encryption.

“We followed regulations: we followed export control regulations that limited the strength of GEA-1,” a spokesperson for ETSI told Motherboard in an email. . .

Continue reading.

Written by Leisureguy

17 June 2021 at 3:36 pm

Fixing equipment in the lab teaches life lessons


I imagine we’ve all found that we learn a lot about how to do something by actually doing it. Experience is a fantastic teacher: it provides feedback rather than criticism, lets you see for yourself the effects of the mistakes you make, and offers the chance to participate actively in finding solutions. James Crawford writes in Nature of what he learned from hands-on involvement:

The focus of my PhD thesis is examining ways of upgrading biomass to transportation fuels, and I regularly use a variety of analytical equipment and reactor systems in my laboratory at the Colorado School of Mines in Golden.

An expert is rarely on-site to assist with equipment repairs, which can range from simple tasks, such as replacing a used gasket on a vacuum chamber, to cumbersome rebuilds for pumps, furnaces, mass spectrometers and adsorption analysers.

Paying for specialist help is often financially out of the question — and, although reading through a manual for a broken stir plate might not be a bucket-list item, I have found, over the past four years of graduate school, that understanding and repairing equipment has given me more valuable experiences than I’d expected.

What I learnt from fixing a temperature controller

One key role of a chemical engineer working in industry or in the lab is process control — monitoring and controlling an operation to achieve the desired temperature, pressure, concentration or any other important parameter.

During my undergraduate studies at Montana State University in Bozeman, we were taught the fundamental theory, essential rules of thumb and computational methods for process control, but application of the knowledge was limited.

In the second year of my PhD, I had a chance to apply this knowledge when a power surge destroyed a previously functioning heater and temperature controller. This equipment worked in the same way as a household boiler and thermostat: tell the machine what temperature you want, and the system attempts to hit that target. Behind the scenes, control parameters determine how aggressively the system pursues that target.

After the power surge, the temperature controller would not power on. I enlisted the help of a fourth-year PhD student to diagnose and repair the damage. But, in the process of rebuilding, stored parameters in the memory of the temperature controller were lost.

I thought back to my undergraduate courses — and, after watching a few YouTube videos with the fourth-year student and checking Wikipedia, we successfully tuned and tested the rebuilt heating system. The process of diagnosing the problem, gathering relevant information and developing a solution was really empowering, and motivated me to continue fixing problems in the lab.
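The "control parameters" in question are typically PID gains: proportional, integral and derivative terms that decide how aggressively the heater chases the setpoint. A minimal sketch of the kind of loop being retuned — the gains and the plant constants here are invented for illustration, not the lab instrument's real values:

```python
def simulate(setpoint=200.0, kp=0.5, ki=0.05, kd=0.1, dt=1.0, steps=600):
    """Toy PID loop driving a first-order heater model toward a setpoint."""
    temp = ambient = 20.0                  # start at room temperature (deg C)
    integral, prev_err = 0.0, setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        power = kp * err + ki * integral + kd * deriv
        power = max(0.0, min(100.0, power))            # heater limited to 0-100%
        # toy plant: heating proportional to power, Newtonian cooling to ambient
        temp += (0.5 * power - 0.05 * (temp - ambient)) * dt
        prev_err = err
    return temp

print(f"temperature after 600 steps: {simulate():.1f} C")
```

Tuning in practice is adjusting kp, ki and kd until the system reaches the target quickly without excessive overshoot or oscillation — exactly the exercise the lost controller memory forced on the author.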

I was fortunate to take on this repair under the guidance of a more senior student, whose experience and patience was profoundly influential for me. In a graduate programme, it can be hard to find time to help others, so his efforts mentoring me were deeply appreciated and transformative for my future endeavours working with other students.

What I learnt from fixing a chemisorption analyser

A year later, I was trying to work out the structure of some catalyst materials that I had synthesized. Chemisorption, the adsorption of vapour molecules onto the surface of a sample, is a valuable analytical technique that provides information about the surface chemistry of a catalyst.

After being trained, I attempted to run my samples on our chemisorption system, but I found that the data were not reproducible. I spoke to some colleagues, and it became clear that the instrument was somehow malfunctioning, and so was being used only for basic qualitative analysis. For my purposes, it was important to fix the instrument so that the data collected from it were reproducible and quantitative. I got permission from the principal investigator in charge of the instrument, and teamed up with a chemistry PhD student to resolve the problem.

We ran a standard sample on the chemisorption instrument. This process would normally be automated, but we needed to catch the error as it occurred. We monitored the progress of the experiment for 12 hours. Taking turns watching the instrument and keeping notes, we discovered that a portion of the tubing was blocked: when gases were sent to . . .

Continue reading. There’s more.

My own experience of this kind of learning was finally nailing down how to make good tempeh, which also meant learning how to build a good incubator (it took two versions to learn how best to build the next, though version 2 is perfectly workable). The most recent batch is the best yet, and next will be chickpea-and-peanut tempeh.

Written by Leisureguy

17 June 2021 at 12:25 pm

Big Telecom Blocks Attempt to Bring $15 Broadband To Covid Victims


Karl Bode reports in Vice:

A Judge has sided with the broadband industry and barred New York State from offering discounted broadband to those struggling during the COVID crisis.

The order by US District Judge Denis Hurley imposes an immediate injunction on New York State, barring it from enforcing the Affordable Broadband Act (ABA), a new state law requiring ISPs to provide 25-megabit-per-second broadband for no more than $15 per month to those struggling financially during the pandemic.

The broadband industry immediately filed suit against the effort, claiming New York was barred from regulating broadband thanks in part to the Trump administration’s 2017 net neutrality repeal. The Trump FCC claimed the repeal would boost job growth and investment in the telecom sector, yet data shows neither actually happened.

Instead, the repeal left the FCC ill-equipped to protect consumers during an economic crisis by eroding much of the agency’s consumer protection authority under the Communications Act. At telecom sector request, the repeal also attempted to ban states from being able to step in and fill the consumer protection void left by an apathetic federal government.

Both broadband experts and previous court rulings have argued that when the Trump FCC gave up its authority over broadband providers, it also gave up its right to tell states what to do. Still, the broadband industry continues to use the repeal as the basis of lawsuits undermining state efforts to hold US telecom giants accountable or pass state net neutrality laws.

Judge Hurley sided with industry, proclaiming that providing discounted broadband to poor Americans struggling during Covid would impose “unrecoverable losses” on the hugely profitable and heavily monopolized broadband industry.

“Beginning June 15, 2021, Plaintiffs will suffer unrecoverable losses increasing with time, and the enormity of the matter—six plaintiffs with multiple member organizations attacking a statute affecting one-third of all New York households—portends a lengthy litigation,” the Judge wrote.

Dana Floberg, a telecom expert at consumer group Free Press, stated that the Biden administration could lend a hand by properly staffing the FCC and reversing the Trump administration’s net neutrality repeal.

“The path forward to reining in exorbitant internet prices is clear,” she said. “We need an FCC empowered with the legal authority to investigate and intervene in the market, and we need a long-term benefit to support internet adoption for low-income people.”

Under the law, the party in control of the White House enjoys a 3-2 partisan majority at the FCC. But the Trump administration’s rush appointment of Trump ally Nathan Simington to the agency last December left the agency intentionally gridlocked at 2-2, incapable of obtaining a majority vote on any issues of controversy.

Despite this, the Biden administration has been in no rush to appoint a new commissioner or reverse the net neutrality repeal. More than fifty consumer groups and union organizations wrote the administration this week asking for more urgency in the matter.

“Restoring the FCC’s Title II authority over broadband would give the agency the strong, flexible toolbox it needs to curtail unjust and discriminatory practices, including unreasonable pricing schemes, while avoiding the pitfalls of rate-setting,” Floberg said.

Cable and broadband providers routinely engage in all manner of dodgy pricing practices, from the use of . . .

Continue reading.

Written by Leisureguy

15 June 2021 at 3:47 pm

How Software Is Eating the Car


Robert N. Charette writes in IEEE Spectrum:

Predictions of lost global vehicle production caused by the ongoing semiconductor shortage continue to rise. In January, analysts forecast that 1.5 million fewer vehicles would be produced as a result of the shortage; by April that number had steadily climbed to more than 2.7 million units, and by May, to more than 4.1 million units.

The semiconductor shortage has not only underscored the fragility of the automotive supply chain but also placed an intense spotlight on the auto industry’s reliance on the dozens of concealed computers embedded throughout vehicles today.

“No other industry is undergoing as rapid technological change as the auto industry,” says Zoran Filipi, Chair of the Department of Automotive Engineering at Clemson University’s International Center for Automotive Research. “This is driven by the need to address impending, evermore stringent CO2 and criteria emission regulations, while sustaining unprecedented rate of progress with development of automation and infotainment, and meeting the customer expectations regarding performance, comfort, and utility.” 

The coming years will see even greater change, as more auto manufacturers commit to phasing out their internal combustion engine (ICE) powered vehicles to meet global climate-change targets by replacing them with electric vehicles (EVs) that will eventually be capable of autonomous operation.

The past decade of ICE vehicle development illustrates the rapid progress it has made, as well as where it is heading.

“Once, software was a part of the car. Now, software determines the value of a car,” notes Manfred Broy, emeritus professor of informatics at Technical University, Munich and a leading expert on software in automobiles. “The success of a car depends on its software much more than the mechanical side.” Nearly all vehicle innovations by auto manufacturers, or original equipment manufacturers (OEMs) as they are called by industry insiders, are now tied to software, he says.

Ten years ago, only premium cars contained 100 microprocessor-based electronic control units (ECUs) networked throughout the body of a car, executing 100 million lines of code or more. Today, high-end cars like the BMW 7-series with advanced technology like advanced driver-assist systems (ADAS) may contain 150 ECUs or more, while pick-up trucks like Ford’s F-150 top 150 million lines of code. Even low-end vehicles are quickly approaching 100 ECUs and 100 million lines of code as more features that were once considered luxury options, such as adaptive cruise control and automatic emergency braking, are becoming standard.

Additional safety features that have been mandated since 2010 like electronic stability control, backup cameras, and automatic emergency calling (eCall) in the EU, as well as more stringent emission standards that ICE vehicles can only meet using yet more innovative electronics and software, have further driven ECU and software proliferation.

Consulting firm Deloitte Touche Tohmatsu Limited estimates that as of 2017, some 40% of the cost of a new car can be attributed to semiconductor-based electronic systems, a cost doubling since 2007. It estimates this total will approach 50% by 2030. The company further predicts that each new car today has about $600 worth of semiconductors packed into it, consisting of up to 3,000 chips of all types.

Totaling the number of ECUs and lines of software only hints at the intricate electronic orchestration and software choreography found in vehicles today. By observing how they perform together, the extraordinary complexity that is meant to be invisible from a driver’s perspective begins to emerge. New safety, comfort, performance and entertainment features, the commercial imperative to offer scores of options to buyers resulting in a multiplicity of variants for each make and model, and the shift from gasoline and human drivers to electric and artificially intelligent drivers and the hundreds of millions of lines of new code that will need to be written, checked, debugged and secured against hackers, are making cars into supercomputers on wheels and forcing the auto industry to adapt. But can it? 

Features and Variants Drive Complexity

The drive over the last two decades to provide more safety and entertainment features has transformed automobiles from mere conveyances to mobile computing centers. Instead of . . .

Continue reading. There’s much, much more, including an interesting chart showing which jobs are assigned to microprocessors.

Written by Leisureguy

14 June 2021 at 1:00 pm

Cool pattern generator


This little pattern generator is fun to play with. Click “Shuffle” to see new random patterns, which you can modify by fiddling with the various controls and options.

Update: Turns out that there are a lot of pattern generators, including (for example) plaid generators.

Written by Leisureguy

14 June 2021 at 9:25 am

What if higher wages drive faster productivity growth?


Noah Smith has an interesting column at Noahpinion:

The other day I ordered at a restaurant on my smartphone. No waiter came by to ask me if I was ready to order. I scanned a QR code on a piece of paper taped to a wooden post; this brought up the menu on my phone, and I simply indicated what I wanted. As if by magic, a server appeared a few minutes later with the food. During my meal, no one wandered by to ask me if I “was still working on that”; when I wanted more food, I just used my phone again. I’m sure I’m one of many millions of Americans who’s learning to order food this way, as a result of the understaffing and social distancing rules imposed by the Covid pandemic.

While I was ordering this way, I kept thinking over and over that this shift is a real game-changer in terms of productivity. Let people order food on their phones, and the number of wait staff you need to deliver the same service goes way down. It’s barely more onerous for the customer if at all, and it eliminates the need to have human beings constantly sashaying around the establishment, eyeing how much diners have eaten.

So I guess I wasn’t too surprised when I saw, via Erik Brynjolfsson and Georgios Petropoulos, that labor productivity is growing at rates not seen since the previous century:

What IS surprising is that this growth accelerated in the first quarter of 2021, as Americans started coming back to work en masse. If productivity gains had been purely a function of service establishments being forced to stretch and do more with fewer workers because of Covid, we might have expected a reversal as workers returned; instead, productivity is growing even faster.

Now, it’s important not to put too much weight on one quarter’s worth of data. This is a noisy time series with plenty of mismeasurement, and there are bound to be revisions (in fact, looking at the non-seasonally-adjusted numbers shows a slightly more modest increase in Q1). But coupled with my observations on the ground, the change seems real; we can see the actual labor-saving technologies being implemented right in front of our eyes. Nor am I the only one who sees this. Writing in the WSJ, Greg Ip reports:

Industries accounting for a third of the job loss since the start of the pandemic have increased output, including retailing, information, finance, construction, and professional and business services, said Jason Thomas, head of global research at private-equity manager Carlyle Group.

“This recession took on a life of its own by leading to greater remote work, greater reliance on technology,” Mr. Thomas said. Executives began to ask “hard questions: Why do we have so much floor space? Are we sure our cost base makes so much sense? Why were we taking so many intra-office trips? This experience has just revealed how ignorant you were about the frontiers of technology that you could exploit.”

So if employers really are investing in labor-saving technology, the question is: Why? Part of it is surely that the pandemic simply nudged managers to reconsider how they do things; inertia in the business world is strong. But another possible explanation is that workers are becoming more powerful and demanding higher wages. In the NYT, Neil Irwin notes some evidence that workers are gaining the upper hand in the labor market:

The “reservation wage,” as economists call the minimum compensation workers would require, was 19% higher for those without a college degree in March than in November 2019, a jump of nearly $10,000 a year, according to a survey by the Federal Reserve Bank of New York…

A survey of human resources executives from large companies conducted in April by The Conference Board, a research group, found that 49% of organizations with a mostly blue-collar workforce found it hard to retain workers, up from 30% before the pandemic.

With workers demanding higher wages, there could be an incentive for companies to invest in technologies that will economize on labor — like QR code ordering at restaurants.

One problem for this narrative is . . .

Continue reading. There’s much more.

Written by Leisureguy

13 June 2021 at 7:40 am

This season’s must-have Hermès bag is made from fungus


Alice Fisher writes in the Guardian:

It’s fair to say that Hermès knows handbags. The luxury fashion house’s Birkin and Kelly bags are among the most expensive ever sold; demand outstrips supply by so much that you can’t even join a waiting list. Acquiring one is a matter of luck and contacts. So when Hermès announced this season’s handbag would be made from plant leather, it marked a new era in designer accessories.

The autumn/winter 2021 Hermès Victoria (prices start from about £3,500 for its previous leather version) will be made from Sylvania, a leather grown from fungus, before being crafted in France into a perfect Hermès handbag.

Mushrooms, pineapples, grapes, cactus and apples are just some of the plants on the receiving end of billions of dollars of research and development funding to create leather and plastic replacements. Many of the first generation of vegan alternatives used plastic – which also has devastating environmental consequences and can take hundreds of years to decompose. The new materials are made using biotechnology.

The growth of plant leather is driven by the fashion industry’s efforts to improve sustainability, though it’s also used in the car and furniture industries. Fashion creates a high level of pollution – from overproduction of clothing and synthetic fibres, and also from animal leather production.

“Cattle ranching is already the largest driver of deforestation in the Amazon,” says Carry Somers, co-founder of Fashion Revolution, the world’s largest fashion activism organisation. “We urgently need to fix our relationship with fashion to halt unsustainable agricultural practices. We need to look towards circular economy alternatives, including the use of agricultural residues to create bio-leathers.”

Although conventional leather makes use of animal byproducts, production also involves toxic chemicals.

“Even in fully modernised tanneries it’s nearly impossible to reclaim pollutants generated by the tanning process,” says Adrián López Velarde, co-founder of Desserto, a Mexican company that makes cactus leather. “As a rule of thumb, tanning one tonne of hide results in 20-80 cubic metres of polluted waste water, not to mention the offal effluence from preparation, and pesticides to stop mould growth during transportation.”

There’s also been an attitude shift among consumers. Customer concern about supply chains and methods of production was growing before the pandemic, but has accelerated in the past 18 months.

“There’s a huge drive for transparency,” says Carmen Hijosa, founder of Ananas Anam, a company that makes pineapple leather. “It’s especially important to young people, but we’re all becoming more empathic, we understand that we have to respect nature and be kind to each other.”

This change in priorities is the motivation for many of the companies developing bio-leathers. The people behind these new materials come from diverse backgrounds – fashion and art, science and business – and they bring fresh perspective to the world of textiles.

Dan Widmaier, chief executive of the biotextile company Bolt Threads, says: “This is personal for me. Bolt is based in northern California. I, and our employees, have been massively impacted by climate change and fires. The truth is, the challenges are so great right now, that the demand for innovative solutions far outstrips the supply.”

Bio-leathers are made either from agricultural byproducts or specially grown crops. Mycelium, the root structure of fungus, has become a favourite in the luxury industry.

Hermès worked with the Californian company MycoWorks to make Sylvania, which uses a technology called Fine Mycelium. This produces a strong cellular material that can be processed to become a luxury leather. “It’s more than a new material – it’s a manufacturing breakthrough that gives designers new levels of customisation and creative control,” says MycoWorks co-founder Sophia Wang. “Our materials are essentially made to order and there’s complete transparency into what is being made and how. We control each sheet’s size, strength, flexibility, thickness. This customisation creates a range of design possibilities, minimises waste and ensures consistent quality.”

Because these companies have been formed with sustainability in their DNA, good agricultural practice is front and centre. Desserto’s organic cactus plantations in Zacatecas, Mexico, use 164,650% less water compared with animal leather and 190% less compared with polyurethane.

Bolt Threads developed and produces Mylo, a mycelium leather used by designers including Stella McCartney. Widmaier is proud of his product and says: “Mylo’s processing and finishing chemistries are evaluated and selected using green chemistry principles and are free of substances such as chromium and DMFa, two of the most noxious chemicals used in animal and synthetic leather respectively.”

For advocates of the circular economy, bio-leathers using byproducts are of particular interest. Piñatex, a leather made by UK-based Ananas Anam from pineapple leaves, is one of the best established. Hijosa, the company’s founder, had been a consultant for . . .

Continue reading. There’s more.

Written by Leisureguy

12 June 2021 at 1:39 pm

Free lithium from seawater


Mining.com has an interesting article. The lithium is free because the by-products pay for the process.

Researchers at King Abdullah University of Science and Technology developed what they believe is an economically viable system to extract high-purity lithium from seawater.

Previous efforts to tease lithium from the mixture the metal makes together with sodium, magnesium and potassium in seawater yielded very little. Although the liquid contains 5,000 times more lithium than what can be found on land, it is present at extremely low concentrations of about 0.2 parts per million (ppm).

To address this issue, the team led by Zhiping Lai tried a method that had never been used before to extract lithium ions. They employed an electrochemical cell containing a ceramic membrane made from lithium lanthanum titanium oxide (LLTO).

The cell itself contains three compartments. Seawater flows into a central feed chamber, where positive lithium ions pass through the LLTO membrane into a side compartment that contains a buffer solution and a copper cathode coated with platinum and ruthenium. At the same time, negative ions exit the feed chamber through a standard anion exchange membrane, passing into a third compartment containing a sodium chloride solution and a platinum-ruthenium anode.

Lai and his group tested the system using seawater from the Red Sea. At a voltage of 3.25V, the cell generates hydrogen gas at the cathode and chlorine gas at the anode. This drives the transport of lithium through the LLTO membrane, where it accumulates in the side-chamber. This lithium-enriched water then becomes the feedstock for four more cycles of processing, eventually reaching a concentration of more than 9,000 ppm.
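As a quick aside on the arithmetic: going from about 0.2 ppm to more than 9,000 ppm over five passes implies each pass multiplies the concentration by roughly the same factor, around 8.5×. A sketch of that back-of-the-envelope calculation:

```python
# Enrichment implied by the quoted numbers: 0.2 ppm seawater feed,
# >9,000 ppm after five passes through the electrochemical cell.
start, target, stages = 0.2, 9000.0, 5
factor = (target / start) ** (1 / stages)   # per-stage enrichment factor
print(f"~{factor:.1f}x enrichment per stage")

conc = start
for stage in range(1, stages + 1):
    conc *= factor
    print(f"after stage {stage}: {conc:,.1f} ppm")
```

The geometric progression is the point: each stage's output becomes the next stage's feed, so a modest per-stage gain compounds into a 45,000-fold overall enrichment.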

To make the final product pure enough so that it meets battery manufacturers’ requirements, the scientists then  . . .

Continue reading.

Concluding paragraph:

According to the researchers, the cell will probably need $5 of electricity to extract 1 kilogram of lithium from seawater. This means that the value of hydrogen and chlorine produced by the cell would end up offsetting the cost of power, and residual seawater could also be used in desalination plants to provide freshwater.
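That $5 figure can be roughly sanity-checked with Faraday's law. The electricity price and the assumption that each lithium ion crosses a membrane once per enrichment stage are mine, not the researchers', so treat this purely as an order-of-magnitude check:

```python
# Back-of-the-envelope energy cost of moving 1 kg of Li+ across the
# membrane at the cell voltage quoted above.
F = 96485.0           # Faraday constant, C/mol
molar_mass = 6.94     # g/mol for lithium
voltage = 3.25        # V, from the experiment described above
passes = 5            # one membrane crossing per stage (assumption)
price = 0.08          # $/kWh (assumed electricity price)

mol = 1000.0 / molar_mass               # moles of Li in 1 kg
energy_j = mol * F * voltage * passes   # joules of electrical work
kwh = energy_j / 3.6e6
print(f"~{kwh:.0f} kWh per kg of lithium, ~${kwh * price:.2f} at ${price}/kWh")
```

Under those assumptions the estimate lands in the neighborhood of the researchers' $5/kg, before crediting the hydrogen and chlorine by-products that the article says offset the power cost.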

Written by Leisureguy

11 June 2021 at 9:31 pm

Google AI beats humans at designing computer chips

leave a comment »

Some science fiction uses the Singularity as a theme — the Singularity being the point at which AI becomes conscious and self-directed, iterating improvements and gains of function to the point it quickly outstrips humanity (cf. the Singularity trilogy — Singularity Sky, Iron Sunrise, and Accelerando, by Charles Stross). The key moment is when AI begins enhancing its own technology — like, say, around now. From Nature, a podcast:

Working out where to place the billions of components that a modern computer chip needs can take human designers months and, despite decades of research, has defied automation. Now, Google researchers have developed a machine-learning algorithm that does the job in a fraction of the time and is already helping to design their next generation of artificial-intelligence processors.

Podcast 28 minutes

Only the first 7 minutes are about how AI is now designing the chips for the next generation of AI — that is, designing improved descendants of itself. And those descendants will be able to do an even better job of designing improvements. It strikes me as somewhat like regenerative feedback. And what could possibly go wrong with that process? What’s the worst that could happen, eh?

Written by Leisureguy

11 June 2021 at 3:14 pm

A wind-powered vehicle that goes directly downwind faster than the wind

leave a comment »

Tricky to grasp, but there is the video.

Written by Leisureguy

10 June 2021 at 12:03 pm

Great story: British engineer who accidentally took off in a high-performance fighter jet

leave a comment »

Great story — and a very interesting fighter jet.

Written by Leisureguy

10 June 2021 at 11:25 am

What Really Happened When Google Ousted Timnit Gebru

leave a comment »

Tom Simonite writes in Wired:

ONE AFTERNOON IN late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying.

Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the coleader of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper—or else remove her name from its list of authors, along with those of several other members of her team.

The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software—most famously exemplified by a system called GPT-3—that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the company’s search engine. Jeff Dean, Google’s revered head of research, had encouraged Gebru to think about the approach’s possible downsides. The paper had sailed through the company’s internal review process and had been submitted to a prominent conference. But Kacholia now said that a group of product leaders and others inside the company had deemed the work unacceptable, Gebru recalls. Kacholia was vague about their objections but gave Gebru a week to act. Her firm deadline was the day after Thanksgiving.

Gebru’s distress turned to anger as that date drew closer and the situation turned weirder. Kacholia gave Gebru’s manager, Samy Bengio, a document listing the paper’s supposed flaws, but told him not to send it to Gebru, only to read it to her. On Thanksgiving Day, Gebru skipped some festivities with her family to hear Bengio’s recital. According to Gebru’s recollection and contemporaneous notes, the document didn’t offer specific edits but complained that the paper handled topics “casually” and painted too bleak a picture of the new technology. It also claimed that all of Google’s uses of large language models were “engineered to avoid” the pitfalls that the paper described.

Gebru spent Thanksgiving writing a six-page response, explaining her perspective on the paper and asking for guidance on how it might be revised instead of quashed. She titled her reply “Addressing Feedback from the Ether at Google,” because she still didn’t know who had set her Kafkaesque ordeal in motion, and sent it to Kacholia the next day.

On Saturday, Gebru set out on a preplanned cross-country road trip. She had reached New Mexico by Monday, when Kacholia emailed to ask for confirmation that the paper would either be withdrawn or cleansed of its Google affiliations. Gebru tweeted a cryptic reproach of “censorship and intimidation” against AI ethics researchers. Then, on Tuesday, she fired off two emails: one that sought to end the dispute, and another that escalated it beyond her wildest imaginings.

The first was addressed to Kacholia and offered her a deal: Gebru would remove herself from the paper if Google provided an account of who had reviewed the work and how, and established a more transparent review process for future research. If those conditions weren’t met, Gebru wrote, she would leave Google once she’d had time to make sure her team wouldn’t be too destabilized. The second email showed less corporate diplomacy. Addressed to a listserv for women who worked in Google Brain, the company’s most prominent AI lab and home to Gebru’s Ethical AI team, it accused the company of “silencing marginalized voices” and dismissed Google’s internal diversity programs as a waste of time.

Relaxing in an Airbnb in Austin, Texas, the following night, Gebru received a message with a 😮 from one of her direct reports: “You resigned??” In her personal inbox she then found an email from Kacholia, rejecting Gebru’s offer and casting her out of Google. “We cannot agree as you are requesting,” Kacholia wrote. “The end of your employment should happen faster than your email reflects.” Parts of Gebru’s email to the listserv, she went on, had shown “behavior inconsistent with the expectations of a Google manager.” Gebru tweeted that she had been fired. Google maintained—and still does—that she resigned.

Gebru’s tweet lit the fuse on a controversy that quickly inflamed Google. The company has been dogged in recent years by accusations from employees that it mistreats women and people of color, and from lawmakers that it wields unhealthy technological and economic power. Now Google had expelled a Black woman who was a prominent advocate for more diversity in tech, and who was seen as an important internal voice for greater restraint in the helter-skelter race to develop and deploy AI. One Google machine-learning researcher who had followed Gebru’s writing and work on diversity felt the news of her departure like a punch to the gut. “It was like, oh, maybe things aren’t going to change so easily,” says the employee, who asked to remain anonymous because they were not authorized to speak by Google management.

Dean sent out a message urging Googlers to ignore Gebru’s call to disengage from corporate diversity exercises; Gebru’s paper had been subpar, he said, and she and her collaborators had not followed the proper approval process. In turn, Gebru claimed in tweets and interviews that she’d been felled by a toxic cocktail of racism, sexism, and censorship. Sympathy for Gebru’s account grew as the disputed paper circulated like samizdat among AI researchers, many of whom found it neither controversial nor particularly remarkable. Thousands of Googlers and outside AI experts signed a public letter castigating the company.

But Google seemed to double down. Margaret Mitchell, the other coleader of the Ethical AI team and a prominent researcher in her own right, was among the hardest hit by Gebru’s ouster. The two had been a professional and emotional tag team, building up their group—which was one of several that worked on what Google called “responsible AI”—while parrying the sexist and racist tendencies they saw at large in the company’s culture. Confident that those same forces had played a role in Gebru’s downfall, Mitchell wrote an automated script to retrieve notes she’d kept in her corporate Gmail account that documented allegedly discriminatory incidents, according to sources inside Google. On January 20, Google said Mitchell had triggered an internal security system and had been suspended. On February 19, she was fired, with Google stating that it had found “multiple violations of our code of conduct, as well as of our security policies, which included exfiltration of confidential, business-sensitive documents.”

Google had now fully decapitated its own Ethical AI research group. The long, spectacular fallout from that Thanksgiving ultimatum to Gebru left countless bystanders wondering: Had one paper really precipitated all of these events?

The story of what actually happened in the lead-up to Gebru’s exit from Google reveals a more tortured and complex backdrop. It’s the tale of a gifted engineer who was swept up in the AI revolution before she became one of its biggest critics, a refugee who worked her way to the center of the tech industry and became determined to reform it. It’s also about a company—the world’s fifth largest—trying to regain its equilibrium after four years of scandals, controversies, and mutinies, but doing so in ways that unbalanced the ship even further.

Beyond Google, the fate of Timnit Gebru lays bare something even larger:  . . .

Continue reading. There’s much more.

Written by Leisureguy

9 June 2021 at 10:58 am

Why electric cars will take over sooner than you think

leave a comment »

Launched in 1996, the EV1 was GM’s first attempt at an electric car and failed to take off.

Justin Rowlatt has a very interesting report at BBC on why electric cars really are going to dominate Real Soon Now. The article has many charts, so I encourage you to click the link. The article begins:

I know, you probably haven’t even driven one yet, let alone seriously contemplated buying one, so the prediction may sound a bit bold, but bear with me.

We are in the middle of the biggest revolution in motoring since Henry Ford’s first production line started turning back in 1913.

And it is likely to happen much more quickly than you imagine.

Many industry observers believe we have already passed the tipping point where sales of electric vehicles (EVs) will very rapidly overwhelm petrol and diesel cars.

It is certainly what the world’s big car makers think.

Jaguar plans to sell only electric cars from 2025, Volvo from 2030 and last week the British sportscar company Lotus said it would follow suit, selling only electric models from 2028.

And it isn’t just premium brands.

General Motors says it will make only electric vehicles by 2035, Ford says all vehicles sold in Europe will be electric by 2030 and VW says 70% of its sales will be electric by 2030.

This isn’t a fad, this isn’t greenwashing.

Yes, the fact many governments around the world are setting targets to ban the sale of petrol and diesel vehicles gives impetus to the process.

But what makes the end of the internal combustion engine inevitable is a technological revolution. And technological revolutions tend to happen very quickly.

This revolution will be electric

Look at the internet.

By my reckoning, the EV market is about where the internet was around the late 1990s or early 2000s.

Back then, there was a big buzz about this new thing with computers talking to each other.

Jeff Bezos had set up Amazon, and Google was beginning to take over from the likes of Altavista, Ask Jeeves and Yahoo. Some of the companies involved had racked up eye-popping valuations.

For those who hadn’t yet logged on it all seemed exciting and interesting but irrelevant – how useful could communicating by computer be? After all, we’ve got phones!

But the internet, like all successful new technologies, did not follow a linear path to world domination. It didn’t gradually evolve, giving us all time to plan ahead.

Its growth was explosive and disruptive, crushing existing businesses and changing the way we do almost everything. And it followed a familiar pattern, known to technologists as an S-curve.

Riding the internet S-curve

It’s actually an elongated S.

The idea is that innovations start slowly, of interest only to the very nerdiest of nerds. EVs are on the shallow sloping bottom end of the S here.

For the internet, the graph begins at 22:30 on 29 October 1969. That’s when a computer at the University of California in LA made contact with another at Stanford University a few hundred miles away.

The researchers typed an L, then an O, then a G. The system crashed before they could complete the word “login”.

Like I said, nerds only.

A decade later there were still only a few hundred computers on the network but the pace of change was accelerating.

As the market grew, prices fell rapidly and performance improved in leaps and bounds – encouraging more and more people to log on to the internet.

The S is beginning to sweep upwards here; growth is becoming exponential. By 1995 there were some 16 million people online. By 2001, there were 513 million people.

Now there are more than three billion. What happens next is our S begins to slope back towards the horizontal.

The rate of growth slows as virtually everybody who wants to be is now online.
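[The S-curve described here is the logistic function familiar from technology-adoption models. A minimal sketch, with parameters I chose purely for illustration rather than fitted to real data: – LG]

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """Logistic S-curve: slow start, steep middle, saturation."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Illustrative internet-style adoption curve: 4 billion eventual
# users, midpoint around 2010, rate picked for a multi-decade rollout.
for year in (1995, 2001, 2010, 2020):
    users = logistic(year, ceiling=4e9, midpoint=2010, rate=0.35)
    print(year, f"{users / 1e9:.2f} billion")
```

The parameters are made up; the shape is the point: tiny numbers in the 1990s, explosive growth through the 2000s, and flattening once nearly everyone who wants to be online is.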

Jeremy Clarkson’s disdain

We saw the same pattern of a slow start, exponential growth and then a slowdown to a mature market with smartphones, photography, even antibiotics.

The internal combustion engine at the turn of the last century followed the same trajectory.

So did steam engines and printing presses. And electric vehicles will do the same.

In fact they have a more venerable lineage than the internet.

The first crude electric car was developed by the Scottish inventor Robert Anderson in the 1830s.

But it is only in the last few years that the technology has been available at the kind of prices that make it competitive.

The former Top Gear presenter and used car dealer Quentin Willson should know. He’s been driving electric vehicles for well over a decade.

He test-drove General Motors’ now infamous EV1 20 years ago. It cost a billion dollars to develop but was considered a dud by GM, which crushed all but a handful of the 1,000 or so vehicles it produced.

The EV1’s range was dreadful – about 50 miles for a normal driver – but Mr Willson was won over. “I remember thinking this is the future,” he told me.

He says he will never forget the disdain that radiated from fellow Top Gear presenter Jeremy Clarkson when he showed him his first electric car, a Citroen C-Zero, a decade later.

“It was just completely: ‘You have done the most unspeakable thing and you have disgraced us all. Leave!’,” he says. Though he now concedes that you couldn’t have the heater on in the car because it decimated the range.

How things have changed. Mr Willson says he has no range anxiety with his latest electric car, a Tesla Model 3.

He says it will do almost 300 miles on a single charge and accelerates from 0-60 in 3.1 seconds.

“It is supremely comfortable, it’s airy, it’s bright. It’s just a complete joy. And I would unequivocally say to you now that I would never ever go back.”

We’ve seen massive improvements in the motors that drive electric vehicles, the computers that control them, charging systems and car design.

But the sea-change in performance Mr Willson has experienced is largely possible because of the improvements in the non-beating heart of the vehicles, the battery.

Continue reading. There’s more — and it’s the important stuff.

Written by Leisureguy

7 June 2021 at 10:13 am

Apple’s tightly controlled App Store is teeming with scams

leave a comment »

Reed Albergotti and Chris Alcantaran report in the Washington Post:

Apple chief executive Tim Cook has long argued that the company needs to control app distribution on iPhones; otherwise, the App Store would turn into “a flea market.”

But among the 1.8 million apps on the App Store, scams are hiding in plain sight. Customers for several VPN apps, which allegedly protect users’ data, complained in Apple App Store reviews that the apps told users their devices have been infected by a virus to dupe them into downloading and paying for software they don’t need. A QR code reader app that remains on the store tricks customers into paying $4.99 a week for a service that is now included in the camera app of the iPhone. Some apps fraudulently present themselves as being from major brands such as Amazon and Samsung.

Of the highest 1,000 grossing apps on the App Store, nearly two percent are scams, according to an analysis by The Washington Post. And those apps have bilked consumers out of an estimated $48 million during the time they’ve been on the App Store, according to market research firm Appfigures. The scale of the problem has never before been reported. What’s more, Apple profits from these apps because it takes a cut of up to 30 percent of all revenue generated through the App Store. Even more common, according to The Post’s analysis, are “fleeceware” apps that use inauthentic customer reviews to move up in the App Store rankings and give apps a sense of legitimacy to convince customers to pay higher prices for a service usually offered elsewhere with more legitimate customer reviews.

Two-thirds of the 18 apps The Post flagged to Apple were removed from the App Store.

The most valuable company in U.S. history, Apple is facing unprecedented scrutiny for how it wields its power and is fighting to hold onto it, including in a blockbuster trial that concluded last month. Regulators and competitors have zeroed in on the App Store in particular: Unlike app stores on other mobile operating systems, Apple’s store faces no competition and is the only way for iPhone owners to download software to their phones without bypassing Apple’s restrictions. Through it, Apple keeps a tight grip on software distribution and payments on its mobile operating system, called iOS.

Apple has long maintained that its exclusive control of the App Store is essential to protecting customers, and it only lets the best apps on its system. But Apple’s monopoly over how consumers access apps on iPhones can actually create an environment that gives customers a false sense of safety, according to experts. Because Apple doesn’t face any major competition and so many consumers are locked into using the App Store on iPhones, there’s little incentive for Apple to spend money on improving it, experts say.

[He believed Apple’s App Store was safe. Then a fake app stole his life savings in bitcoin]

“If consumers were to have access to alternative app stores or other methods of distributing software, Apple would be a lot more likely to take this problem more seriously,” said Stan Miles, an economics professor at Thompson Rivers University in British Columbia, Canada. . .

Continue reading. There’s much, much more. Apple’s responses are evasive and defensive and Apple employees seem to have been forbidden to comment (except for those employees paid to lie).

Later in the article:

Apple says it is constantly improving its methods for sniffing out scams and usually catches them within a month of hitting the App Store. In a recent news release, Apple said it employed new tools to verify the authenticity of user reviews and last year kicked 470,000 app developer accounts off the App Store. Developers, however, can create new accounts and continue to distribute new apps.

Apple unwittingly may be aiding the most sophisticated scammers by eliminating so many of the less competent ones during its app review process, said Miles, who co-authored a paper called “The Economics of Scams.” [Typical of meme evolution when a selection process eliminates some memes: the surviving memes adapt to evade being selected out — cf. in lifeform evolution how widespread use of antibiotics leads to the rise of antibiotic-resistant bacteria. – LG]

“If people do believe or are not worried about being scammed, then there’s going to be a lot of victimization,” he said. Miles also said Apple could warn consumers that some apps “are probably fraud and so buyer beware and you do your homework before you buy the app and don’t trust our store.”

And later:

The prevalence of scams on Apple’s App Store played a key role at trial. Apple’s lawyers were so focused on the company’s role in making the App Store safe that Epic’s attorneys accused them of trying to scare the court into a ruling in favor of Apple. In other internal emails unearthed during trial that date as far back as 2013, Apple’s Phil Schiller, who runs the App Store, expressed dismay when fraudulent apps made it past App Store review.

[Apple is lobbying against a bill aimed at stopping forced labor in China]

After a rip-off version of the Temple Run video game became the top-rated app, according to Schiller’s email exchange, he sent an irate message to two other Apple executives responsible for the store. “Remember our talking about finding bad apps with low ratings? Remember our talk about becoming the ‘Nordstroms’ of stores in quality of service? How does an obvious rip off of the super popular Temple Run, with no screenshots, garbage marketing text, and almost all 1-star ratings become the #1 free app on the store?” Schiller asked his team. “Is no one reviewing these apps? Is no one minding the store?” Apple declined to make Schiller available to comment. At trial, Schiller defended the safety of the app store on the stand. The app review process is “the best way we could come up with … to make it safe and fair.”

Eric Friedman, head of Apple’s Fraud Engineering Algorithms and Risk unit, or FEAR, said that Apple’s screening process is “more like the pretty lady who greets you with a lei at the Hawaiian airport than the drug sniffing dog,” according to a 2016 internal email uncovered during the Epic Games trial. Apple employs a 500-person App Review team, which sifts through submissions from developers. “App Review is bringing a plastic butter knife to a gun fight,” Friedman wrote in another email. Apple declined to make Friedman available to comment. In deposition testimony, Friedman pointed to investments Apple has made to stop fraud. “A lot has changed in the last five years,” he said.

Though the App Store ratings section is filled with customer complaints referring to apps as scams, there is no way for Apple customers to report this to Apple, other than reaching out to a regular Apple customer service representative. Apple used to have a button, just under the ratings and reviews section in the App Store, that said “report a problem,” which allowed users to report inappropriate apps. Based on discussions among Apple customers on Apple’s own website, the feature was removed some time around 2016. Sainz said customers can still report apps through other channels.

And there’s much more. It’s a long article, and it shows just what a bad job Apple is doing. Part of that may be because Apple gets a hefty cut of money spent in the App Store and so doesn’t really care to police it effectively: they make money in either case.

Written by Leisureguy

6 June 2021 at 12:49 pm

Malcolm Gladwell’s Fantasy of War From the Air

leave a comment »

In the New Republic Colin Dickey reviews a recent book by Malcolm Gladwell:

There’s a scene in the 2011 film Moneyball where Brad Pitt’s Billy Beane is mentoring young Peter Brand (Jonah Hill) on how to cut a professional baseball player from the roster: bluntly, without euphemism. “Would you rather,” he asks, “get one shot in the head or five in the chest and bleed to death?” Imagine, if you will, that this was not a rhetorical question or an analogy about firing someone but rather a serious, literal question. Now imagine 206 pages of this, and you have a sense of what it’s like to read Malcolm Gladwell’s new book.

The Bomber Mafia: A Dream, a Temptation, and the Longest Night of the Second World War is a nasty, brutish book—if it’s also short, it’s not nearly short enough. It is a breathless and narratively riveting story about the best way to kill hundreds of thousands of civilians. It is the story of two different approaches to killing hundreds of thousands of civilians, and of the heroic men who each championed their own method for mass killing. Its central question is whether one should approach the wholesale massacre of the innocents with indifference or with hypocrisy, and its conceit is that this is a relevant or fascinating distinction. It is a book detailing a series of ingenious new technologies for butchery, dressed up in the polished technophilic language of a TED talk.

The book details the rise and fall (and rise again) of the doctrine of precision air bombing, an idea that emerged from the Air Corps Tactical School (the aviation equivalent of the Army War College), nicknamed the “Bomber Mafia.” The Air Force was not yet a separate branch of the military in the 1930s, but with the advent of military aviation the men at the Air Corps Tactical School (based at Maxwell Field in Montgomery, Alabama) began to fantasize about entirely new kinds of war-making and attempted to birth a revolution in how war might be fought. Their singular obsession, according to Gladwell, was this: What if, instead of bringing the full might of one’s military on the enemy, battering them into submission, you could take out key infrastructure and manufacturing targets (“choke points,” in the Bomber Mafia’s parlance) that would incapacitate your opponent while avoiding mass death?

It’s an interesting enough idea. In the opening years of World War II, aerial bombing meant total destruction. The London Blitz was designed to overwhelm the British and demoralize them into submission. England’s answer to this was Arthur “Bomber” Harris, whom Gladwell describes as, simply, a “psychopath.” Harris was one of the chief architects of the British tactic of “area bombing” or “morale bombing”: Reduce cities to rubble and incinerate the civilians until they submit. For Harris, civilians were viable targets if for no other reason than some of them worked in the factories that made bombs and submarines. As he would say later, “They were all active soldiers, to my mind.”

The minds at the Air Corps Tactical School thought there might be a different way. “The whole argument of the Bomber Mafia, their whole reason for being, was that they didn’t want to cross that line,” Gladwell writes. “They weren’t just advancing a technological argument. They were also advancing a moral argument.” When the Americans joined forces with the British Royal Air Force in bombing Germany, the Bomber Mafia sought to prove its approach. Under the command of General Haywood Hansell, the Americans argued that if they could destroy the Germans’ capacity to make ball bearings, they could bring their manufacturing to a standstill. What if you could leave the Germans for want of a nail and lose them the whole ship?

This is the “dream” of the subtitle—what if by changing one’s perspective and focusing on something small and seemingly insignificant, one could change how wars were fought? One can see how the author of The Tipping Point, Blink, and Outliers would be taken by a group whose motto was Proficimus more irretenti—“We make progress unhindered by custom.” The Bomber Mafia is adapted from an audiobook, which means that what sounds conversational and engaging on tape can sound garrulous on the page, but it also allows Gladwell to telegraph his breathless fascination with these men. “I worry that I haven’t fully explained just how radical—how revolutionary—the Bomber Mafia thinking was,” he says at one point, before launching on a long digression about chapel architecture. Unbound by tradition, the Bomber Mafia wanted to innovate and rethink war from the ground up (or the sky down). This is a group “utterly uninterested in heritage and tradition,” Gladwell explains; rather than “studying the Peloponnesian War or the Battle of Trafalgar,” they were readying themselves for “today’s battles.”

In Gladwell’s world, the people who matter are the innovators, the disrupters. The protagonists of The Bomber Mafia are all various analogs of Steve Jobs or John Lennon—heroic icons who brought a unique perspective and, through determination and insight, pursued a dream that changed the world. But such decisions never happen in a vacuum, and by foregrounding such technological pursuits, The Bomber Mafia furthers the fiction that somehow airstrikes can be moral.


How much can you change the world from the air? In the 1920s, when aviation was in its infancy, proponents for air power imagined a utopian possibility: The airplane was so new, so unrefined, and offered so much potential. The sky was the limit, and perhaps somewhere in this technology would be a way to end war once and for all.

Though this dream would fade fast, the book strains to carry this early naïveté over to the realities of World War II. Gladwell organizes his chapters around individual men with unique, startling ideas, like Carl L. Norden, a Dutch engineer whose obsession was the aerial bomb sight, which would enable precision strikes and could entirely change how aerial warfare was conducted. The book follows first Norden and then the Air Corps Tactical School under Haywood Hansell, as it attempts to prove the efficacy of the precision-bombing thesis. This group is repeatedly contrasted with men like Harris, and portrayed as searching for a “moral” approach to bombing. Hansell, we’re told, “provides us with a model of what it means to be moral in our modern world.”

Gladwell repeats this line throughout; he quotes Tami Biddle, professor of national security at the U.S. Army War College, on this as well: “I think there’s a strong moral component to all this,” she tells Gladwell,

a desire to find a way to fight a war that is clean and that is not going to tarnish the American reputation as a moral nation, a nation of ideas and ideology and commitment to individual rights and respect for human beings.

Tellingly, though, Gladwell provides no direct quotes from Hansell or the Bomber Mafia suggesting that they thought their approach was moral; it’s all a retrospective appraisal from contemporary historians. After all, here is what their so-called “moral” approach looked like at the time: In a wargame that proposed a conflict between Canada and the United States, the Bomber Mafia gamed out what it would take for a hypothetical airstrike launched from Toronto to take out New York City. Bomber Mafia associate Muir Fairchild instead theorized that you could bring the city to its knees by striking 17 targets: the bridges, the aqueducts that brought fresh water to the city, and the power grid. As military historian Robert Pape explains, “They basically want to create a situation where there’s almost no potable water for the population to drink.” This would avoid “wave upon wave of costly and dangerous bombing attacks” or reducing the city to rubble, while still incapacitating the city. This, somehow, is the moral option: cutting off a city of millions to die slowly of thirst. We are back to Billy Beane’s question: Would you rather get one shot in the head or five in the chest and bleed to death? . . .

Continue reading. There’s more.

Written by Leisureguy

6 June 2021 at 8:03 am

The Lost Prophet of California Agriculture

leave a comment »

Charlie Siler has a well-illustrated and very interesting article in Craftsmanship magazine. He writes:

  1. Lessons of The Dust Bowl
  2. The Joys of Tinkering
  3. The Search For The Perfect Machine
  4. What Could Have Been

Al Ruozi, age 97, is a high-school dropout from Bakersfield, California, who made his living selling farm machinery that he designed and welded together, using handmade machinery that he built himself, in a building that he and his brother assembled. His primary invention, created in the 1950s, was a machine that gave cotton farmers a better way to clear their land. While little-known in the U.S., Ruozi’s invention has been emulated around the world, leading the way to a new generation of farm equipment that can save water, improve soil quality, and maybe even fight climate change.

“Al Ruozi was the inspiration for much of the innovation that happened over the next 30 years,” says Jeff Mitchell, a conservation specialist in the Department of Plant Sciences at UC Davis.

Bakersfield was a harsh place in the 1930s, when Ruozi quit school to help out on the family farm. The dust storms of the U.S. prairies had sent thousands of farmers west to California in search of jobs and land. The country was still recovering from the Great Depression, with unemployment improving only after 1933, when it peaked at 25 percent.

The Okies in the shantytowns of Bakersfield had to contend with hostile locals and inadequate sanitation that sometimes led to dysentery, impetigo or hookworm. In December 1936, concern about disease led a group of Bakersfield citizens to burn down an Okie slum that housed 1,500 people.

LESSONS OF THE DUST BOWL

We now know that the seeds of the Dust Bowl were sown in the 1920s, when the virgin topsoil of the Great Plains was broken by gasoline-driven plows, destroying native grasses and ruining the ability of the land to hold itself together, and thus retain moisture. In the 1930s, the ecological payback of the disaster wasn’t as clear as it is today. Yet somehow, the lessons of the prairies’ raging dust storms were not completely lost on a teenaged Ruozi. He saw that land could be ruined, and he suspected that the plow was to blame.

He nursed his concerns as he worked behind a horse-drawn plow, tilling his family’s land by slashing into the compacted dirt with the very practice that had contributed to the plight of the refugees huddled in squalid camps a few miles away. “The idea hit me,” he recalled during a recent conversation in his Bakersfield office. “I thought, why is the ground so hard?” Ruozi resolved to find a way to make it more “pliable.”

He got his chance about a decade later, when he returned to Bakersfield following some time in welding school and a stint in the Army during World War II. In 1948, Ruozi and his brother Gilbert bought and assembled a Quonset hut, one of the semi-cylindrical pre-fabricated structures that were used by the U.S. military in World War II and sold as surplus to the public afterward.

THE JOYS OF TINKERING

Ruozi called the new company Interstate Equipment and Manufacturing Corp. He worked there with his torch, using his welding skills to make potato-tillage equipment for a nearby manufacturer. All the while, he kept tinkering. “I’d pick up an old machine here or there, any time there was a scrap machine, and see if I could make it work,” Ruozi says. “I started out that way. Sometimes it worked, sometimes it didn’t.”

By the early 1950s, he was making his own patented machine. He called it the . . .

Continue reading. There’s much more, including some good photos.

Written by Leisureguy

4 June 2021 at 10:46 am

A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says

leave a comment »

Highly recommended: Kill Decision and also Influx, by Daniel Suarez (and I would also highly recommend his other books, including the duology Daemon and Freedom™). Joe Hernandez reports for NPR:

Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator. Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.

Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.

“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in Bulletin of the Atomic Scientists.

The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.

“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.

The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.

The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.

The idea of a “killer robot” has moved from fantasy to reality

Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.

Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.

What’s new about the incident in Libya, if confirmed, is . . .

Continue reading. There’s more.

Written by Leisureguy

2 June 2021 at 3:34 pm

Wood goes techno

leave a comment »

Sid Perkins writes in Scientific American:

Some varieties of wood, such as oak and maple, are renowned for their strength. But scientists say a simple and inexpensive new process can transform any type of wood into a material stronger than steel, and even some high-tech titanium alloys. Besides taking a star turn in buildings and vehicles, the substance could even be used to make bullet-resistant armor plates.

Wood is abundant and relatively low-cost—it literally grows on trees. And although it has been used for millennia to build everything from furniture to homes and larger structures, untreated wood is rarely as strong as metals used in construction. Researchers have long tried to enhance its strength, especially by compressing and “densifying” it, says Liangbing Hu, a materials scientist at the University of Maryland, College Park. But densified wood tends to weaken and spring back toward its original size and shape, especially in humid conditions.

Now, Hu and his colleagues say they have come up with a better way to densify wood, which they report in Nature. Their simple, two-step process starts with boiling wood in a solution of sodium hydroxide (NaOH) and sodium sulfite (Na₂SO₃), a chemical treatment similar to the first step in creating the wood pulp used to make paper. This partially removes lignin and hemicellulose (natural polymers that help stiffen a plant’s cell walls)—but it largely leaves the wood’s cellulose (another natural polymer) intact, Hu says.

The second step is almost as simple as the first: Compressing the treated wood until its cell walls collapse, then maintaining that compression as it is gently heated. The pressure and heat encourage the formation of chemical bonds between large numbers of hydrogen atoms and neighboring atoms in adjacent nanofibers of cellulose, greatly strengthening the material.

The results are impressive. The team’s compressed wood is three times as dense as the untreated substance, Hu says, adding that its resistance to being ripped apart is increased more than 10-fold. It also can become about 50 times more resistant to compression and almost 20 times as stiff. The densified wood is also substantially harder, more scratch-resistant and more impact-resistant. It can be molded into almost any shape. Perhaps most importantly, the densified wood is also moisture-resistant: In lab tests, compressed samples exposed to extreme humidity for more than five days swelled less than 10 percent—and in subsequent tests, Hu says, a simple coat of paint eliminated that swelling entirely.

A five-layer, plywoodlike sandwich of densified wood stopped simulated bullets fired into the material—a result Hu and his colleagues suggest could lead to low-cost armor. The material does not protect quite as well as a Kevlar sheet of the same thickness—but it only costs about 5 percent as much, he notes.

The team’s results “appear to . . .

Continue reading.

Written by Leisureguy

31 May 2021 at 11:06 am

Answering the Techno-Pessimists (complete)

leave a comment »

Noah Smith has a long column spelling out reasons for his techno-optimism:

When I started this Substack six months ago, I made it explicitly a techno-optimist blog. A number of my earliest posts were gushing with optimism over the magical new technologies of cheap solar, cheap batteries, mRNA vaccines, and so on. But a blogger at a blog called Applied Divinity Studies wrote a post demanding more rigor to accompany my rosy projections, and putting forth a number of arguments in favor of continued stagnation. Heavily paraphrased, these were:

  1. We’ve picked the low-hanging fruit of science
  2. Productivity has been slowing down, why should it accelerate now?
  3. Solar, batteries, and other green energy tech isn’t for real
  4. Life expectancy is stagnating

So I decided to write a series of posts addressing all of these arguments. Here’s the whole series in one post.

Part 1: Life Expectancy

Is life expectancy stagnating?

The blogger at Applied Divinity Studies posted the following graph: . . .

Continue reading. There’s much more.

Written by Leisureguy

30 May 2021 at 6:27 pm
