Later On

A blog written for those whose interests more or less match mine.

Posts Tagged ‘history’

Why Did World War II End?


Kevin Drum has an interesting post that makes the argument that it was not the atomic bombs that forced Japan’s surrender. I find his argument persuasive.

Written by LeisureGuy

7 August 2020 at 6:09 pm

Posted in Daily life


When the Culture Wars Hit Fort Wayne


Charlie Savage — a really fine reporter — has a lengthy and intriguing article in Politico on the nuts and bolts of civic ambition and conflict. It begins:

FORT WAYNE, Ind.—One chilly February evening last year in this Midwestern town where I grew up, a city council member named Jason Arp proposed a resolution: Starting that summer, the city should annually celebrate a “General ‘Mad’ Anthony Wayne Day.”

My hometown is a rust-belt industrial city of a quarter-million people at the confluence of three slow rivers, today home to branches of several defense contractors and a regionally famous zoo. Wayne was an early American military leader who in 1794 built a frontier fort here, during the period when white settlers from the young United States were pushing west into the Ohio Valley and coming into conflict with Native Americans. As a piece of city business, having a day in Fort Wayne to honor Anthony Wayne might seem—from a distance at least—to be correcting an oversight. Why wouldn’t the city have a founder’s day?

At the meeting, Arp, a stocky man in his mid-40s with close-cropped hair, leaned back and put forward a glowing portrait of Wayne as a hero and role model, clicking through a slideshow. Not only had Wayne won the Northwest Indian War and established the outpost that grew into the city, but he had earlier won fame in the American Revolution for leading a charge that overran a British outpost at Stony Point, New York. This feat, Arp said, had thwarted a British attempt to capture top American military leaders like George Washington and Henry Knox after the traitor Benedict Arnold had revealed their location. “We can thank Anthony Wayne for the fact that we even have a United States of America,” he said.

He proposed that July 16, the date of the 1779 Stony Point raid, be celebrated as Wayne Day.

Arp’s account of Wayne’s life and contributions to American history is not the universal consensus among historians, and the details appeared unfamiliar to the other city council members. Watching a video of this meeting later, I was not surprised. Local public schools had taught us almost nothing about Wayne. Occasional Cub Scout visits to the “Old Fort”—a replica fort built in the run-up to its bicentennial and staffed by costumed reenactors—were primarily about spectacles like a blacksmith hammering red-hot iron into a horseshoe and soldiers raising a flag or firing a deafening blank from a cannon, with light discussion of history. To the extent we thought about Wayne himself, the version we grew up with amounted to something like this: Indians had been giving settlers a hard time, and Wayne solved the problem.

Arp described this, in a way. His presentation detailed a military campaign in which Wayne defeated a regional alliance of native tribes, omitting most of the larger context about what had led to the war. His resolution also softened any moral discomfort raised by the basic circumstances of Wayne’s feat—he led outside invaders to victory over people trying to defend their homes—by making the claim that the Indians had been “British led,” reframing the conflict as a struggle driven by two factions of white people.

The council meeting went on to address more typical business, like approving a maintenance contract for a water treatment plant. The impact of Arp’s Wayne Day resolution would be very different. The people in the room might not have fully appreciated it when they arrived, but the culture wars had just come to Fort Wayne.

It wasn’t surprising that Arp had been the one to fire the first shot. He had stood apart from most of his colleagues on the city council since his election in 2015, embodying a local version of the strain of Republican politics that branded itself as the Tea Party in opposition to Barack Obama and that has tightened its grip on the GOP in the Donald Trump era. Though Arp was one of seven Republicans on the nine-member council, he often found himself fighting against the majority in his own party, as well as the two Democrats. A former mortgage-backed securities trader who told me he primarily lives off his investments, he voted against budgets and redevelopment project subsidies that the council nevertheless passed; he unsuccessfully proposed to eliminate a tax that funds local libraries and public schools. (His own daughters are home-schooled.) Arp went so far as to release a scorecard on his colleagues based on whether their votes supported or opposed activity by the government, which he translated into a choice between an “authoritarian” mindset or for “liberty.” He tarred not only the two Democrats on the council as authoritarian-leaning, but also its five more traditional Republican members.

Arp pitched his Wayne Day resolution at a moment when he needed to shore up support and attract attention. Near his term’s end, he was facing a contested primary. His opponent, a more traditional Republican, won a few prominent endorsements, putting Arp’s political future in doubt. But that was before the Wayne Day fight—which would arouse the anger of a local clergyman, paralyze the city’s historical society, and surprise Fort Wayne with unhappy emissaries from a distant tribal nation in Oklahoma.

The summer of 2020 has become a time of asking questions about what America commemorates and why. The nationwide Black Lives Matter protests that followed the police killing of George Floyd in Minneapolis are growing into a broader moment of reckoning, as the push to take down Confederate Civil War monuments expands into reconsidering statues and naming honors for historical figures like Christopher Columbus, Woodrow Wilson and Margaret Sanger. Business owners are dropping longstanding Native American stereotypes as logos and mascots from products like Land O’Lakes Butter and the no-longer Washington Redskins; frictions are erupting over calls to paint over New Deal-era murals in schools that depict slavery and Indian warfare.

But these tensions usually flare around whether to take down existing things and retire established traditions. Such debates tend to be complicated by nostalgia and the impulse to preserve things as we are accustomed to them. Wayne Day was different: This was an attempt to create something new.

On one level, to grow up in Fort Wayne was to be saturated in references to Anthony Wayne and the Native Americans he fought. I opened my first savings account at a branch of Anthony Wayne Bank, across Anthony Boulevard from an ice cream parlor that served massive “Mad Anthony” sundaes. The names of tribes that originally lived here and their chiefs also adorn schools, streets, libraries and camps. The most prominent was Chief Little Turtle of the Miami tribe, whose de facto capital town, Kekionga, had stood here long before Wayne came.

Yet most of us would have been hard-pressed to detail who any of these people were, or explain the way that Fort Wayne’s submerged and sometimes unpleasant history helped shape modern-day America.

I have no recollection of anyone explaining that the name of our main geographical feature—the Maumee River, which forms here where two lesser waterways merge and flows to Toledo, Ohio, where it empties into Lake Erie—came from the Miami tribe. Nor was I taught that as white Americans pushed west, the Miami invited refugee tribes dislodged from their homelands, the Shawnee and the Delaware, to resettle here.

The fort that Wayne built here was soon abandoned by the army as the frontier kept shifting west, and while the white settlement that grew up around the site boomed for a period as an industrial center, it was always a minor place compared with other cities that grew up around frontier forts in the Midwest, especially Chicago and Detroit. But for a period, this spot was of singular importance in North America.

As a principal Native American settlement under the control of the Miami tribe, it flourished for most of the 18th century because it controlled the shortest overland connection between two vast river networks, linking Quebec and the Great Lakes region to the Mississippi Valley and its seaport at New Orleans. This land portage made it a crossroads both for the lucrative North American fur trade, and for Algonquin tribes throughout what we now call the Midwest.

Already a crucial hub, the cluster of native towns here then served as the military headquarters for a multi-tribal alliance—sometimes called the Western Confederacy—that battled white America for control of the entire Midwest after the Revolutionary War. The war ended when General “Mad” Anthony Wayne, carrying out the policies of President George Washington, subjugated the tribal confederation’s warriors in battle and then systematically burned down their villages and destroyed their food stores ahead of winter, breaking the tribes’ will to keep resisting. Wayne died soon after negotiating a peace treaty, but his conquest unleashed hundreds of thousands of white settlers to rapidly transform the Old Northwest into the new states of Ohio, Indiana, Illinois, Michigan, and Wisconsin. Most of its original inhabitants, especially those like the Miami who lived in the lower three states, would be removed within two generations.

Perhaps it was not surprising we hadn’t been taught about the . . .

Continue reading. There’s much more and it’s interesting.

Written by LeisureGuy

31 July 2020 at 12:01 pm

Posted in Daily life, Government


Age of Invention: The Agglomerationists


Anton Howes writes:

The other day, economic historian Tim Leunig tagged me into a comment on twitter with the line “intellectually I think the biggest change since settled agriculture was the idea that most people could live in cities and not produce food”. What’s interesting about that, I think, is the idea that this was not just an economic change, but an intellectual one. In fact, I’ve been increasingly noticing a sort of ideology, if one can call it that, which seemingly took hold in Britain in the late sixteenth century and then became increasingly influential. It was not the sort of ideology that manifested itself in elections, or even in factions, but it was certainly there. It had both vocal adherents and strenuous opponents, the adherents pushing particular policies and justifying them with reference to a common intellectual tradition. Indeed, I can think of many political and economic commentators who are its adherents today, whether or not they explicitly identify as such.

Today, the people who hold this ideology will occasionally refer to themselves as “urbanists”. They are in favour of large cities, large populations, and especially density. They believe strongly in what economists like to call “agglomeration effects” — that is, if you concentrate people more closely together, particularly in cities, then you are likely to see all sorts of benefits from their interactions. More ideas, more trade, more innovation, more growth.

Yet urbanism as a word doesn’t quite capture the full scope of the ideology. The group also heavily overlaps with natalists — people who think we should all have more babies, regardless of whether they happen to live in cities — and a whole host of other groups, from pro-immigration campaigners, to people setting up charter cities, to advocates of cheaper housing, to enthusiasts for mass transit infrastructure like buses, trams, or trains. The overall ideology is thus not just about cities per se — it seems a bit broader than that. Given the assumptions and aims that these groups hold in common, perhaps a more accurate label for their constellation of opinions and interests would be agglomerationism.

So much for today. What is the agglomerationist intellectual tradition? In the sixteenth century, one of the mantras that keeps cropping up is the idea that “the honour and strength of a prince consists in the multitude of the people” — a sentiment attributed to king Solomon. It’s a phrase that recurs in some shape or form throughout the centuries, used to justify a whole host of agglomerationist policies. And most interestingly, it first appears when England was not at all urban, in the mid-sixteenth century — only about 3.5% of the English population lived in cities in 1550, far lower than the rates in the Netherlands, Italy, or Spain, each of which had urbanisation rates of over 10%. Even England’s largest city by far, London, was by European standards quite small. Both Paris and Naples were at least three times as populous (don’t even mention the vast sixteenth-century metropolises of China, or Constantinople).

Given their lack of population or density, English agglomerationists had a number of role models. One was the city of Nuremberg — through manufactures alone, it seemed, a great urban centre had emerged in a barren land. Another was France, which in the early seventeenth century seemed to draw in the riches to support itself through sheer exports. One English ambassador to France in 1609 noted that its “corn and grain alone robs all Spain of their silver and gold”, and warned that it was trying to create still new export industries like silk-making and tapestry weaving. (The English rapidly tried to do the same, though with less success.) France may not have been especially urban either, but Paris was already huge and on the rise, and the country’s massive overall population made it “the greatest united and entire force of any realm or dominion” in Christendom. Today, the populations of France and Britain are about the same, but in 1600 France’s was about four times as large: some 20 million compared to a paltry 5 million. If Solomon was right, then England had a lot of catching up to do to even approach France in honour.

Most important of all the role models, however, was the Dutch Republic. Here was a nation that could not possibly feed itself using its own land, but which nonetheless was extremely populous and wealthy. It made itself, as one English admirer put it, “powerful and rich in all kinds, by merchandise, manufactory, and fullness of trade, having no commodities in their own country growing”. With low tariffs and a vast merchant fleet, Amsterdam had become the global entrepôt, sucking in commodities from all across the world and then re-exporting them to the rest of Europe. And through high-value manufactures, it paid for whatever imports of raw materials it needed to support its population. It wove and dyed English wool into expensive fine cloth, then sold it back at a profit to the English themselves. It built its ships from Baltic timber, then used those ships to dominate the Baltic trade. Despite having “not a timber tree growing in their country”, the Dutch were somehow still able to build and maintain the largest merchant and fishing fleet in Europe. Observers even marvelled at how a country that produced hardly any grain of its own could relieve food shortages in England, which was usually a major grain exporter.

To emulate the Dutch, English agglomerationists recommended . . .

Continue reading.

Written by LeisureGuy

18 July 2020 at 9:26 am

Posted in Daily life, Government


The Oracle of Delphi: The Apex of Ancient Greece


Jimmy Maher has a book, The Oracle of Delphi, online and also available as a Kindle book. Chapter XIX begins:

After a war is lost the losers must decide among themselves who was most to blame and who was simply a victim of circumstance. But the converse is also true: after a war is won, the victors must decide among themselves who has earned the biggest share of the glory and who could have done more for the cause. Greece was no exception to this rule as it looked about itself dazedly in 479 BC, still trying to understand how its improbable deliverance had come about. The Persian War became Greece’s collective crucible of character. For many years to come, “What did you do in the war?” would be the question on every tongue, to be asked of individual men and cities alike. The answer could be either a source of the greatest pride or the most abject shame.

The actions of Delphi were scrutinized as closely as those of any other city. In these increasingly skeptical times, when some were beginning to look upon the old beliefs with a decidedly jaundiced eye, there were insinuations made about the lack of confidence in the oracle’s countrymen which many of her prophecies had conveyed before the war; there were whispers that Delphi’s supposed divine rescue from Mardonius’s army was, at best, a distortion of the full truth; there was talk of secret deals and dishonorable transactions meant to preserve Delphi’s special status after a Greek defeat.

Yet most Greeks proved willing to give Delphi and its oracle the benefit of the doubt. For, as a new sense of shared Greekness swept the land in the wake of the great victory, Delphi remained in some ineffable sense the seat of Greek consciousness. One might even say that the living soul of Greece still burned there.

Accordingly, the oracle ordered every other city in the land to extinguish the fires that burned in its temples and sacred spaces, saying that all of them had been polluted by the barbarian invaders. Then she ordered the fires relighted using torches kindled at her own temple’s sacred flame. One zealous soul from the city of Plataea ran from Delphi to his home in a single day in order to make Plataea’s temple of Artemis the first site to rekindle its flame. He dropped dead upon his arrival, whereupon he was buried with honor in the temple itself.

The oracle forgot the timidity that had marked her tenure prior to the war and leaned into her power with abandon. Apparently forgetting about her own gloomy prewar prophecies, she ordered that every city in Greece which had surrendered to the Persians, or simply hadn’t given everything it could have to the war effort, should pay a stiff tithe to the god of Delphi by way of repentance for its lack of faith. Part of this money was used to build a grand monument to the heroes of the war on the very terrace of Apollo’s temple. It took the form of a statue of the god almost as tall as the temple itself, pulverizing a Persian ship with his hands. The rest of the money was used for another monumental sculpture: three golden stars on a life-sized ship’s mast of bronze, which took pride of place at the other entrance to the temple, between the offering bowls of Croesus.

But the grandest monument of all was built using the spoils of war which the Persians themselves had left strewn over the landscape behind them as they retreated. Just in the front of the temple, there was erected a column soaring higher than any other structure in Delphi. Atop it were mounted three serpents made of gold and bronze, supporting a golden tripod whose distinctive gleam could be seen on the horizon from leagues away on a sunny day. It was Delphi’s crowning glory.

These monuments were unprecedented in that they were the shared project of all of Greece, tangible evidence of the new spirit of fellowship. And yet they were only the beginning of the redoubled frenzy of building which was suddenly going on in the town. Gold and marble poured into Delphi along with craftsmen to work them, as every Greek city — and not a few barbarian ones — attempted to outdo its peers in size and beauty. Delphi received more attention than most of the donor cities’ own central squares. Some cities even started to stamp Delphi’s likeness onto their coins; it had become a united Greece’s de facto capital.

The Pythian Games assumed a new importance and popularity that eclipsed those of the Olympic Games, long their only real rivals. They took place over five days at the end of summer on every fourth year, as they always had, but expanded dramatically in scale and variety. Not only all manner of athletic competition but every form of performance art was on offer, along with grand exhibitions of painting and sculpture. Delicacies from abroad were heaped upon the banquet tables to satisfy the spectators’ appetites, and exotic animals were herded into cages for their amusement. The Games culminated on their final day in the chariot races, the winner of which was guaranteed eternal fame and glory — not least because Pindar of Thebes, the finest poet of his generation, was always on hand to pen an ode in the champion’s honor, placing him almost on an equal footing with the legendary heroes of the Age of Gods.

Nevertheless, this was the Age of Men, and politics remained more complicated and more time-consuming than they had been of old, when the gods could be trusted to take a more active role in affairs. Thus Delphi buzzed even when neither the Pythian Games nor any of the many smaller festivals were taking place. Its people’s winter retirement to the Corycian Cave became a matter of ceremony only, for the new Greece needed Delphi the whole year round. The rulers and diplomats who came to the town still consulted with the oracle, but just as important were their consultations with one another. Delphi was the place where deals were made, treaties negotiated, transactions conducted. Most of the major cities maintained permanent embassies at Delphi, which buildings became, like so much else in the town, a way to demonstrate via their size and opulence the power and wealth of the ones who had built them.

There was no question which city came out of the Persian War with the biggest surfeit of both. While . . .

Continue reading.

Written by LeisureGuy

15 July 2020 at 4:30 pm

Posted in Books


The weight of air


Anton Howes, writing in Age of Invention:

Why was the steam engine invented in England? An awful lot hinges on this question, because the answer often depends on our broader theories of what caused the British Industrial Revolution as a whole. And while I never tire of saying that Britain’s acceleration of innovation was about much, much more than just the “poster boy” industries of cotton, iron, and coal, the economy’s transition to burning fossil fuels was still an unprecedented and remarkable event. Before the rise of coal, land traditionally had to be devoted to either fuel, food, or clothing: typically forest for firewood, fields for grain, and pastures for wool-bearing sheep. By 1800, however, English coal was providing fuel each year equivalent to 11 million acres of forest — an area that would have taken up a third of the country’s entire surface area, and which was many times larger than its actual forest. By digging downward for coal, Britain effectively increased its breadth.
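That acreage comparison checks out arithmetically. A quick sanity check — a sketch only, assuming a modern figure of roughly 32 million acres for England’s land area, which is not a number given in the post:

```latex
\[
\frac{11 \times 10^{6}\ \text{acres of forest-equivalent coal}}
     {\approx 32 \times 10^{6}\ \text{acres of land}}
\approx 0.34 \approx \tfrac{1}{3}
\]
```

which matches the claim that an equivalent forest would have covered about a third of the country.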

And coal found new uses, too. It had traditionally just been one among many different fuels that could be used to heat homes, alongside turf, gorse, firewood, charcoal, and even cow dung. When such fuels were used for industry, they were generally confined to the direct application of heat, such as in baking bricks, evaporating seawater to extract salt, firing the forges for blacksmiths, and heating the furnaces for glass-makers. Over the course of the seventeenth century, however, coal had increasingly become the fuel of choice for both heating homes and for industry. Despite its drawbacks — it was sooty, smelly, and unhealthy — in places like London it remained cheap while the price of other fuels like firewood steadily increased. More and more industries were adapted to burning it. It took decades of tinkering and experimentation, for example, to reliably use coal in the smelting of iron.

Yet with the invention of the steam engine, the industrial uses of coal multiplied further. Although the earliest steam engines generally just sucked the water out of flooded mines, by the 1780s they were turning machinery too. By the 1830s, steam engines were having a noticeable impact on British economic growth, and had been applied to locomotion. Steam boats, steam carriages, steam trains, and steam ships proliferated and began to shrink the world. Rather than just a source of heat, coal became a substitute for the motive power of water, wind, and muscle.

So where did this revolutionary invention come from? There were, of course, ancient forms of steam-powered devices, such as the “aeolipile”. Described by Hero of Alexandria in the 1st century, the aeolipile consisted of a hollow ball with nozzles, configured in such a way that the steam passing into the ball and exiting through the nozzles would cause the ball to spin. But this was more like a steam turbine than a steam engine. It could not do a whole lot of lifting. The key breakthroughs came later, in the late seventeenth and early eighteenth centuries, and instead exploited vacuums. In a steam engine the main force was applied, not by the steam itself pushing a piston, but by the steam within the cylinder being doused in cold water, causing it to rapidly condense. The resulting partial vacuum meant that the weight of the air — the atmospheric pressure — did the real lifting work. The steam was not there to push, but to be condensed and thus pull. It saw its first practical applications in the 1700s thanks to the work of a Devon ironmonger, Thomas Newcomen.
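The scale of the "weight of the air" is easy to underestimate. As a rough illustration — assuming a near-perfect vacuum under the piston and a cylinder bore of 21 inches, a figure commonly attributed to Newcomen's 1712 engine but not stated in the post:

```latex
\[
F = P_{\text{atm}} \cdot A
  = 14.7\ \tfrac{\text{lbf}}{\text{in}^{2}} \times \pi \, (10.5\ \text{in})^{2}
  \approx 14.7 \times 346
  \approx 5{,}100\ \text{lbf}
\]
```

Over two tons of force per stroke, supplied entirely by the atmosphere pressing down on the piston — the steam itself, as described above, pushes nothing.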

Science was important here. Newcomen’s engine could never have been conceived had it not been for the basic and not at all obvious observation that the air weighed something. It then required decades of experimentation with air pumps, barometers, and even gunpowder, before it was realised that a vacuum could rapidly be created through the condensation of steam rather than by trying to suck the air out with a pump. And it was still more decades before this observation was reliably applied to exerting force. An important factor in the creation of the steam engine was thus that there was a sufficiently large and well-organised group of people experimenting with the very nature of air, sharing their observations with one another and publishing — a group of people who, in England, formalised their socialising and correspondence in the early 1660s with the creation of the Royal Society.

Yet many of the early experimenters with the nature of air, such as Evangelista Torricelli, Otto von Guericke, Denis Papin, and Christiaan Huygens, were Italian, German, French, and Dutch. The likes of Huygens and Papin may have been involved with England’s Royal Society, but the institution was just a branch of a much broader network of European scientists. Why, then, did the scientific observations only find their practical application in England? One influential answer is to do with England’s resource costs, and especially the presence of coal mines. The economic historian Robert C. Allen, for example, while fully recognising the key role played by seventeenth-century science, argues that “had the British coal industry not existed, there would have been no point going to the expense of developing the steam engine”.

Allen explains that the early engines were so energy-hungry that they were only cost-effective if applied directly to the source of their own fuel: the coal mines. It was where the fuel was cheapest. Thus, Allen argues that as a result of Britain having lots of coal mines, the nascent steam engine was kept alive as a technology for long enough that it could be developed and improved by various other inventors, until such time as it could be made cost-effective to apply it to other kinds of mine and then to rotary motion too. As Britain simply had more coal mines than anywhere else, Allen argues, British-based inventors were thus responsible for more of the steam engine’s improvements. Coal was the cradle for the infant industry.

But I don’t think this is quite right. The devil here is in the detail. Allen himself notes that although the first engine was made public in 1712 at a coal mine, Thomas Newcomen’s earliest experiments with steam engines, c.1700-10, occurred in Cornwall. The problem for Allen’s narrative here is that the earliest engines would thus have been used for pumping mines of tin, not coal. Indeed, I can hardly think of a more expensive place than Cornwall to have tried to invent a coal-fuelled steam engine. I say this because just last week I noticed some interesting details about Cornish fuel supplies in the travel accounts of Celia Fiennes, who meticulously recorded her visit to Cornwall in 1698 — just a few years before Newcomen’s very first engines were erected. Fiennes noted the same problem that Newcomen specifically set out to solve, which was that the tin mines near Cornwall’s southern coast had recently become especially prone to flooding. But she also commented on severe shortages of two of the most necessary resources: Cornwall, Fiennes noted, was almost entirely devoid of wood, which was a crucial material in the construction of any kind of pumping engine, regardless of whether it was powered by horse, water, or steam. This startup cost was likely offset for Newcomen by the fact that the government in the 1700s had begun to subsidise tin production. Investment was thus forthcoming. (It helped that one of the most powerful British ministers was the Earl of Godolphin, a major Cornish owner of tin mines.) But the other noticeably lacking resource, at least in Cornwall’s south, was coal.

Cornwall’s coal was largely shipped from Wales or Bristol, on Britain’s western coast. Yet Fiennes noted that during wartime these ships found it much riskier to sail around Land’s End and into the English Channel, due to the threat from French pirates that might be lying in wait. Cornwall’s northern coast was thus able to get Bristol coal cheaply, but during wartime the southern coast was hardly supplied with any kind of fuel at all, be it coal or even firewood. Fiennes described how her supper could be boiled by burning the abundant local gorse, but that roasting a meal was reserved only for special occasions because it required scarce firewood. When Fiennes was writing, the Nine Years’ War had just ended and the southern coast of Cornwall was beginning to receive its regular coal and wood supplies again. But an even more extensive conflict with France . . .

Continue reading.

Written by LeisureGuy

14 July 2020 at 3:01 pm

Posted in Business, Technology


How did ancient cities weather crises?


Andrew Robinson has an interesting book review in Nature:

The Life and Death of Ancient Cities: A Natural History, by Greg Woolf (Oxford Univ. Press, 2020)

For millennia, cities have generated power, wealth, creativity, knowledge and magnificent buildings. They have also incubated hunger, violence, war, inequality and disease — as we’ve so painfully experienced this year. The coronavirus pandemic has shaken our faith in urban life, as lockdowns have emptied streets that are home to more than half the world’s population. Basic supply networks have been revealed as fragile, and the densely packed social groups that are engines of income, support and enjoyment have become a source of peril.

As the pandemic forces us to contemplate the future of cities — three-quarters of the world’s people could live in urban areas by 2100 — historian Greg Woolf examines their past. His latest book is a deeply researched and ambitious “natural history” of the origins and growth of urbanism. Woolf is an expert on ancient Rome, the city with the highest population in antiquity — at its peak around 2,000 years ago, a mind-boggling one million people lived there, some 0.3% of the global population. That was in the reign of the emperor Augustus (27 BC to AD 14).

The Life and Death of Ancient Cities spans from the Bronze Age, starting in the fourth millennium BC, to the early part of the Middle Ages, in the first millennium AD. It focuses on the hundreds of ancient Mediterranean cities that sprang up during this time, including Alexandria, Antioch, Athens, Byzantium and Carthage, as well as Rome. Woolf synthesizes intriguing insights from the humanities, social sciences, climatology, geology and biology. He explains that the neoclassical buildings of modern cities, such as London’s British Museum, give a false impression. The famous centres of antiquity were “far less grandiose” — Athenian assemblies, for example, debated in the open air. He wryly notes that rats and humans thrive in cities, because both can survive on diverse food sources and cope with prolonged periods of hunger.

When did cities first appear? The answer depends on definitions. In today’s Nicaragua, notes Woolf, any settlement with street lights and electricity counts as a city. In Japan, a population greater than 50,000 is required. A prime candidate for the world’s first city is perhaps Jericho in what is now the Palestinian territories. It was founded before 9000 BC and about a millennium later had a wall — the earliest such barrier discovered. But Jericho’s population at the time is uncertain. Estimates range from a few hundred to 2,000 or 3,000. As Woolf observes, it is tricky to determine population size in early societies without written records. One option is to analyse the water supply to work out how many people it could have served, but this reveals maximum carrying capacity rather than use, and struggles to take into account public baths and fountains.

Like most specialists, Woolf prefers to give the title of first city to Uruk, in Mesopotamia. This settlement had an estimated 10,000–20,000 inhabitants in 4000 BC, rising to between 60,000 and 140,000 after a massive protective wall, ascribed to King Gilgamesh, was built around 2900 BC. Here, in the late fourth millennium BC, writing probably originated in the form of cuneiform script on clay tablets, used to record bureaucratic information such as economic transactions. One such tablet displays the world’s oldest known mathematical calculation, of the surface area of a roughly rectangular field. Yet the factors that drove the creative outburst that built the city remain mysterious. As Woolf admits: “For all the attention that has been devoted to the Uruk phenomenon, there is still no consensus about why it happened.”

Mediterranean metropolises

Cities arrived much later in the ancient Mediterranean. Athens became an important centre of the Mycenaean civilization around 1400 BC; Rome was founded in the eighth century BC; Alexandria dates from 332 BC. Mediterranean farmers generally lacked access to the flood water and fertile alluvial mud provided by Mesopotamia’s great rivers, the Tigris and Euphrates. For centuries, people lived in villages and hamlets rather than cities, which are at risk from crop failures and water shortages. Again, the driving forces are often debatable. The chief period of Roman urbanization is now known to have coincided with a period of increased temperatures during the last century BC and first two centuries AD. But, as Woolf warns, this might be a coincidence: “It is perfectly possible to explain urbanization without recourse to climate change.”

Another source of uncertainty is how ancient diseases affected urban centres. Written accounts suggest, for example, that . . .

Continue reading.

Written by LeisureGuy

14 July 2020 at 11:01 am

Posted in Books, Daily life

Tagged with

How Pandemics Wreak Havoc—and Open Minds

leave a comment »

Lawrence Wright writes in the New Yorker:

Great crises tend to bring profound social change, for good or ill. The consequences of wars and economic depressions have been amply studied; the consequences of pandemics, less so. This spring, in order to understand our possible future, I decided to look at the past through the eyes of Gianna Pomata, a retired professor at the Institute of the History of Medicine, at Johns Hopkins University. When we first talked, on Skype, she immediately compared COVID-19 to the bubonic plague that struck Europe in the fourteenth century—“not in the number of dead but in terms of shaking up the way people think.” She went on, “The Black Death really marks the end of the Middle Ages and the beginning of something else.” That something else was the Renaissance.

Since retiring, Pomata has returned to her home town, the old city of Bologna. “You know Bologna, right?” she asked in that first conversation, on March 27th. Decades ago, I was the best man at a wedding there. I recalled the giant churches, the red-tiled roofs, the marble walkways under arched porticoes; a stately city, low-slung, amber-hued, full of students and indomitable old couples. During the Middle Ages, Bologna was home to more than a hundred towers, the skyscrapers of their era, which served as showplaces of wealth and ambition for powerful oligarchs. Two of the remaining ones have become symbols of Bologna: one slightly out of plumb, the other as cockeyed as its cousin in Pisa. “You remember the Piazza Maggiore, the very heart of the city near the two towers?” Pomata said. “That’s where I live.”

Pomata’s country had been in a nationwide lockdown since March 10th. “In Italy, the streets are always crowded, night and day,” she said. “Our cities are medieval, made for a different way of life—not for cars but for people. Right now, to see them empty of people is so sad.” When we spoke, the number of confirmed cases in Italy had reached eighty-six thousand. Only the United States had a higher number, having just eclipsed China.

Pomata, who is sixty-nine, has brown hair, with a long, open face. That day, tortoiseshell glasses rested at half-mast on her nose, beneath upward-pointing, quizzical eyebrows. Like me, she was beginning to show the pallor of confinement. Having spent much of her adult life in the United States, her English had little accent, but she retained an Italian lilt, lingering on the broad vowels.

I asked Pomata to imagine walking out of her apartment six hundred and seventy-two years ago, during the Black Death. How would Bologna appear different? “If you try to imagine a plague-stricken city in the Middle Ages, the first thing you’d see would be dead people on the streets,” she said. “Just as we have to send the Army to take coffins to crematories in other cities, as in Bergamo right now, in the Middle Ages they couldn’t cope with so many dead. The bodies just piled up on the streets.” She paused and said, “I don’t have an idyllic vision of the Middle Ages.”

Italy at the beginning of the fourteenth century was a conglomeration of prosperous city-states that had broken free of the feudal system. Some of them, such as Venice, formed merchant republics, which became seedbeds for capitalism. Venice and other coastal cities, including Genoa, Pisa, and Amalfi, set up trading networks and established outposts throughout the Mediterranean and as far away as the Black Sea. Other Italian cities, such as Bologna, became free communes, which meant that peasants fleeing feudal estates were granted freedom once they entered the city walls. Serfs became artisans. A middle class began to form. The early fourteenth century was robust and ambitious. Then, suddenly, people began to die.

Bologna was a stronghold of medical teaching. The city’s famous university, established in 1088, is the oldest in the world. “What they had we call scholastic medicine,” Pomata told me. “When we say ‘scholastic,’ we mean something that is very abstract, not concrete, not empirical.” European scholars at the time studied a number of classical physicians—including Hippocrates, the Greek philosopher of the fifth century B.C. who is considered the father of medicine, and Galen, the second-century Roman who was the most influential medical figure in antiquity—but scholastic medicine was confounded with astrological notions. When the King of France sought to understand the cause of the plague, the medical faculty at the University of Paris blamed a triple conjunction of Saturn, Jupiter, and Mars in the fortieth degree of Aquarius, which had occurred on March 20, 1345.

“Whether it descended on us mortals through the influence of the heavenly bodies or was sent down by God in His righteous anger to chastise us because of our wickedness, it had begun some years before in the East,” Giovanni Boccaccio wrote in the Decameron, which was completed by 1353 and is set during the plague in Florence. “At its onset, in men and women alike, certain swellings would develop in the groin or under the armpits, some of which would grow like an ordinary apple and others like an egg.” These pus-filled swellings, called buboes, were inflammations of the lymph nodes. They eventually erupted. Internal organs broke down in a bloody froth, and bodies darkened with gangrene, which is why the plague came to be called the Black Death.

Before arriving in Italy, the rampaging contagion had already killed millions of people as it burned through China, Russia, India, Persia, Syria, and Asia Minor. It was said that there were entire territories where nobody was left alive. The source of the disease was sometimes thought to be “miasma,” or air that was considered unhealthy, such as sea breezes. Paradoxically, there was also a folk belief that attendants who cleaned latrines were immune, which led some people to confine themselves for hours a day amid human waste, absorbing the presumed medicinal odors. “The advice of doctors and the power of medicine appeared useless and unavailing,” Boccaccio wrote. Some people maintained that “the surest medicine for such an evil disease was to drink heavily, enjoy life’s pleasures, and go about singing and having fun, satisfying their appetites by any means available, while laughing at everything.” Others, he observed, “formed themselves into companies and lived in isolation from everyone else.” The Decameron tells of ten friends who shelter in place, entertaining one another with stories while the plague assails Florence. These ribald tales pay little heed to medieval notions of sacredness or piety; indeed, the society that the sequestered young people describe is amoral and cheerfully hypocritical. Priests are portrayed as stupid, lustful, greedy connivers. Illicit sex is exalted. The earthy realism of the Decameron, written in Italian vernacular rather than in classical Latin verse, sounded one of the opening notes of the Renaissance.

Pomata told me, “What happens after the Black Death, it’s like a wind—fresh air coming in, the fresh air of common sense.” The intellectual overthrow of the scholastic-medicine establishment in the Middle Ages was caused by doctors who set aside the classical texts and gradually turned to empirical evidence. It was a revival of medical science, which had been dismissed after the fall of ancient Rome, a thousand years earlier. “After the Black Death, nothing was the same,” Pomata said. “What I expect now is something as dramatic is going to happen, not so much in medicine but in economy and culture. Because of danger, there’s this wonderful human response, which is to think in a new way.”

In the fourteenth century, Tartar warriors in Crimea laid siege to the Black Sea port city of Caffa, which was owned by a group of wealthy Genoese traders. Like so many armies in history, the Tartars were also fighting an unseen enemy: they carried with them a horrible disease, which killed some victims in a few days, and left others to die in indolent agony. Before retreating from Caffa, the Tartar general, Khan Jani Beg, ordered the diseased bodies of dead warriors catapulted over the city walls, in one of the first instances of biological warfare. Panicked citizens took to boats, navigating through the Dardanelles into the Aegean Sea and the Mediterranean. A dozen ships made it to Sicily, in October, 1347.

Sicilians were appalled to find on their shores boats with dead men still at their oars. Other sailors, dead or barely alive, were in their bunks, covered with foul-smelling sores. The horrified Sicilians drove the ships back to sea, but it was too late. Rats and fleas . . .

Continue reading.

Written by LeisureGuy

13 July 2020 at 5:58 pm

Posted in Daily life, Medical

Tagged with

Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages

leave a comment »

A very interesting, very lengthy, profusely illustrated post by Ex Urbe at the eponymous website:

“If the Black Death caused the Renaissance, will COVID also create a golden age?”

Versions of this question have been going around as people, trying to understand the present crisis, reach for history’s most famous pandemic.  Using history to understand our present is a great impulse, but it means some of the false myths we tell about the Black Death and Renaissance are doing new damage, one of the most problematic in my view being the idea that sitting back and letting COVID kill will somehow by itself naturally make the economy turn around and enter a period of growth and rising wages.

Brilliant Medievalists have been posting Black Death pieces correcting misconceptions and flailing as one does when an error refuted 50 times returns the 51st (The Middle Ages weren’t dark and bad compared to the Renaissance!!!).  As a Renaissance historian, I feel it’s my job to shoulder the other half of the load by talking about what the Renaissance was like, confirming that our Medievalists are right, it wasn’t a better time to live than the Middle Ages, and to talk about where the error comes from, why we think of the Renaissance as a golden age, and where we got the myth of the bad Middle Ages.

Only half of this is a story about the Renaissance.  The other half is later: Victorian Britain, Italy’s unification, World Wars I and II, the Cold War, ages in which the myth of the golden Renaissance was appropriated and retold.  And yes, looking at the Black Death and Renaissance is helpful for understanding COVID-19’s likely impact, but in addition to looking at 1348 we need to look at its long aftermath, at the impact Yersinia Pestis had on 1400, and 1500, and 1600, and 1700.  So:

  • This post is for you if you’ve been wondering whether Black Death => Renaissance means COVID => Golden Age, and you want a more robust answer than, “No no no no no!”
  • This post is for you if you’re tired of screaming The Middle Ages weren’t dark and bad! and want somewhere to link people to, to show them how the myth began.
  • This post is for you if you want to understand how an age whose relics make it look golden in retrospect can also be a terrible age to live in.
  • And this post is for you if you want to ask what history can tell us about 2020 and come away with hope. Because comparing 2020 to the Renaissance does give me hope, but it’s not the hope of sitting back expecting the gears of history to grind on toward prosperity, and it’s not the hope for something like the Renaissance—it’s hope for something much, much better, but a thing we have to work for, all of us, and hard.

I started writing this post a few weeks ago but rapidly discovered that a thorough answer will be book-length (the book’s now nearly done in fact).  What I’m sharing now is just a précis, the parts I think you’ll find most useful now.  So sometimes I’ll make a claim without examples, or move quickly over important things, just linking to a book instead of explaining, because my explanation is approaching 100,000 words.  That book will come, and soon, but meanwhile please trust me as I give you just the urgent parts, and I promise more will follow.

Continue reading. There’s very much more, and also interesting comments.

Written by LeisureGuy

7 July 2020 at 3:26 pm

Posted in Books, Daily life, Medical

Tagged with

1 in 200 Men Are Direct Descendants of Genghis Khan

leave a comment »

That Genghis! Here’s the article.

Written by LeisureGuy

27 June 2020 at 1:19 pm

Posted in Science

Tagged with

It Can Happen Here

leave a comment »

In the New York Review of Books Cass R. Sunstein reviews a couple of ominous books about how a public can blind itself to what is happening (review also available in complete form here):

They Thought They Were Free: The Germans, 1933–45
by Milton Mayer, with a new afterword by Richard J. Evans
University of Chicago Press, 378 pp., $20.00 (paper)

Broken Lives: How Ordinary Germans Experienced the Twentieth Century
by Konrad H. Jarausch
Princeton University Press, 446 pp., $35.00

Liberal democracy has enjoyed much better days. Vladimir Putin has entrenched authoritarian rule and is firmly in charge of a resurgent Russia. In global influence, China may have surpassed the United States, and Chinese president Xi Jinping is now empowered to remain in office indefinitely. In light of recent turns toward authoritarianism in Turkey, Poland, Hungary, and the Philippines, there is widespread talk of a “democratic recession.” In the United States, President Donald Trump may not be sufficiently committed to constitutional principles of democratic government.

In such a time, we might be tempted to try to learn something from earlier turns toward authoritarianism, particularly the triumphant rise of the Nazis in Germany in the 1930s. The problem is that Nazism was so horrifying and so barbaric that for many people in nations where authoritarianism is now achieving a foothold, it is hard to see parallels between Hitler’s regime and their own governments. Many accounts of the Nazi period depict a barely imaginable series of events, a nation gone mad. That makes it easy to take comfort in the thought that it can’t happen again.

But some depictions of Hitler’s rise are more intimate and personal. They focus less on well-known leaders, significant events, state propaganda, murders, and war, and more on the details of individual lives. They help explain how people can not only participate in dreadful things but also stand by quietly and live fairly ordinary days in the midst of them. They offer lessons for people who now live with genuine horrors, and also for those to whom horrors may never come but who live in nations where democratic practices and norms are under severe pressure.

Milton Mayer’s 1955 classic They Thought They Were Free, recently republished with an afterword by the Cambridge historian Richard J. Evans, was one of the first accounts of ordinary life under Nazism. Dotted with humor and written with an improbably light touch, it provides a jarring contrast with Sebastian Haffner’s devastating, unfinished 1939 memoir, Defying Hitler, which gives a moment-by-moment, you-are-there feeling to Hitler’s rise. (The manuscript was discovered by Haffner’s son after the author’s death and published in 2000 in Germany, where it became an immediate sensation.) A much broader perspective comes from Konrad Jarausch’s Broken Lives, an effort to reconstruct the experience of Germans across the entire twentieth century. What distinguishes the three books is their sense of intimacy. They do not focus on historic figures making transformative decisions. They explore how ordinary people attempted to navigate their lives under terrible conditions.

Haffner’s real name was Raimund Pretzel. (He used a pseudonym so as not to endanger his family while in exile in England.) He was a journalist, not a historian or political theorist, but he interrupts his riveting narrative to tackle a broad question: “What is history, and where does it take place?” He objects that most works of history give “the impression that no more than a few dozen people are involved, who happen to be ‘at the helm of the ship of state’ and whose deeds and decisions form what is called history.” In his view, that’s wrong. What matters are “we anonymous others” who are not just “pawns in the chess game,” because the “most powerful dictators, ministers, and generals are powerless against the simultaneous mass decisions taken individually and almost unconsciously by the population at large.” Haffner insists on the importance of investigating “some very peculiar, very revealing, mental processes and experiences,” involving “the private lives, emotions and thoughts of individual Germans.”
Mayer had the same aim. An American journalist of German descent, he tried to meet with Hitler in 1935. He failed, but he did travel widely in Nazi Germany. Stunned to discover a mass movement rather than a tyranny of a diabolical few, he concluded that his real interest was not in Hitler but in people like himself, to whom “something had happened that had not (or at least not yet) happened to me and my fellow-countrymen.” In 1951, he returned to Germany to find out what had made Nazism possible.

In They Thought They Were Free, Mayer decided to focus on ten people, different in many respects but with one characteristic in common: they had all been members of the Nazi Party. Eventually they agreed to talk, accepting his explanation that he hoped to enable the people of his nation to have a better understanding of Germany. Mayer was truthful about that and about nearly everything else. But he did not tell them that he was a Jew.

In the late 1930s—the period that most interested Mayer—his subjects were working as a janitor, a soldier, a cabinetmaker, an office manager, a baker, a bill collector, an inspector, a high school teacher, and a police officer. One had been a high school student. All were male. None of them occupied positions of leadership or influence. All of them referred to themselves as “wir kleine Leute, we little people.” They lived in Marburg, a university town on the river Lahn, not far from Frankfurt.

Mayer talked with them over the course of a year, under informal conditions—coffee, meals, and long, relaxed evenings. He became friends with each (and throughout he refers to them as such). As he put it, with evident surprise, “I liked them. I couldn’t help it.” They could be ironic, funny, and self-deprecating. Most of them enjoyed a joke that originated in Nazi Germany: “What is an Aryan? An Aryan is a man who is tall like Hitler, blond like Goebbels, and lithe like Göring.” They also could be wise. Speaking of the views of ordinary people under Hitler, one of them asked:

Opposition? How would anybody know? How would anybody know what somebody else opposes or doesn’t oppose? That a man says he opposes or doesn’t oppose depends upon the circumstances, where, and when, and to whom, and just how he says it. And then you must still guess why he says what he says.

When Mayer returned home, he was afraid for his own country. He felt “that it was not German Man that I had met, but Man,” and that under the right conditions, he could well have turned out as his German friends did. He learned that Nazism took over Germany not “by subversion from within, but with a whoop and a holler.” Many Germans “wanted it; they got it; and they liked it.”

Mayer’s most stunning conclusion is that with one partial exception (the teacher), none of his subjects “saw Nazism as we—you and I—saw it in any respect.” Where most of us understand Nazism as a form of tyranny, Mayer’s subjects “did not know before 1933 that Nazism was evil. They did not know between 1933 and 1945 that it was evil. And they do not know it now.” Seven years after the war, they looked back on the period from 1933 to 1939 as the best time of their lives.

Mayer suggests that even when tyrannical governments do horrific things, outsiders tend to exaggerate their effects on the actual experiences of most citizens, who focus on their own lives and “the sights which meet them in their daily rounds.” Nazism made things better for the people Mayer interviewed, not (as many think) because it restored some lost national pride but because it improved daily life. Germans had jobs and better housing. They were able to vacation in Norway or Spain through the “Strength Through Joy” program. Fewer people were hungry or cold, and the sick were more likely to receive treatment. The blessings of the New Order, as it was called, seemed to be enjoyed by “everybody.”
Even in retrospect Mayer’s subjects liked and admired Hitler. They saw him as someone who had “a feeling for masses of people” and spoke directly in opposition to the Versailles Treaty, to unemployment—to all aspects of the existing order. They applauded Hitler for his rejection of “the whole pack”—“all the parliamentary politicians and all the parliamentary parties”—and for his “cleanup of moral degenerates.” The bank clerk described Hitler as “a spellbinder, a natural orator. I think he was carried away from truth, even from truth, by his passion. Even so, he always believed what he said.”

Mayer did not bring up the topic of anti-Semitism with any of his subjects, but after a few meetings, each of them did so on his own, and they returned to it constantly. When the local synagogue was burned in 1938, most of the community was under only one obligation: “not to interfere.” Eventually Mayer showed his subjects the local newspaper from November 11, 1938, which contained a report: “In the interest of their own security, a number of male Jews were taken into custody yesterday. This morning they were sent away from the city.” None of them remembered seeing it, or indeed anything like it.

The killing of six million Jews? Fake news. Four of Mayer’s subjects insisted that the only Jews taken to concentration camps were traitors to Germany, and that the rest were permitted to leave with their property or its fair market value. The bill collector agreed that the killing of the Jews “was wrong, unless they committed treason in wartime. And of course they did.” He added that “some say it happened and some say it didn’t,” and that you “can show me pictures of skulls…but that doesn’t prove it.” In any case, “Hitler had nothing to do with it.” The tailor spoke similarly: “If it happened, it was wrong. But I don’t believe it happened.”

With evident fatigue, the baker reported, “One had no time to think. There was so much going on.” His account was similar to that of one of Mayer’s colleagues, a German philologist in the country at the time, who emphasized the devastatingly incremental nature of the descent into tyranny and said that “we had no time to think about these dreadful things that were growing, little by little, all around us.” The philologist pointed to a regime bent on diverting its people through endless dramas (often involving real or imagined enemies), and “the gradual habituation of the people, little by little, to being governed by surprise.” In his account, “each step was so small, so inconsequential, so well explained or, on occasion, ‘regretted,’” that people could no more see it “developing from day to day than a farmer in his field sees the corn growing. One day it is over his head.”
Focusing largely on 1933, in Defying Hitler Haffner offers a radically different picture, in which the true nature of Nazism was evident to many Germans from the start. Just twenty-five years old that year and studying law with the goal of becoming a judge or administrator, he describes the mounting effects of Nazism on the lives of his high-spirited friends and fellow students, who were preoccupied with fun, job prospects, and love affairs. Haffner says that as soon as the Nazis took power, he was saved by his capacity to smell the rot:

As for the Nazis, my nose left me with no doubts. It was just tiresome to talk about which of their alleged goals and intentions were still acceptable or even “historically justified” when all of it stank. How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset.

As Haffner describes it, a form of terror began quickly, as members of the SS made their presence felt, intimidating people in public places. At the same time, citizens were distracted by an endless stream of festivities and celebrations. The intimidation, accompanied by the fervent, orchestrated pro-Nazi activity, produced an increase in fear, which led many skeptics to become Nazis. Nonetheless, people flirted, enjoyed romances, “went to the cinema, had a meal in a small wine bar, drank Chianti, and went dancing together.” Sounding here like Mayer’s subjects, Haffner writes that it was the “automatic continuation of ordinary life” that “hindered any lively, forceful reaction against the horror.”

Continue reading.

Written by LeisureGuy

15 June 2020 at 12:02 pm

Posted in Books, Daily life, Government, Law, Politics

Tagged with

The Myth of the Kindly General Robert E. Lee

leave a comment »

Adam Serwer wrote in the Atlantic three years ago:

The strangest part about the continued personality cult of Robert E. Lee is how few of the qualities his admirers profess to see in him he actually possessed.

Memorial Day has the tendency to conjure up old arguments about the Civil War. That’s understandable; it was created to mourn the dead of a war in which the Union was nearly destroyed, when half the country rose up in rebellion in defense of slavery. This year, the removal of Lee’s statue in New Orleans has inspired a new round of commentary about Lee, not to mention protests on his behalf by white supremacists.

The myth of Lee goes something like this: He was a brilliant strategist and devoted Christian man who abhorred slavery and labored tirelessly after the war to bring the country back together.

There is little truth in this. Lee was a devout Christian, and historians regard him as an accomplished tactician. But despite his ability to win individual battles, his decision to fight a conventional war against the more densely populated and industrialized North is considered by many historians to have been a fatal strategic error.

But even if one conceded Lee’s military prowess, he would still be responsible for the deaths of hundreds of thousands of Americans in defense of the South’s authority to own millions of human beings as property because they are black. Lee’s elevation is a key part of a 150-year-old propaganda campaign designed to erase slavery as the cause of the war and whitewash the Confederate cause as a noble one. That ideology is known as the Lost Cause, and as the historian David Blight writes, it provided a “foundation on which Southerners built the Jim Crow system.”

There are unwitting victims of this campaign—those who lack the knowledge to separate history from sentiment. Then there are those whose reverence for Lee relies on replacing the actual Lee with a mythical figure who never truly existed.

In the Richmond Times-Dispatch, R. David Cox wrote that “for white supremacist protesters to invoke his name violates Lee’s most fundamental convictions.” In the conservative publication Townhall, Jack Kerwick concluded that Lee was “among the finest human beings that has ever walked the Earth.” John Daniel Davidson, in an essay for The Federalist, opposed the removal of the Lee statue in part on the grounds that Lee “arguably did more than anyone to unite the country after the war and bind up its wounds.” Praise for Lee of this sort has flowed forth from past historians and presidents alike.

This is too divorced from Lee’s actual life to even be classed as fan fiction; it is simply historical illiteracy.

White supremacy does not “violate” Lee’s “most fundamental convictions.” White supremacy was one of Lee’s most fundamental convictions.

Lee was a slave owner—his own views on slavery were explicated in an 1856 letter that is often misquoted to give the impression that Lee was some kind of abolitionist. In the letter, he describes slavery as “a moral & political evil,” but goes on to explain that:

I think it however a greater evil to the white man than to the black race, & while my feelings are strongly enlisted in behalf of the latter, my sympathies are more strong for the former. The blacks are immeasurably better off here than in Africa, morally, socially & physically. The painful discipline they are undergoing, is necessary for their instruction as a race, & I hope will prepare & lead them to better things. How long their subjugation may be necessary is known & ordered by a wise Merciful Providence. Their emancipation will sooner result from the mild & melting influence of Christianity, than the storms & tempests of fiery Controversy.

The argument here is that slavery is bad for white people, good for black people, and most important, better than abolitionism; emancipation must wait for divine intervention. That black people might not want to be slaves does not enter into the equation; their opinion on the subject of their own bondage is not even an afterthought to Lee.

Lee’s cruelty as a slave master was not confined to physical punishment. In Reading the Man, the historian Elizabeth Brown Pryor’s portrait of Lee through his writings, Pryor writes that “Lee ruptured the Washington and Custis tradition of respecting slave families” by hiring them off to other plantations, and that “by 1860 he had broken up every family but one on the estate, some of whom had been together since Mount Vernon days.” The separation of slave families was one of the most unfathomably devastating aspects of slavery, and Pryor wrote that Lee’s slaves regarded him as “the worst man I ever see.”

The trauma of rupturing families lasted lifetimes for the enslaved—it was, as my colleague Ta-Nehisi Coates described it, “a kind of murder.” After the war, thousands of the emancipated searched desperately for kin lost to the market for human flesh, fruitlessly for most. In Reconstruction, the historian Eric Foner quotes a Freedmen’s Bureau agent who notes of the emancipated, “In their eyes, the work of emancipation was incomplete until the families which had been dispersed by slavery were reunited.”

Lee’s heavy hand on the Arlington, Virginia, plantation, Pryor writes, nearly led to a slave revolt, in part because the enslaved had been expected to be freed upon their previous master’s death, and Lee had engaged in a dubious legal interpretation of his will in order to keep them as his property, one that lasted until a Virginia court forced him to free them.

When two of his slaves escaped and were recaptured, Lee either beat them himself or ordered the overseer to “lay it on well.” Wesley Norris, one of the slaves who was whipped, recalled that “not satisfied with simply lacerating our naked flesh, Gen. Lee then ordered the overseer to thoroughly wash our backs with brine, which was done.”

Every state that seceded mentioned slavery as the cause in their declarations of secession. Lee’s beloved Virginia was no different, accusing the federal government of “perverting” its powers “not only to the injury of the people of Virginia, but to the oppression of the Southern Slaveholding States.” Lee’s decision to fight for the South can only be described as a choice to fight for the continued existence of human bondage in America—even though for the Union, it was not at first a war for emancipation.

During his invasion of Pennsylvania, Lee’s Army of Northern Virginia enslaved free black Americans and brought them back to the South as property. Pryor writes that “evidence links virtually every infantry and cavalry unit in Lee’s army” to the abduction of free black Americans, “with the activity under the supervision of senior officers.”

Soldiers under Lee’s command at the Battle of the Crater in 1864 massacred black Union soldiers who tried to surrender. Then, in a spectacle hatched by Lee’s senior corps commander, A. P. Hill, the Confederates paraded the Union survivors through the streets of Petersburg to the slurs and jeers of the southern crowd. Lee never discouraged such behavior. As the historian Richard Slotkin wrote in No Quarter: The Battle of the Crater, “his silence was permissive.”

The presence of black soldiers on the field of battle shattered every myth that the South’s slave empire was built on: the happy docility of slaves, their intellectual inferiority, their cowardice, their inability to compete with white people. As Pryor writes, “fighting against brave and competent African Americans challenged every underlying tenet of southern society.” The Confederate response to this challenge was to visit every possible atrocity and cruelty upon black soldiers whenever possible, from enslavement to execution.

As the historian James McPherson recounts in Battle Cry of Freedom, in October of that same year, Lee proposed an exchange of prisoners with the Union general Ulysses S. Grant. “Grant agreed, on condition that black soldiers be exchanged ‘the same as white soldiers.’” Lee’s response was that “negroes belonging to our citizens are not considered subjects of exchange and were not included in my proposition.” Because slavery was the cause for which Lee fought, he could hardly be expected to easily concede, even at the cost of the freedom of his own men, that black people could be treated as soldiers and not things. Grant refused the offer, telling Lee . . .

Continue reading.

Written by LeisureGuy

11 June 2020 at 6:59 pm

The Common Seaman in the Heroic Age of Sail 1740–1840

leave a comment »

Sons of the Waves, by Stephen Taylor, seems a useful book to read in conjunction with Patrick O’Brian’s series of British naval novels, which begins with the trilogy:

Master and Commander
Post Captain
HMS Surprise.

Matthew Lyons reviews the book in Literary Review:

Early in the 19th century, there were some 260,000 of them across Britain’s naval and merchant fleets. People called them Jacks, but they are mostly nameless – or nameless to history. Even on surviving muster lists, seamen’s identities can be hidden behind pseudonyms. Some of these – George Million or Jacob Blackbeard, say – express a degree of wish fulfilment. Others are more whimsical: a Mark Anthony and Julius Caesar could be found on board the Calcutta-bound Tyger in 1757.

To join them was to enter another world, with its own laws (the thirty-six Articles of War, read to them every Sunday, besides whatever strictures a captain thought fit to apply), its own rituals and its own argot. ‘All seemed strange,’ one former ship’s boy recalled of his first days on board, ‘different language and strange expressions of tongue, that I thought myself always asleep, and never properly awake.’

There were, of course, distinctions among them. The lowest of the low were the waisters, comprised of old men, boys and the most inexperienced landsmen, good for nothing but drudgery. Then came the afterguard, consisting of ordinary seamen and more skilled landsmen, who trimmed the after yards and the sails. Above them were the forecastlemen, able seamen who handled the lower ropes and saw to weighing and anchoring. Princes over all of them were the topmen (or Foremast Jacks), who went aloft to bend or reef the sails, even in the highest of seas.

As Stephen Taylor argues in this enthralling new book, it was men like these who, in the great age of sail, made the British Empire possible. He tells the story of Britain’s rise to maritime supremacy in roughly the century from 1750 to 1850, using first-hand accounts of life on the lower decks, official records – ships’ logs, muster rolls, court martials and so on – and other contemporary sources.

Because of the immediacy of these sources, and Taylor’s deft, incisive use of them, it is the men, not the nation, to whom Sons of the Waves belongs. ‘Out of the King’s service they are in general citizens of the world,’ one officer wrote of them. Jacks might have made the British Empire possible, but they were only circumstantially loyal to it.

When their personal discontent became intolerable, they deserted in their tens of thousands. Nelson himself reckoned that 42,000 deserted between 1793 and 1802 alone, a figure Taylor believes may be on the low side. Their skills made them highly prized commodities and they were happy to sail under any flag, towards any compass point. The institution that valued that commodity least was the Royal Navy.

Perhaps the most resented British naval practice in this period was . . .

Continue reading.

 

Written by LeisureGuy

8 June 2020 at 2:18 pm

Posted in Books, Daily life, Military

Tagged with

The Pillage of India

leave a comment »

I found this book review interesting because I’m getting the feeling that the US is being pillaged as well. Christopher de Bellaigue writes in the NY Review of Books:

The Anarchy: The East India Company, Corporate Violence, and the Pillage of an Empire
by William Dalrymple
Bloomsbury, 522 pp., $35.00

Inglorious Empire: What the British Did to India
by Shashi Tharoor
Melbourne: Scribe, 294 pp., $17.95 (paper)

In the eighteenth century a career with the East India Company was a throw of the dice for unattached young British men. Arriving in India wan and scurvy after a year at sea, many quickly succumbed to disease, madness, or one of the innumerable little wars that the company fought in order to embed itself on the subcontinent. The salary was hardly an incentive. In the 1720s junior clerks, or “writers,” received just £5 per year, not enough to live on in Bengal or Madras and a pittance when set against the handsome 8 percent annual dividend the company’s shareholders awarded themselves back in London. Such drawbacks tended to put off all but those whom circumstances had already disfavored: second sons, members of the down-at-heel Anglo-Irish gentry, dispossessed Scottish landowners who had backed the losing side in a rebellion against the crown.

Being on the company payroll was rather a means to an end; moonlighting was where the money lay in one of the richest places on earth. In 1700 India is estimated to have accounted for 27 percent of the world economy and a quarter of the global textile trade. A considerable number of company employees who survived the shock of arrival went on to make fortunes from off-books trading in textiles, saltpeter, indigo, opium, salt, tobacco, betel, rice, and sugar; sidelines also included selling Mughal-issued tax exemptions and lending money to distressed Indian grandees.

The wills of company officials in the early 1780s show that one in three left their wealth to Indian wives, or as one put it, “the excellent and respectable Mother of my two children for whom I feel unbounded love and affection and esteem.” Others went home. Newly enriched returnees elbowed their way into high society and were rewarded with a moniker, “nabob,” which derived from an Indian word for prince, nawab, and signified an Indian-made plutocrat of boundless amorality.

Neither the directors in Leadenhall Street, the company’s headquarters in the City of London, nor the Mughal authorities who had granted the company its trading privileges in return for “presents” and taxes, approved of the nabobs’ freelancing. But the directors didn’t particularly mind, provided that the thirty-odd ships that sailed east every year from England’s south coast returned laden with luxury imports, along with a share of the taxes collected from the Indian enclaves that the company controlled. All the while the authority of the emperor, the unwarlike Shah Alam, was crumbling under the pressure of repeated Maratha, Afghan, and Iranian incursions into the Mughal heartland of the Gangetic Plain. These and the foragings of another group of armed Europeans, the French Compagnie des Indes, turned what the Mughal chronicler Fakir Khair ud-Din Illahabadi called “the once peaceful abode of India” into “the abode of Anarchy.”

Through adroit use of its well-trained, disciplined armies, over the course of the eighteenth century the company expanded its influence inland from the three littoral “Presidencies” of Calcutta, Madras, and Bombay. By the 1750s, William Dalrymple tells us in The Anarchy, his new account of the rise of the company, it accounted for almost an eighth of Britain’s total imports of £8 million and contributed nearly a third of a million pounds to the home exchequer in annual customs duties.

A well-known historian both in his native Britain and his adoptive India, where he cofounded what may be the world’s biggest literary festival, at Jaipur, Dalrymple has influenced the scholarly as well as the popular understanding of South Asian history through his use of both European and Indian sources, thus uniting the halves of a previously bisected whole. (To pick just two examples from the extensive company literature, both John Keay’s 1993 book, The Honourable Company, which also deals with its extensive involvement in Southeast Asia, and Nick Robins’s commercial history, The Corporation That Changed the World, from 2012, are entirely reliant on British sources.) Dalrymple’s ability to present events from an Indian as well as a European perspective owes much to his mining of the National Archives in Delhi and his collaboration with the late Bruce Wannell, a waspish global flaneur and gifted linguist who lived in a tent on Dalrymple’s lawn in South Delhi while translating Mughal-era texts for him.

The company was transformed into an instrument of imperialism under Robert Clive, a terse, pugnacious delinquent from Shropshire. After arriving in Madras as a writer in 1744, Clive distinguished himself on the battlefield, making up in daring what he lacked in experience. In 1752 he and a fellow officer led a company force that took prisoner almost three thousand troops from the Compagnie des Indes, for which he was rewarded with a lucrative sinecure.

In 1756, after a spell back home, Clive’s taste for conquest and treasure took him to Bengal, whose production of silks and muslins made it the biggest supplier of Asian goods to Europe. In 1757 Clive led the company’s forces to victory against both the French and the uncooperative local nawab; from defeating the latter the company received what Dalrymple calls “one of the largest corporate windfalls in history”—in modern terms around £232 million. Clive himself pocketed an astronomical £22 million, with which he went on to acquire a string of desirable British properties, including an estate outside Limerick to go with his Irish peerage, while Lady Clive, as the Salisbury Journal informed its readers, garlanded her pet ferret with a diamond necklace worth more than £2,500.

Besides his military exploits Clive was admired by the directors for his administrative vigor, and he ended his Indian career as governor of Bengal. In 1765—two years before he returned to Britain for good—he secured his most substantive legacy when he forced Shah Alam to recognize the company’s financial authority over three of his richest provinces, Bengal, Bihar, and Orissa. A Mughal chronicler lamented that the British “have appointed their own district officers, they make assessments and collections of revenue, administer justice, appoint and dismiss collectors…heaven knows what will be the eventual upshot of this state of things.”

The baneful consequences of a commercial concern enjoying political power but answering only to its shareholders became apparent during the Bengal famine of 1770–1771. Company officers exacted dues from a dying populace as diligently as they had from a healthy one. Tax evaders were publicly hanged. The following year Calcutta informed Leadenhall Street that “notwithstanding the great severity of the late famine…some increase [in revenue] has been made.”

While at least one million Bengalis were dying of the famine and its effects, some company employees enriched themselves by hoarding rice. According to one anonymous whistleblower whose account was published in the Gentleman’s Magazine back in London:

Our Gentlemen in many places purchased the rice at 120 and 140 seers a rupee [a seer was about two pounds], which they afterwards sold for 15 seers a rupee, to the Black [Indian] merchants, so that the persons principally concerned have made great fortunes by it; and one of our writers…not esteemed to be worth 1,000 rupees last year, has sent down it is said £60,000 to be remitted home this year.

In Calcutta, the same source went on, “one could not pass the streets without seeing multitudes in their last agonies,” while “numbers of dead were seen with dogs, jackalls, hogs, vultures and other birds and beasts of prey feeding on their carcases.”

Back home, denunciations of the company’s conduct equaled in vehemence anything that would be uttered by nationalist Indians in the later stages of British rule. One satire attacked the directors of the company, among them “Sir Janus Blubber,” “Caliban Clodpate,” “Sir Judas Venom,” and “Lord Vulture,” as a “scandalous confederacy to plunder and strip.” But when Clive was investigated by Parliament on charges of amassing a fortune illegally, his achievements in defeating the French and increasing company revenues counted for more than the regime of plunder he had overseen—and Parliament included company shareholders and men who owed their seats to his largesse. Clive was exonerated in May 1773. The following year he committed suicide. He had, Samuel Johnson wrote, “acquired his fortune by such crimes that his consciousness of them impelled him to cut his own throat.”

The company was now a permanent subject of controversy in Britain, which was, in strenuous, unemphatic fits, moving from absolutism to accountability. But only rarely . . .

Continue reading.

Written by LeisureGuy

29 May 2020 at 8:06 pm

Posted in Business, Government, Law, Military, Politics

Tagged with

Germans reflect on the Allies’ arrival in Europe in 1945.

leave a comment »

Technically, I suppose, the Allies arrived in mainland Europe on 3 September 1943 (the landings in Calabria, followed by Salerno on 9 September) and, more familiarly, on 6 June 1944 (the Normandy invasion). (My uncle Choc took part in the Italian campaign.) But the Allies did not cross the Rhine in force until 22 March 1945.

Der Spiegel has a staff-written report:

Georg Stefan Troller sits in his apartment overlooking the rooftops of Paris, with a piece of cake in front of him. He has agreed to take a trip down memory lane. Outside his window is the city’s 7th arrondissement. Hail pelts the glass as Troller talks about the war. He tells how he, a Jew from Vienna, enlisted in the United States Army and returned to a decimated Europe. How he marched into Munich, the “capital of the Movement.” How he stood in Hitler’s apartment in the city one day and walked through the recently liberated Dachau concentration camp shortly thereafter. Even 75 years later, he hasn’t forgotten the shock he felt that day.

Troller was born in 1921. He’ll turn 99 this December. DER SPIEGEL visited him and other witnesses of that dark period in Europe’s past — in Paris, Hamburg, Moscow, Bonn and Berlin. When visits were impossible due to the pandemic, interviews were conducted over the phone. Witnesses’ memories were supplemented by diary entries that have been either published (like those of the German author Thomas Mann) or preserved at the German Diary Archive in Emmendingen (as was the case with Annemarie and Johann von Duhn, Hans Diester, Insa Radomski and 7-year-old Theodor Gruschka). The questions were always the same: How did Germans spend the summer of 1945? What influenced their day-to-day lives? Were they depressed? Exhausted? Ashamed?

Not every memory can be traced back to a specific date. To a degree, reporting from the postwar period has overwritten survivors’ personal recollections. But one thing is certain: In the collective memory, the summer of 1945 began on May 8, right after Nazi Germany’s unconditional surrender. It marked both a new beginning and a transition. Indeed, 1945 was an epochal year. There were more than 6 million dead in Germany alone, and more than 60 million dead worldwide. In Europe, 6 million Jews had been murdered. German cities lay under a billion tons of rubble. That was the present. The future lay in the division of Europe, in the disintegration of the world into blocks and in the dawning of the nuclear age.

It became clear that the end of the war would not instantly mean an end to people’s suffering. The violence spilled over into the summer, into the years of reconstruction, into generations to come. It was striking how significant people’s longing was for a life of privacy after years of mass mobilization and appeals for national unity.

And everyone learned in a very short time just what freedom meant. Hans-Jochen Vogel, who later became the head of the Social Democratic Party, tried chewing gum for the first time in his life. The journalist Wolf Schneider developed a fear of bridges. Friedrich Nowottny, who went on to head the German public broadcaster WDR, traded SS skull rings for cigarettes from American soldiers who collected them as Nazi souvenirs. The married couple Annemarie and Johann von Duhn sewed flags of the four victorious powers from rags and an old swastika banner. Future German Chancellor Helmut Schmidt experimented with a coffee substitute. A restaurant owner from Hamburg realized that “Hitler” was an unfortunate last name to have. Hans Modrow, who years later became the last prime minister of communist East Germany, met a Red Army soldier who could quote the German novelist Heinrich Heine. Marianne von Kretschmann, later Marianne von Weizsäcker, longed for school to begin. Klaus von Dohnanyi rode through Germany on a ladies’ bicycle and enjoyed the most unforgettable breakfast of his life. A young Theodor Gruschka observed that during a raid, sometimes “nude females” would appear. Fighting was still going on in the Pacific. And Martin Walser met the woman of his dreams that summer.

May 9

The day after its capitulation, the Third Reich made its last public announcement. Radio host Klaus Kahlenberg read aloud a message at 8:03 p.m. on the state-owned radio station in Flensburg. It began: “The Wehrmacht High Command announces.”

Neither the High Command nor the Wehrmacht existed anymore. Both had unconditionally surrendered the day before. “The German Wehrmacht has been honorably defeated at the hands of massively superior forces. We are broadcasting the text of the final Wehrmacht report of the war. Radio silence will be maintained for three minutes.”

Germany’s “zero hour” lasted three minutes.

Hans-Jochen Vogel pinned his German translation of an article in the U.S. Army newspaper Stars and Stripes to a bulletin board. That was his job in the Coltano prison camp near Pisa, Italy. “To translate things I considered important into German and post them on a bulletin board.”

The Russian Nikolai Pudow, a captain of the Red Army and an occupying soldier, experienced the first day of peace in a village on the Elbe River near the city of Wittenberg. The restaurants, he said, were teeming with military men in civilian clothing. Their posture exposed them as officers. “The Germans were very intimidated,” he recalls. “There were posters all over the villages: a giant ear, the enemy is listening, Red Army soldiers with bloody claws for hands. Most of the German words I used to know I’ve since forgotten. Except for ‘Untermensch (subhuman).'”

Near Greifswald, a young man with the Volkssturm, the Nazis’ last-ditch defensive army, found himself in Russian captivity: Hans Modrow, 17 years old at the time, wanted to walk home along the railway tracks, to Jasenitz. What he hadn’t considered was that the Red Army was keeping a close eye on the tracks, for fear of acts of sabotage by the Werewolves, a group of Nazi guerrilla fighters who tried to slow the Allied occupation.

Far away in Los Alamos, New Mexico, the Interim Committee on Atomic Energy met for the first time that day. Among those who attended the meeting were

Continue reading.

Written by LeisureGuy

6 May 2020 at 6:35 pm

Posted in Daily life

Tagged with

Aztecs beyond their sacrificial rituals

leave a comment »

Matthew Restall writes in History Today:

Everything you thought you knew about the Aztecs is wrong. Or, as Camilla Townsend more tactfully puts it at the start of her wonderful new book: ‘The Aztecs would never recognize themselves in the picture of their world that exists in the books and movies we have made.’

The picture to which Townsend refers is perhaps best symbolised for British readers by the image on the cover of the original Angry Aztecs volume in Terry Deary’s Horrible Histories series (1997): a cartoon depicts an Aztec warrior holding a fresh human heart, saying ‘His heart was in the right place’ (covers of other editions show variations on this theme, save for a 2014 edition depicting a rat in Aztec warrior garb). The joke works because the association of the Aztecs with the practice of human sacrifice runs deep and wide: most people who know only one thing about the Aztecs know that they are famous for sacrificing people to their gods; and those who are more familiar with the Aztecs – including those who, for example, teach in schools or universities – tend to think of Aztec culture as one in which bloodthirsty rituals and exotic superstitions played central roles.

In recent decades, a growing number of scholars have pointed out the many ways and reasons why and how that perception is distorted, if not plain wrong. The Aztecs, it turns out, were no more bloodthirsty or savage than anybody else in the world – including the early modern Europeans who systematically demonised them. Their culture was part of a civilisation (that of the Nahuas of central Mexico) that was as sophisticated and accomplished as that of those Europeans who sought to destroy it.

But fighting negative stereotypes and replacing them with something less prejudicial, less sensationalist, more multifaceted and more accurate has proved to be an uphill battle. Franciscan friars in the 16th century, along with other Catholic priests and chroniclers, created a portrait of Aztec religion, politics and social practices that was designed to justify the often-violent imposition of Spanish colonisation and forced conversion to Christianity. That portrait took root and flourished for centuries. The era of the global triumph of European empires was fertile ground for derogatory views of ‘barbarian’ societies swept aside by civilisation’s progress. When new fields of study and new evidence on the Aztec past emerged – archaeological discoveries from beneath Mexico City, for example, or unpublished manuscripts written in Nahuatl in the early colonial period – they tended to be deployed to confirm, or at best modify, that deep-rooted stereotype, not upend it.

What has changed? As Townsend explains in an appendix to Fifth Sun, not until the 21st century was there a convergence of scholars with a profound grasp of colonial-era Nahuatl, a willingness to challenge the well-established portrait of the Aztecs on which generations of scholars had built their careers and a readily available body of sources written in the early colonial decades by the descendants of the Aztecs (mostly in Nahuatl). Townsend makes particular use of a genre of documentation called xiuhpohualli by its Nahua writers. Literally meaning ‘yearly account’, such sources were more like community histories. Townsend presented the xiuhpohualli in greater detail in an earlier book, Annals of Native America (2016), so here they stand as the largely invisible foundation to her reconstruction of Aztec history. But, significantly, they allow her to . . .

Continue reading.

Written by LeisureGuy

9 April 2020 at 3:34 pm

Posted in Daily life

Tagged with

Human crap

leave a comment »

Gabrielle Hecht, Frank Stanton Foundation Professor of Nuclear Security at Stanford University and affiliated with the Center for International Security and Cooperation, the Department of History and the Department of Anthropology, writes in Aeon:

We are turning the world inside-out. Massive mining operations rip into rock, unearthing lithium, coltan and hundreds of other minerals to feed our gargantuan appetite for electronic stuff. Sand dredged from riverbeds and ocean floors becomes concrete; so far, there’s enough to cover the globe in a 2mm-thick shell. Oil sucked up from the seabed powers locomotion and manufacturing, and serves as the chemical base for our plasticised lives. We could easily wrap our concrete replica in plastic wrap.

Inverting the planet is messy. Retrieving all those minerals requires boring through tonnes of what the mining industry refers to as ‘sterile material’ – a revealing term for matter it perceives as purely obstructive, without use, infertile in every way. A typical 14 karat gold chain leaves one tonne of waste rock in South Africa. Obtaining the lithium that fuels cellphones and Teslas means drilling through fragile beds of salt, magnesium and potassium high in the Chilean Andes, producing piles and pools of discarded materials. More than 12,000 oil spills have defiled the Niger Delta. All this and more, so much more, from extraction alone.

Earth-systems scientists portray these processes with hockey-stick curves. Starting in the second half of the 20th century, their disturbing asymptotic graphs show a ‘great acceleration’ in the squandering of planetary materials. Some exponential increases can be measured directly, such as those for carbon dioxide or methane; others require extrapolation, like what’s left behind by dam building or motorised transport. Either way, the result is clear. Materials and molecules discarded in the course of planetary inversion do not disappear – instead, they move around, rising into the atmosphere, spreading out across once-fertile soils, seeping into waterways. We are worlding our waste.

Humans have always produced discards. But discards become waste only if they aren’t metabolised in a meaningful way. Consider the stuff emitted by our bodies on a more or less daily basis: pee and poop. Many societies have thrived by deploying, rather than discarding, human faeces. Pre-industrial Japan monetised excreta; as the historian Susan Hanley writes, in Osaka, ‘the rights to faecal matter … belonged to the owner of the building, whereas the urine belonged to the tenants’. For 4,000 years, China sustained an agricultural system using human stool as fertiliser. In the early 20th century, more than 180 million tonnes of human manure were collected annually in the Far East, according to estimates made in 1911 by the soil scientist F H King – amounting to 450 kilos per person per year, and enriching the soil with more than 1 million tonnes of nitrogen, 376,000 tonnes of potassium and 150,000 tonnes of phosphorus.

Admittedly, King might have overestimated: those figures equate to 1.2 kilos (2.6 pounds) of poop per person per day, which seems like a lot. Nevertheless, it’s hard to dismiss his subsequent comment:

Man [by which King meant white settler American men] is the most extravagant accelerator of waste the world has ever endured. His withering blight has fallen upon every living thing within his reach, himself not excepted; and his besom of destruction in the uncontrolled hands of a generation has swept into the sea soil fertility which only centuries of life could accumulate …

That was just over 100 years ago. Prophetic? Not really: King drew his conclusions from observations. Better to read this as yet another ‘Don’t say I didn’t warn you’ from a scientist.

Yet pooping can be pleasurable as well as practical. The 16th-century French author François Rabelais wrote not just about the gluttonous delights of food ingestion, but also of the ecstasy of its evacuation. Responding to his father’s question about how he stayed clean, the five-year old Gargantua of Rabelais’s fiction offered up a long list of options he’d tried, ranging from neckerchiefs to nettles. None could compare to his top choice, though:

I say and maintain that of all torcheculs, arsewisps, bumfodders, tail-napkins, bunghole cleansers and wipe-breeches, there is none in this world comparable to the neck of a goose, that is well downed, if you hold her head betwixt your legs … You will thereby feel in your nockhole a most wonderful pleasure, both in regard of the softness of the said down, and of the temperate heat of the goose, which is easily communicated to the bum-gut and the rest of the inwards … even to the regions of the heart and brains … The felicity of the heroes and demigods in the Elysian fields, consisteth [n]either in their Asphodel, Ambrosia, or Nectar, but in this: … that they wipe their tails with the necks of a goose.

A startling image, very much of its time. Today, we might take it as an allegory for the relentless pursuit of comfort and pleasure, weirdly resonant with the contemporary affordances of modern middle-class existence. Three-ply ultra-soft toilet paper offers a facsimile of this Rabelaisian downiness, while capitalist infrastructures enable poopers to treat all of it – faeces and wipes alike – as disposable. Just flush it all away. Don’t think about where it goes. No need to wash the goose.

Disposing of dung is historically and culturally contingent. For a time, . . .

Continue reading.

Written by LeisureGuy

26 March 2020 at 3:24 pm

Posted in Daily life, Environment, Science


How Do You Know Whether You’re Living Through the Death of an Empire?

leave a comment »

Patrick Wyman writes in Mother Jones:

The fall of an empire is supposed to be a dramatic thing. It’s right there in the name. “Fall” conjures up images of fluted temple columns toppling to the ground, pulled down by fur-clad barbarians straining to destroy something beautiful. Savage invasions, crushing battlefield defeats, sacked cities, unlucky rulers put to death: These are the kinds of stories that usually come to mind when we think of the end of an empire. They seem appropriate, the climaxes we expect from a narrative of rise, decline, and fall.

We’re all creatures of narrative, whether we think explicitly in those terms or not, and stories are one of the fundamental ways in which we engage with and grasp the meaning of the world. It’s natural that we expect the end of a story—the end of an empire—to have some drama.

The reality is far less exciting. Any political unit sound enough to project its power over a large geographic area for centuries has deep structural roots. Those roots can’t be pulled up in a day or even a year. If an empire seems to topple overnight, it’s certain that the conditions that produced the outcome had been present for a long time—suppurating wounds that finally turned septic enough for the patient to succumb to a sudden trauma.

That’s why the banalities matter. When the real issues come up, healthy states, the ones capable of handling and minimizing everyday dysfunction, have a great deal more capacity to respond than those happily waltzing toward their end. But by the time the obvious, glaring crisis arrives and the true scale of the problem becomes clear, it’s far too late. The disaster—a major crisis of political legitimacy, a coronavirus pandemic, a climate catastrophe—doesn’t so much break the system as show just how broken the system already was.

Comparisons between the United States and Rome go back to the very beginning. The first volume of Edward Gibbon’s magisterial The History of the Decline and Fall of the Roman Empire was published in 1776, the year of the Declaration of Independence. The Founding Fathers had a deep appreciation and understanding of classical antiquity, and to some extent, they modeled aspects of their new nation on that understanding of the Roman past. Southern planters retained a distinct fondness for the Roman aristocracy, cultivating a life of high-minded leisure on the backs of chattel slaves.

As at the beginning, so too at the end. If anybody knows anything about Rome, they know that it fell, and they usually have a theory—lead poisoning is a popular one—to explain why. Every scholar working on Roman history has faced the linked questions of whether we’re Rome and where we are in the decline and fall. Those twin queries might come from students, casual acquaintances at a mandatory social function desperately trying to find conversational common ground, some guy at a party ripping massive bong hits to whom you made the mistake of telling your occupation, or, in my case, from podcast listeners and people on Twitter.

I spent the better part of a decade thinking about the end of the Roman Empire in its various manifestations. Academics, being academics, agree on very little about the topic. The idea of “fall” is now passé, for better and for worse; scholars prefer to speak of a “transformation” of the Roman world taking place over centuries, or better still, a long, culturally distinct, and important-in-its-own-right Late Antiquity spanning the Mediterranean world and beyond. If the Roman Empire did ever come to a real end, all agree, it was a long, slow process spanning many lifetimes—hardly the stuff of dramatic narratives. There are still a few catastrophists out there, but not many.

On one hand, this is all beside the point. While the eastern half of the Roman Empire survived in some form for the next thousand years, brought to an end only by the Ottoman sultan Mehmed the Conqueror in 1453, the Roman Empire in the west did in fact come to an end. After a certain point, either 476 (Romulus Augustulus) or 480 (Julius Nepos), there was no longer an emperor in Ravenna claiming authority over the vast territory it had encompassed, stretching from the sands of the Sahara to the moors of northern Britain. Supply wagons laden with grain and olive oil for garrisons of Roman soldiers no longer rolled along roads maintained at state expense. The villas everywhere from Provence to Yorkshire in which Roman aristocrats had passed their time, plotting their election to town councils and composing bad poetry, fell into ruin.

Depending on the time, place, and the identity of the observer, this process could look and feel much different. Let’s say you were a woman born in a thriving market town in Roman Britain in the year 360. If you survived to age 60, that market town would no longer exist, along with every other urban settlement of any significant size. You lived in a small village instead of a genuine town. You had grown up using money, but now you bartered—grain for metalwork, beer for pottery, hides for fodder. You no longer saw the once-ubiquitous Roman army or the battalions of officials who administered the Roman state. Increasing numbers of migrants from the North Sea coast of continental Europe—pagans who didn’t speak a word of Latin or the local British language, certainly not wage-earning servants of the Roman state—were already in the process of transforming lowland Britain into England. That 60-year-old woman had been born into a place as fundamentally Roman as anywhere in the Empire. She died in a place that was barely recognizable.

Let’s consider an alternative example. Imagine you were lucky enough to have been born the son of an aristocrat in Provence around the year 440. . .

Continue reading. There’s much more.

Written by LeisureGuy

20 March 2020 at 12:33 pm

Posted in Congress, Daily life, Government, Memes, Politics


Yuval Noah Harari’s History of Everyone, Ever

leave a comment »

Yuval Harari’s book Sapiens is one of those I find myself repeatedly recommending. To my eyes, it is a history of memetic evolution. Highly readable and very interesting.

Ian Parker writes in the New Yorker:

In 2008, Yuval Noah Harari, a young historian at the Hebrew University of Jerusalem, began to write a book derived from an undergraduate world-history class that he was teaching. Twenty lectures became twenty chapters. Harari, who had previously written about aspects of medieval and early-modern warfare—but whose intellectual appetite, since childhood, had been for all-encompassing accounts of the world—wrote in plain, short sentences that displayed no anxiety about the academic decorum of a study spanning hundreds of thousands of years. It was a history of everyone, ever. The book, published in Hebrew as “A Brief History of Humankind,” became an Israeli best-seller; then, as “Sapiens,” it became an international one. Readers were offered the vertiginous pleasure of acquiring apparent mastery of all human affairs—evolution, agriculture, economics—while watching their personal narratives, even their national narratives, shrink to a point of invisibility. President Barack Obama, speaking to CNN in 2016, compared the book to a visit he’d made to the pyramids of Giza.

“Sapiens” has sold more than twelve million copies. “Three important revolutions shaped the course of history,” the book proposes. “The Cognitive Revolution kick-started history about 70,000 years ago. The Agricultural Revolution sped it up about 12,000 years ago. The Scientific Revolution, which got under way only 500 years ago, may well end history and start something completely different.” Harari’s account, though broadly chronological, is built out of assured generalization and comparison rather than dense historical detail. “Sapiens” feels like a study-guide summary of an immense, unwritten text—or, less congenially, like a ride on a tour bus that never stops for a poke around the ruins. (“As in Rome, so also in ancient China: most generals and philosophers did not think it their duty to develop new weapons.”) Harari did not invent Big History, but he updated it with hints of self-help and futurology, as well as a high-altitude, almost nihilistic composure about human suffering. He attached the time frame of aeons to the time frame of punditry—of now, and soon. His narrative of flux, of revolution after revolution, ended urgently, and perhaps conveniently, with a cliffhanger. “Sapiens,” while acknowledging that “history teaches us that what seems to be just around the corner may never materialise,” suggests that our species is on the verge of a radical redesign. Thanks to advances in computing, cyborg engineering, and biological engineering, “we may be fast approaching a new singularity, when all the concepts that give meaning to our world—me, you, men, women, love and hate—will become irrelevant.”

Harari, who is slim, soft-spoken, and relentless in his search for an audience, has spent the years since the publication of “Sapiens” in conversations about this cliffhanger. His two subsequent best-sellers—“Homo Deus” (2017) and “21 Lessons for the 21st Century” (2018)—focus on the present and the near future. Harari now defines himself as both a historian and a philosopher. He dwells particularly on the possibility that biometric monitoring, coupled with advanced computing, will give corporations and governments access to more complete data about people—about their desires and liabilities—than people have about themselves. A life under such scrutiny, he said recently, is liable to become “one long, stressing job interview.”

If Harari weren’t always out in public, one might mistake him for a recluse. He is shyly oracular. He spends part of almost every appearance denying that he is a guru. But, when speaking at conferences where C.E.O.s meet public intellectuals, or visiting Mark Zuckerberg’s Palo Alto house, or the Élysée Palace, in Paris, he’ll put a long finger to his chin and quietly answer questions about Neanderthals, self-driving cars, and the series finale of “Game of Thrones.” Harari’s publishing and speaking interests now occupy a staff of twelve, who work out of a sunny office in Tel Aviv, where an employee from Peru cooks everyone vegan lunches. Here, one can learn details of a scheduled graphic novel of “Sapiens”—a cartoon version of Harari, wearing wire-framed glasses and looking a little balder than in life, pops up here and there, across time and space. There are also plans for a “Sapiens” children’s book, and a multi-season “Sapiens”-inspired TV drama, covering sixty thousand years, with a script by the co-writer of Mel Gibson’s “Apocalypto.”

Harari seldom goes to this office. He works at the home he shares with Itzik Yahav, his husband, who is also his agent and manager. They live in a village of expensive modern houses, half an hour inland from Tel Aviv, at a spot where Israel’s coastal plain is first interrupted by hills. The location gives a view of half the country and, hazily, the Mediterranean beyond. Below the house are the ruins of the once mighty Canaanite city of Gezer; Harari and Yahav walk their dog there. Their swimming pool is blob-shaped and, at night, lit a vivid mauve.

At lunchtime one day in September, Yahav drove me to the house from Tel Aviv, in a Porsche S.U.V. with a rainbow-flag sticker on its windshield. “Yuval’s unhappy with my choice of car,” Yahav said, laughing. “He thinks it’s unacceptable that a historian should have money.” While Yahav drove, he had a few conversations with colleagues, on speakerphone, about the fittings for a new Harari headquarters, in a brutalist tower block above the Dizengoff Center mall. He said, “I can’t tell you how much I need a P.A.”—a personal assistant—“but I’m not an easy person.” Asked to consider his husband’s current place in world affairs, Yahav estimated that Harari was “between Madonna and Steven Pinker.”

Harari and Yahav, both in their mid-forties, grew up near each other, but unknown to each other, in Kiryat Ata, an industrial town outside Haifa. (Yahav jokingly called it “the Israeli Chernobyl.”) Yahav’s background is less solidly middle class than his husband’s. When the two men met, nearly twenty years ago, Harari had just finished his graduate studies, and Yahav teased him: “You’ve never worked? You’ve never had to pick up a plate for your living? I was a waiter from age fifteen!” He thought of Harari as a “genius geek.” Yahav, who was then a producer in nonprofit theatre, is now known for making bold, and sometimes outlandish, demands on behalf of his husband. “Because I have only one author, I can go crazy,” he had told me. In the car, he noted that he had declined an invitation to have Harari participate in the World Economic Forum, at Davos, in 2017, because the proposed panels were “not good enough.” A year later, when Harari was offered the main stage, in a slot between Angela Merkel and Emmanuel Macron, Yahav accepted. His recollections of such negotiations are delivered with self-mocking charm and a low, conspiratorial laugh. He likes to say, “You don’t understand—Yuval works for me! ”

We left the highway and drove into the village. He said of Harari, “When I meet my friends, he’s usually not invited, because my friends are crazy and loud. It’s too much for him. He shuts down.” When planning receptions and dinners for Harari, Yahav follows a firm rule: “Not more than eight people.”

For more than a decade, Harari has spent several weeks each year on a silent-meditation retreat, usually in India. At home, he starts his day with an hour of meditation; in the summer, he also swims for half an hour while listening to nonfiction audiobooks aimed at the general reader. (Around the time of my visit, he was listening to a history of the Cuban Revolution, and to a study of the culture of software engineering.) He swims the breaststroke, wearing a mask, a snorkel, and “bone conduction” headphones that press against his temples, bypassing the ears.

When Yahav and I arrived at the house, Harari was working at the kitchen table, reading news stories from Ukraine, printed for him by an assistant. He had an upcoming speaking engagement in Kyiv, at an oligarch-funded conference. He was also planning a visit to the United Arab Emirates, which required some delicacy—the country has no diplomatic ties with Israel.

The house was open and airy, and featured a piano. (Yahav plays.) Harari was wearing shorts and Velcro-fastened sandals, and, as Yahav fondly observed, his swimming headphones had left imprints on his head. Harari explained to me that the device “beams sound into the skull.” Later, with my encouragement, he put on his cyborgian getup, including the snorkel, and laughed as I took a photograph, saying, “Just don’t put that in the paper, because Itzik will kill both me and you.”

Unusually for a public intellectual, Harari has drawn up a mission statement. It’s pinned on a bulletin board in the Tel Aviv office, and begins, “Keep your eyes on the ball. Focus on the main global problems facing humanity.” It also says, “Learn to distinguish reality from illusion,” and “Care about suffering.” The statement used to include “Embrace ambiguity.” This was cut, according to one of Harari’s colleagues, because it was too ambiguous.

One recent afternoon, Naama Avital, the operation’s C.E.O., and Naama Wartenburg, Harari’s chief marketing officer, were sitting with Yahav, wondering if Harari would accept a hypothetical invitation to appear on a panel with President Donald Trump.

“I think that whenever Yuval is free to say exactly what he thinks, then it’s O.K.,” Avital said.

Yahav, surprised, said that he could perhaps imagine a private meeting, “but to film it—to film Yuval with Trump?”

“You’d have a captive audience,” Wartenburg said.

Avital agreed, noting, “There’s a politician, but then there are his supporters—and you’re talking about tens of millions of people.”

“A panel with Trump?” Yahav asked. He later said that he had never accepted any speaking invitations from Israeli settlers in the West Bank, adding that Harari, although not a supporter of settlements, might have been inclined to say yes.

Harari has acquired a large audience in a short time, and—like the Silicon Valley leaders who admire his work—he can seem uncertain about what to do with his influence. Last summer, he was criticized when readers noticed that the Russian translation of “21 Lessons for the 21st Century” had been edited to make it more palatable to Vladimir Putin’s government. Harari had approved some of these edits, and had replaced a discussion of Russian misinformation about its 2014 annexation of Crimea with a passage about false statements made by President Trump.

Harari’s office is still largely a boutique agency serving the writing and speaking interests of one client. But, last fall, it began to brand part of its work under the heading of “Sapienship.” The office remains a for-profit enterprise, but it has taken on some of the ambitions and attributes of a think tank, or the foundation of a high-minded industrialist. Sapienship’s activities are driven by what Harari’s colleagues call his “vision.” Avital explained that some projects she was working on, such as “Sapiens”-related school workshops, didn’t rely on “everyday contact with Yuval.”

Harari’s vision takes the form of a list. “That’s something I have from students,” he told me. “They like short lists.” His proposition, often repeated, is that humanity faces three primary threats: nuclear war, ecological collapse, and technological disruption. Other issues that politicians commonly talk about—terrorism, migration, inequality, poverty—are lesser worries, if not distractions. In part because there’s little disagreement, at least in a Harari audience, about the seriousness of the nuclear and climate threats, and about how to respond to them, Harari highlights the technological one. Last September, while appearing onstage with Reuven Rivlin, Israel’s President, at an “influencers’ summit” in Tel Aviv, Harari said, in Hebrew, “Think about a situation where somebody in Beijing or San Francisco knows what every citizen in Israel is doing at every moment—all the most intimate details about every mayor, member of the Knesset, and officer in the Army, from the age of zero.” He added, “Those who will control the world in the twenty-first century are those who will control data.”

He also said that Homo sapiens would likely disappear, in a tech-driven upgrade. Harari often disputes the notion that he makes prophecies or predictions—indeed, he has claimed to do “the opposite”—but a prediction acknowledging uncertainty is still a prediction. Talking to Rivlin, Harari said, “In two hundred years, I can pretty much assure you that there will not be any more Israelis, and no Homo sapiens—there will be something else.”

“What a world,” Rivlin said. The event ended in a hug.

Afterward, Harari said of Rivlin, “He took my message to be kind of pessimistic.” Although the two men had largely spoken past each other, they were in some ways aligned. . .

Continue reading.

Written by LeisureGuy

13 February 2020 at 12:41 pm

1491 CE

leave a comment »

Before it became the New World, the Western Hemisphere was vastly more populous and sophisticated than has been thought—an altogether more salubrious place to live at the time than, say, Europe. New evidence of both the extent of the population and its agricultural advancement leads to a remarkable conjecture: the Amazon rain forest may be largely a human artifact.

That’s the blurb for an Atlantic article from March 2002 by Charles C. Mann, which begins:

The plane took off in weather that was surprisingly cool for north-central Bolivia and flew east, toward the Brazilian border. In a few minutes the roads and houses disappeared, and the only evidence of human settlement was the cattle scattered over the savannah like jimmies on ice cream. Then they, too, disappeared. By that time the archaeologists had their cameras out and were clicking away in delight.

Below us was the Beni, a Bolivian province about the size of Illinois and Indiana put together, and nearly as flat. For almost half the year rain and snowmelt from the mountains to the south and west cover the land with an irregular, slowly moving skin of water that eventually ends up in the province’s northern rivers, which are sub-subtributaries of the Amazon. The rest of the year the water dries up and the bright-green vastness turns into something that resembles a desert. This peculiar, remote, watery plain was what had drawn the researchers’ attention, and not just because it was one of the few places on earth inhabited by people who might never have seen Westerners with cameras.

Clark Erickson and William Balée, the archaeologists, sat up front. Erickson is based at the University of Pennsylvania; he works in concert with a Bolivian archaeologist, whose seat in the plane I usurped that day. Balée is at Tulane University, in New Orleans. He is actually an anthropologist, but as native peoples have vanished, the distinction between anthropologists and archaeologists has blurred. The two men differ in build, temperament, and scholarly proclivity, but they pressed their faces to the windows with identical enthusiasm.

Dappled across the grasslands below was an archipelago of forest islands, many of them startlingly round and hundreds of acres across. Each island rose ten or thirty or sixty feet above the floodplain, allowing trees to grow that would otherwise never survive the water. The forests were linked by raised berms, as straight as a rifle shot and up to three miles long. It is Erickson’s belief that this entire landscape—30,000 square miles of forest mounds surrounded by raised fields and linked by causeways—was constructed by a complex, populous society more than 2,000 years ago. Balée, newer to the Beni, leaned toward this view but was not yet ready to commit himself.

Erickson and Balée belong to a cohort of scholars that has radically challenged conventional notions of what the Western Hemisphere was like before Columbus. When I went to high school, in the 1970s, I was taught that Indians came to the Americas across the Bering Strait about 12,000 years ago, that they lived for the most part in small, isolated groups, and that they had so little impact on their environment that even after millennia of habitation it remained mostly wilderness. My son picked up the same ideas at his schools. One way to summarize the views of people like Erickson and Balée would be to say that in their opinion this picture of Indian life is wrong in almost every aspect. Indians were here far longer than previously thought, these researchers believe, and in much greater numbers. And they were so successful at imposing their will on the landscape that in 1492 Columbus set foot in a hemisphere thoroughly dominated by humankind.

Given the charged relations between white societies and native peoples, inquiry into Indian culture and history is inevitably contentious. But the recent scholarship is especially controversial. To begin with, some researchers—many but not all from an older generation—deride the new theories as fantasies arising from an almost willful misinterpretation of data and a perverse kind of political correctness. “I have seen no evidence that large numbers of people ever lived in the Beni,” says Betty J. Meggers, of the Smithsonian Institution. “Claiming otherwise is just wishful thinking.” Similar criticisms apply to many of the new scholarly claims about Indians, according to Dean R. Snow, an anthropologist at Pennsylvania State University. The problem is that “you can make the meager evidence from the ethnohistorical record tell you anything you want,” he says. “It’s really easy to kid yourself.”

More important are the implications of the new theories for today’s ecological battles. Much of the environmental movement is animated, consciously or not, by what William Denevan, a geographer at the University of Wisconsin, calls, polemically, “the pristine myth”—the belief that the Americas in 1491 were an almost unmarked, even Edenic land, “untrammeled by man,” in the words of the Wilderness Act of 1964, one of the nation’s first and most important environmental laws. As the University of Wisconsin historian William Cronon has written, restoring this long-ago, putatively natural state is, in the view of environmentalists, a task that society is morally bound to undertake. Yet if the new view is correct and the work of humankind was pervasive, where does that leave efforts to restore nature?

The Beni is a case in point. In addition to building up the Beni mounds for houses and gardens, Erickson says, the Indians trapped fish in the seasonally flooded grassland. Indeed, he says, they fashioned dense zigzagging networks of earthen fish weirs between the causeways. To keep the habitat clear of unwanted trees and undergrowth, they regularly set huge areas on fire. Over the centuries the burning created an intricate ecosystem of fire-adapted plant species dependent on native pyrophilia. The current inhabitants of the Beni still burn, although now it is to maintain the savannah for cattle. When we flew over the area, the dry season had just begun, but mile-long lines of flame were already on the march. In the charred areas behind the fires were the blackened spikes of trees—many of them, one assumes, of the varieties that activists fight to save in other parts of Amazonia.

After we landed, I asked Balée, Should we let people keep burning the Beni? Or should we let the trees invade and create a verdant tropical forest in the grasslands, even if one had not existed here for millennia?

Balée laughed. “You’re trying to trap me, aren’t you?” he said.

Like a Club Between the Eyes

According to family lore, my great-grandmother’s great-grandmother’s great-grandfather was the first white person hanged in America. His name was John Billington. He came on the Mayflower, which anchored off the coast of Massachusetts on November 9, 1620. Billington was not a Puritan; within six months of arrival he also became the first white person in America to be tried for complaining about the police. “He is a knave,” William Bradford, the colony’s governor, wrote of Billington, “and so will live and die.” What one historian called Billington’s “troublesome career” ended in 1630, when he was hanged for murder. My family has always said that he was framed—but we would say that, wouldn’t we?

A few years ago it occurred to me that my ancestor and everyone else in the colony had voluntarily enlisted in a venture that brought them to New England without food or shelter six weeks before winter. Half the 102 people on the Mayflower made it through to spring, which to me was amazing. How, I wondered, did they survive?

In his history of Plymouth Colony, Bradford provided the answer: by robbing Indian houses and graves. The Mayflower first hove to at Cape Cod. An armed company staggered out. Eventually it found a recently deserted Indian settlement. The newcomers—hungry, cold, sick—dug up graves and ransacked houses, looking for underground stashes of corn. “And sure it was God’s good providence that we found this corn,” Bradford wrote, “for else we know not how we should have done.” (He felt uneasy about the thievery, though.) When the colonists came to Plymouth, a month later, they set up shop in another deserted Indian village. All through the coastal forest the Indians had “died on heapes, as they lay in their houses,” the English trader Thomas Morton noted. “And the bones and skulls upon the severall places of their habitations made such a spectacle” that to Morton the Massachusetts woods seemed to be “a new found Golgotha”—the hill of executions in Roman Jerusalem.

To the Pilgrims’ astonishment, one of the corpses they exhumed on Cape Cod had blond hair. A French ship had been wrecked there several years earlier. The Patuxet Indians imprisoned a few survivors. One of them supposedly learned enough of the local language to inform his captors that God would destroy them for their misdeeds. The Patuxet scoffed at the threat. But the Europeans carried a disease, and they bequeathed it to their jailers. The epidemic (probably of viral hepatitis, according to a study by Arthur E. Spiess, an archaeologist at the Maine Historic Preservation Commission, and Bruce D. Spiess, the director of clinical research at the Medical College of Virginia) took years to exhaust itself and may have killed 90 percent of the people in coastal New England. It made a huge difference to American history. “The good hand of God favored our beginnings,” Bradford mused, by “sweeping away great multitudes of the natives … that he might make room for us.”

By the time my ancestor set sail on the Mayflower, Europeans had been visiting New England for more than a hundred years. English, French, Italian, Spanish, and Portuguese mariners regularly plied the coastline, trading what they could, occasionally kidnapping the inhabitants for slaves. New England, the Europeans saw, was thickly settled and well defended. In 1605 and 1606 Samuel de Champlain visited Cape Cod, hoping to establish a French base. He abandoned the idea. Too many people already lived there. A year later Sir Ferdinando Gorges—British despite his name—tried to establish an English community in southern Maine. It had more founders than Plymouth and seems to have been better organized. Confronted by numerous well-armed local Indians, the settlers abandoned the project within months. The Indians at Plymouth would surely have been an equal obstacle to my ancestor and his ramshackle expedition had disease not intervened.

Faced with such stories, historians have long wondered how many people lived in the Americas at the time of contact. “Debated since Columbus attempted a partial census on Hispaniola in 1496,” William Denevan has written, this “remains one of the great inquiries of history.” (In 1976 Denevan assembled and edited an entire book on the subject, The Native Population of the Americas in 1492.) The first scholarly estimate of the indigenous population was made in 1910 by James Mooney, a distinguished ethnographer at the Smithsonian Institution. Combing through old documents, he concluded that in 1491 North America had 1.15 million inhabitants. Mooney’s glittering reputation ensured that most subsequent researchers accepted his figure uncritically.

That changed in 1966, when Henry F. Dobyns published “Estimating Aboriginal American Population: An Appraisal of Techniques With a New Hemispheric Estimate,” in the journal Current Anthropology. Despite the carefully neutral title, his argument was thunderous, its impact long-lasting. In the view of James Wilson, the author of The Earth Shall Weep (1998), a history of indigenous Americans, Dobyns’s colleagues “are still struggling to get out of the crater that paper left in anthropology.” Not only anthropologists were affected. Dobyns’s estimate proved to be one of the opening rounds in today’s culture wars.

Dobyns began his exploration of pre-Columbian Indian demography in the early 1950s, when he was a graduate student. At the invitation of a friend, he spent a few months in northern Mexico, which is full of Spanish-era missions. There he poked through the crumbling leather-bound ledgers in which Jesuits recorded local births and deaths. Right away he noticed how many more deaths there were. The Spaniards arrived, and then Indians died—in huge numbers, at incredible rates. It hit him, Dobyns told me recently, “like a club right between the eyes.”

It took Dobyns eleven years to obtain his Ph.D. Along the way he joined a rural-development project in Peru, which until colonial times was the seat of the Incan empire. Remembering what he had seen at the northern fringe of the Spanish conquest, Dobyns decided to compare it with figures for the south. He burrowed into the papers of the Lima cathedral and read apologetic Spanish histories. The Indians in Peru, Dobyns concluded, had faced plagues from the day the conquistadors showed up—in fact, before then: smallpox arrived around 1525, seven years ahead of the Spanish. Brought to Mexico apparently by a single sick Spaniard, it swept south and eliminated more than half the population of the Incan empire. Smallpox claimed the Incan dictator Huayna Capac and much of his family, setting off a calamitous war of succession. So complete was the chaos that Francisco Pizarro was able to seize an empire the size of Spain and Italy combined with a force of 168 men.

Smallpox was only the first epidemic. Typhus (probably) in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, measles in 1618—all ravaged the remains of Incan culture. Dobyns was the first social scientist to piece together this awful picture, and he naturally rushed his findings into print. Hardly anyone paid attention. But Dobyns was already working on a second, related question: If all those people died, how many had been living there to begin with? Before Columbus, Dobyns calculated, the Western Hemisphere held  . . .

Continue reading.

Written by LeisureGuy

14 October 2019 at 9:39 am

The Death of Alexander the Great: One of History’s Great Unsolved Mysteries

leave a comment »

Update: See also Alexander the Great and West Nile Virus Encephalitis.

Anthony Everitt writes at Literary Hub:

Alexander the Great’s death is an unsolved mystery. Was he a victim of natural causes, felled by some kind of fever, or did his marshals assassinate him, angered by his tyrannical ways? An autopsy would decide the question, but it is too late for that.

The trail is long cold. All who recalled the terrible fortnight of his dying had their own reputations to protect and they were not under oath when publishing their memoirs. The secret of Alexander’s end will not be discovered by poring over disputed narratives, but by assessing his interaction with others. Who were the men and women he knew, and who his friends and enemies? What did they think of him and he of them? Where lay their loyalties, and where the imperatives of self-interest?

In the year 323 BC, Alexander enjoyed an overdue vacation in the deluxe metropolis of Babylon in Mesopotamia. This was one of the great cities of the Persian empire and over the centuries had grown accustomed to looking after the needs of invaders. Its Hanging Gardens were one of the Seven Wonders of the ancient world. A few weeks there of uninterrupted leisure and pleasure were just what Alexander and his careworn soldiers needed.

The youthful Macedonian monarch had spent a good ten years fighting his way nonstop through the Persian empire to its Indian frontier, deposing the Great King and seizing power himself. After winning victories in the Punjab and along the Indus River, he marched back to civilization through a searing desert, losing thousands of his men for lack of water before reaching the safety and the comforts of Mesopotamia.

Alexander was still a handsome man in his prime whose triumphant past augured a shining future. His next and imminent project was to establish commercially viable townships along the Arabian coast. A port had been specially built near Babylon to house a new fleet. Meanwhile the army prepared to march south by land. Victory was taken for granted, but after that, who knew what?

For now, in late May, as the unrelenting heat of summer approached, he needed a good rest. Babylon had all the necessary facilities. There was water everywhere; the river Euphrates on its way to the Persian Gulf passed through the center of the city and poured into the moats that lay alongside the lofty defensive walls of baked mud brick. And beyond the walls lay swamps and lagoons bursting with wildlife, irrigation channels, and reservoirs.

Two colossal palaces stood in the north of Babylon, with offices and workshops. One of them functioned, at least in part, as among the world’s earliest museums, housing treasured artifacts from earlier times, and was probably where kings and their families lived in grand but private seclusion. The other, which modern archaeologists have named the Southern Palace, was set aside mainly for administration and for ceremonial functions. Offices and workshops surrounded five courtyards, one of which opened onto a vast throne room whose walls were glazed in blue and yellow tiles and decorated with floral reliefs, lions, and fan-shaped designs suggesting the fronds of a palm tree.

On the river’s edge beside the palace, the Hanging Gardens astounded visitors. A set of ascending terraces, angled back one above the other, rested on great brick vaults. Each terrace contained a deep bed of earth and was planted with trees and shrubs. The effect was of a wooded hillside. A staircase led up to all the floors, and water drawn from the river by mechanical pumps irrigated each tier. The story was told that Babylon’s most successful king, Nebuchadnezzar II, constructed the Hanging Gardens for his wife, who missed the mountains of her childhood.

In principle, there was nothing so very unusual about them, for they were a condensed urban version of the large walled garden or park much favored by the wealthy and the powerful, who sought refreshing green relief from the parched landscapes of the east. The Greek word for such a garden was paradeisos, from which we derive our “paradise.”

As the design of the Hanging Gardens goes to show, the people of Babylon and other Mesopotamians were skillful managers of water. They built canals and irrigation systems, and just to the north of the Southern Palace they constructed what seems to have been a large reservoir.

On the eastern side of Babylon, an outer wall formed . . .

Continue reading.

Written by LeisureGuy

30 August 2019 at 5:06 pm

