Posts Tagged ‘History’
John Glubb and Avoiding the Fate of Empires (which tend to have a 250-year lifespan)
Just to save you the trouble: 1776 + 250 = 2026.
Lee Nicolleto writes in Quillette:
Empires rise, and empires fall. This fact of history—so obvious looking backwards—is all but inconceivable to those living through an empire’s peak. Human life is so short in the scheme of civilisations that we tend to overemphasise the importance and length of our own era, while past ages blur together. We live closer in time to Cleopatra than she did to the builders of the pyramids, but Ancient Egypt, Greece, and Rome all blend in the popular imagination into a shadowy and distant past.
Culture biases us as much as our sense of time. The Arabic-speaking Moors ruled large parts of what is now Spain for nearly eight centuries—that is, for a third as long again as the 600 years that have passed since they (or at least, their leaders) were driven out by the newly-united Catholic monarchs of Aragon and Castile. Yet through contemporary European eyes, Moorish rule is typically viewed as an “interlude” in the history of the Spanish nation—a nation that, in reality, didn’t come into political being until the late 15th century. And—as many Basques, Catalonians, and Galicians would argue—a nation that has perhaps never truly existed culturally.
Nevertheless, the relative stability of political hegemony lets us overlook many cultural differences and tensions. Today’s Catalan and Basque independence movements may threaten the Spanish state, but they don’t challenge the “empire”—that is, the North Atlantic order, centred on the United States, the European Union, and institutions like the UN, NATO, and the WTO. Similarly, during the golden age of the Emirate of Córdoba, the tensions and rebellions of the Latin-speaking, post-Visigothic kingdoms to the north might have threatened various aspects of the Iberian state, but had little bearing on the sense of hegemonic stability that pervaded the Caliphate, which stretched across the Maghreb through the Middle East to India.
But the Caliphate did fall. And sooner or later, the present North Atlantic empire will lose its hegemony too. Indeed, if there is any truth to the theories of Sir John “Pasha” Glubb, we are already witnessing the final stages of Western dominance, and experiencing a transfer of power (back) towards the East.
Glubb was an English army officer who spent the best part of his career serving the newly-independent governments of Iraq and Jordan. An avid—if amateur—historian, he developed a theory on hegemonic orders that he called the “Fate of Empires.” Comparing a series of ancient and modern empires, he concluded that their average lifespan was 10 generations—about 250 years—and that, despite great geographic, technological, religious, and cultural differences, all empires follow a general pattern as they expand, develop, and finally decline and collapse. Although Glubb himself was the first to acknowledge the risks of over-simplification in his generalised model, his observations aptly describe, in broad brushstrokes, not only the fate of past empires, but the contemporary situation in global politics today, particularly regarding the West and China.
Glubb was agnostic on whether the “laws” of history he claimed to uncover were at all deterministic, but hoped that, by understanding how empires decline and collapse, modern citizens stood a chance of avoiding their typical fate. And so, I want to consider ways in which the predicted collapse of Western hegemony might be averted. It’s another question whether or not such a collapse ought to be avoided. Glubb—as a man of his time and class—had imperialist tendencies, though his immersion in foreign cultures gave him an open-mindedness that is generally lacking in the present-day imperialists of Western conservative parties. At any rate, as we go along, I’ll suggest that if the West is to avoid the fate of past empires, it needs to stop acting like a typical empire. And to do that, it needs to move as far from modern conservative policy—and its emphasis on corporate profit and economic growth—as it can.
* * *
Glubb noted that empires tend to begin with a “breakout” phase, in which an insignificant nation on the margins of an established power—say, the Macedonians before Alexander, the Arabs before Muhammed, or the Mongols before Genghis Khan—suddenly overwhelms its neighbours. This “Age of Pioneers” becomes an “Age of Conquests” when, encouraged by early successes, the rising nation takes over the power structures of its conquered neighbour and continues to expand. Glubb noted that successful new empires are not motivated simply by loot and plunder. With an emphasis on “noble” virtues—adventurousness, courage, strength, and, importantly, honesty—rising empires don’t want simply to subdue the established power; they want to become as they perceive them to be: advanced, technological hegemons. The Arabs took over Greek and Persian institutions—as the Mongols would take over Chinese and Islamic institutions—to become masters of a revitalised and expanded civilisation.
A rising empire, argued Glubb, has at its advantage an . . .
A history & politics chat from Dr. Heather Cox Richardson
She has a series of these. They are informative. Here’s one:
Why do conservatives distort and rewrite history: UK edition
Frank Trentmann, a professor of history at Birkbeck, University of London, who previously taught at Princeton University, writes in the Washington Post. His observations show that the way American Republicans ignore and/or distort historical fact is not limited to the US conservative movement. He writes:
Britain is adrift. Yesterday evening, the British government proceeded with its threat to breach international law by overriding the protocol on Northern Ireland it had agreed with the European Union. This protocol, signed only last January, was designed to protect the fragile peace process by preventing the return of a hard border on the island of Ireland. The bill passed its first reading with a majority of 77. Time and again, Brexit Britain shows it is adamant about going it alone, whatever the cost. In the midst of the first wave of covid-19, it even refused to participate in the European plan to procure medical equipment. Doctors and nurses had to make their own PPE out of trash bags.
Britain’s current isolationism bewilders its allies — but it is not accidental. This stance is linked to a distorted view of its glorious past and inborn genius that became the official history of the government in 2013, three years before the nation narrowly voted to leave the European Union.
The “long and illustrious history” is set out in the official book “Life in the UK.” It was launched by Theresa May, then the Conservative home secretary, who would later become prime minister, and has been “approved by ministers” ever since. It presents a Britain that is never wrong, almost always White and never needs international partners. The rewriting of the national past gives us a worrying window on how Britain sees itself and its place in the world. But it isn’t only symbolic: The official history is required reading for about 100,000 people every year who are applying for citizenship or settlement. Migrants have to read and remember the information, which is then tested in a multiple-choice computer exam.
So distorted is this official account that more than 600 historians are asking for it to be scrapped until there has been a proper review. (I helped organize the petition.) The criticism focuses on the airbrushing of slavery and colonial violence. The handbook, for example, tells new citizens that slavery within the British Isles was “illegal” by the 18th century and purely an overseas operation; in fact, judges debated the issue, and enslaved people within Britain were openly advertised for sale in newspapers in the 18th century. The unwinding of Britain’s vast colonial project is summed up as “for the most part, an orderly transition.” But that ignores the chaos and violence in many places, such as the partition of India in 1947 and the extrajudicial killings in the Mau Mau Rebellion in Kenya from 1952 to 1960.
These are not accidental mistakes. They are part of a consistent pattern of deliberate cuts and rewriting. Comparison with earlier editions published under the previous Labour governments shows that migrants used to be taught that enslaved people died in the Middle Passage. In the current version, they are only “traveling in horrible conditions,” an unfortunate wording given that about 400,000 of them died on board British ships. A previous description of the slave trade as “evil” was cut, as was the fact that Liverpool and Bristol profited handsomely from it. In the page on the American Revolution, the British Parliament no longer “refused” to compromise with the Americans (correct) but “tried to compromise” (false).
Ireland falls victim to similar revisionism. Instead of colonizing Ulster by force in the early 17th century, James I now merely “encouraged” English and Scottish Protestants to settle there. The great famine of 1846 is still mentioned, but the fact that Britain might have done more to help the Irish has been excised. The deployment of the British army in Northern Ireland after 1969 vanishes altogether. This willful blindness to Ireland’s past might explain how the British government can play so casually today with its future.
Any mistake or sign of national weakness is stripped from the record. Appeasement in 1938 cannot be mentioned — even though this means Winston Churchill can no longer be celebrated for opposing it. Adolf Hitler’s invasion of Czechoslovakia becomes merely “testing Germany’s military strength in nearby countries.”
Nor is there room for ambivalence or regret. When the bombing of Hiroshima and Nagasaki is mentioned, you are not told about the many civilian victims but that “scientists led by New Zealand-born Ernest Rutherford, working at Manchester and then Cambridge University, were the first to ‘split the atom.’ Some British scientists went on to take part in the Manhattan Project in the United States, which developed the atomic bomb. The war was finally over.”
One can read this official history as an Anglo-Saxon fantasy of Britain after Brexit — one where, if people of non-European descent feature at all, they do so only as faceless migrants. They are almost never part of British society. Africans and people who escaped slavery and lived in 18th-century Britain have been deleted. Among more than 200 personalities one needs to remember, the only people who are not White are a handful of athletes, the architect Zaha Hadid and Sake Dean Mahomet, a co-founder of England’s first curry house in 1810.
The Home Office (essentially, Britain’s Department of Homeland Security) has been so scrupulous in taking out race that it even cut the previous reference to Hitler’s “racist ideology.” In its place, migrants are now told that “he believed that the conditions imposed on Germany by the Allies after the First World War were unfair; he also wanted to conquer more land for the German people.” That is it. There is plenty about the Second World War, studded with famous quotes from Churchill, but you would not know it was a war of extermination. In more than 180 pages, the Holocaust is not mentioned once. And this in a text that is meant to prepare people for life in the United Kingdom today, at a time of growing concern about anti-Semitism.
The British government trumpets its vision of a new “Global Britain” and dreams of striking quick free-trade deals with the United States and other countries around the world. But judging by this official text, it fundamentally misunderstands what international exchange is about. The list of inventions that “Britain has given the world” includes . . .
Continue reading.
Birth of the Business Corporation
The meme family of business organizations, and particularly the various species of business corporations, with shareholders, managers, and employees, has evolved into a particularly venomous group. Just as with snakes, there are some benign examples, but a great number present a danger to themselves and others through mutations such as monopolies and goal selection based solely on profit. For example, that narrow focus leads to extensive efforts to externalize corporate costs, so that others — taxpayers — cover the costs the corporation incurs, thus inflating profits, since some of the corporation’s costs, often major ones (cf. the costs associated with cleaning up Superfund sites), never hit the P&L statement.
So how did this meme genus emerge? What selection pressures fostered its evolution? Anton Howes takes a look in Age of Invention:
I discussed last time how the use of patent monopolies came to England in the sixteenth century. Since then, however, I’ve developed a strong hunch that the introduction of patent monopolies may also have played a crucial role in the birth of the business corporation. I happened to be reading Ron Harris’s new book, Going the Distance, in which he stresses the unprecedented constitutions of the Dutch and English East India Companies — both of which began to emerge in the closing years of the sixteenth century. Yet the first joint-stock corporation, albeit experimental, was actually founded decades earlier, in the 1550s. Harris mentions it as a sort of obscure precursor, and it wasn’t terribly successful, but it stood out to me because its founder and first governor was also one of the key introducers of patent monopolies to England: the explorer Sebastian Cabot.
As I mentioned last time, Cabot was named on one of England’s very first patents for invention — though we’d now say it was for “discovery” — in 1496. An Italian who spent much of his career serving Spain, he was coaxed back to England in the late 1540s to pursue new voyages of exploration. Indeed, he reappeared in England at the exact time that patent monopolies for invention began to re-emerge, after a hiatus of about half a century. In 1550, Cabot obtained a certified copy of his original 1496 patent and within a couple of years English policymakers began regularly granting other patents for invention. It started as just a trickle, with one 1552 patent granted to some enterprising merchant for introducing Norman glass-making techniques, and a 1554 patent to the German alchemist Burchard Kranich, but by the 1560s it had developed into a steady stream.
Yet Cabot’s re-certification of his patent is never included in this narrative. It’s a scarcely-noted detail, perhaps because he appears not to have exploited it. Or did he? I think the fact of his re-certification — a bit of trivia that’s usually overlooked — helps explain the origins of the world’s first joint-stock corporation.
Corporations themselves, of course, were nothing new. Corporate organisations had existed for centuries in England, and indeed throughout Europe and the rest of the world: officially-recognised legal “persons” that might outlive each and any member, and which might act as a unit in terms of buying, selling, owning, and contracting. Cities, guilds, charities, universities, and various religious organisations were usually corporations. But they were not joint-stock business corporations, in the sense of their members purchasing shares and delegating commercial decision-making to a centralised management to conduct trade on their behalves. Instead, the vast majority of trade and industry was conducted by partnerships of individuals who pooled their capital without forming any legally distinct corporation. Shares might be bought in a physical ship, or even in particular trading voyages, but not in a legal entity that was both ongoing and intangible. There were many joint-stock associations, but they were not corporations.
And to the extent that some corporations in England were related to trade, such as the Company of Merchant Adventurers of London, or the Company of Merchants of the Staple, they were not joint-stock businesses at all. They were instead regulatory bodies. These corporations were granted monopolies over the trade with certain areas, or in certain commodities, for which their members then bought licenses to trade on their own account. Membership fees went towards supporting regulatory or charitable functions — resolving disputes between members, perhaps supporting members who had fallen on hard times, and representing the interests of members as a lobby group both at home and abroad — but not towards pooling capital for commercial ventures. The regulated companies were thus more akin to guilds, or to modern trade unions or professional associations, than to firms. Members were not shareholders, but licensees who used their own capital and were subject to their own profits and losses.
Before the 1550s, then, there had been plenty of unincorporated business associations that were joint-stock, and even more unincorporated associations that were not joint-stock. There had also been a few trade-related corporations that were not joint-stock. Sebastian Cabot’s innovation was thus to fill the last quadrant of that matrix: he created a corporation that would be joint-stock, in which a wide range of shareholders could invest, entrusting their capital to managers who would conduct repeated voyages of exploration and trade on their behalves.
Cabot’s reasons for this change are poorly documented, because many of the records of the resulting company were destroyed in the Great Fire of London in 1666. Nonetheless, I think a few facts may allow us to deduce why.
For a start, . . .
Why Did World War II End?
Kevin Drum has an interesting post that makes the argument that it was not the atomic bombs that forced Japan’s surrender. I find his argument persuasive.
When the Culture Wars Hit Fort Wayne
Charlie Savage — a really fine reporter — has a lengthy and intriguing article in Politico on the nuts and bolts of civic ambition and conflict. It begins:
FORT WAYNE, Ind.—One chilly February evening last year in this Midwestern town where I grew up, a city council member named Jason Arp proposed a resolution: Starting that summer, the city should annually celebrate a “General ‘Mad’ Anthony Wayne Day.”
My hometown is a rust-belt industrial city of a quarter-million people at the confluence of three slow rivers, today home to branches of several defense contractors and a regionally famous zoo. Wayne was an early American military leader who in 1794 built a frontier fort here, during the period when white settlers from the young United States were pushing west into the Ohio Valley and coming into conflict with Native Americans. As a piece of city business, having a day in Fort Wayne to honor Anthony Wayne might seem—from a distance at least—to be correcting an oversight. Why wouldn’t the city have a founder’s day?
At the meeting, Arp, a stocky man in his mid-40s with close-cropped hair, leaned back and put forward a glowing portrait of Wayne as a hero and role model, clicking through a slideshow. Not only had Wayne won the Northwest Indian War and established the outpost that grew into the city, but he had earlier won fame in the American Revolution for leading a charge that overran a British outpost at Stony Point, New York. This feat, Arp said, had thwarted a British attempt to capture top American military leaders like George Washington and Henry Knox after the traitor Benedict Arnold had revealed their location. “We can thank Anthony Wayne for the fact that we even have a United States of America,” he said.
He proposed that July 16, the date of the 1779 Stony Point raid, be celebrated as Wayne Day.
Arp’s account of Wayne’s life and contributions to American history is not the universal consensus among historians, and the details appeared unfamiliar to the other city council members. Watching a video of this meeting later, I was not surprised. Local public schools had taught us almost nothing about Wayne. Occasional Cub Scout visits to the “Old Fort”—a replica fort built in the run-up to its bicentennial and staffed by costumed reenactors—were primarily about spectacles like a blacksmith hammering red-hot iron into a horseshoe and soldiers raising a flag or firing a deafening blank from a cannon, with light discussion of history. To the extent we thought about Wayne himself, the version we grew up with amounted to something like this: Indians had been giving settlers a hard time, and Wayne solved the problem.
Arp described this, in a way. His presentation detailed a military campaign in which Wayne defeated a regional alliance of native tribes, omitting most of the larger context about what had led to the war. His resolution also softened any moral discomfort raised by the basic circumstances of Wayne’s feat—he led outside invaders to victory over people trying to defend their homes—by making the claim that the Indians had been “British led,” reframing the conflict as a struggle driven by two factions of white people.
The council meeting went on to address more typical business, like approving a maintenance contract for a water treatment plant. The impact of Arp’s resolution on Wayne Day would be very different. The people in the room might not have fully appreciated it when they arrived, but the culture wars had just come to Fort Wayne.
It wasn’t surprising that Arp had been the one to fire the first shot. He had stood apart from most of his colleagues on the city council since his election in 2015, embodying a local version of the strain of Republican politics that branded itself as the Tea Party in opposition to Barack Obama and that has tightened its grip on the GOP in the Donald Trump era. Though Arp was one of seven Republicans on the nine-member council, he often found himself fighting against the majority in his own party, as well as the two Democrats. A former mortgage-backed securities trader who told me he primarily lives off his investments, he voted against budgets and redevelopment project subsidies that the council nevertheless passed; he unsuccessfully proposed to eliminate a tax that funds local libraries and public schools. (His own daughters are home-schooled.) Arp went so far as to release a scorecard on his colleagues based on whether their votes supported or opposed activity by the government, which he translated into a choice between an “authoritarian” mindset and “liberty.” He tarred not only the two Democrats on the council as authoritarian-leaning, but also its five more traditional Republican members.
Arp pitched his Wayne Day resolution at a moment when he needed to shore up support and attract attention. Near his term’s end, he was facing a contested primary. His opponent, a more traditional Republican, won a few prominent endorsements, putting Arp’s political future in doubt. But that was before the Wayne Day fight—which would arouse the anger of a local clergyman, paralyze the city’s historical society, and surprise Fort Wayne with unhappy emissaries from a distant tribal nation in Oklahoma.
The summer of 2020 has become a time of asking questions about what America commemorates and why. The nationwide Black Lives Matter protests that followed the police killing of George Floyd in Minneapolis are growing into a broader moment of reckoning, as the push to take down Confederate Civil War monuments expands into reconsidering statues and naming honors for historical figures like Christopher Columbus, Woodrow Wilson and Margaret Sanger. Business owners are dropping longstanding Native American stereotypes as logos and mascots from products like Land O’Lakes Butter and the no-longer Washington Redskins; frictions are erupting over calls to paint over New Deal-era murals in schools that depict slavery and Indian warfare.
But these tensions usually flare around whether to take down existing things and retire established traditions. Such debates tend to be complicated by nostalgia and the impulse to preserve things as we are accustomed to them. Wayne Day was different: This was an attempt to create something new.
On one level, to grow up in Fort Wayne was to be saturated in references to Anthony Wayne and the Native Americans he fought. I opened my first savings account at a branch of Anthony Wayne Bank, across Anthony Boulevard from an ice cream parlor that served massive “Mad Anthony” sundaes. The names of tribes that originally lived here and their chiefs also adorn schools, streets, libraries and camps. The most prominent was Chief Little Turtle of the Miami tribe, whose de facto capital town, Kekionga, had stood here long before Wayne came.
Yet most of us would have been hard-pressed to detail who any of these people were, or to explain the way that Fort Wayne’s submerged and sometimes unpleasant history helped shape modern-day America.
I have no recollection of anyone explaining that the name of our main geographical feature—the Maumee River, which forms here where two lesser waterways merge and flows to Toledo, Ohio, where it empties into Lake Erie—came from the Miami tribe. Nor was I taught that as white Americans pushed west, the Miami invited refugee tribes dislodged from their homelands, the Shawnee and the Delaware, to resettle here.
The fort that Wayne built here was soon abandoned by the army as the frontier kept shifting west, and while the white settlement that grew up around the site boomed for a period as an industrial center, it was always a minor place compared with other cities that grew up around frontier forts in the Midwest, especially Chicago and Detroit. But for a period, this spot was of singular importance in North America.
As a principal Native American settlement under the control of the Miami tribe, it flourished for most of the 18th century because it controlled the shortest overland connection between two vast river networks, linking Quebec and the Great Lakes region to the Mississippi Valley and its seaport at New Orleans. This land portage made it a crossroads both for the lucrative North American fur trade, and for Algonquin tribes throughout what we now call the Midwest.
Already a crucial hub, the cluster of native towns here then served as the military headquarters for a multi-tribal alliance—sometimes called the Western Confederacy—that battled white America for control of the entire Midwest after the Revolutionary War. The war ended when General “Mad” Anthony Wayne, carrying out the policies of President George Washington, subjugated the tribal confederation’s warriors in battle and then systematically burned down their villages and destroyed their food stores ahead of winter, breaking the tribes’ will to keep resisting. Wayne died soon after negotiating a peace treaty, but his conquest unleashed hundreds of thousands of white settlers to rapidly transform the Old Northwest into the new states of Ohio, Indiana, Illinois, Michigan, and Wisconsin. Most of its original inhabitants, especially those like the Miami who lived in the lower three states, would be removed within two generations.
Perhaps it was not surprising we hadn’t been taught about the . . .
Continue reading. There’s much more and it’s interesting.
Age of Invention: The Agglomerationists
Anton Howes writes:
The other day, economic historian Tim Leunig tagged me into a comment on twitter with the line “intellectually I think the biggest change since settled agriculture was the idea that most people could live in cities and not produce food”. What’s interesting about that, I think, is the idea that this was not just an economic change, but an intellectual one. In fact, I’ve been increasingly noticing a sort of ideology, if one can call it that, which seemingly took hold in Britain in the late sixteenth century and then became increasingly influential. It was not the sort of ideology that manifested itself in elections, or even in factions, but it was certainly there. It had both vocal adherents and strenuous opponents, the adherents pushing particular policies and justifying them with reference to a common intellectual tradition. Indeed, I can think of many political and economic commentators who are its adherents today, whether or not they explicitly identify as such.
Today, the people who hold this ideology will occasionally refer to themselves as “urbanists”. They are in favour of large cities, large populations, and especially density. They believe strongly in what economists like to call “agglomeration effects” — that is, if you concentrate people more closely together, particularly in cities, then you are likely to see all sorts of benefits from their interactions. More ideas, more trade, more innovation, more growth.
Yet urbanism as a word doesn’t quite capture the full scope of the ideology. The group also heavily overlaps with natalists — people who think we should all have more babies, regardless of whether they happen to live in cities — and a whole host of other groups, from pro-immigration campaigners, to people setting up charter cities, to advocates of cheaper housing, to enthusiasts for mass transit infrastructure like buses, trams, or trains. The overall ideology is thus not just about cities per se — it seems a bit broader than that. Given the assumptions and aims that these groups hold in common, perhaps a more accurate label for their constellation of opinions and interests would be agglomerationism.
So much for today. What is the agglomerationist intellectual tradition? In the sixteenth century, one of the mantras that keeps cropping up is the idea that “the honour and strength of a prince consists in the multitude of the people” — a sentiment attributed to king Solomon. It’s a phrase that keeps cropping up in some shape or form throughout the centuries, and used to justify a whole host of agglomerationist policies. And most interestingly, it’s a phrase that begins cropping up when England was not at all urban, in the mid-sixteenth century — only about 3.5% of the English population lived in cities in 1550, far lower than the rates in the Netherlands, Italy, or Spain, each of which had urbanisation rates of over 10%. Even England’s largest city by far, London, was by European standards quite small. Both Paris and Naples were at least three times as populous (don’t even mention the vast sixteenth-century metropolises of China, or Constantinople).
Given their lack of population or density, English agglomerationists had a number of role models. One was the city of Nuremberg — through manufactures alone, it seemed, a great urban centre had emerged in a barren land. Another was France, which in the early seventeenth century seemed to draw in the riches to support itself through sheer exports. One English ambassador to France in 1609 noted that its “corn and grain alone robs all Spain of their silver and gold”, and warned that it was trying to create still new export industries like silk-making and tapestry weaving. (The English rapidly tried to do the same, though with less success.) France may not have been especially urban either, but Paris was already huge and on the rise, and the country’s massive overall population made it “the greatest united and entire force of any realm or dominion” in Christendom. Today, the populations of France and Britain are about the same, but in 1600 France’s was about four times as large. Some 20 millions compared to a paltry 5. If Solomon was right, then England had a lot of catching up to do to even approach France in honour.
Most important of all the role models, however, was the Dutch Republic. Here was a nation that could not possibly feed itself using its own land, but which nonetheless was extremely populous and wealthy. It made itself, as one English admirer put it, “powerful and rich in all kinds, by merchandise, manufactory, and fullness of trade, having no commodities in their own country growing”. With low tariffs and a vast merchant fleet, Amsterdam had become the global entrepôt, sucking in commodities from all across the world and then re-exporting them to the rest of Europe. And through high-value manufactures, it paid for whatever imports of raw materials it needed to support its population. It wove and dyed English wool into expensive fine cloth, then sold it back at a profit to the very English. It built its ships from Baltic timber, then used those ships to dominate the Baltic trade. Despite having “not a timber tree growing in their country”, they somehow still were able to build and maintain the largest merchant and fishing fleet in Europe. Observers even marvelled at how a country that produced hardly any grain of its own could relieve food shortages in England, which was usually a major grain exporter.
To emulate the Dutch, English agglomerationists recommended . . .
The Oracle of Delphi: The Apex of Ancient Greece
Jimmy Maher has a book, The Pyramids of Giza, on-line and also available as a Kindle book. Chapter XIX begins:
After a war is lost the losers must decide among themselves who was most to blame and who was simply a victim of circumstance. But the converse is also true: after a war is won, the victors must decide among themselves who has earned the biggest share of the glory and who could have done more for the cause. Greece was no exception to this rule as it looked about itself dazedly in 479 BC, still trying to understand how its improbable deliverance had come about. The Persian War became Greece’s collective crucible of character. For many years to come, “What did you do in the war?” would be the question on every tongue, to be asked of individual men and cities alike. The answer could be either a source of the greatest pride or the most abject shame.
The actions of Delphi were scrutinized as closely as those of any other city. In these increasingly skeptical times, when some were beginning to look upon the old beliefs with a decidedly jaundiced eye, there were insinuations made about the lack of confidence in the oracle’s countrymen which many of her prophecies had conveyed before the war; there were whispers that Delphi’s supposed divine rescue from Mardonius’s army was, at best, a distortion of the full truth; there was talk of secret deals and dishonorable transactions meant to preserve Delphi’s special status after a Greek defeat.
Yet most Greeks proved willing to give Delphi and its oracle the benefit of the doubt. For, as a new sense of shared Greekness swept the land in the wake of the great victory, Delphi remained in some ineffable sense the seat of Greek consciousness. One might even say that the living soul of Greece still burned there.
Accordingly, the oracle ordered every other city in the land to extinguish the fires that burned in its temples and sacred spaces, saying that all of them had been polluted by the barbarian invaders. Then she ordered the fires relighted using torches kindled at her own temple’s sacred flame. One zealous soul from the city of Plataea ran from Delphi to his home in a single day in order to make Plataea’s temple of Artemis the first site to rekindle its flame. He dropped dead upon his arrival, whereupon he was buried with honor in the temple itself.
The oracle forgot the timidity that had marked her tenure prior to the war and leaned into her power with abandon. Apparently forgetting about her own gloomy prewar prophecies, she ordered that every city in Greece which had surrendered to the Persians, or simply hadn’t given everything it could have to the war effort, should pay a stiff tithe to the god of Delphi by way of repentance for its lack of faith. Part of this money was used to build a grand monument to the heroes of the war on the very terrace of Apollo’s temple. It took the form of a statue of the god almost as tall as the temple itself, pulverizing a Persian ship with his hands. The rest of the money was used for another monumental sculpture: three golden stars on a life-sized ship’s mast of bronze, which took pride of place at the other entrance to the temple, between the offering bowls of Croesus.
But the grandest monument of all was built using the spoils of war which the Persians themselves had left strewn over the landscape behind them as they retreated. Just in front of the temple, there was erected a column soaring higher than any other structure in Delphi. Atop it were mounted three serpents made of gold and bronze, supporting a golden tripod whose distinctive gleam could be seen on the horizon from leagues away on a sunny day. It was Delphi’s crowning glory.
These monuments were unprecedented in that they were the shared project of all of Greece, tangible evidence of the new spirit of fellowship. And yet they were only the beginning of the redoubled frenzy of building which was suddenly going on in the town. Gold and marble poured into Delphi along with craftsmen to work them, as every Greek city — and not a few barbarian ones — attempted to outdo its peers in size and beauty. Delphi received more attention than most of the donor cities’ own central squares. Some cities even started to stamp Delphi’s likeness onto their coins; it had become a united Greece’s de facto capital.
The Pythian Games assumed a new importance and popularity that eclipsed those of the Olympic Games, long their only real rivals. They took place over five days at the end of summer on every fourth year, as they always had, but expanded dramatically in scale and variety. Not only all manners of athletic competition but every form of performance art was on offer, along with grand exhibitions of painting and sculpture. Delicacies from abroad were heaped upon the banquet tables to satisfy the spectators’ appetites, and exotic animals were herded into cages for their amusement. The Games culminated on their final day in the chariot races, the winner of which was guaranteed eternal fame and glory — not least because Pindar of Thebes, the finest poet of his generation, was always on hand to pen an ode in the champion’s honor, placing him almost on an equal footing with the legendary heroes of the Age of Gods.
Nevertheless, this was the Age of Men, and politics remained more complicated and more time-consuming than they had been of old, when the gods could be trusted to take a more active role in affairs. Thus Delphi buzzed even when neither the Pythian Games nor any of the many smaller festivals were taking place. Its people’s winter retirement to the Corycian Cave became a matter of ceremony only, for the new Greece needed Delphi the whole year round. The rulers and diplomats who came to the town still consulted with the oracle, but just as important were their consultations with one another. Delphi was the place where deals were made, treaties negotiated, transactions conducted. Most of the major cities maintained permanent embassies at Delphi, which buildings became, like so much else in the town, a way to demonstrate via their size and opulence the power and wealth of the ones who had built them.
There was no question which city came out of the Persian War with the biggest surfeit of both. While . . .
The weight of air
Anton Howes, writing in Age of Invention:
Why was the steam engine invented in England? An awful lot hinges on this question, because the answer often depends on our broader theories of what caused the British Industrial Revolution as a whole. And while I never tire of saying that Britain’s acceleration of innovation was about much, much more than just the “poster boy” industries of cotton, iron, and coal, the economy’s transition to burning fossil fuels was still an unprecedented and remarkable event. Before the rise of coal, land traditionally had to be devoted to either fuel, food, or clothing: typically forest for firewood, fields for grain, and pastures for wool-bearing sheep. By 1800, however, English coal was providing fuel each year equivalent to 11 million acres of forest — an area that would have taken up a third of the country’s entire surface area, and which was many times larger than its actual forest. By digging downward for coal, Britain effectively increased its breadth.
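As a quick sanity check on that acreage comparison, here is a minimal back-of-the-envelope sketch in Python; the land-area figure for England (roughly 50,300 square miles) is my own illustrative assumption, not a number from Howes’s piece:

```python
# Rough sanity check of the "11 million acres of forest" claim.
# Assumptions (not from the article): England's land area is about
# 50,300 square miles, and 1 square mile = 640 acres.
england_sq_miles = 50_300
acres_per_sq_mile = 640
england_acres = england_sq_miles * acres_per_sq_mile  # ~32 million acres

coal_equivalent_acres = 11_000_000  # figure quoted in the article

share = coal_equivalent_acres / england_acres
print(f"England's area: {england_acres / 1e6:.1f} million acres")
print(f"Forest-equivalent share: {share:.0%}")  # ~34%, i.e. about a third
```

On those assumed figures the quoted claim checks out: 11 million acres is close to a third of England’s surface area.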
And coal found new uses, too. It had traditionally just been one among many different fuels that could be used to heat homes, alongside turf, gorse, firewood, charcoal, and even cow dung. When such fuels were used for industry, they were generally confined to the direct application of heat, such as in baking bricks, evaporating seawater to extract salt, firing the forges for blacksmiths, and heating the furnaces for glass-makers. Over the course of the seventeenth century, however, coal had increasingly become the fuel of choice for both heating homes and for industry. Despite its drawbacks — it was sooty, smelly, and unhealthy — in places like London it remained cheap while the price of other fuels like firewood steadily increased. More and more industries were adapted to burning it. It took decades of tinkering and experimentation, for example, to reliably use coal in the smelting of iron.
Yet with the invention of the steam engine, the industrial uses of coal multiplied further. Although the earliest steam engines generally just sucked the water out of flooded mines, by the 1780s they were turning machinery too. By the 1830s, steam engines were having a noticeable impact on British economic growth, and had been applied to locomotion. Steam boats, steam carriages, steam trains, and steam ships proliferated and began to shrink the world. Rather than just a source of heat, coal became a substitute for the motive power of water, wind, and muscle.
So where did this revolutionary invention come from? There were, of course, ancient forms of steam-powered devices, such as the “aeolipile”. Described by Hero of Alexandria in the 1st century, the aeolipile consisted of a hollow ball with nozzles, configured in such a way that the steam passing into the ball and exiting through the nozzles would cause the ball to spin. But this was more like a steam turbine than a steam engine. It could not do a whole lot of lifting. The key breakthroughs came later, in the late seventeenth and early eighteenth centuries, and instead exploited vacuums. In a steam engine the main force was applied, not by the steam itself pushing a piston, but by the steam within the cylinder being doused in cold water, causing it to rapidly condense. The resulting partial vacuum meant that the weight of the air — the atmospheric pressure — did the real lifting work. The steam was not there to push, but to be condensed and thus pull. It saw its first practical applications in the 1700s thanks to the work of a Devon ironmonger, Thomas Newcomen.
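To get a rough feel for how much lifting “the weight of the air” could do once the steam was condensed, here is a minimal illustrative sketch; the cylinder diameter and residual pressure are assumed figures, broadly in line with what is commonly reported for early Newcomen engines, and are not taken from the article:

```python
import math

# Force available to an atmospheric (Newcomen-style) engine when steam in the
# cylinder is condensed and atmospheric pressure pushes the piston down.
# Assumed, illustrative figures -- not from the article:
atmospheric_pressure = 101_325   # Pa, standard atmosphere
cylinder_diameter = 0.53         # m, roughly 21 inches
residual_pressure = 15_000       # Pa, condensation never yields a perfect vacuum

piston_area = math.pi * (cylinder_diameter / 2) ** 2          # ~0.22 m^2
net_force = (atmospheric_pressure - residual_pressure) * piston_area

print(f"Piston area: {piston_area:.2f} m^2")
print(f"Net downward force: {net_force / 1000:.0f} kN "
      f"(~{net_force / 9.81 / 1000:.1f} tonnes-force)")
```

On those assumptions the atmosphere supplies a net force of roughly two tonnes-force on the piston, which suggests why a condensing “vacuum” engine could pump water out of deep, flooded mines.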
Science was important here. Newcomen’s engine could never have been conceived had it not been for the basic and not at all obvious observation that the air weighed something. It then required decades of experimentation with air pumps, barometers, and even gunpowder, before it was realised that a vacuum could rapidly be created through the condensation of steam rather than by trying to suck the air out with a pump. And it was still more decades before this observation was reliably applied to exerting force. An important factor in the creation of the steam engine was thus that there was a sufficiently large and well-organised group of people experimenting with the very nature of air, sharing their observations with one another and publishing — a group of people who, in England, formalised their socialising and correspondence in the early 1660s with the creation of the Royal Society.
Yet many of the early experimenters with the nature of air, such as Evangelista Torricelli, Otto von Guericke, Denis Papin, and Christiaan Huygens, were Italian, German, French, and Dutch. The likes of Huygens and Papin may have been involved with England’s Royal Society, but the institution was just a branch of a much broader network of European scientists. Why, then, did the scientific observations only find their practical application in England? One influential answer is to do with England’s resource costs, and especially the presence of coal mines. The economic historian Robert C. Allen, for example, while fully recognising the key role played by seventeenth-century science, argues that “had the British coal industry not existed, there would have been no point going to the expense of developing the steam engine”.
Allen explains that the early engines were so energy-hungry that they were only cost-effective if applied directly to the source of their own fuel: the coal mines. It was where the fuel was cheapest. Thus, Allen argues that as a result of Britain having lots of coal mines, the nascent steam engine was kept alive as a technology for long enough that it could be developed and improved by various other inventors, until such time as it could be made cost-effective to apply it to other kinds of mine and then to rotary motion too. As Britain simply had more coal mines than anywhere else, Allen argues, British-based inventors were thus responsible for more of the steam engine’s improvements. Coal was the cradle for the infant industry.
But I don’t think this is quite right. The devil here is in the detail. Allen himself notes that although the first engine was made public in 1712 at a coal mine, Thomas Newcomen’s earliest experiments with steam engines, c.1700-10, occurred in Cornwall. The problem for Allen’s narrative here is that the earliest engines would thus have been used for pumping mines of tin, not coal. Indeed, I can hardly think of a more expensive place than Cornwall to have tried to invent a coal-fuelled steam engine. I say this because just last week I noticed some interesting details about Cornish fuel supplies in the travel accounts of Celia Fiennes, who meticulously recorded her visit to Cornwall in 1698 — just a few years before Newcomen’s very first engines were erected. Fiennes noted the same problem that Newcomen specifically set out to solve, which was that the tin mines near Cornwall’s southern coast had recently become especially prone to flooding. But she also commented on severe shortages of two of the most necessary resources: Cornwall, Fiennes noted, was almost entirely devoid of wood, which was a crucial material in the construction of any kind of pumping engine, regardless of whether it was powered by horse, water, or steam. This startup cost was likely offset for Newcomen by the fact that the government in the 1700s had begun to subsidise tin production. Investment was thus forthcoming. (It helped that one of the most powerful British ministers was the Earl of Godolphin, a major Cornish owner of tin mines.) But the other noticeably lacking resource, at least in Cornwall’s south, was coal.
Cornwall’s coal was largely shipped from Wales or Bristol, on Britain’s western coast. Yet Fiennes noted that during wartime these ships found it much riskier to sail around Land’s End and into the English Channel, due to the threat from French pirates that might be lying in wait. Cornwall’s northern coast was thus able to get Bristol coal cheaply, but during wartime the southern coast was hardly supplied with any kind of fuel at all, be it coal or even firewood. Fiennes described how her supper could be boiled by burning the abundant local gorse, but that roasting a meal was reserved only for special occasions because it required scarce firewood. When Fiennes was writing, the Nine Years’ War had just ended and the southern coast of Cornwall was beginning to receive its regular coal and wood supplies again. But an even more extensive conflict with France . . .
How did ancient cities weather crises?
Andrew Robinson has an interesting book review in Nature:
The Life and Death of Ancient Cities: A Natural History. Greg Woolf. Oxford Univ. Press (2020).
For millennia, cities have generated power, wealth, creativity, knowledge and magnificent buildings. They have also incubated hunger, violence, war, inequality and disease — as we’ve so painfully experienced this year. The coronavirus pandemic has shaken our faith in urban life, as lockdowns have emptied streets that are home to more than half the world’s population. Basic supply networks have been revealed as fragile, and the densely packed social groups that are engines of income, support and enjoyment have become a source of peril.
As the pandemic forces us to contemplate the future of cities — three-quarters of the world’s people could live in urban areas by 2100 — historian Greg Woolf examines their past. His latest book is a deeply researched and ambitious “natural history” of the origins and growth of urbanism. Woolf is an expert on ancient Rome, the city with the highest population in antiquity — at its peak around 2,000 years ago, a mind-boggling one million people lived there, some 0.3% of the global population. That was in the reign of the emperor Augustus (27 BC to AD 14).
The Life and Death of Ancient Cities spans from the Bronze Age, starting in the fourth millennium BC, to the early part of the Middle Ages, in the first millennium AD. It focuses on the hundreds of ancient Mediterranean cities that sprang up during this time, including Alexandria, Antioch, Athens, Byzantium and Carthage, as well as Rome. Woolf synthesizes intriguing insights from the humanities, social sciences, climatology, geology and biology. He explains that the neoclassical buildings of modern cities, such as London’s British Museum, give a false impression. The famous centres of antiquity were “far less grandiose” — Athenian assemblies, for example, debated in the open air. He wryly notes that rats and humans thrive in cities, because both can survive on diverse food sources and cope with prolonged periods of hunger.
When did cities first appear? The answer depends on definitions. In today’s Nicaragua, notes Woolf, any settlement with street lights and electricity counts as a city. In Japan, a population greater than 50,000 is required. A prime candidate for the world’s first city is perhaps Jericho in what is now the Palestinian territories. It was founded before 9000 BC and about a millennium later had a wall — the earliest such barrier discovered. But Jericho’s population at the time is uncertain. Estimates range from a few hundred to 2,000 or 3,000. As Woolf observes, it is tricky to determine population size in early societies without written records. One option is to analyse the water supply to work out how many people it could have served, but this reveals maximum carrying capacity rather than use, and struggles to take into account public baths and fountains.
Like most specialists, Woolf prefers to give the title of first city to Uruk, in Mesopotamia. This settlement had an estimated 10,000–20,000 inhabitants in 4000 BC, rising to between 60,000 and 140,000 after a massive protective wall, ascribed to King Gilgamesh, was built around 2900 BC. Here, in the late fourth millennium BC, writing probably originated in the form of cuneiform script on clay tablets, used to record bureaucratic information such as economic transactions. One such tablet displays the world’s oldest known mathematical calculation, of the surface area of a roughly rectangular field. Yet the factors that drove the creative outburst that built the city remain mysterious. As Woolf admits: “For all the attention that has been devoted to the Uruk phenomenon, there is still no consensus about why it happened.”
Mediterranean metropolises
Cities arrived much later in the ancient Mediterranean. Athens became an important centre of the Mycenaean civilization around 1400 BC; Rome was founded in the eighth century BC; Alexandria dates from 332 BC. Mediterranean farmers generally lacked access to the flood water and fertile alluvial mud provided by Mesopotamia’s great rivers, the Tigris and Euphrates. For centuries, people lived in villages and hamlets rather than cities, which are at risk from crop failures and water shortages. Again, the driving forces are often debatable. The chief period of Roman urbanization is now known to have coincided with a period of increased temperatures during the last century BC and first two centuries AD. But, as Woolf warns, this might be a coincidence: “It is perfectly possible to explain urbanization without recourse to climate change.”
Another source of uncertainty is how ancient diseases affected urban centres. Written accounts suggest, for example, that . . .
How Pandemics Wreak Havoc—and Open Minds
Lawrence Wright writes in the New Yorker:
Great crises tend to bring profound social change, for good or ill. The consequences of wars and economic depressions have been amply studied; the consequences of pandemics, less so. This spring, in order to understand our possible future, I decided to look at the past through the eyes of Gianna Pomata, a retired professor at the Institute of the History of Medicine, at Johns Hopkins University. When we first talked, on Skype, she immediately compared COVID-19 to the bubonic plague that struck Europe in the fourteenth century—“not in the number of dead but in terms of shaking up the way people think.” She went on, “The Black Death really marks the end of the Middle Ages and the beginning of something else.” That something else was the Renaissance.
Since retiring, Pomata has returned to her home town, the old city of Bologna. “You know Bologna, right?” she asked in that first conversation, on March 27th. Decades ago, I was the best man at a wedding there. I recalled the giant churches, the red-tiled roofs, the marble walkways under arched porticoes; a stately city, low-slung, amber-hued, full of students and indomitable old couples. During the Middle Ages, Bologna was home to more than a hundred towers, the skyscrapers of their era, which served as showplaces of wealth and ambition for powerful oligarchs. Two of the remaining ones have become symbols of Bologna: one slightly out of plumb, the other as cockeyed as its cousin in Pisa. “You remember the Piazza Maggiore, the very heart of the city near the two towers?” Pomata said. “That’s where I live.”
Pomata’s country had been in a nationwide lockdown since March 10th. “In Italy, the streets are always crowded, night and day,” she said. “Our cities are medieval, made for a different way of life—not for cars but for people. Right now, to see them empty of people is so sad.” When we spoke, the number of confirmed cases in Italy had reached eighty-six thousand. Only the United States had a higher number, having just eclipsed China.
Pomata, who is sixty-nine, has brown hair, with a long, open face. That day, tortoiseshell glasses rested at half-mast on her nose, beneath upward-pointing, quizzical eyebrows. Like me, she was beginning to show the pallor of confinement. Having spent much of her adult life in the United States, she spoke English with little accent, but she retained an Italian lilt, lingering on the broad vowels.
I asked Pomata to imagine walking out of her apartment six hundred and seventy-two years ago, during the Black Death. How would Bologna appear different? “If you try to imagine a plague-stricken city in the Middle Ages, the first thing you’d see would be dead people on the streets,” she said. “Just as we have to send the Army to take coffins to crematories in other cities, as in Bergamo right now, in the Middle Ages they couldn’t cope with so many dead. The bodies just piled up on the streets.” She paused and said, “I don’t have an idyllic vision of the Middle Ages.”
Italy at the beginning of the fourteenth century was a conglomeration of prosperous city-states that had broken free of the feudal system. Some of them, such as Venice, formed merchant republics, which became seedbeds for capitalism. Venice and other coastal cities, including Genoa, Pisa, and Amalfi, set up trading networks and established outposts throughout the Mediterranean and as far away as the Black Sea. Other Italian cities, such as Bologna, became free communes, which meant that peasants fleeing feudal estates were granted freedom once they entered the city walls. Serfs became artisans. A middle class began to form. The early fourteenth century was robust and ambitious. Then, suddenly, people began to die.
Bologna was a stronghold of medical teaching. The city’s famous university, established in 1088, is the oldest in the world. “What they had we call scholastic medicine,” Pomata told me. “When we say ‘scholastic,’ we mean something that is very abstract, not concrete, not empirical.” European scholars at the time studied a number of classical physicians—including Hippocrates, the Greek physician of the fifth century B.C. who is considered the father of medicine, and Galen, the second-century Roman who was the most influential medical figure in antiquity—but scholastic medicine was confounded with astrological notions. When the King of France sought to understand the cause of the plague, the medical faculty at the University of Paris blamed a triple conjunction of Saturn, Jupiter, and Mars in the fortieth degree of Aquarius, which had occurred on March 20, 1345.
“Whether it descended on us mortals through the influence of the heavenly bodies or was sent down by God in His righteous anger to chastise us because of our wickedness, it had begun some years before in the East,” Giovanni Boccaccio wrote in the Decameron, which was completed by 1353 and is set during the plague in Florence. “At its onset, in men and women alike, certain swellings would develop in the groin or under the armpits, some of which would grow like an ordinary apple and others like an egg.” These pus-filled swellings, called buboes, were inflammations of the lymph nodes. They eventually erupted. Internal organs broke down in a bloody froth, and bodies darkened with gangrene, which is why the plague came to be called the Black Death.
Before arriving in Italy, the rampaging contagion had already killed millions of people as it burned through China, Russia, India, Persia, Syria, and Asia Minor. It was said that there were entire territories where nobody was left alive. The source of the disease was sometimes thought to be “miasma,” or air that was considered unhealthy, such as sea breezes. Paradoxically, there was also a folk belief that attendants who cleaned latrines were immune, which led some people to confine themselves for hours a day amid human waste, absorbing the presumed medicinal odors. “The advice of doctors and the power of medicine appeared useless and unavailing,” Boccaccio wrote. Some people maintained that “the surest medicine for such an evil disease was to drink heavily, enjoy life’s pleasures, and go about singing and having fun, satisfying their appetites by any means available, while laughing at everything.” Others, he observed, “formed themselves into companies and lived in isolation from everyone else.” The Decameron tells of ten friends who shelter in place, entertaining one another with stories while the plague assails Florence. These ribald tales pay little heed to medieval notions of sacredness or piety; indeed, the society that the sequestered young people describe is amoral and cheerfully hypocritical. Priests are portrayed as stupid, lustful, greedy connivers. Illicit sex is exalted. The earthy realism of the Decameron, written in Italian vernacular rather than in classical Latin verse, sounded one of the opening notes of the Renaissance.
Pomata told me, “What happens after the Black Death, it’s like a wind—fresh air coming in, the fresh air of common sense.” The intellectual overthrow of the medieval scholastic-medicine establishment was driven by doctors who set aside the classical texts and gradually turned to empirical evidence. It amounted to a revival of empirical medical science, which had been largely abandoned after the fall of ancient Rome, a thousand years earlier. “After the Black Death, nothing was the same,” Pomata said. “What I expect now is something as dramatic is going to happen, not so much in medicine but in economy and culture. Because of danger, there’s this wonderful human response, which is to think in a new way.”
In the fourteenth century, Tartar warriors in Crimea laid siege to the Black Sea port city of Caffa, which was owned by a group of wealthy Genoese traders. Like so many armies in history, the Tartars were also fighting an unseen enemy: they carried with them a horrible disease, which killed some victims in a few days, and left others to die in indolent agony. Before retreating from Caffa, the Tartar general, Khan Jani Beg, ordered the diseased bodies of dead warriors catapulted over the city walls, in one of the first instances of biological warfare. Panicked citizens took to boats, navigating through the Dardanelles into the Aegean Sea and the Mediterranean. A dozen ships made it to Sicily, in October, 1347.
Sicilians were appalled to find on their shores boats with dead men still at their oars. Other sailors, dead or barely alive, were in their bunks, covered with foul-smelling sores. The horrified Sicilians drove the ships back to sea, but it was too late. Rats and fleas . . .
Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages
A very interesting, very lengthy, profusely illustrated post by Ex Urbe at the eponymous website:
“If the Black Death caused the Renaissance, will COVID also create a golden age?”
Versions of this question have been going around as people, trying to understand the present crisis, reach for history’s most famous pandemic. Using history to understand our present is a great impulse, but it means some of the false myths we tell about the Black Death and Renaissance are doing new damage, one of the most problematic in my view being the idea that sitting back and letting COVID kill will somehow by itself naturally make the economy turn around and enter a period of growth and rising wages.
Brilliant Medievalists have been posting Black Death pieces correcting misconceptions and flailing as one does when an error refuted 50 times returns the 51st (The Middle Ages weren’t dark and bad compared to the Renaissance!!!). As a Renaissance historian, I feel it’s my job to shoulder the other half of the load by talking about what the Renaissance was like, confirming that our Medievalists are right, it wasn’t a better time to live than the Middle Ages, and to talk about where the error comes from, why we think of the Renaissance as a golden age, and where we got the myth of the bad Middle Ages.
Only half of this is a story about the Renaissance. The other half is later: Victorian Britain, Italy’s unification, World Wars I and II, the Cold War, ages in which the myth of the golden Renaissance was appropriated and retold. And yes, looking at the Black Death and Renaissance is helpful for understanding COVID-19’s likely impact, but in addition to looking at 1348 we need to look at its long aftermath, at the impact Yersinia pestis had on 1400, and 1500, and 1600, and 1700. So:
- This post is for you if you’ve been wondering whether Black Death => Renaissance means COVID => Golden Age, and you want a more robust answer than, “No no no no no!”
- This post is for you if you’re tired of screaming The Middle Ages weren’t dark and bad! and want somewhere to link people to, to show them how the myth began.
- This post is for you if you want to understand how an age whose relics make it look golden in retrospect can also be a terrible age to live in.
- And this post is for you if you want to ask what history can tell us about 2020 and come away with hope. Because comparing 2020 to the Renaissance does give me hope, but it’s not the hope of sitting back expecting the gears of history to grind on toward prosperity, and it’s not the hope for something like the Renaissance—it’s hope for something much, much better, but a thing we have to work for, all of us, and hard.
I started writing this post a few weeks ago but rapidly discovered that a thorough answer will be book-length (the book’s now nearly done in fact). What I’m sharing now is just a precis, the parts I think you’ll find most useful now. So sometimes I’ll make a claim without examples, or move quickly over important things, just linking to a book instead of explaining, because my explanation is approaching 100,000 words. That book will come, and soon, but meanwhile please trust me as I give you just urgent parts, and I promise more will follow.
Continue reading. There’s very much more, and also interesting comments.
It Can Happen Here
In the New York Review of Books Cass R. Sunstein reviews a couple of ominous books about how a public can blind itself to what is happening (review also available in complete form here):
They Thought They Were Free: The Germans, 1933–45
by Milton Mayer, with a new afterword by Richard J. Evans
University of Chicago Press, 378 pp., $20.00 (paper)
Broken Lives: How Ordinary Germans Experienced the Twentieth Century
by Konrad H. Jarausch
Princeton University Press, 446 pp., $35.00
Liberal democracy has enjoyed much better days. Vladimir Putin has entrenched authoritarian rule and is firmly in charge of a resurgent Russia. In global influence, China may have surpassed the United States, and Chinese president Xi Jinping is now empowered to remain in office indefinitely. In light of recent turns toward authoritarianism in Turkey, Poland, Hungary, and the Philippines, there is widespread talk of a “democratic recession.” In the United States, President Donald Trump may not be sufficiently committed to constitutional principles of democratic government.
In such a time, we might be tempted to try to learn something from earlier turns toward authoritarianism, particularly the triumphant rise of the Nazis in Germany in the 1930s. The problem is that Nazism was so horrifying and so barbaric that for many people in nations where authoritarianism is now achieving a foothold, it is hard to see parallels between Hitler’s regime and their own governments. Many accounts of the Nazi period depict a barely imaginable series of events, a nation gone mad. That makes it easy to take comfort in the thought that it can’t happen again.
But some depictions of Hitler’s rise are more intimate and personal. They focus less on well-known leaders, significant events, state propaganda, murders, and war, and more on the details of individual lives. They help explain how people can not only participate in dreadful things but also stand by quietly and live fairly ordinary days in the midst of them. They offer lessons for people who now live with genuine horrors, and also for those to whom horrors may never come but who live in nations where democratic practices and norms are under severe pressure.
Milton Mayer’s 1955 classic They Thought They Were Free, recently republished with an afterword by the Cambridge historian Richard J. Evans, was one of the first accounts of ordinary life under Nazism. Dotted with humor and written with an improbably light touch, it provides a jarring contrast with Sebastian Haffner’s devastating, unfinished 1939 memoir, Defying Hitler, which gives a moment-by-moment, you-are-there feeling to Hitler’s rise. (The manuscript was discovered by Haffner’s son after the author’s death and published in 2000 in Germany, where it became an immediate sensation.) A much broader perspective comes from Konrad Jarausch’s Broken Lives, an effort to reconstruct the experience of Germans across the entire twentieth century. What distinguishes the three books is their sense of intimacy. They do not focus on historic figures making transformative decisions. They explore how ordinary people attempted to navigate their lives under terrible conditions.
Haffner’s real name was Raimund Pretzel. (He used a pseudonym so as not to endanger his family while in exile in England.) He was a journalist, not a historian or political theorist, but he interrupts his riveting narrative to tackle a broad question: “What is history, and where does it take place?” He objects that most works of history give “the impression that no more than a few dozen people are involved, who happen to be ‘at the helm of the ship of state’ and whose deeds and decisions form what is called history.” In his view, that’s wrong. What matters are “we anonymous others” who are not just “pawns in the chess game,” because the “most powerful dictators, ministers, and generals are powerless against the simultaneous mass decisions taken individually and almost unconsciously by the population at large.” Haffner insists on the importance of investigating “some very peculiar, very revealing, mental processes and experiences,” involving “the private lives, emotions and thoughts of individual Germans.”
Mayer had the same aim. An American journalist of German descent, he tried to meet with Hitler in 1935. He failed, but he did travel widely in Nazi Germany. Stunned to discover a mass movement rather than a tyranny of a diabolical few, he concluded that his real interest was not in Hitler but in people like himself, to whom “something had happened that had not (or at least not yet) happened to me and my fellow-countrymen.” In 1951, he returned to Germany to find out what had made Nazism possible.
In They Thought They Were Free, Mayer decided to focus on ten people, different in many respects but with one characteristic in common: they had all been members of the Nazi Party. Eventually they agreed to talk, accepting his explanation that he hoped to enable the people of his nation to have a better understanding of Germany. Mayer was truthful about that and about nearly everything else. But he did not tell them that he was a Jew.
In the late 1930s—the period that most interested Mayer—his subjects were working as a janitor, a soldier, a cabinetmaker, an office manager, a baker, a bill collector, an inspector, a high school teacher, and a police officer. One had been a high school student. All were male. None of them occupied positions of leadership or influence. All of them referred to themselves as “wir kleine Leute, we little people.” They lived in Marburg, a university town on the river Lahn, not far from Frankfurt.
Mayer talked with them over the course of a year, under informal conditions—coffee, meals, and long, relaxed evenings. He became friends with each (and throughout he refers to them as such). As he put it, with evident surprise, “I liked them. I couldn’t help it.” They could be ironic, funny, and self-deprecating. Most of them enjoyed a joke that originated in Nazi Germany: “What is an Aryan? An Aryan is a man who is tall like Hitler, blond like Goebbels, and lithe like Göring.” They also could be wise. Speaking of the views of ordinary people under Hitler, one of them asked:
Opposition? How would anybody know? How would anybody know what somebody else opposes or doesn’t oppose? That a man says he opposes or doesn’t oppose depends upon the circumstances, where, and when, and to whom, and just how he says it. And then you must still guess why he says what he says.
When Mayer returned home, he was afraid for his own country. He felt “that it was not German Man that I had met, but Man,” and that under the right conditions, he could well have turned out as his German friends did. He learned that Nazism took over Germany not “by subversion from within, but with a whoop and a holler.” Many Germans “wanted it; they got it; and they liked it.”
Mayer’s most stunning conclusion is that with one partial exception (the teacher), none of his subjects “saw Nazism as we—you and I—saw it in any respect.” Where most of us understand Nazism as a form of tyranny, Mayer’s subjects “did not know before 1933 that Nazism was evil. They did not know between 1933 and 1945 that it was evil. And they do not know it now.” Seven years after the war, they looked back on the period from 1933 to 1939 as the best time of their lives.
Mayer suggests that even when tyrannical governments do horrific things, outsiders tend to exaggerate their effects on the actual experiences of most citizens, who focus on their own lives and “the sights which meet them in their daily rounds.” Nazism made things better for the people Mayer interviewed, not (as many think) because it restored some lost national pride but because it improved daily life. Germans had jobs and better housing. They were able to vacation in Norway or Spain through the “Strength Through Joy” program. Fewer people were hungry or cold, and the sick were more likely to receive treatment. The blessings of the New Order, as it was called, seemed to be enjoyed by “everybody.”
Even in retrospect Mayer’s subjects liked and admired Hitler. They saw him as someone who had “a feeling for masses of people” and spoke directly in opposition to the Versailles Treaty, to unemployment—to all aspects of the existing order. They applauded Hitler for his rejection of “the whole pack”—“all the parliamentary politicians and all the parliamentary parties”—and for his “cleanup of moral degenerates.” The bank clerk described Hitler as “a spellbinder, a natural orator. I think he was carried away from truth, even from truth, by his passion. Even so, he always believed what he said.”
Mayer did not bring up the topic of anti-Semitism with any of his subjects, but after a few meetings, each of them did so on his own, and they returned to it constantly. When the local synagogue was burned in 1938, most of the community was under only one obligation: “not to interfere.” Eventually Mayer showed his subjects the local newspaper from November 11, 1938, which contained a report: “In the interest of their own security, a number of male Jews were taken into custody yesterday. This morning they were sent away from the city.” None of them remembered seeing it, or indeed anything like it.
The killing of six million Jews? Fake news. Four of Mayer’s subjects insisted that the only Jews taken to concentration camps were traitors to Germany, and that the rest were permitted to leave with their property or its fair market value. The bill collector agreed that the killing of the Jews “was wrong, unless they committed treason in wartime. And of course they did.” He added that “some say it happened and some say it didn’t,” and that you “can show me pictures of skulls…but that doesn’t prove it.” In any case, “Hitler had nothing to do with it.” The tailor spoke similarly: “If it happened, it was wrong. But I don’t believe it happened.”
With evident fatigue, the baker reported, “One had no time to think. There was so much going on.” His account was similar to that of one of Mayer’s colleagues, a German philologist in the country at the time, who emphasized the devastatingly incremental nature of the descent into tyranny and said that “we had no time to think about these dreadful things that were growing, little by little, all around us.” The philologist pointed to a regime bent on diverting its people through endless dramas (often involving real or imagined enemies), and “the gradual habituation of the people, little by little, to being governed by surprise.” In his account, “each step was so small, so inconsequential, so well explained or, on occasion, ‘regretted,’” that people could no more see it “developing from day to day than a farmer in his field sees the corn growing. One day it is over his head.”
Focusing largely on 1933, in Defying Hitler Haffner offers a radically different picture, in which the true nature of Nazism was evident to many Germans from the start. Just twenty-five years old that year and studying law with the goal of becoming a judge or administrator, he describes the mounting effects of Nazism on the lives of his high-spirited friends and fellow students, who were preoccupied with fun, job prospects, and love affairs. Haffner says that as soon as the Nazis took power, he was saved by his capacity to smell the rot:
As for the Nazis, my nose left me with no doubts. It was just tiresome to talk about which of their alleged goals and intentions were still acceptable or even “historically justified” when all of it stank. How it stank! That the Nazis were enemies, my enemies and the enemies of all I held dear, was crystal clear to me from the outset.
As Haffner describes it, a form of terror began quickly, as members of the SS made their presence felt, intimidating people in public places. At the same time, citizens were distracted by an endless stream of festivities and celebrations. The intimidation, accompanied by the fervent, orchestrated pro-Nazi activity, produced an increase in fear, which led many skeptics to become Nazis. Nonetheless, people flirted, enjoyed romances, “went to the cinema, had a meal in a small wine bar, drank Chianti, and went dancing together.” Sounding here like Mayer’s subjects, Haffner writes that it was the “automatic continuation of ordinary life” that “hindered any lively, forceful reaction against the horror.”
The Myth of the Kindly General Robert E. Lee
Adam Serwer wrote in the Atlantic three years ago:
The strangest part about the continued personality cult of Robert E. Lee is how few of the qualities his admirers profess to see in him he actually possessed.
Memorial Day has the tendency to conjure up old arguments about the Civil War. That’s understandable; it was created to mourn the dead of a war in which the Union was nearly destroyed, when half the country rose up in rebellion in defense of slavery. This year, the removal of Lee’s statue in New Orleans has inspired a new round of commentary about Lee, not to mention protests on his behalf by white supremacists.
The myth of Lee goes something like this: He was a brilliant strategist and devoted Christian man who abhorred slavery and labored tirelessly after the war to bring the country back together.
There is little truth in this. Lee was a devout Christian, and historians regard him as an accomplished tactician. But despite his ability to win individual battles, his decision to fight a conventional war against the more densely populated and industrialized North is considered by many historians to have been a fatal strategic error.
But even if one conceded Lee’s military prowess, he would still be responsible for the deaths of hundreds of thousands of Americans in defense of the South’s authority to own millions of human beings as property because they are black. Lee’s elevation is a key part of a 150-year-old propaganda campaign designed to erase slavery as the cause of the war and whitewash the Confederate cause as a noble one. That ideology is known as the Lost Cause, and as the historian David Blight writes, it provided a “foundation on which Southerners built the Jim Crow system.”
There are unwitting victims of this campaign—those who lack the knowledge to separate history from sentiment. Then there are those whose reverence for Lee relies on replacing the actual Lee with a mythical figure who never truly existed.
In the Richmond Times Dispatch, R. David Cox wrote that “for white supremacist protesters to invoke his name violates Lee’s most fundamental convictions.” In the conservative publication Townhall, Jack Kerwick concluded that Lee was “among the finest human beings that has ever walked the Earth.” John Daniel Davidson, in an essay for The Federalist, opposed the removal of the Lee statue in part on the grounds that Lee “arguably did more than anyone to unite the country after the war and bind up its wounds.” Praise for Lee of this sort has flowed forth from past historians and presidents alike.
This is too divorced from Lee’s actual life to even be classed as fan fiction; it is simply historical illiteracy.
White supremacy does not “violate” Lee’s “most fundamental convictions.” White supremacy was one of Lee’s most fundamental convictions.
Lee was a slave owner—his own views on slavery were explicated in an 1856 letter that is often misquoted to give the impression that Lee was some kind of abolitionist. In the letter, he describes slavery as “a moral & political evil,” but goes on to explain that:
I think it however a greater evil to the white man than to the black race, & while my feelings are strongly enlisted in behalf of the latter, my sympathies are more strong for the former. The blacks are immeasurably better off here than in Africa, morally, socially & physically. The painful discipline they are undergoing, is necessary for their instruction as a race, & I hope will prepare & lead them to better things. How long their subjugation may be necessary is known & ordered by a wise Merciful Providence. Their emancipation will sooner result from the mild & melting influence of Christianity, than the storms & tempests of fiery Controversy.
The argument here is that slavery is bad for white people, good for black people, and most important, better than abolitionism; emancipation must wait for divine intervention. That black people might not want to be slaves does not enter into the equation; their opinion on the subject of their own bondage is not even an afterthought to Lee.
Lee’s cruelty as a slave master was not confined to physical punishment. In Reading the Man, the historian Elizabeth Brown Pryor’s portrait of Lee through his writings, Pryor writes that “Lee ruptured the Washington and Custis tradition of respecting slave families” by hiring them off to other plantations, and that “by 1860 he had broken up every family but one on the estate, some of whom had been together since Mount Vernon days.” The separation of slave families was one of the most unfathomably devastating aspects of slavery, and Pryor wrote that Lee’s slaves regarded him as “the worst man I ever see.”
The trauma of rupturing families lasted lifetimes for the enslaved—it was, as my colleague Ta-Nehisi Coates described it, “a kind of murder.” After the war, thousands of the emancipated searched desperately for kin lost to the market for human flesh, fruitlessly for most. In Reconstruction, the historian Eric Foner quotes a Freedmen’s Bureau agent who notes of the emancipated, “In their eyes, the work of emancipation was incomplete until the families which had been dispersed by slavery were reunited.”
Lee’s heavy hand on the Arlington, Virginia, plantation, Pryor writes, nearly led to a slave revolt, in part because the enslaved had been expected to be freed upon their previous master’s death, and Lee had engaged in a dubious legal interpretation of his will in order to keep them as his property, one that lasted until a Virginia court forced him to free them.
When two of his slaves escaped and were recaptured, Lee either beat them himself or ordered the overseer to “lay it on well.” Wesley Norris, one of the slaves who was whipped, recalled that “not satisfied with simply lacerating our naked flesh, Gen. Lee then ordered the overseer to thoroughly wash our backs with brine, which was done.”
Every state that seceded mentioned slavery as the cause in their declarations of secession. Lee’s beloved Virginia was no different, accusing the federal government of “perverting” its powers “not only to the injury of the people of Virginia, but to the oppression of the Southern Slaveholding States.” Lee’s decision to fight for the South can only be described as a choice to fight for the continued existence of human bondage in America—even though for the Union, it was not at first a war for emancipation.
During his invasion of Pennsylvania, Lee’s Army of Northern Virginia enslaved free black Americans and brought them back to the South as property. Pryor writes that “evidence links virtually every infantry and cavalry unit in Lee’s army” to the abduction of free black Americans, “with the activity under the supervision of senior officers.”
Soldiers under Lee’s command at the Battle of the Crater in 1864 massacred black Union soldiers who tried to surrender. Then, in a spectacle hatched by Lee’s senior corps commander, A. P. Hill, the Confederates paraded the Union survivors through the streets of Petersburg to the slurs and jeers of the southern crowd. Lee never discouraged such behavior. As the historian Richard Slotkin wrote in No Quarter: The Battle of the Crater, “his silence was permissive.”
The presence of black soldiers on the field of battle shattered every myth that the South’s slave empire was built on: the happy docility of slaves, their intellectual inferiority, their cowardice, their inability to compete with white people. As Pryor writes, “fighting against brave and competent African Americans challenged every underlying tenet of southern society.” The Confederate response to this challenge was to visit every possible atrocity and cruelty upon black soldiers whenever possible, from enslavement to execution.
As the historian James McPherson recounts in Battle Cry of Freedom, in October of that same year, Lee proposed an exchange of prisoners with the Union general Ulysses S. Grant. “Grant agreed, on condition that black soldiers be exchanged ‘the same as white soldiers.’” Lee’s response was that “negroes belonging to our citizens are not considered subjects of exchange and were not included in my proposition.” Because slavery was the cause for which Lee fought, he could hardly be expected to easily concede, even at the cost of the freedom of his own men, that black people could be treated as soldiers and not things. Grant refused the offer, telling Lee . . .
The Common Seaman in the Heroic Age of Sail 1740–1840
Sons of the Waves: The Common Seaman in the Heroic Age of Sail, 1740–1840, by Stephen Taylor, seems a useful book to read in conjunction with Patrick O’Brian’s series of British naval novels, which begins with the trilogy:
Master and Commander
Post Captain
HMS Surprise.
Matthew Lyons reviews the book in Literary Review:
Early in the 19th century, there were some 260,000 of them across Britain’s naval and merchant fleets. People called them Jacks, but they are mostly nameless – or nameless to history. Even on surviving muster lists, seamen’s identities can be hidden behind pseudonyms. Some of these – George Million or Jacob Blackbeard, say – express a degree of wish fulfilment. Others are more whimsical: a Mark Anthony and Julius Caesar could be found on board the Calcutta-bound Tyger in 1757.
To join them was to enter another world, with its own laws (the thirty-six Articles of War, read to them every Sunday, besides whatever strictures a captain thought fit to apply), its own rituals and its own argot. ‘All seemed strange,’ one former ship’s boy recalled of his first days on board, ‘different language and strange expressions of tongue, that I thought myself always asleep, and never properly awake.’
There were, of course, distinctions among them. The lowest of the low were the waisters, made up of old men, boys, and the most inexperienced landsmen, good for nothing but drudgery. Then came the afterguard, consisting of ordinary seamen and more skilled landsmen, who trimmed the after yards and the sails. Above them were the forecastlemen, able seamen who handled the lower ropes and saw to weighing and anchoring. Princes over all of them were the topmen (or Foremast Jacks), who went aloft to bend or reef the sails, even in the highest of seas.
As Stephen Taylor argues in this enthralling new book, it was men like these who, in the great age of sail, made the British Empire possible. He tells the story of Britain’s rise to maritime supremacy in roughly the century from 1750 to 1850, using first-hand accounts of life on the lower decks, official records – ships’ logs, muster rolls, court martials and so on – and other contemporary sources.
Because of the immediacy of these sources, and Taylor’s deft, incisive use of them, it is the men, not the nation, to whom Sons of the Waves belongs. ‘Out of the King’s service they are in general citizens of the world,’ one officer wrote of them. Jacks might have made the British Empire possible, but they were only circumstantially loyal to it.
When their personal discontent became intolerable, they deserted in their tens of thousands. Nelson himself reckoned that 42,000 deserted between 1793 and 1802 alone, a figure Taylor believes may be on the low side. Their skills made them highly prized commodities and they were happy to sail under any flag, towards any compass point. The institution that valued that commodity least was the Royal Navy.
Perhaps the most resented British naval practice in this period was . . .
The Pillage of India
I found this book review interesting because I’m getting the feeling that the US is being pillaged as well. Christopher de Bellaigue writes in the NY Review of Books:
The Anarchy: The East India Company, Corporate Violence, and the Pillage of an Empire
by William Dalrymple
Bloomsbury, 522 pp., $35.00
Inglorious Empire: What the British Did to India
by Shashi Tharoor
Melbourne: Scribe, 294 pp., $17.95 (paper)
In the eighteenth century a career with the East India Company was a throw of the dice for unattached young British men. Arriving in India wan and scurvy after a year at sea, many quickly succumbed to disease, madness, or one of the innumerable little wars that the company fought in order to embed itself on the subcontinent. The salary was hardly an incentive. In the 1720s junior clerks, or “writers,” received just £5 per year, not enough to live on in Bengal or Madras and a pittance when set against the handsome 8 percent annual dividend the company’s shareholders awarded themselves back in London. Such drawbacks tended to put off all but those whom circumstances had already disfavored: second sons, members of the down-at-heel Anglo-Irish gentry, dispossessed Scottish landowners who had backed the losing side in a rebellion against the crown.
Being on the company payroll was rather a means to an end; moonlighting was where the money lay in one of the richest places on earth. In 1700 India is estimated to have accounted for 27 percent of the world economy and a quarter of the global textile trade. A considerable number of company employees who survived the shock of arrival went on to make fortunes from off-books trading in textiles, saltpeter, indigo, opium, salt, tobacco, betel, rice, and sugar; sidelines also included selling Mughal-issued tax exemptions and lending money to distressed Indian grandees.
The wills of company officials in the early 1780s show that one in three left their wealth to Indian wives, or as one put it, “the excellent and respectable Mother of my two children for whom I feel unbounded love and affection and esteem.” Others went home. Newly enriched returnees elbowed their way into high society and were rewarded with a moniker, “nabob,” which derived from an Indian word for prince, nawab, and signified an Indian-made plutocrat of boundless amorality.
Neither the directors in Leadenhall Street, the company’s headquarters in the City of London, nor the Mughal authorities who had granted the company its trading privileges in return for “presents” and taxes, approved of the nabobs’ freelancing. But the directors didn’t particularly mind, provided that the thirty-odd ships that sailed east every year from England’s south coast returned laden with luxury imports, along with a share of the taxes collected from the Indian enclaves that the company controlled. All the while the authority of the emperor, the unwarlike Shah Alam, was crumbling under the pressure of repeated Maratha, Afghan, and Iranian incursions into the Mughal heartland of the Gangetic Plain. These and the foragings of another group of armed Europeans, the French Compagnie des Indes, turned what the Mughal chronicler Fakir Khair ud-Din Illahabadi called “the once peaceful abode of India” into “the abode of Anarchy.”
Through adroit use of its well-trained, disciplined armies, over the course of the eighteenth century the company expanded its influence inland from the three littoral “Presidencies” of Calcutta, Madras, and Bombay. By the 1750s, William Dalrymple tells us in The Anarchy, his new account of the rise of the company, it accounted for almost an eighth of Britain’s total imports of £8 million and contributed nearly a third of a million pounds to the home exchequer in annual customs duties.
A well-known historian both in his native Britain and his adoptive India, where he cofounded what may be the world’s biggest literary festival, at Jaipur, Dalrymple has influenced the scholarly as well as the popular understanding of South Asian history through his use of both European and Indian sources, thus uniting the halves of a previously bisected whole. (To pick just two examples from the extensive company literature, both John Keay’s 1993 book, The Honourable Company, which also deals with its extensive involvement in Southeast Asia, and Nick Robins’s commercial history, The Corporation That Changed the World, from 2012, are entirely reliant on British sources.) Dalrymple’s ability to present events from an Indian as well as a European perspective owes much to his mining of the National Archives in Delhi and his collaboration with the late Bruce Wannell, a waspish global flaneur and gifted linguist who lived in a tent on Dalrymple’s lawn in South Delhi while translating Mughal-era texts for him.
The company was transformed into an instrument of imperialism under Robert Clive, a terse, pugnacious delinquent from Shropshire. After arriving in Madras as a writer in 1744, Clive distinguished himself on the battlefield, making up in daring what he lacked in experience. In 1752 he and a fellow officer led a company force that took prisoner almost three thousand troops from the Compagnie des Indes, for which he was rewarded with a lucrative sinecure.
In 1756, after a spell back home, Clive’s taste for conquest and treasure took him to Bengal, whose production of silks and muslins made it the biggest supplier of Asian goods to Europe. In 1757 Clive led the company’s forces to victory against both the French and the uncooperative local nawab; from defeating the latter the company received what Dalrymple calls “one of the largest corporate windfalls in history”—in modern terms around £232 million. Clive himself pocketed an astronomical £22 million, with which he went on to acquire a string of desirable British properties, including an estate outside Limerick to go with his Irish peerage, while Lady Clive, as the Salisbury Journal informed its readers, garlanded her pet ferret with a diamond necklace worth more than £2,500.
Besides his military exploits Clive was admired by the directors for his administrative vigor, and he ended his Indian career as governor of Bengal. In 1765—two years before he returned to Britain for good—he secured his most substantive legacy when he forced Shah Alam to recognize the company’s financial authority over three of his richest provinces, Bengal, Bihar, and Orissa. A Mughal chronicler lamented that the British “have appointed their own district officers, they make assessments and collections of revenue, administer justice, appoint and dismiss collectors…heaven knows what will be the eventual upshot of this state of things.”
The baneful consequences of a commercial concern enjoying political power but answering only to its shareholders became apparent during the Bengal famine of 1770–1771. Company officers exacted dues from a dying populace as diligently as they had from a healthy one. Tax evaders were publicly hanged. The following year Calcutta informed Leadenhall Street that “notwithstanding the great severity of the late famine…some increase [in revenue] has been made.”
While at least one million Bengalis were dying of the famine and its effects, some company employees enriched themselves by hoarding rice. According to one anonymous whistleblower whose account was published in the Gentleman’s Magazine back in London:
Our Gentlemen in many places purchased the rice at 120 and 140 seers a rupee [a seer was about two pounds], which they afterwards sold for 15 seers a rupee, to the Black [Indian] merchants, so that the persons principally concerned have made great fortunes by it; and one of our writers…not esteemed to be worth 1,000 rupees last year, has sent down it is said £60,000 to be remitted home this year.
In Calcutta, the same source went on, “one could not pass the streets without seeing multitudes in their last agonies,” while “numbers of dead were seen with dogs, jackalls, hogs, vultures and other birds and beasts of prey feeding on their carcases.”
Back home, denunciations of the company’s conduct equaled in vehemence anything that would be uttered by nationalist Indians in the later stages of British rule. One satire attacked the directors of the company, among them “Sir Janus Blubber,” “Caliban Clodpate,” “Sir Judas Venom,” and “Lord Vulture,” as a “scandalous confederacy to plunder and strip.” But when Clive was investigated by Parliament on charges of amassing a fortune illegally, his achievements in defeating the French and increasing company revenues counted for more than the regime of plunder he had overseen—and Parliament included company shareholders and men who owed their seats to his largesse. Clive was exonerated in May 1773. The following year he committed suicide. He had, Samuel Johnson wrote, “acquired his fortune by such crimes that his consciousness of them impelled him to cut his own throat.”
The company was now a permanent subject of controversy in Britain, which was, in strenuous, unemphatic fits, moving from absolutism to accountability. But only rarely . . .
Germans reflect on the Allies’ arrival in Europe in 1945.
Technically, I suppose, the Allies arrived in mainland Europe on 3 September 1943 (the landings in Calabria on the Italian mainland; the Salerno landings followed on 9 September) and also (more familiar) on 6 June 1944 (the Normandy invasion). (My uncle Choc took part in the Italian campaign.) But the main Allied push across the Rhine into Germany did not begin until 22 March 1945.
Der Spiegel has a staff-written report:
Georg Stefan Troller sits in his apartment overlooking the rooftops of Paris, with a piece of cake in front of him. He has agreed to take a trip down memory lane. Outside his window is the city’s 7th arrondissement. Hail pelts the glass as Troller talks about the war. He tells how he, a Jew from Vienna, enlisted in the United States Army and returned to a decimated Europe. How he marched into Munich, the “capital of the Movement.” How he stood in Hitler’s apartment in the city one day and walked through the recently liberated Dachau concentration camp shortly thereafter. Even 75 years later, he hasn’t forgotten the shock he felt that day.
Troller was born in 1921. He’ll turn 99 this December. DER SPIEGEL visited him and other witnesses of that dark period in Europe’s past — in Paris, Hamburg, Moscow, Bonn and Berlin. When visits were impossible due to the pandemic, interviews were conducted over the phone. Witnesses’ memories were supplemented by diary entries that have been either published (like those of the German author Thomas Mann) or preserved at the German Diary Archive in Emmendingen (as was the case with Annemarie and Johann von Duhn, Hans Diester, Insa Radomski and 7-year-old Theodor Gruschka). The questions were always the same: How did Germans spend the summer of 1945? What influenced their day-to-day lives? Were they depressed? Exhausted? Ashamed?
Not every memory can be traced back to a specific date. To a degree, reporting from the postwar period has overwritten survivors’ personal recollections. But one thing is certain: In the collective memory, the summer of 1945 began on May 8, right after Nazi Germany’s unconditional surrender. It marked both a new beginning and a transition. Indeed, 1945 was an epochal year. There were more than 6 million dead in Germany alone, and more than 60 million dead worldwide. In Europe, 6 million Jews had been murdered. German cities lay under a billion tons of rubble. That was the present. The future lay in the division of Europe, in the disintegration of the world into blocks and in the dawning of the nuclear age.
It became clear that the end of the war would not instantly mean an end to people’s suffering. The violence spilled over into the summer, into the years of reconstruction, into generations to come. It was striking how significant people’s longing was for a life of privacy after years of mass mobilization and appeals for national unity.
And everyone learned in a very short time just what freedom meant. Hans-Jochen Vogel, who later became the head of the Social Democratic Party, tried chewing gum for the first time in his life. The journalist Wolf Schneider developed a fear of bridges. Friedrich Nowottny, who went on to head the German public broadcaster WDR, traded SS skull rings for cigarettes from American soldiers who collected them as Nazi souvenirs. The married couple Annemarie and Johann von Duhn sewed flags of the four victorious powers from rags and an old swastika banner. Future German Chancellor Helmut Schmidt experimented with a coffee substitute. A restaurant owner from Hamburg realized that “Hitler” was an unfortunate last name to have. Hans Modrow, who years later became the last prime minister of communist East Germany, met a Red Army soldier who could quote the German novelist Heinrich Heine. Marianne von Kretschmann, later Marianne von Weizsäcker, longed for school to begin. Klaus von Dohnanyi rode through Germany on a ladies’ bicycle and enjoyed the most unforgettable breakfast of his life. A young Theodor Gruschka observed that during a raid, sometimes “nude females” would appear. Fighting was still going on in the Pacific. And Martin Walser met the woman of his dreams that summer.
May 9
The day after its capitulation, the Third Reich made its last public announcement. Radio host Klaus Kahlenberg read aloud a message at 8:03 p.m. on the state-owned radio station in Flensburg. It began: “The Wehrmacht High Command announces.”
Neither the High Command nor the Wehrmacht existed anymore. Both had unconditionally surrendered the day before. “The German Wehrmacht has been honorably defeated at the hands of massively superior forces. We are broadcasting the text of the final Wehrmacht report of the war. Radio silence will be maintained for three minutes.”
Germany’s “zero hour” lasted three minutes.
Hans-Jochen Vogel pinned his German translation of an article in the U.S. Army newspaper Stars and Stripes to a bulletin board. That was his job in the Coltano prison camp near Pisa, Italy. “To translate things I considered important into German and post them on a bulletin board.”
The Russian Nikolai Pudow, a captain of the Red Army and an occupying soldier, experienced the first day of peace in a village on the Elbe River near the city of Wittenberg. The restaurants, he said, were teeming with military men in civilian clothing. Their posture exposed them as officers. “The Germans were very intimidated,” he recalls. “There were posters all over the villages: a giant ear, the enemy is listening, Red Army soldiers with bloody claws for hands. Most of the German words I used to know I’ve since forgotten. Except for ‘Untermensch (subhuman).'”
Near Greifswald, a young man with the Volkssturm, the Nazis’ last-ditch defensive army, found himself in Russian captivity: Hans Modrow, 17 years old at the time, wanted to walk home along the railway tracks, to Jasenitz. What he hadn’t considered was that the Red Army was keeping a close eye on the tracks, for fear of acts of sabotage by the Werewolves, a group of Nazi guerrilla fighters who tried to slow the Allied occupation.
Far away in Los Alamos, New Mexico, the Interim Committee on Atomic Energy met for the first time that day. Among those who attended the meeting were