Later On

A blog written for those whose interests more or less match mine.

Posts Tagged ‘philosophy’

He was the most revered philosopher of his era. So why did GE Moore disappear from history?

Ray Monk writes in Prospect:

“I almost worship him as if he were a god. I have never felt such an extravagant admiration for anybody.” So the 22-year-old Bertrand Russell wrote to his fiancée Alys Pearsall Smith in November 1894. The object of his “extravagant admiration” was George Edward Moore (always known as “GE Moore” because he hated both his given names), who was 18 months younger than Russell and at that time just an undergraduate.

Russell was reporting to Alys on a meeting of the Apostles, the self-selecting and self-consciously elite discussion group (founded in 1820, and still in existence today) which only the students and fellows considered to be the brightest and best were invited to join. At their meetings, a member presented a case in a short paper—usually on a philosophical, cultural or political subject, designed to display both erudition and wit—which was then put to the vote. Russell had been enlisted in his second year at Cambridge, and Moore, likewise, two years later. 

To be revered within the Apostles was to be a superstar of the British intellectual elite. In the 1890s it was a society with an exceptional reach into the worlds of culture and politics, as well as ideas. At the time of Russell’s letter to Alys, active members of the society included the philosophers James Ward and JME McTaggart, the political scientist Goldsworthy Lowes Dickinson, the polymath Edward Marsh and the art critic Roger Fry. 

It wasn’t only in Cambridge quadrangles but soon also in the squares of London that Moore’s star shone. There was plenty of cross-over between the two sets. Several of the Bloomsbury luminaries were elected to the Apostles: John Maynard Keynes, Lytton Strachey, Saxon Sydney-Turner, Desmond MacCarthy, Leonard Woolf and EM Forster. Bloomsbury would develop a veneration of Moore as great as, if not greater than, that expressed by Russell. Beatrice Webb told Leonard Woolf that, although she had known most of the distinguished men of her time, she had never met a great man. “I suppose you don’t know GE Moore,” Woolf replied. In his autobiography, he reflected that Moore was “the only great man whom I have ever met or known in the world of ordinary, real life.”

Today, this veneration seems a little hard to understand. It is still customary (just about) to lump Moore in with Russell and Wittgenstein, as a trio exemplifying the analytic tradition of philosophy that flourished in England during the 20th century, but the reputations of Russell and Wittgenstein today are far greater. To give one small indicator, nobody has ever suggested to me that I follow my biographies of Russell and Wittgenstein with one of Moore.  

So who was GE Moore and why is there such a gap between his reputation now and his reputation in the first decades of the 20th century? And what does his fall from such exalted heights tell us about the sorts of intellects that do—and do not—shine brightly for posterity? 

 ***

George Edward Moore was born and brought up in the south London suburb of Upper Norwood, within walking distance of his school, Dulwich College, which he and his three brothers attended as day boys. . .

Continue reading.

Written by LeisureGuy

10 April 2020 at 4:45 pm

Posted in Books, Daily life, Education

Stanford Encyclopedia of Philosophy on Confucius

Mark Csíkszentmihályi (though he seems to have dropped the diacritics) writes in the Stanford Encyclopedia of Philosophy:

At different times in Chinese history, Confucius (trad. 551–479 BCE) has been portrayed as a teacher, advisor, editor, philosopher, reformer, and prophet. The name Confucius, a Latinized combination of the surname Kong 孔 with an honorific suffix “Master” (fuzi 夫子), has also come to be used as a global metonym for different aspects of traditional East Asian society. This association of Confucius with many of the foundational concepts and cultural practices in East Asia, and his casting as a progenitor of “Eastern” thought in Early Modern Europe, make him arguably the most significant thinker in East Asian history. Yet while early sources preserve biographical details about Master Kong, dialogues and stories about him in early texts like the Analects (Lunyu 論語) reflect a diversity of representations and concerns, strands of which were later differentially selected and woven together by interpreters intent on appropriating or condemning particular associated views and traditions. This means that the philosophy of Confucius is historically underdetermined, and it is possible to trace multiple sets of coherent doctrines back to the early period, each grounded in different sets of classical sources and schools of interpretation linked to his name. After introducing key texts and interpreters, then, this entry explores three principal interconnected areas of concern: a psychology of ritual that describes how ideal social forms regulate individuals, an ethics rooted in the cultivation of a set of personal virtues, and a theory of society and politics based on normative views of the family and the state.

Each of these areas has unique features that were developed by later thinkers, some of whom have been identified as “Confucians”, even though that term is not well-defined. The Chinese term Ru (儒) predates Confucius, and connoted specialists in ritual and music, and later experts in Classical Studies. Ru is routinely translated into English as “Confucian”. Yet “Confucian” is also sometimes used in English to refer to the sage kings of antiquity who were credited with key cultural innovations by the Ru, to sacrificial practices at temples dedicated to Confucius and related figures, and to traditional features of East Asian social organization like the “bureaucracy” or “meritocracy”. For this reason, the term Confucian will be avoided in this entry, which will focus on the philosophical aspects of the thought of Confucius (the Latinization used for “Master Kong” following the English-language convention) primarily, but not exclusively, through the lens of the Analects.


1. Confucius as Chinese Philosopher and Symbol of Traditional Culture

Because of the wide range of texts and traditions identified with him, choices about which version of Confucius is authoritative have changed over time, reflecting particular political and social priorities. The portrait of Confucius as philosopher is, in part, the product of a series of modern cross-cultural interactions. In Imperial China, Confucius was identified with interpretations of the classics and moral guidelines for administrators, and therefore also with training the scholar-officials that populated the bureaucracy. At the same time, he was closely associated with the transmission of the ancient sacrificial system, and he himself received ritual offerings in temples found in all major cities. By the Han (202 BCE–220 CE), Confucius was already an authoritative figure in a number of different cultural domains, and the early commentaries show that reading texts associated with him about history, ritual, and proper behavior was important to rulers. The first commentaries to the Analects were written by tutors to the crown prince (e.g., Zhang Yu 張禹, d. 5 BCE), and select experts in the “Five Classics” (Wujing 五經) were given scholastic positions in the government. The authority of Confucius was such that during the late Han and the following period of disunity, his imprimatur was used to validate commentaries to the classics, encoded political prophecies, and esoteric doctrines.

By the Song period (960–1279), the post-Buddhist revival known as “Neo-Confucianism” anchored readings of the dialogues of Confucius to a dualism between “cosmic pattern” (li 理) and “pneumas” (qi 氣), a distinctive moral cosmology that marked the tradition off from those of Buddhism and Daoism. The Neo-Confucian interpretation of the Analects by Zhu Xi 朱熹 (1130–1200) integrated the study of the Analects into a curriculum based on the “Four Books” (Sishu 四書) that became widely influential in China, Korea, and Japan. The pre-modern Confucius was closely associated with good government, moral education, proper ritual performance, and the reciprocal obligations that people in different roles owed each other in such contexts.

When Confucius became a character in the intellectual debates of eighteenth century Europe, he became identified as China’s first philosopher. Jesuit missionaries in China sent back accounts of ancient China that portrayed Confucius as inspired by Natural Theology to pursue the good, which they considered a marked contrast with the “idolatries” of Buddhism and Daoism. Back in Europe, intellectuals read missionary descriptions and translations of Chinese literature, and writers like Gottfried Wilhelm Leibniz (1646–1716) and Nicolas-Gabriel Clerc (1726–1798) praised Confucius for his discovery of universal natural laws through reason. Enlightenment writers celebrated the moral philosophy of Confucius for its independence from the dogmatic influence of the Church. While at times he was criticized as an atheist or an advocate of despotism, many Europeans viewed Confucius as a moral philosopher whose approach was in line with rationalism and humanism.

Today, many descriptions combine these several ways of positioning Confucius, but the modern interpretation of his views has been complicated by a tendency to look back on him as an emblem of the “traditional culture” of China. In the eyes of some late nineteenth and twentieth century reformers who sought to fortify China against foreign influence, the moral teachings of Confucius had the potential to play the same role that they perceived Christianity had done in the modernization of Europe and America, or serve as the basis of a more secular spiritual renewal that would transform the population into citizens of a modern nation-state. In the twentieth century, the pursuit of modernization also led to the rejection of Confucius by some reformers in the May Fourth and New Culture movements, as well as by many in the Communist Party, who identified the traditional hierarchies implicit in his social and political philosophy with the social and economic inequalities that they sought to eliminate. In these modern debates, it is not just the status of Confucius in traditional China that made him such a potent symbol. His specific association with the curriculum of the system of education of scholar-officials in the imperial government, and of traditional moral values more generally, connected him to the aspects of tradition worth preserving, or the things that held China back from modernization, depending on one’s point of view.

As legacies of Confucius tied to traditional ritual roles and the pre-modern social structure were criticized by modernizers, a view of Confucius as a moral philosopher, already common in European readings, gained ascendancy in East Asia. The American-educated historian Hu Shi 胡適 (1891–1962) wrote an early influential history of Chinese philosophy, beginning with Laozi 老子 and Confucius, explicitly on the model of existing histories of Western philosophy. In it, Hu compared what he called the conservative aspect of the philosophy of Confucius to Socrates and Plato. Since at least that time, Confucius has been central to most histories of Chinese philosophy.

2. Sources for Confucius’s Life and Thought

Biographical treatments of Confucius, beginning with . . .

Continue reading. There’s much more.

Written by LeisureGuy

1 April 2020 at 2:13 pm

Posted in Education

On the Market: Taking Stock of One’s Soul

I wonder whether the US lost its soul in the markets. It seems so: the GOP’s immediate response to the COVID-19 crisis was to propose a tax cut (of payroll taxes, in this case). They have a limited range of acceptable tools, it seems. (I’m reminded of a cartoon of a drowning man waving his arms while a man on the dock sees him and calls out, “I can’t swim! Would $10 help?”)

Justin E.H. Smith writes in Cabinet Magazine:

At the museum, I am standing with my spouse in front of a Flemish vanitas scene. There is an old man hunched over his accounting books, surrounded by gold coins and jewels; a skull sits on his desk, and Death himself perches undetected above his shoulder. What, I ask her, is the “takeaway” of such scenes supposed to be? That one would do well to start thinking of one’s soul, she says. And I think, but do not say: I thought of nothing but my soul for forty years, never learned the first thing about how money works, and now time is much shorter than in our youth, and I’ve managed to save so little money, and I am worried about leaving you alone in this world without me, with only the small amounts we’ve been able to put away for us, for you, as we move about from country to country, renting one modest apartment after another, like dry old students. O my love, I hate to envision you alone and frightened. Is it wrong for me now to count our coins and to keep our accounting books? Am I compromising the fate of my soul? Is this vanity?

In November of last year, I opened a brokerage account. I had been reading simple, bullet-pointed introductions to financial literacy for a few months before that, manuals “for dummies” of the sort that I am conditioned to hold in contempt when their subject is, say, Latin, or the Protestant Reformation. After this period of study, I determined I was ready to invest the bulk of the money I had to my name, around $150,000, in the stock market (an amount large enough to make me already worthy of the guillotine, for some who have nothing, and small enough to burn or to lose with no consequences, for some who have much more). The fact that I had that amount of money in the first place was largely a bureaucratic mistake. When I quit my job at a university in Canada after nine years of working there, the human-resources people closed my retirement account and sent me the full amount in a single check. That check—the “retirement” I unwittingly took with severe early-withdrawal penalties at the age of forty-one when in fact I was only moving to a job in another country—plus some of the money I had saved over just the past few years from book-contract advances, was to be the seed funding for what I hoped, and still hope, might grow into something much larger through the alchemy of capital gains.

It was driven home to me repeatedly in my early efforts to build an investment strategy that, quite apart from the question of whether the quest for wealth is sinful in the sense understood by the painters of vanitas scenes, it is most certainly and irredeemably unethical. All of the relatively low-risk index funds that are the bedrock of a sound investment portfolio are spread across so many different kinds of companies that one could not possibly keep track of all the ways each of them violates the rights and sanctity of its employees, of its customers, of the environment. And even if you are investing in individual companies (while maintaining healthy risk-buffering diversification, etc.), you must accept that the only way for you as a shareholder to get ahead is for those companies to continue to grow, even when the limits of whatever good they might do for the world, assuming they were doing good for the world to begin with, have been surpassed. That is just how capitalism works: an unceasing imperative for growth beyond any natural necessity, leading to the desolation of the earth and the exhaustion of its resources. I am a part of that now, too. I always was, to some extent, with every purchase I made, every light switch I flipped. But to become an active investor is to make it official, to solemnify the contract, as if in blood.

• • •

When I was eleven, I learned that a check is the form of currency you use when you do not have any other. . .

Continue reading.

Written by LeisureGuy

13 March 2020 at 5:45 pm

Posted in Business, Daily life

Immanuel Kant: What lies beyond the senses

Allen W. Wood writes in the Times Literary Supplement:

Kant is the most influential of all modern philosophers. Virtually every philosophical movement since the end of the eighteenth century is some version, interpretation or variation on Kant’s “critical” philosophy, his “transcendental turn”, his “Copernican revolution”.  Kant’s dominant influence is sometimes hard to understand and, especially for non-specialists, hard to accept. And much of his influence is in fact due partly to various misunderstandings of his thought. Kant could write wittily and even elegantly when he wanted to. But he did not, like David Hume, think of himself as primarily “a man of letters”, and his aim was not, like René Descartes’s, to publicize to the world in elegant prose a new conception of nature and science. His prose was couched in the forbidding jargon he inherited from the tradition of Wolffian scholasticism in which he was educated.

I tell my students that they should be grateful for this, because that jargon is what connects Kant’s often highly original ideas to the Western philosophical tradition going back to the Greeks. Kant himself was not knowledgeable in that tradition. He came to philosophy from natural science – physics, astronomy, geology: what would then have been called “natural philosophy”. Kant became a “philosopher” in our sense of the word only when he began reflecting on the foundations of these emerging and changing departments of knowledge. It was not until relatively late in life that Kant’s interests shifted to include ethics, politics and religion (though his final decade of philosophical activity was concentrated on them). The remarkable thing about Kant, to those who study him, is the striking originality and insight present in what might to casual readers seem to be the dark corners of his obscure and forbidding prose. It is easy to find Kant mystifying or off-putting unless you have the time, patience and sympathy to discover and properly reflect on his remarkably original work.

Kant was a philosopher of the Enlightenment (Aufklärung). But what is “enlightenment”? Many in Kant’s time would have agreed with the definition put forward by one of Kant’s greatest contemporaries, Moses Mendelssohn. For him, enlightenment was the education of our understanding or theoretical faculty aiming at knowledge, in contrast to culture, which is the education of our practical faculty aiming at virtue. But Kant offered a new twist that exposed the fundamental issue facing modern human beings. “Enlightenment”, he said, is “the human being’s release from self-incurred minority.” Minority is the condition of children, who do not govern or take responsibility for their own lives but must be guided by others. Minority is “self-incurred” when it is due not (as with children) to the incapacity to direct one’s own life, but instead to a lack of courage and resolution. This, Kant thinks, is the fundamental issue facing modern individuals. We must learn to think for ourselves rather than deferring to the authority of books, doctors, lawyers and clergy and letting them take the place of our understanding and our conscience.

It is all too easy to suppose Kant is recommending that we be idiosyncratic contrarians, contemptuous of tradition, authority and the thoughts of others. But that isn’t so. By “thinking” Kant means the use of understanding or reason subject to the basic standard of universal validity. To think for oneself, by Kant’s lights, means to think from the standpoint of all others. Kant argues that the only way for individuals to become enlightened is to communicate freely with one another, so that each acquires the ability and confidence to think by using the thoughts of others as both an incitement and an (always fallible) external criterion for correct thinking. A culture of enlightenment is a public culture in which people learn to take responsibility for themselves by becoming cautiously confident of their capacities as they also become aware of their limitations.

In the title of Kant’s most famous work, Critique of Pure Reason (1781), “critique” comes from the Greek word for “judge”, and the genitive is simultaneously subjective and objective. That is, the faculty of pure reason is both the judge and the object of judgement: to take the measure of its capacities, acquiring confidence in them, but also take the measure of its limits. Kant’s Critique therefore has a double aim: to vindicate against sceptical challenges the possibility of empirical knowledge of nature (such as Newtonian physics), and to demonstrate the impossibility of theoretical knowledge about thinkable realities that cannot be perceived, such as God, the human soul and free will. Kant wants us to come to terms with what he calls our “peculiar fate”: we do, and should, raise questions about realities transcending the sensible world, but must resign ourselves to our incapacity to answer them.

Kant’s account of our cognitive faculties assumes that in order to cognize, we must both sense and think. The only objects of which we can have true cognition are sensible objects, material things in space and in time. But what makes this knowledge possible must come not from sensible data but from our own faculties and therefore must be a priori. To say that cognition is a priori is to say that we must ground it in the active exercise of our own faculties, not in the data presented to them. Kant rejects the idea of innate knowledge because he takes that to be something also presented to our faculties rather than created by them. A priori knowledge comes from both our sensibility and our understanding. Sensible objects are given to us only under spatio-temporal conditions, and these conditions afford us the mathematical foundations of modern physics. But our capacity to judge the objects that come before us in this way depends on the way our active faculties of understanding, imagination and judgement connect the data of the senses in lawlike ways, so that the physical world is made up of substances causally interacting according to necessary laws and giving us a unified reality we call “nature”.

Kant’s philosophy is said to have triggered a “Copernican revolution”. Just as Copernicus changed our conception of the heavens by including the thought that we (as observers) are in motion rather than being the fixed centre of the universe, so Kant proposes to solve sceptical problems about causality, space and time by viewing the knowable world as relative to the activity of our faculties. Human knowledge is not the precise mirror of realities and relationships that exist entirely independently of us. Instead, the order of nature must be seen as an order that we, as knowers, create. Our own understanding, Kant says, is the true law-giver of nature.

One remarkable and baffling consequence of this Copernican revolution is Kant’s claim that cognizable objects, while “empirically real”, are “transcendentally ideal”. By considering the world not in common sense terms or those of science, but rather in light of the conditions of the possibility of knowing it, we must accept that we cognize only “appearances”, not “things as they are in themselves”. Kant calls this doctrine “transcendental idealism”: some have taken this to be either a wild metaphysical doctrine deeply at odds with common sense, or else a radical scepticism that cuts us off entirely from true reality. Kant would reply that transcendental idealism expresses only the epistemic modesty or humility that goes with acknowledging our human limitations and taking responsibility for our active role in knowing.

However transcendental idealism is understood, Kant spends much of the Critique arguing that our reason generates ideas of objects that we can never cognize because they lie beyond the bounds of the senses. These include ideas involved in thinking of nature as a completed totality, and also supernatural ideas often found in religion: the ideas of God, free will and the immortal soul. Kant claims we can have no theoretical knowledge of such objects; ideas of them can nonetheless play a role in our moral life and can also be represented indirectly (symbolically) in aesthetic and religious experiences that do not constitute a form of theoretical (scientific) knowledge.

Kant’s emphasis on our responsibility to, and for, ourselves individually, humanity’s responsibility to itself collectively and the role of human cognizers in constituting the world should make it easy for us to understand the Kantian conception of the primacy of the practical, and the centrality of ethics to his overall project. Moral philosophy rests on a fundamental principle or moral law, which is a priori in the sense that it comes from our reason as active beings rather than being a mere passive response to external influences, whether of our natural desires or the influence of others. But just as the fundamental standard of reason for Kant is universal validity for all rational beings, so the moral law is also conceived as a law for all rational beings conceived as an ideal community or realm of ends.

The basic value in Kant’s ethics is . . .

Continue reading.

Written by LeisureGuy

18 February 2020 at 4:16 pm

Posted in Books

The best books on The Epicureans

Nigel Warburton interviews James Warren at Five Books:

For most of us, an Epicurean means someone devoted to pleasure and enjoying themselves. But the real Epicureans had a very different philosophy of how to live. Cambridge University professor James Warren talks us through the philosophy of Epicurus and explains how it’s still relevant today—particularly when it comes to facing death.

Before we get going on the books you’ve chosen, could you say something general about who the Epicureans were?

The Epicureans began as a school of philosophers in the Hellenistic period of Ancient Greece. So that’s the period historically. It’s usually dated to the death of Alexander the Great. Philosophically, it’s the death of Aristotle: they died within a year of each other, so that lines up rather neatly.

It has its origins in a school that was founded in Athens by Epicurus, after whom the school is named. That was a piece of land that he owned that was called ‘The Garden’ because that’s what it was. People would gather there and talk philosophy. Epicurus had developed a systemic view on what were, by that time, pretty much agreed to be the standard areas of philosophical inquiry. So, he had a view on natural philosophy; he had a view on what he called ‘canonic’ or logic, which includes epistemology for us, and also on ethics and political philosophy. Epicureanism became a universal kind of philosophical view on the world, the commitment to which was supposed to give you the truth about how the world is, how it works, and also to give you a recipe for living a good life as a human in that world.

So is the idea that the metaphysics is the foundation for the ethics, so that you understand the way the world is and that allows you then to live well?

To a degree. We might come to some specific cases in which a particular understanding about the nature of the world generates an immediate ethical output. So for example, understanding that the world and everything in it is generated out of atoms moving around in a void: that leads you to believe correctly, says Epicurus, that you too are a combination of atoms that have come together at this particular point, but those atoms will disperse, and when they do, you will cease to be.

Now, Epicurus thinks that certain direct ethical implications follow from that, for example, about how you should view the time after your death and so on. Other cases involve, for example, him thinking that a proper appreciation of natural phenomena and natural processes—particularly understanding that they’re not directed by any divine agency or for any particular natural good—will allow you to stop fearing, or being concerned unnecessarily about, certain things that might happen around you in the world. So there are cases like that.

Otherwise, sometimes the ethics looks as if it’s separable in certain important ways from the physics. When we come to it, perhaps we can talk about how in his ethics (he’s a hedonist) he has a particular view about pleasure and certain things follow from that, that are probably independent of any particular view about the physics of pleasure.

Since you’ve mentioned Epicurus’s attitude to the time after death, this might be a good point to mention that you wrote a brilliant book called Facing Death: Epicurus and His Critics which shows that this isn’t just a scholarly interest for you. Obviously, this is founded on deep knowledge of Epicurean philosophy, but in this book you discuss Epicurus’s attitudes to death as if he’s making a contribution now to current thinking about how we should live in relation to death. Not all present-day classical philosophers do that. You’re not putting philosophy in a museum in this book—you’re showing how ancient philosophy can be relevant to our lives now. How did you come to write this book?

It emerged out of some work I was doing while I was a student and was one of those cases where it seemed to be unfinished business for me, after I’d finished writing the essay that I had to write. And, as you say, it’s one of those cases where there was a very direct and immediate kind of conversation to be had between these ancient texts and the particular arguments that Epicurus is raising, that are supposed to justify what he thinks is the correct attitude to death, but are fascinating and relatively easy to carry across without anachronism and without talking at cross-purposes between a modern sensibility and a modern approach and his own ancient world view.

It’s one of those cases where you can bracket out some of the particular bases for his premises, but supply something equivalent that we would accept, and then it’s very straightforward. I wasn’t innovative in this, I should point out. There was a lot of really important work, in particular a beautiful 10-page essay by Thomas Nagel just called ‘Death,’ reprinted in his book Mortal Questions, which in many ways was my provocation to write this, and which also encouraged a lot of really interesting work from people who aren’t specialists in the ancient texts to engage with the arguments.

What I thought I could do was reintroduce a more detailed and closer attitude to the ancient texts, because I think in some ways they were more sophisticated than some of the discussions that I saw happening gave them credit for. Also, I could talk in the other direction, and introduce or perhaps show to some classical scholars the ways in which these remain urgent and impressive and interesting arguments that we can wrestle with now.

It may be a caricature of Epicurus, but for somebody who hasn’t read or heard of Epicurus’s ideas about death, the central one for me is that when I’m here, death isn’t, and when death’s here, I’m not, so we shouldn’t be too bothered about it. What’s more, we don’t worry about the eternity before our conception, so why worry about the eternity afterwards? There’s an asymmetry in our tendency to worry about the period when we might have lived. Those two thoughts seem to be absolutely transferable to the modern age, and a lot of people find consolation in them.

That’s right. And I think they’re quite hard to resist as arguments because they are very straightforward. So to take the first one first—that he says death is nothing to us because ‘when it’s present we are not and when we are present it is not’—that does indeed put a very clear challenge and obstacle to anyone who wants to think that death is harmful. Because if it’s harmful, whom does it harm? We can talk about the death of a person being harmful to others, that’s a natural and an obvious kind of thought. But is it harmful to the person themselves? If it is—or if you want to try to claim that it is—then the immediate question is, well: when is it harmful? It can’t be harmful after the person is no longer there, because there’s nothing to harm. It seems equally odd to claim that it’s harmful before it has happened, because here’s a living person. Now, there are all sorts of quite sophisticated ways that people have tried to come up with answers to that, but it’s quite interesting that you have to say something quite sophisticated to mount a defence of what you might think is a very intuitive thought, that death can harm the person who dies.

The second of the arguments that you mentioned, one I like to label a symmetry argument, is equally fascinating, because it also then introduces questions about our attitudes to the past and to the future, in particular the prenatal past, the time before I was born and the post-mortem future, the time after I’ve died. Now, most people don’t think that it’s at all possible or even conceivable that they were harmed before they were born. There are some qualifications you might have to put on that, things that might have happened before you were born that lead to harm that you experience during your lifetime, but you weren’t harmed at that time, when you weren’t born yet. Let’s also specify that being born means whatever the time is when I come into existence, so it might not be the time when you pop out, just to avoid that controversial area.

So we tend not to think that’s even conceivable, but then Epicurus says, ‘What’s so different then, about the time that’s going to be there after your death?’ To him they look like mirror images of one another. It’s really tricky to justify that asymmetry. Or—and I think this is also an interesting twist—if you think that the time after your death might indeed be harmful for you, why doesn’t it also therefore turn out, that you should be concerned about all of that time before you are alive; perhaps if you’re going to have a symmetrical view of things then either neither is harmful or both are harmful. And if you want to justify an asymmetrical view, then why should it be the case that the prenatal time is not harmful, but the post-mortem time is?

I can imagine somebody like Woody Allen saying: “Oh my God, now I’m worried about the time before my death, and it’s made me even more neurotic,” but for most people it has the opposite effect.

Think of all those great things you missed. If you’re worried that after you’ve died you’re going to miss out on various things, you’ve missed out on a huge amount already! That’s clearly a kind of absurdity that an Epicurean might use to their advantage to say, ‘It’s not just that we happen not to be worried about the time before we were born, but that we’re correct to do so.’ And then if you add that to a claim that the two times are symmetrical, then it should follow that we should also feel the same way about the time after our deaths.

I would say to anybody reading this that Facing Death provides an excellent way into these arguments, to see the complexity of the secondary arguments that Epicurus has given rise to and it’s a very clear critical engagement with the profound questions that he raises that are there for any of us, particularly if we don’t have a belief in an afterlife. I think it gets more complicated if you believe in the possibility of an afterlife; but if you don’t, then Epicurus still has a lot to say. So now I think we should get onto the books you’ve chosen. Your first choice is a new translation of Diogenes Laertius, whose final book is devoted to Epicurus. Who was Diogenes Laertius? He shouldn’t be confused with Diogenes the Cynic, by the way. There are a few Diogenes around and they get muddled up sometimes.

Yes, and there is, unfortunately, another Epicurean Diogenes, Diogenes of Oinoanda, who constructed a massive wall on which he inscribed all sorts of Epicurean philosophical texts, which was a very odd thing to do. The question ‘Who was Diogenes Laertius?’ is a very good one, but we don’t have a very good answer to it. We know very little about Diogenes Laertius, who we think was writing in the early part of the third century AD.

This is quite a time after the foundation of the Epicurean school. But what he does is construct a work in ten books that is a history of philosophy of a kind, which attempts to trace philosophical thinking from its origins in the sixth century BC all the way through to the latest figures he mentions, who tend to be from the second century AD—though more or less, things seem to have come to an end in terms of innovation, according to Diogenes, by about the middle of the Hellenistic period. Epicurus is the subject of the final book, because he stands at the end of one of these lines of influence from one philosopher to another.

Diogenes is very keen on making this a personal story. He’s both interested in telling us all sorts of odd and fantastical stories about the individual philosophers, but he’s also very interested in showing how each of these individual philosophers was influenced directly and personally by other predecessors. I think of it as a family tree of philosophies with Epicurus as the final branch of one of those family trees.

There is a really lovely new volume, the OUP translation, which is beautifully illustrated and so on and they’ve managed to produce it at a reasonable price for once, although it’ll hurt your back carrying it around.

That makes it sound a bit like Giorgio Vasari’s The Lives of the Artists.

Yes, and I think Vasari might well have known Diogenes; it was a sort of model for him . . . We get to Book 10 of Diogenes Laertius’s book, and here’s Epicurus. Diogenes goes through his familiar procedure. We start off with some biographical information: he tells us about where Epicurus came from, then lists the works he wrote, tells us something about his lifestyle, how he died. Diogenes also often drops in little poems of his own about these philosophers, which tend to be quite awful.

What’s interesting for us in particular about the Epicurean book is that Diogenes quotes three letters in their entirety that Epicurus wrote to various students, each of which summarises a particular area of Epicurean philosophy.

Do you think this is a reliable source?

Yes, it looks to be genuine Epicurean language. It’s clear that this is the language that we know Epicurus used elsewhere. We know it’s the kind of thing he did. Diogenes offers these summaries as a way of presenting Epicurean philosophy to his readers. One letter summarises Epicurus’s views on physics, one on meteorology and cosmology, the third his views on ethics. Then, at the end, we have a collection of maxims that Epicurus produced that are supposed to be a handy guide for the Epicurean to go back to constantly to reinforce their views.

Could you give an example of one or two of the maxims?

The first two are really interesting. The first one is about the gods and the first maxim says, ‘The blessed and immortal has no troubles himself and causes none for anyone else. Hence, he has nothing to do with resentments and partisanship, all such impulses are a sign of weakness.’ So it’s a summary of an argument that says, the gods are immortal and perfectly happy beings and because they’re perfectly happy, it can’t be the case that they engage in any sort of shenanigans with mortals. They don’t care what we do; they’re not upset if we don’t sacrifice to them properly. They don’t get pleased if we do, because that would mean they’re somehow dependent for their happiness on us. And the upshot of that is: don’t worry about them. They’re not going to be angry with you. They’re not going to save you. So all of those fears that people have concerned with divinities can be set aside. They’re just empty fears and you needn’t worry about them.

And then the second one is—and this shows again how central this was to Epicurus’s view about getting things right—the maxim about death. He gives a slightly different argument from the one that we were discussing. It says, ‘Death is nothing to us for what has been dissolved has no feeling and what has no feeling is nothing to us.’ So the idea is when you’re dead, you’re not going to perceive anything. And because you can’t perceive anything, Epicurus thinks you can’t be harmed or benefited. That’s in part because he thinks the only harm you can endure is pain and the only benefit that you can have is pleasure. And so death can’t be good or bad for you in that respect.

But there are about 40 of these maxims that cover the various aspects of his philosophy. I think they were supposed to be recited and memorised as a handy aide-memoire for Epicureans to have at the front of their minds.

So would this Book 10 of Diogenes Laertius’s book be quite a good way of getting a handle on Epicureanism? Because it seems to be written almost as a teaching text.

That’s right. It’s a really excellent introduction to Epicureanism. It gives you a summary of the kind of person Epicurus was.

What kind of a person was he?

He comes across as a really quite unusual person. But all of these philosophers do, in one way or another: that’s what you expect of a significant philosopher.

We learn about how when Epicurus set up his school in Athens, he scandalised people by allowing all-comers to join, including women, including people that others claimed to be prostitutes. He seemed to think there were no barriers to achieving happiness and understanding the world based on gender or profession or anything like that. You also get his last will and testament that Diogenes sets out. That’s something he does more than once in the book. Aristotle’s will is in there too. Epicurus’s tells you something about the concern he has for the people in his school and making sure that they are able to retain ownership of the garden after his death and so on.

Then we move quite quickly onto a summary of the philosophy in Diogenes’s words and then there are these three, relatively short, letters that cover most of the major points of interest for Epicureanism.

After this conversation, I’m going to go and buy that book for sure. The second book that you’ve chosen is a translation of Lucretius. We haven’t mentioned Lucretius yet, so perhaps we ought to say who he was and why he’s so important in relation to Epicurus? . . .

Continue reading. There’s lots more, and to my mind Epicurus is a preeminent philosopher. It’s too bad that so much of his writing was lost.

Written by LeisureGuy

4 January 2020 at 12:50 pm

Posted in Books, Daily life

Why belief in reality is a dangerous mistake

Most of us, most of the time, have the sense that we are connected to the real.  The immediate world around us, the objects and people, the buildings and the natural world seem unquestionably present.  Not only in the sense that we are experiencing them but in the sense that they exist independently of us out there in the real world.

Some, we imagine slightly crazed, philosophers may have doubted the existence of those objects and the real world and proposed that it is all a dream and a product of our subjective imagination.  We feel we know better.   Aside from moments of mental instability or those who have taken rather too many psychoactive substances,  we have an abiding sense that the world we experience is for the most part only too real.

This notion of reality is so close to us and so central to our culture that it is hard for us to imagine how it could be otherwise.  It has not always been so.  Our confidence in our access to the real is no doubt in part a product of the success of the Enlightenment and the remarkable achievements of science over the last three hundred and fifty years.  Philosophical realism – the idea that such an independent reality can be described by us – has within academic philosophy been supported by many in the analytic school.  I have argued however that it is a mistake.  A mistake that limits our ability to intervene successfully in the world and encourages division and conflict.

The idea of a reality that we can uncover through precise observation and reason is at the heart of the Enlightenment and enabled its advocates to champion human capacity over the authority of the word of God. As children of the Enlightenment we are taught the story of Galileo, with the metaphysical moral that by peering through his telescope he was able to observe the reality of Jupiter’s moons and challenge the power and authority of the Church. Reality, though, turns out itself to be a theological notion. For the real, like God, is not in the end observable. Nor can we give an account of how our theories are able to reach through our experience and our particular context to describe an independent reality that we can identify as the ultimate character of the world. Realists often imagine that they are the ones with their feet on the ground. The ones without attachments to strange metaphysical frameworks. Yet realism involves the presumption of something that accounts for all there is, supports our theories, is found everywhere, but is inaccessible and indescribable. Such descriptions are strangely similar to those that have been used by monotheists to describe their god. For a simple reason: ‘the real’ is the god of the Enlightenment.

Of course, the vision of the early scientists and philosophers of the Enlightenment was a great and profound one that was to transform the circumstances of everyone.  Instead of the idea that the world was either unknowable or our knowledge of it came from a higher authority, the proposition that we are capable of uncovering the character of the world from our own observations and investigations was a liberating and transformational shift that propelled research and discovery.  It heralded a new age in which we could see human history as a continuous form of progress that gradually provided a more and more accurate account of the world.  It led to the great theories of science that seemingly uncovered the underlying laws that governed the universe and accurately described its development.

The problem is that the Enlightenment strategy of observing the world and applying reason to determine what is actually the case has uncovered its own limitation and failure, identifying our inability to describe reality. Once only a concern of those with an unusually rigorous turn of mind, it has more recently invaded much of our cultural space as a result of the widespread recognition of the importance of context. We have come to see our theories and accounts of the world as the product of a particular time, a particular culture, a particular language, a particular organism. It is no longer clear to us how these theories can escape their particularity and their context in order to describe the character of the world independently of those constraints.

In addition, the idea that we are able to accurately describe an independent reality requires a theory about how our theories and language are hooked onto the world.  Yet no such theory to support realism has been forthcoming, indeed there has been a shortage of any theories that lay out clearly the metaphysics required to make a realist account of the world possible.  At a commonsense level we assume our words refer to things out there in the world.  Providing an account of how they do so and what sort of things there must be to make this possible becomes more perplexing the more it is pursued.  Wittgenstein, close to the beginning of analytic philosophy, was one of the few to follow through the metaphysics required to make a realist account of language work. Critically however he concluded that any attempt to describe the relationship between language and the world must fail.  It must do so because such a theory would itself have to stand outside of language in order to catch sight of how language itself relates to the world.

Despite Wittgenstein’s identification of the impossibility of a realist theory of language, many philosophers of the analytic school have continued to pursue the realist project, though usually without making any serious attempt to develop an ontology that could make sense of how this could be achieved.  Instead a narrow piecemeal approach has often been adopted, as if in a scientific manner it is possible to make small inroads towards a bigger overall theory without needing to have in mind how the overall theory might be formulated.  All of which would be fine if it wasn’t for the problem that in principle, due to unavoidable paradoxes of self-reference highlighted by Wittgenstein in the Tractatus, such a theory cannot get off the ground. Not surprising therefore that the American philosopher Hilary Putnam, who had spent his career within the analytic school and was one of its leading proponents, concluded, ‘the project to describe the relationship between language and the world is a shambles’.

Meanwhile, more broadly in Western culture, there is increasing scepticism of the Enlightenment idea that we are progressing towards a better and more accurate theory of the world. Almost every humanities discipline, with the possible exception of some philosophy departments in the English-speaking world, has been absorbed with this question of perspective to such an extent that in some cases, sociology and anthropology for example, it has changed the very character of the discipline itself. And now, more widely in culture as a whole, post-truth has invaded our news and politics, highlighting our seeming inability to find an uncontentious frame for truth.

Given the profound challenges to the notion of the real, why are some philosophers still so attached to it? And why are most of us still convinced at an everyday level that we are able to access that reality? There is, I think, a straightforward explanation. Despite the lack of a decent theory, and despite our increasing awareness of the impossibility of an objective account, we are inclined to think that without the notion of reality there is no explanation for the success of our theories, and in particular of our scientific theories. Furthermore, realists often assume that the abandonment of the real has the consequence that anything goes, that each perspective is equally valuable. The strengths and successes of the Enlightenment, our understanding of the world and our culture, are imagined to be at risk if we give up on the real and, with it, the notion that there are correct and incorrect accounts of the world.

But we do not need the Enlightenment god ‘reality’ to make sense of the success of our theories, or to direct our future researches and investigations.  Any more than we need a religious god to make sense of the diversity and wonder of the natural world.  Our theories and our language are a means to intervene successfully in the world, we do not have to imagine that they are uncovering an ultimate truth, a transcendent reality, in order to be effective.  Heisenberg, the quantum physicist, who along with Einstein is perhaps the most influential scientist of the twentieth century, came to the same conclusion.  In his book, Physics and Philosophy, strangely – or perhaps on reflection not so strangely – largely overlooked by realist philosophers, Heisenberg gives up on the notion that science is the uncovering of an ultimate reality.  He argues: ‘We have learnt that exact science is possible without the basis of dogmatic realism’ and goes on to say that in the interpretation of quantum mechanics now central to contemporary physics and with which he is associated, the Copenhagen interpretation, ‘objective reality has evaporated’.

The 18th century German philosopher Immanuel Kant started from the assumption of knowledge and the success of science, and attempted to create a philosophical framework that would account for how that knowledge was possible.  Our current predicament is the reverse.  We have to start from the assumption that we have no knowledge of an independent reality and formulate a theory that accounts for how nevertheless we are able to be so precise and effective in our interventions in the world. I have sought to put forward such an account.  It proposes that  . . .

Continue reading.

Written by LeisureGuy

28 December 2019 at 3:25 pm

Posted in Books, Daily life, Education, Science

What We Owe Each Other: T. M. Scanlon’s Egalitarian Philosophy

I’ve been watching The Good Place, and though for me it was somewhat slow-starting, it has gotten better and better and taken me into some moral philosophy reading. Martin O’Neill wrote in Boston Review in June 2016:

Some years ago, I had the privilege of studying in graduate school at Harvard under T. M. Scanlon—Tim, as everyone who knows him calls him. As of a few days ago, he has taught his last class as a full-time member of the Harvard philosophy department, where he arrived from Princeton in 1984. But, though he is freshly retired, he has, I hope and expect, not taught his last student. Because Scanlon’s intellectual contributions are important and enduring.

Scanlon is a modest man, so he might not appreciate my saying it, but he stands as one of the most powerful and insightful moral and political philosophers of recent decades. His largest book, What We Owe to Each Other (1998), develops and defends a distinctive approach to interpersonal morality, known as contractualism. Scanlon’s idea is that interpersonal morality—giving others their due—involves being able to justify your conduct to others. Doing right by other people means treating them in ways they cannot “reasonably reject.” More recent work includes a subtle account of the role and function of moral blame in Moral Dimensions (2008) and, in 2014’s Being Realistic About Reasons, a defense of a kind of moral realism, the claim that moral truths exist independently of humans’ beliefs and attitudes.

While Scanlon has been a system-builder in moral philosophy, his work in political philosophy, by contrast, focuses on particular values. His 2003 book The Difficulty of Tolerance includes an account of freedom of expression as well as insightful essays on toleration, human rights, and punishment, among other topics. Now Scanlon is at work on a book whose subject has concerned him for a long time, but which has in just the past five or so years emerged as a central axis of political debate: inequality.

Scanlon’s ideas about equality are philosophically significant. They also have the potential to inform how we ought to approach day-to-day politics.

To see how, it helps first to return to a long-running intellectual dispute over the value of equality and the meaning of egalitarianism. Before I crossed the Atlantic, I studied at Oxford, where, in the 1990s, two important figures of recent political philosophy, G. A. Cohen and Ronald Dworkin, found themselves pitted against each other. The sticking point was the nature and substance of egalitarianism.

In the debates between Cohen and Dworkin, it had somehow come to seem obvious that, whatever else might be said of egalitarian views, equality demanded equal distribution of something. The core question for egalitarians of this stripe was formulated with great clarity by the economist and philosopher Amartya Sen in 1979: “Equality of What?”

Dworkin’s answer was equality of resources. He argued that an egalitarian state should take no particular interest in individuals’ levels of subjective welfare (for which those individuals should take responsibility themselves) but instead should ensure that individuals have an equal claim to the resources needed to form and pursue their own plans and ambitions. The philosopher Richard Arneson, in contrast, endorsed equality of opportunity for welfare. His idea—which was later labelled “luck egalitarianism”—was that individual welfare levels should be a matter of distributive concern for egalitarians. But individuals need not be entitled to a particular level of welfare itself. Rather, individuals need equal opportunity to exercise choice and responsibility in their pursuit of welfare. Sen himself favored equality of capabilities, defined as opportunities to achieve particular kinds of valuable individual functionings or states. And Cohen, like Arneson a proponent of a form of luck egalitarianism, went in for a kind of hybrid objective, or equalisandum—“access to advantage”—which combined elements of some of these other views.

To my mind, it wasn’t obvious who was correct. It was, however, obvious that all of these writers conceptualized the issues correctly: an egalitarian society distributed some good or other equally to all members.

But my sense of confidence, widely shared by political philosophers of my background and training, ran aground on the rocks of Scanlon’s understated resistance to the assumptions of the Oxford view. He argued that the concern with inequality is not some abstract interest in a particular kind of distributive pattern. (He also pointed out that this perspective provides easy grist for anti-egalitarians and those on the political right.) There is, on Scanlon’s view, a great deal more to the normative significance of equality. We don’t just want to see equal distribution of some thing. We want to live together, on terms of equal recognition, in ways that avoid interpersonal domination, prevent the emergence of stigmatizing differences in status, allow people to retain the self-respect that comes with seeing themselves as equal to others, and preserve the kind of background equality that can be a precondition for fair competition in the political and economic domains.

Scanlon’s account of equality isn’t simple; it resists capture in a one-line slogan. It is, one might say, frustratingly complicated. But that is completely right and proper, because the normative reality of our political lives just is frustratingly complicated. Our philosophical thinking about political values should be as simple as possible, but not simpler.

***

What if democratic societies followed Scanlon and thought about the value of equality as embedded in the character of social relations? Our governments would approach policy questions in a new way.

For example, imagine that you get to choose between two policies to equalize income. The first increases unionization rates, thereby driving up workers’ bargaining power and wages. The second leaves unionization rates low and doesn’t help workers gain bargaining power, but it does pay a wage subsidy in the form of a government transfer payment. This compensates for the bad outcomes of the labor market.

On the distributive model that we argued about at Oxford, we might be indifferent to the choice between these two policies. The decision would come down to empirical facts about which policy delivers more equal amounts of our salient good—whether resources or overall welfare.

So, to fill in some empirical assumptions, let’s further imagine a world in which the second policy, with low rates of unionization, is more economically efficient but involves the creation of jobs that are in some ways sites of domination, injurious to the self-respect of workers. An egalitarian who only cares about the distribution of one “master good”—e.g., welfare or resources—might say that we can then compensate workers for the welfare deficit they experience at work by appropriately increasing the level of their wage subsidy. What matters is just how well off people are left overall.

By contrast, a view such as Scanlon’s, emphasizing the irreducible egalitarian significance of people’s status and self-respect and their protection from social domination, will be much more reluctant to collapse everything into a calculus of overall economic outcomes. A more respectful work environment might therefore be a demand of equality, even if it incurs some cost in terms of economic efficiency.
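[To make the contrast in the last two paragraphs concrete, here is a minimal sketch in Python. It is purely illustrative and not anything from Scanlon: the policy names, the 0-10 scores, and the “standing floor” are all invented, chosen only to show the shape of the two ways of reasoning.

    # Toy model: two hypothetical income-equalizing policies, scored for the
    # worst-off workers on economic outcome and on workplace standing
    # (bargaining power, freedom from domination). All numbers are invented.
    policies = {
        "unionization": {"economic": 5, "standing": 8},
        "wage_subsidy": {"economic": 10, "standing": 4},
    }

    def best_by_master_good(ps):
        # Distributive view: fold both dimensions into one total, so a
        # deficit in standing can be offset by a bigger transfer.
        return max(ps, key=lambda name: sum(ps[name].values()))

    def best_by_relational_view(ps, standing_floor=5):
        # Social-egalitarian view: standing is a constraint, not a price;
        # among policies that respect it, pick the best economic outcome.
        ok = {n: p for n, p in ps.items() if p["standing"] >= standing_floor}
        return max(ok, key=lambda n: ok[n]["economic"]) if ok else None

    print(best_by_master_good(policies))      # wage_subsidy (total 14 vs 13)
    print(best_by_relational_view(policies))  # unionization

On the one-number view the subsidy policy wins as soon as its total edges ahead; the relational view refuses to let a deficit in standing be bought off by a larger transfer. – LG]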

The distributive approach to equality fits with a model of egalitarian public policy that is essentially compensatory in nature. It may be seen as just a brute fact that, in the economic arena, many people lack opportunities or suffer indignities and harm to their sense of standing and self-respect. A state concerned with promoting greater equality could then come along after the fact and redistribute goods or welfare toward those who have lost out in economic life.

But, on the social egalitarian model that Scanlon advances, ex post compensation is not good enough. Instead, a state concerned with equality must ensure, from the start, that people are able to pursue lives of robust, individual agency within the economic domain, with a secure sense of their standing as equals among others. Instead of being concerned only with redistribution, egalitarian public policy should incline toward predistribution, which aims to reshape economic institutions so that they foster egalitarian social relationships, as well as more evenly distributed economic rewards.

In the political domain, it has been interesting to see that social democratic parties in many places have lately been thinking hard about what an economic agenda focused on predistribution might look like. . .

Continue reading. There’s more.

It’s interesting to see the transition from (moral) philosophy to (political) practice.

Written by LeisureGuy

28 December 2019 at 10:46 am

Pragmatism endures

leave a comment »

From Aeon and written by:

Cheryl Misak, a professor of philosophy at the University of Toronto. She is the author of Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein (2016). Her biography Frank Ramsey: A Sheer Excess of Powers will be published by Oxford University Press in 2020; and

Robert B Talisse, W Alton Jones Professor of Philosophy and chair of the Philosophy Department at Vanderbilt University in Tennessee. He is the author of, most recently, Engaging Political Philosophy (2015); Pragmatism, Pluralism, and the Nature of Philosophy (2017), co-authored with Scott Aikin; and Overdoing Democracy (2019).

They write:

At the dawn of the 20th century, there emerged in the United States a distinctive philosophical movement known as pragmatism. Although the term is often used today to denote the blunt desire to get results, the founders of pragmatism – Charles Sanders Peirce (1839-1914), William James (1842-1910), John Dewey (1859-1952), Chauncey Wright (1830-75) and Oliver Wendell Holmes Jr (1841-1935) – were subtle thinkers. Each made significant contributions in areas including logic, epistemology, philosophy of language, legal philosophy, philosophy of science, ethics, aesthetics, philosophy of religion and political philosophy. Despite their differences, they were animated by a common interpretation of philosophical empiricism that emphasises the role of action in our thinking, from the habitual and mundane to the experimental and creative. The core of pragmatism is Peirce’s ‘pragmatic maxim’, which proposes to analyse the meaning of our concepts by looking to how they guide action.

It is fitting that one of the earliest books about the development of pragmatism should be titled Meaning and Action (1968). In that work, the American philosopher H S Thayer presented a view of pragmatism’s founding that has become standard:

Pragmatism is a method of philosophising often identified as a theory of meaning first stated by Charles Peirce in the 1870s; revived primarily as a theory of truth in 1898 by William James; and further developed, expanded, and disseminated by John Dewey.

There are two tightly related ideas at play here. First, there is the view that Peirce and James formulated versions of pragmatism that are partial precursors to the systematic pragmatism of Dewey. Second, there is the notion that the story of pragmatism’s founding is the story of philosophical differences withering away, unifying in Dewey’s philosophy. This developmental view of the history of pragmatism is wrong.

One needn’t scour pragmatism’s initiating documents in order to identify points of substantive disagreement among Peirce, James and Dewey. Pragmatism was founded amid a well-known dispute between Peirce and James over its central idea, the ‘pragmatic maxim’. Peirce proposed the pragmatic maxim as a tool for dispensing with metaphysical nonsense; for him, pragmatism was strictly a ‘method of ascertaining the meanings of hard words and abstract concepts’. The core of this method is the idea that we must look to the upshot of our concepts in order to understand them.

To get a sense of how the pragmatic maxim operates, consider one of Peirce’s own applications: the Catholic doctrine of transubstantiation. This is the view that in the Mass, bread and wine are metaphysically transformed into the body and blood of Christ, despite there being no change at all in their sensible properties. In what, Peirce asks, could this transformation consist? His answer is that the very idea of something being blood but in every conceivable way being empirically indistinguishable from wine is nonsense, ‘senseless jargon’. By insisting that words and statements be analysed according to ‘what is tangible and conceivably practical’, Peirce aspired to ‘dismiss make-believes’ from philosophy, and thereby set upon the path of proper enquiry.

James was dissatisfied with Peirce’s formulation of the maxim. Instead, he proposed a broader rendition according to which the point of pragmatism is not to dispel metaphysical nonsense, as Peirce had alleged, but rather to settle metaphysical disputes. James proposed that one should include among the practical effects of a statement the psychological impacts of believing it. Whereas Peirce argued that the pragmatic maxim exposes the meaninglessness of the doctrine of transubstantiation, James thought that pragmatism afforded a decisive case in favour of it. The idea that one can ‘feed upon the very substance of divinity’ has ‘tremendous effect’ and thus is the ‘only pragmatic application’ of the idea of a substance. For James, the pragmatic maxim serves to resolve rather than dissolve longstanding philosophical debates.

This difference regarding the pragmatic maxim underlies a monumental dispute between Peirce and James over truth. Peirce argued that a belief is true if it would be ‘indefeasible’; or perfectly satisfactory; or would not be improved upon; or would never lead to disappointment; or would forever meet the challenges of reasons, argument and evidence. James meanwhile set out his view on truth and objectivity thus:

Any idea upon which we can ride … any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, simplifying, saving labour, is … true instrumentally.

‘Satisfactorily,’ for James, ‘means more satisfactorily to ourselves, and individuals will emphasise their points of satisfaction differently. To a certain degree, therefore, everything here is plastic.’ Peirce did not think that truth was plastic. He told James: ‘I thought your Will to Believe was a very exaggerated utterance, such as injures a serious man very much.’ He scorned what he took to be James’s view: ‘Oh, I could not believe so-and-so, because I should be wretched if I did.’

When Dewey is brought into the picture, the story of pragmatism is shown to be anything but straightforwardly developmental, where one philosopher’s thought naturally leads to the next one’s. According to Dewey, pragmatism was neither in the business of dismissing nonsense nor of settling metaphysical disputes. He sought a way of doing philosophy that was unhindered by the traditional puzzles and problematics. He resisted the Peircean strategy of proposing a test of meaning and, instead, socialised philosophy, arguing that the traditional philosophical problems naturally arose out of the social and intellectual conditions of a pre-Darwinian age.

Dewey contended that, since these conditions no longer obtain, the traditional philosophical problems should be simply abandoned as ‘chaff’, replaced by new difficulties arising from Darwinian science. In Dewey’s view, Darwinism shows that the world contains no fixed essences or immutable natures. This realisation sets the problem of revising our philosophical and moral ideas so that they are better suited to serve as tools for directing change. According to Dewey, the leading philosophical problem for a post-Darwin epoch is that of keeping our values in step with our technological power, so that they might guide society towards greater freedom.

In this respect, Dewey breaks decisively with James: his pragmatism is not aimed at resolving disputes, but rather at showing that nonpragmatic philosophical programmes are nonviable. Here, Dewey might at first seem allied with Peirce, but Dewey’s stance towards the philosophical tradition is more extreme. To be sure, Peirce’s maxim would have it that many traditional metaphysical statements are nonsense; however, it also leaves a great number of philosophical debates standing. For example, Peirce thought that the dispute between nominalism and realism (does reality consist only of concrete particulars or is generality real as well?) was a real and important philosophical dispute. He proposed his maxim as a way to ensure that such legitimate philosophical debates could proceed profitably. Metaphysics, ‘in its present condition’, is a ‘puny, rickety, and scrofulous science’, but it need not remain so. The pragmatic maxim will sweep ‘all metaphysical rubbish out of one’s house. Each abstraction is either pronounced gibberish or is provided with a plain, practical definition.’

Dewey, by contrast, aimed his criticisms not at specific statements, but at entire philosophical programmes. He dismissed Cartesianism, Kantianism, Humeanism, Platonism, Aristotelianism and nearly every other philosophical school as instantiations of the common defect of employing some or other archaic dualism. Again, Dewey’s charge is that all such approaches are obsolete: not meaningless, but unfit and useless tendencies to be gotten over. Whereas Peirce saw pragmatism as a rule for conducting philosophical enquiry, Dewey saw pragmatism as a philosophical programme for restructuring philosophy and society.

These philosophical differences were well recognised by the classical pragmatists themselves. The work of James and those he influenced led Peirce in 1905 to officially renounce the term pragmatism; he rebaptised his philosophy pragmaticism, a name he hoped was ‘ugly enough to be safe from kidnappers’, which it certainly was. Dewey also strenuously distanced himself from James’s theory of truth. In personal correspondence with Dewey, Peirce complained that Dewey’s philosophy was ‘too loose’ and employed too many ‘slipshod arguments’.

To be clear, the account we have just offered leaves aside many crucial details. However, what has been registered is enough to show that it is an error to present pragmatism as a doctrine initially proposed by Peirce, refined by James, and culminating in Dewey’s writings. Rather, what one finds in the classical pragmatists is a series of substantive disputes about enduring philosophical topics, including meaning, truth, knowledge, value, experience and the nature of philosophy itself.

There is another common misunderstanding about the history of pragmatism that is best articulated by the more recent pragmatist Richard Rorty:

Along about 1945, American philosophers were, for better or worse, bored with Dewey, and thus with pragmatism. They were sick of being told that pragmatism was the philosophy of American democracy, that Dewey was the great American intellectual figure of their century, and the like. They wanted something new, something they could get their philosophical teeth into. What showed up, thanks to Hitler and various other historical contingencies, was logical empiricism, an early version of what we now call ‘analytic philosophy’.

In other words, his popular ‘eclipse narrative’ (as we’ll call it) holds that pragmatism dominated professional philosophy in America throughout Dewey’s heyday, from the early 1900s until the early ’40s. Then, largely due to the war in Europe and the resulting influx of academics to the US, professional philosophy in the US took a ‘linguistic turn’ and began fixating on the technical and methodological issues that today are associated with ‘analytic philosophy’, a tradition originating in the work of Gottlob Frege in Germany; Bertrand Russell, G E Moore and Ludwig Wittgenstein in England; and Rudolf Carnap and Moritz Schlick in Austria.

Rorty took the new analytic philosophy to have been a malignant force in American philosophy departments, an invasion that displaced pragmatism. Crucially, the displacement is said to have been achieved not by way of a critical engagement with the pragmatists’ arguments and commitments, but instead simply by declaring pragmatism soft and insufficiently rigorous. Pragmatism was, in this telling, eclipsed as philosophers in the US began taking their intellectual cues from the analytic philosophers. Having gained strongholds in nearly all the elite PhD-granting universities in the US, the analytics swiftly trained the next several generations of professional philosophers. Pragmatism, America’s homegrown philosophy, thus was driven underground, where the remaining loyalists built scholarly networks devoted to keeping the classical idiom alive.

Yet there is also a resurrection in the eclipse narrative. It goes on to say that analytic philosophy eventually proved itself too self-absorbed and socially irrelevant to be sustainable. Recovering from the analytic fad, philosophers in the US, notably Rorty, Hilary Putnam and Cornel West, rediscovered pragmatism in their landmark works of the 1970s and ’80s. Hence ‘neo-pragmatism’ came to the fore as a leading ‘post-analytic’ development in professional philosophy. The eclipse seems to have been undone.

Well, not quite. The resurrection story is tinged with resentment. It is alleged that neo-pragmatism is too analytic and not closely tied to the classical texts. It has drifted off course, not authentically pragmatist. Pragmatism’s resurrection occasioned a second eclipse: although the philosophical mainstream is now once again attuned to some of the vocabulary and ideas of pragmatism, it has received them in the corrupted form promoted by the neo-pragmatists. On this view, classical pragmatism remains unjustifiably occluded.

Consequently, there is a growing literature devoted to repackaging Dewey’s pragmatism. Work in this genre embraces the tacit assumption that nonpragmatists are simply ignorant of pragmatism; accordingly, a recurring theme is that Dewey’s philosophy must be rediscovered so that it can ‘revitalise’ mainstream philosophy. The steady production of volumes devoted to establishing Dewey’s ‘continuing relevance’, ‘discovering’ his ideas and recapturing his ‘lessons’ is suggestive.

The upshot, tragic for the prospects of pragmatism, is that . . .

Continue reading. There’s much more.

Written by LeisureGuy

18 November 2019 at 9:18 am

Posted in Books

Tagged with

What John Rawls Missed

leave a comment »

Jedediah Britton-Purdy, who teaches at Columbia Law School, writes in the New Republic:

John Rawls, who died in 2002, was the most influential American philosopher of the twentieth century. His great work, A Theory of Justice, appeared in 1971 and defined the field of political philosophy for generations. It set out standards for a just society in the form of two principles. First, a just society would protect the strongest set of civil liberties and personal rights compatible with everyone else having the same rights. Second, it would tolerate economic inequalities only if they improved the situation of the poorest and most marginalized (for example, by paying doctors well to encourage people to enter a socially necessary profession).

Taken seriously, Rawls’s principles would require a radical transformation: no hedge funds unless allowing them to operate will benefit the homeless? No Silicon Valley IPOs unless they make life better for farmworkers in the Central Valley? A just society would be very different from anything the United States has ever been. Rawls argued that justice would be compatible with either democratic socialism or a “property-owning democracy” of roughly equal smallholders. One thing was clear: America could not remain as it was, on pain of injustice.

It did not remain as it was, but Rawls’s vision did not triumph either. A Theory of Justice was published in 1971, just before economic inequality began its long ascent from its lowest level in history to today’s Second Gilded Age. Richard Nixon’s “Southern strategy” was reorganizing American politics around resistance to equal rights. Within a decade, Ronald Reagan and Margaret Thatcher would lead the English-speaking world sharply away from anything resembling Rawls’s egalitarianism. Yet his philosophical stature only increased. Even his critics—the libertarian Robert Nozick, the feminist Susan Moller Okin, the communitarian Michael Sandel—ended up confirming the central and inescapable place of his thought. By the end of his life, philosophical thinking about equality, war, political authority, dissent and obedience, and global order took place on a terrain that Rawls towered over—in the shadow of justice.

That shadow provides the title of Katrina Forrester’s extraordinary study of Rawls’s thought and its legacy. Over the last 50 years, she argues, Rawls’s centrality has shaped the very idea of what philosophy is. Working in his aftermath, political philosophers have tended to emphasize ideals of consensus-seeking deliberation, legalistic formulations of political problems, and the dilemmas of individual choice in terrible situations such as war. Certain other questions have been quietly kept out: notably, the central place of conflict and collective action in politics, the tendency of capitalist democracy to fall into plutocracy, and the deep role of racism and colonialism in shaping American society and world order.

Yet as Forrester’s book demonstrates, Rawls’s approach to philosophizing about politics was never the only one, however much his influence has made it seem so. Instead, his theory of justice emerged from his distinctive experience of the exceptional decades after World War II. By tracing those historical circumstances—the political and economic assumptions of the postwar years, as well as the ways philosophy was done then—Forrester shows how Rawls’s thinking, with its strengths and blind spots, came to seem natural. Her aim is to open space for problems that Rawls neglected. What would it mean to pursue a just society while grappling with how deeply unjust and divided ours is, with how it got and stays that way?


Although Rawls’s principles of justice were in many ways radical, they were not novel. He is often thought of as the philosopher of 1960s Great Society reformism, because his principles seemed to elaborate on the goals of the civil rights movement and the war on poverty. What was new was Rawls’s mode of argument. He asked a question fundamental in political philosophy: Can any society be justified to all its members, in light of the inequalities it contains, the burdens it imposes (who empties the bedpans and gets up at midnight to make sure the subways keep running?), and the violence it deals out through police, prisons, and wars? If it cannot be just, then some of us are living in a kind of prison, and others are the wardens. If, however, justification is possible, then we might be able to create a world in which we actually approach one another as free and equal persons.

To imagine such a world, we have to shake off the habits of this one and picture ourselves as able to reset all our institutions and social relations, keeping only those that are just—that is, are justifiable to everyone who has to live within them. Rawls proposed a technique for doing this, a thought experiment that he called the “original position.” It invites us to imagine a situation in which people are to choose the world in which they will live. The key is that they choose behind a “veil of ignorance,” that is, they do not know where they would fall in the distribution of privilege and vulnerability in the societies they imagine. Would you choose to live in the United States today if you didn’t know whether you would be Elon Musk or an undocumented immigrant?

Rawls argued that, faced with this uncertainty, people would choose the world that provided the best position for the least advantaged, worst-off class of people. If you don’t know where you will fall, you will want the worst possibility to be as acceptable as possible. Economics-minded critics argued that this was too risk-averse, that one might gamble for the Silicon Valley jackpot at the risk of picking lettuce instead. But this criticism misconstrued the project: Rawls’s argument was a way of setting out exactly what it meant to justify a social world even to the people picking lettuce. If the question is, “Can this world be justified to me as a free and equal person?” Rawls was not prepared to accept, “Yes, because you might have been Elon Musk!” as an answer.
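[Rawls’s difference principle is often formalized as a “maximin” rule, and the economists’ complaint as a preference for maximizing the average or expected outcome. A toy sketch, with invented payoff numbers and society names, shows why the two rules can pull apart; it illustrates only that formal disagreement, not the article’s further point that the objection misconstrues what Rawls was asking.

    # Toy sketch of choice behind the veil of ignorance. Each "society" lists
    # the life prospects of its social positions; the chooser does not know
    # which position she would occupy. Numbers are invented for illustration.
    societies = {
        "jackpot_society": [100, 40, 2],      # a few do spectacularly well, the worst-off very badly
        "egalitarian_society": [50, 40, 30],  # narrower spread, decent floor
    }

    def maximin_choice(options):
        # One common gloss on the difference principle: judge each society
        # by the prospects of its worst-off position.
        return max(options, key=lambda s: min(options[s]))

    def expected_value_choice(options):
        # The "gamble for the jackpot" alternative: judge each society by
        # its average prospect.
        return max(options, key=lambda s: sum(options[s]) / len(options[s]))

    print(maximin_choice(societies))         # egalitarian_society (floor of 30 vs 2)
    print(expected_value_choice(societies))  # jackpot_society (average of about 47 vs 40)

The maximin chooser asks whether the worst outcome could be justified to the person who ends up with it; the expected-value chooser is content to say “you might have been the winner.” – LG]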

Conservative critics such as the Straussian Allan Bloom (later famous for his polemic The Closing of the American Mind) accused Rawls of cherry-picking principles to suit the liberal prejudices of the moment. In Rawls’s hands, the original position gave philosophy’s imprimatur to the democratic welfare state as well as to the civil disobedience of the civil rights movement and resistance to the Vietnam War. Friendlier readers interpreted Rawls in light of the conflicts of the early 1970s too. Philosopher Marshall Cohen’s New York Times review of A Theory of Justice welcomed a defense of American liberalism “at a time when these principles are persistently being obscured and betrayed”—presumably in Vietnam and at home by the Nixon administration.


Both of these responses, Forrester argues, miss key features of Rawls’s project. Her story begins in the decade after World War II, when Rawls undertook the work that became A Theory of Justice. A watershed event for Rawls was the 1953 publication of Ludwig Wittgenstein’s Philosophical Investigations, which along with Wittgenstein’s other late work helped to inspire a broader philosophical turn to “ordinary language.” When Rawls visited Oxford in the academic year of 1952 to ’53, this approach was richly elaborated there. It was the new philosophical frontier of the age, full of untried possibility.

Ordinary-language philosophers turned away from highly technical questions about the fundamental nature of language (What makes a sentence true? Does every word in a true statement refer to some definite object in the world?). Instead they asked how language works from the point of view of a clear-minded speaker and listener. Everyone lives inside a language, they reasoned, knows how to use its grammar, and recognizes misuse and confusion. We have to get over the philosophical impulse to seize sentences and sweat them, inquisition-style, until they confirm their truth or confess their falsehood. Philosophy is less about achieving a new kind of knowledge, more about making clear what we already know. Philosophers began to think about language and social practices such as law the way we think about games. There is no such thing as hitting a triple outside of baseball: Try as you might, you cannot do it alone, or in a group of people who have never heard of baseball and want you, please, to take your stick off the soccer pitch. But once you are playing baseball, it is clear whether or not a triple has been hit. Even close cases, such as a photo-finish race to beat a throw from an outfielder, just confirm that we know what a triple is.

The legal philosopher H.L.A. Hart argued that law, too, is a game in this way. There is no “natural law” that tells you whether you “really” must obey a law you dislike, as both dissenters and defenders of existing law had long hoped to show. But once you are involved in legal argument, you tacitly accept that certain things count as law. If you start to insist that Leviticus trumps the San Francisco municipal code, then you have become the person waving a baseball bat on the soccer pitch, hoping to get to third base in a game where third base does not exist. Forrester argues that Rawls wanted to elucidate society itself as a “game” of this sort. Social morality, which is the topic of justice, had its own tacit rules, and drawing those out could help to make clear what people already knew when not distracted by self-interest or prejudice. Like the rule book for a well-established sport, the original position and the principles that Rawls drew from it did not dictate some new morality. They helped to spell out the terms of a social practice.

If Rawls’s approach to justice emerged from the philosophical currents of the 1950s, it also formed in response to political concerns. Born in Baltimore in 1921, Rawls saw the rise of the administrative state through the 1930s and ’40s, as New Deal programs led to the establishment of an alphabet soup of government agencies to implement them: the SEC, the FHA, the PWA, the NLRB, and many more. Although Rawls was not an anti-New Deal reactionary, he shared the worries of some liberals and centrists that the expanded American state would end up interfering with personal autonomy through perennial supervision of the economy. He preferred to think that if the state established the right set of operating principles and guardrails, people would be able to get along on their own, with no more than modest political intrusion or contest.

It was bold, if not implausible, to posit a neutral and abiding set of principles in American society, which was torn by bloody labor conflict in the ’30s and ’40s, and sent its pacifists and revolutionaries to prison or worse. But Rawls wasn’t alone in doing so: The decades in which he developed his theory formed the high-water mark of the “consensus” schools of American political science and history. It became conventional to say that Americans had mostly agreed on the essential principles of liberty, equality, and democracy—and, less abstractly, private property, regulated markets, and courts of law. Conflict was the exception. Radical dissenters were outliers. The idea of consensus was essential to Rawls’s project: If Americans deeply agreed on justice, then the hidden logic of that agreement, drawn out through the original position, could both guide and limit the state.

A Theory of Justice was both radical and conservative. Yes, it proposed a sweeping reconstruction of “the basic structure” of American life—Rawls’s term for the key institutions of public life, such as government and the economy. At the same time, it described the principles of reconstruction as ones that Americans already held. This strategy of squaring the circle might seem odd: How can a country be committed to principles it routinely and pervasively defies and ignores? Yet it’s also peculiarly American. The American political myth (meaning not a simple fiction but a kind of shared master-story) is “constitutional redemption,” the idea that moral truths are woven deep into the country’s character, imperfectly expressed in the Constitution and existing institutions, but awaiting realization in “a more perfect union.” This was how Frederick Douglass and Abraham Lincoln talked about freedom and equality in the 1860s, and how Martin Luther King and Lyndon Baines Johnson talked about the same values in the mid-1960s. Constitutional redemption was the defining ideal of Cold War liberal patriotism. Its strategies became, by subtle philosophical transformation, the strategy of A Theory of Justice: to say that Americans already are what they have never yet been—and that this ideal is also incipiently universal, if other peoples can make their way to it.


Forrester is a subtle intellectual historian as well as a political theorist, and she does not imply that one book, even a work as field-defining as A Theory of Justice, can in fact define a field. In the Shadow of Justice also tells the story of a network of Rawls’s contemporaries and the generation-plus that followed him. These thinkers continued a search for the impersonal perspective on politics that Rawls had put at the heart of the field. Ironically, however, the consensus Rawls had counted on was already gone by the polarized late 1960s, which saw violent backlash against the civil rights movement, vicious clashes over the Vietnam War, and acts of domestic terrorism from both the militant left and the racist right. There was little more reason in 1971 to think that Americans shared an abiding consensus than there is in 2019. In the face of polarization, the thinkers in political philosophy’s mainstream persisted in presenting themselves as above mere political conflict, claiming a neutral ground that no longer existed.

In Forrester’s telling, the philosophers in Rawls’s milieu aimed to engage with the radical challenges of the 1960s and 1970s, but tended to formulations that blunted the sharpest criticisms of American life. Confronted with civil disobedience against the Vietnam War and racial subordination, Rawls and his cohort developed the canonical modern image of civil disobedience: as an appeal to the country’s higher principles, a fragment of lawbreaking in support of a larger fidelity to law. Those dissenters who disobeyed because they considered the U.S. government illegitimate, at least in some respects, were written out of the story.

When black activists and scholars proposed reparations for slavery and Jim Crow, the philosophers responded that justice asks whether people are being treated as equals today, not the “historical” question of how inequalities arose. Rawls similarly hurried past segregation in his work; he reasoned that it was so manifestly unjust that there was nothing a philosopher should say about it except that it should be abolished completely. But maybe a philosopher who was trying to distill the country’s most basic values should have lingered over just how deeply the legacies of Jim Crow and slavery shaped that country. What did the vicious and often successful resistance to the civil rights movement reveal about the American grammar of justice?

A similar ahistorical impulse governed when Rawls and others turned to the problem of international justice. Colonialism and empire largely receded from sight, as did postcolonial political efforts to develop redistributive regimes such as the short-lived New International Economic Order. In The Law of Peoples, Rawls imagined an original position for representatives of nation-states, interested in fair rules of international order. But he didn’t propose redress for newly independent countries, which would be starting out poorer than the colonial powers that had dominated them for years. There is a fine line between distilling problems to issues of principle and losing track of the settings altogether.

A part of what happened in these decades was that the technique of Rawls’s arguments came loose from the setting in which it had originally made sense. The discipline became increasingly remote from moral and political experience. What, asked some next-generation Rawlsians, would be the result of an original position for the whole world? The question moves far away from Rawls’s own effort to draw out the principles to which his audience was already committed. Where was the consensus, what were the institutions, for a philosophy of global justice? . . .

Continue reading.

Written by LeisureGuy

10 November 2019 at 7:24 am

The crisis in physics is not only about physics

with one comment

Sabine Hossenfelder, aka Bee, Research Fellow at the Frankfurt Institute for Advanced Studies, writes in her blog BackRe(action):

In the foundations of physics, we have not seen progress since the mid-1970s, when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs boson, predicted in the 1960s and measured in 2012. But all shortcomings of these theories – the missing quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then.

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept the size and expense of experiments manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiment. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end.

The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing. [The Dunning-Kruger effect in action — and demonstrating that the problem is caused by ignorance, not stupidity. It’s a defect that derives from lack of knowledge, not lack of intelligence. – LG]

The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.

But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or a particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it.

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that . . .

Continue reading. There’s much more.

Written by LeisureGuy

8 November 2019 at 9:09 am

Posted in Education, Memes, Science

Tagged with

Choose your path — and avoid the fear of missing out

leave a comment »

Epicurus advocates enjoying the daily pleasures that life presents. That requires being present for those pleasures and not becoming preoccupied by absent pleasures.

I mention this idea in the long post on my diet — how some who choose a plant-based diet become preoccupied with the foods they’re not eating (the steaks, ribs, roasts, sausage, duck breast, scrambled eggs, triple-cream cheese, and so on) to the extent that they fail to focus on the pleasures of the food they are eating. Their focus is on denial (no meat! no dairy! no eggs!) and not on affirmation; they keep looking back at what they once had and ignore all the wonders of the new possibilities open to them. I wrote:

How to be happy with your diet

Look at the variety of whole plant-based foods and the meals you can make with them. If you focus your attention on what you can/should eat and not dwell on what you can’t (or shouldn’t) eat, you’ll feel much more satisfied with your lot. If you constantly obsess about foods you should avoid, you’ll make yourself unhappy and undermine your will to eat well. I mention this because it seems that people have a tendency to focus on what they lack and not on what they have. (“We look before and after, And pine for what is not; Our sincerest laughter With some pain is fraught; Our sweetest songs are those that tell of saddest thought.” – from To a Skylark, by Percy Bysshe Shelley.)

This is a specific example of a more general situation — namely, whenever you choose a direction you necessarily must forsake other directions.

The Road Not Taken – by Robert Frost

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

Having chosen a path, an Epicurean will enjoy the pleasure of that path and not fret about what s/he has missed from all other possible paths. Whatever you do and whatever you have, you can think of myriads of things you aren’t doing and don’t have. Do not let yourself become attached to those absences.

For example, if you choose a whole-food plant-based diet, do not consider it as rejecting meat, dairy, and eggs (the negative view, which focuses on the path abandoned), but rather look for the pleasures of the path now chosen.

More generally, we necessarily move from one day to the next and from one season to the next. We are always moving on, changing, and (hopefully) growing in wisdom and stature and in favor with God and man. That growth suggests a path that constantly presents new vistas and new choices. Each stage along the way is an abandonment of the previous stage. One can focus on the loss of the previous stage or look for the pleasures the new stage brings. If you follow Epicurus, it is obvious that you should enjoy the pleasures.

I mention this in Leisureguy’s Guide to Gourmet Shaving:

I realized recently that this book might have been more accurately titled The Epicure’s Guide to Shaving, for Epicurus[i] would surely approve making necessary tasks enjoyable. He thought that chance encounters of atoms falling through the void, randomly interacting, produced — after much time — us and the world in which we live. In his view we cease to exist when we die, while the atoms of our body continue to tumble along through time and space.

Because Epicurus believed that life is a one-shot deal, he made enjoying life a high priority. A dissolute lifestyle tends to have highly unpleasant consequences, so it makes sense to seek enjoyment first in the small things of life, which is what we mostly encounter day to day. Learning new ideas and mastering new skills are examples of activities that provide enjoyment without harm.

Take, as a random example, the morning shave: an Epicurean who shaves will seek a way to derive enjoyment from the task: to spend his (limited) time doing things he doesn’t enjoy makes no sense when he could instead do them enjoyably. Moreover, an enjoyable task requires little willpower: you are drawn to the task rather than having to push yourself. Indeed, a task can even be restorative and energizing; rather than draining you, a task approached properly can provide both enjoyment and a satisfying sense of fulfillment.

The psychologist Mihály Csíkszentmihályi wrote several books on a mental state he termed “flow”: a focused, absorbing, satisfying involvement in what is happening in the moment[ii]. So another way to state the Epicurean position is that one should arrange his or her life to maximize the opportunities for flow to occur. Flow is a mental experience, so introspection combined with an attitude that encourages the enjoyment of small things—to look for joy, and to think about how to find more occasions of joy—is an obvious step.

This book is my contribution to an Epicurean lifestyle: the book offers a way to make a necessary chore enjoyable. But don’t stop just at shaving.

[i] Epicurus: See Catherine Wilson’s book and (of course) his own writings, and there’s also the Wikipedia entry on Epicurus.

[ii] Mihály Csíkszentmihályi: See the Wikipedia article on flow. Each person can find activities appropriate for him or her that will promote flow: rock climbing, painting or drawing, gardening, cooking, playing a musical instrument, and the like. Csíkszentmihályi defined the term in his studies and in the fascinating book that emerged from them, Flow: The Psychology of Optimal Experience.

This is why I see as wrong-headed the effort by some who take up a plant-based diet to focus on trying to mimic the foods left behind: seeking imitation bacon, imitation sausage, imitation roast, imitation burgers, imitation cheese. Those strike me as distractions that prevent appreciation of the new vistas that the new direction offers. That approach amounts to looking back at the past and longing for it.

The Chambered Nautilus – by Oliver Wendell Holmes, Sr.

This is the ship of pearl, which, poets feign,
Sails the unshadowed main,—
The venturous bark that flings
On the sweet summer wind its purpled wings
In gulfs enchanted, where the Siren sings,
And coral reefs lie bare,
Where the cold sea-maids rise to sun their streaming hair.

Its webs of living gauze no more unfurl;
Wrecked is the ship of pearl!
And every chambered cell,
Where its dim dreaming life was wont to dwell,
As the frail tenant shaped his growing shell,
Before thee lies revealed,—
Its irised ceiling rent, its sunless crypt unsealed!

Year after year beheld the silent toil
That spread his lustrous coil;
Still, as the spiral grew,
He left the past year’s dwelling for the new,
Stole with soft step its shining archway through,
Built up its idle door,
Stretched in his last-found home, and knew the old no more.

Thanks for the heavenly message brought by thee,
Child of the wandering sea,
Cast from her lap, forlorn!
From thy dead lips a clearer note is born
Than ever Triton blew from wreathèd horn!
While on mine ear it rings,
Through the deep caves of thought I hear a voice that sings:—

Build thee more stately mansions, O my soul,
As the swift seasons roll!
Leave thy low-vaulted past!
Let each new temple, nobler than the last,
Shut thee from heaven with a dome more vast,
Till thou at length art free,
Leaving thine outgrown shell by life’s unresting sea!

To cling to your current mansion and fear to abandon it means you will not encounter (nor appreciate) new pleasures. It’s bad enough when this fear means one fails to appreciate the pleasures of a new dietary direction, but such fears can cost more: they can imprison one in a life of misery, because one’s focus is fixed entirely on what would be lost by moving on. Consider, for example, a terrible marriage in which two people remain together only because each fears the loss of wealth and possessions (the house, the lifestyle, the cars, …). Their wealth and possessions are a prison, and they remain in their current chamber of misery, never moving to a dome more vast. They view taking a new path only as the loss of the old path, and they cannot see the possibility of pleasures that lie unseen ahead.

Epicurus would, I think, see this as a tragedy. They have but one life, and to remain stuck in a miserable situation, never considering the joys that could await them in a new stage, always clinging to what they now have, holding back from moving on: that exacts a terrible price.

Rabbi Ben Ezra – by Robert Browning

Grow old along with me!
The best is yet to be,
The last of life, for which the first was made:
Our times are in His hand
Who saith “A whole I planned,
Youth shows but half; trust God: see all, nor be afraid!”

That’s but the first stanza of a long poem. It presents a two-chambered version of the nautilus: youth and old age. Don’t cling to youth, for it must pass. Enjoy the pleasures of a fuller experience.

Those who fear change do not grow, for growth is change. They fear to leave the mansions of the past to see what pleasures lie ahead — they cannot make this leap of faith. If your attention is totally taken by what was, potential new pleasures pass by unobserved and unexperienced.

So if you choose a whole-food plant-based diet, embrace it. See where it takes you. Explore the new mansion.

Update: This morning another poem occurred to me on somewhat the same theme — a theme that seems to appeal to poets because being distracted from what is present by pining for what is absent is a common (and uniquely human) condition. Here’s the poem:

Maud Muller – by John Greenleaf Whittier

Maud Muller, on a summer’s day,
Raked the meadow sweet with hay.

Beneath her torn hat glowed the wealth
Of simple beauty and rustic health.

Singing, she wrought, and her merry glee
The mock-bird echoed from his tree.

But when she glanced to the far-off town,
White from its hill-slope looking down,

The sweet song died, and a vague unrest
And a nameless longing filled her breast,—

A wish that she hardly dared to own,
For something better than she had known.

The Judge rode slowly down the lane,
Smoothing his horse’s chestnut mane.

He drew his bridle in the shade
Of the apple-trees to greet the maid,

And ask a draught from the spring that flowed
Through the meadow across the road.

She stooped where the cool spring bubbled up,
And filled for him her small tin cup,

And blushed as she gave it, looking down
On her feet so bare, and her tattered gown.

“Thanks!” said the Judge; “a sweeter draught
From a fairer hand was never quaffed.”

He spoke of the grass and flowers and trees,
Of the singing birds and the humming bees;

Then talked of the haying, and wondered whether
The cloud in the west would bring foul weather.

And Maud forgot her brier-torn gown
And her graceful ankles bare and brown;

And listened, while a pleased surprise
Looked from her long-lashed hazel eyes.

At last, like one who for delay
Seeks a vain excuse, he rode away.

Maud Muller looked and sighed: “Ah me!
That I the Judge’s bride might be!

“He would dress me up in silks so fine,
And praise and toast me at his wine.

“My father should wear a broadcloth coat;
My brother should sail a painted boat.

“I’d dress my mother so grand and gay,
And the baby should have a new toy each day.

“And I’d feed the hungry and clothe the poor,
And all should bless me who left our door.”

The Judge looked back as he climbed the hill,
And saw Maud Muller standing still.

“A form more fair, a face more sweet,
Ne’er hath it been my lot to meet.

“And her modest answer and graceful air
Show her wise and good as she is fair.

“Would she were mine, and I to-day,
Like her, a harvester of hay:

“No doubtful balance of rights and wrongs,
Nor weary lawyers with endless tongues,

“But low of cattle and song of birds,
And health and quiet and loving words.”

But he thought of his sisters proud and cold,
And his mother vain of her rank and gold.

So, closing his heart, the Judge rode on,
And Maud was left in the field alone.

But the lawyers smiled that afternoon,
When he hummed in court an old love-tune;

And the young girl mused beside the well,
Till the rain on the unraked clover fell.

He wedded a wife of richest dower,
Who lived for fashion, as he for power.

Yet oft, in his marble hearth’s bright glow,
He watched a picture come and go;

And sweet Maud Muller’s hazel eyes
Looked out in their innocent surprise.

Oft, when the wine in his glass was red,
He longed for the wayside well instead;

And closed his eyes on his garnished rooms
To dream of meadows and clover-blooms.

And the proud man sighed, with a secret pain,
“Ah, that I were free again!

“Free as when I rode that day,
Where the barefoot maiden raked her hay.”

She wedded a man unlearned and poor,
And many children played round her door.

But care and sorrow, and childbirth pain,
Left their traces on heart and brain.

And oft, when the summer sun shone hot
On the new-mown hay in the meadow lot,

And she heard the little spring brook fall
Over the roadside, through the wall,

In the shade of the apple-tree again
She saw a rider draw his rein.

And, gazing down with timid grace,
She felt his pleased eyes read her face.

Sometimes her narrow kitchen walls
Stretched away into stately halls;

The weary wheel to a spinet turned,
The tallow candle an astral burned,

And for him who sat by the chimney lug,
Dozing and grumbling o’er pipe and mug,

A manly form at her side she saw,
And joy was duty and love was law.

Then she took up her burden of life again,
Saying only, “It might have been.”

Alas for maiden, alas for Judge,
For rich repiner and household drudge!

God pity them both! and pity us all,
Who vainly the dreams of youth recall.

For of all sad words of tongue or pen,
The saddest are these: “It might have been!”

Ah, well! for us all some sweet hope lies
Deeply buried from human eyes;

And, in the hereafter, angels may
Roll the stone from its grave away!

Written by LeisureGuy

8 November 2019 at 3:57 am

The basic epistemological problem: It’s impossible to see the world as it is

leave a comment »

Written by LeisureGuy

6 November 2019 at 5:43 pm

Posted in Evolution, Science

Tagged with

How to be an Epicurean

with 2 comments

The preface to the sixth edition of Leisureguy’s Guide to Gourmet Shaving (a preface also included in the current, seventh, edition) begins:

I REALIZED recently that this book might have been more accurately titled The Epicure’s Guide to Shaving, for Epicurus[i] would surely approve making necessary tasks enjoyable. He thought that chance encounters of atoms falling through the void, randomly interacting, produced—after much time—us and the world in which we live. In his view we cease to exist when we die, while the atoms of our body continue to tumble along through time and space.

Because Epicurus believed that life is a one-shot deal, he made enjoying life a high priority. A dissolute lifestyle tends to have highly unpleasant consequences, so it makes sense to seek enjoyment first in the small things of life, which is what we mostly encounter day to day. Learning new ideas and mastering new skills are examples of activities that provide enjoyment without harm.

Take, as a random example, the morning shave: an Epicurean who shaves will seek a way to derive enjoyment from the task: to spend his (limited) time doing things he doesn’t enjoy makes no sense when he could instead do them enjoyably. Moreover, an enjoyable task requires little willpower: you are drawn to the task rather than having to push yourself. Indeed, a task can even be restorative and energizing; rather than draining you, a task approached properly can provide both enjoyment and a satisfying sense of fulfillment.

The psychologist Mihály Csíkszentmihályi wrote several books on a mental state he termed “flow”: a focused, absorbing, satisfying involvement in what is happening in the moment[ii]. So another way to state the Epicurean position is that one should arrange his or her life to maximize the opportunities for flow to occur. Flow is a mental experience, so introspection combined with an attitude that encourages the enjoyment of small things—to look for joy, and to think about how to find more occasions of joy—is an obvious step.

This book is my contribution to an Epicurean lifestyle: the book offers a way to make a necessary chore enjoyable. But don’t stop just at shaving.

[i] Epicurus: See https://tinyurl.com/7kafxfj and (of course) the Wikipedia entry.

[ii] Mihály Csíkszentmihályi: See https://tinyurl.com/a5f4s. Each person can find activities appropriate for him or her that will promote flow: rock climbing, painting or drawing, gardening, cooking, playing a musical instrument, and the like. Csíkszentmihályi defined the term in his studies and in the fascinating book that emerged from them, Flow: The Psychology of Optimal Experience (https://tinyurl.com/ywzrea for inexpensive copies).

Catherine Wilson, most recently Anniversary Professor of Philosophy at the University of York and now Visiting Presidential Professor at CUNY Graduate Center in New York, whose latest book is How to Be an Epicurean (2019), published simultaneously in the UK under the title The Pleasure Principle, writes in Aeon:

Like many people, I am skeptical of any book, lecture or article offering to divulge the secrets of happiness. To me, happiness is episodic. It’s there at a moment of insight over drinks with a friend, when hearing a new and affecting piece of music on the radio, sharing confidences with a relative or waking up from a good night’s sleep after a bout of the flu. Happiness is a feeling of in-the-moment joy that can’t be chased and caught and which can’t last very long.

But satisfaction with how things are going is different than happiness. Satisfaction has to do with the qualities and arrangements of life that make us want to get out of bed in the morning, find out what’s happening in the world, and get on with whatever the day brings. There are obstacles to satisfaction, and they can be, if not entirely removed, at least lowered. Some writers argue that satisfaction mostly depends on my genes, where I live and the season of the year, or how other people, including the government, are treating me. Nevertheless, psychology, and the sharing of first-person experience acquired over many generations, can actually help.

So can philosophy. The major schools of philosophy in antiquity – Platonism, Stoicism, Aristotelianism and, my favourite, Epicureanism – addressed the question of the good life directly. The philosophers all subscribed to an ideal of ‘life according to nature’, by which they meant both human and nonhuman nature, while disagreeing among themselves about what that entailed. Their original writings, most of them widely accessible, readable and thought-provoking, remain a resource, not just for philosophy students and specialists, but for everyone interested in the topics of nature, society and wellbeing.

What was a ‘school’ of philosophy for the ancient Greeks and Romans? Essentially, it was a group that shared common beliefs and values. Its members would meet regularly to listen to lectures by the leader, to discuss the philosophical issues among themselves and with occasional visitors, and to work out how to defend their views against the objections of their competitors’ schools. Accounts of the lectures and discussions might make their way into written texts, crafted by the leader or his students. Philosophy was not, however, a form of public education. Between 40 and 80 per cent of the population of Athens in the first few centuries BCE were male and female slaves. Some of them might serve and entertain at philosophical functions but did not participate.

Plato, who collected the thoughts and discussions of his 5th-century BCE teacher Socrates, emphasised the cultivation of the four virtues of wisdom, courage, moderation and justice. Plato considered these virtues, and other ‘forms’ such as truth and beauty, more real than anything composed of matter. Virtue, he thought, was the route and the only route to eudaimonia, usually translated as ‘welfare’ or ‘flourishing’. Dishonesty, cowardice, gluttonous, lustful, intemperate behaviour and mistreatment of others could produce only a disordered and unhappy personality.

The audiences that Socrates and Plato meant to address consisted most typically of ambitious and spoiled young men from top Athenian families who needed to be set straight. Was Plato’s theory of human flourishing through virtue meant to apply to women? Plato, Aristotle and the Stoics led all-male academies. The women of the time were largely confined to the household, at least the respectable ones. Their domestic occupations would not have given them opportunity to display courage (mostly understood as courage in battle), or wisdom (as they lacked an education and experience of the world outside the home), or moderation (as they had no sexual freedom and did not take part in heavy-drinking parties), or justice (as they had no scope to judge adult men and to mete out rewards and punishments). Plato’s pupil, Aristotle, writing in the 4th century BCE stated explicitly that virtue was different for men and for women. For women, obedience was the top virtue and so presumably conducive to their flourishing.

Aristotle wrote on a much wider range of subjects than Plato had, from marine biology to human reproduction, from political organisation to drama and rhetoric. In ethics, he pointed out that some supposed virtues could be too much of a good thing. Too much courage was foolhardiness; too much moderation was stinginess and asceticism. Too much wisdom might make you seem pompous, I suppose, and a fanatical commitment to justice would exclude mercy and forgiveness, which seem virtuous. But Aristotle’s main contribution to moral philosophy is often considered to be his point that to be happy you have to be somewhat lucky. If you are born with a terrible, progressive disease, or into the middle of a war, or if you happen to have powerful enemies who impede you at every turn, your chances of flourishing are lower than otherwise. For eudaimonia, you not only have to practise virtue; you need friends, your health and a decent income.

A third major school of philosophy, Stoicism, represented by a number of teachers and writers in the Greek and Roman traditions, including Epictetus and Seneca, reverted to the Platonic view that external events cannot diminish the wellbeing of the good person. The world, they thought, is ruled by providence; all that happens is fated to happen, and we must embrace our individual fates and the past and the future that has been determined for us. As things could not have happened otherwise, regret and remorse over past decisions and actions are pointless.

Not only regret, but all emotions, including anger, pity and love, are ‘diseases’ of the soul in need of a cure, though a general benevolence towards humanity was permissible. An emotional reaction, they maintained, always involves the illusion that some external event, a rejection letter, or a friend’s betrayal, or meeting someone fantastic, or being tortured, is objectively bad or good for you. An emotion, they said, is just a bodily disturbance that causes mental disturbance. To restore tranquility, one should remember that these things happen all the time, that they were fated to happen, and that the self is an ‘inner citadel’ that can withstand any attack.

Stoicism has many adherents even today because it offers explicit coping mechanisms for everyday adversities. Psychotherapeutic techniques that involve getting distance or perspective on individual problems have a lot of overlap with Stoic techniques. But there are many problems with Stoicism – and psychotherapy. The major one, in my opinion, is that these techniques haven’t been proven. I have found no well-designed and methodologically sound empirical study showing that emotionally troubled people who undergo perspective-inducing therapy fare better, after some given length of time, than emotionally troubled people who just wait for time to heal their wounds.

A second problem with Stoic practices is that emotions make life feel worth living. Emotional numbness and absence of motivation is the main feature of depression. Drugs that reduce affect are widely disliked by patients who have been prescribed them. Recent empirical work suggests that we need the emotions to make decisions; otherwise we just waffle endlessly, making up rationales and counter-rationales for some course of action. And finally, the Stoic claim that pity for the suffering of others just makes you feel bad yourself is deeply inhuman.

The fourth major philosophy of antiquity was developed in the 3rd century BCE in Athens by Epicurus and taken up by his 1st-century BCE Roman follower, Titus Lucretius Carus, the author of the great didactic poem ‘On the Nature of Things’. Epicureanism challenged both the overall organisation and the accounts of the way to eudaimonia of the other philosophical schools. Epicurus and his followers formed a sort of commune based in Epicurus’s house, surrounded by a ‘garden’, outside the city walls. The Epicureans took their meals in common, discussed science and ethics, and socialised. Women were included in the sect, and their flourishing was not understood differently to that of men. Epicurus was notorious for his nonmarital relationships that combined sex and philosophy.

Plato, Aristotle and the Stoics each made a place in their systems for a god, or godlike intelligences, as the creator or the rulers of the world. And in their various ways, they all agreed that matter by itself was dead, illusory and devoid of any characteristics except being a lump. Spiritual entities, such as Plato’s forms, or Aristotle’s souls, or the Stoics’ world-enlivening pneuma, had to be brought in to explain life, thought and the changes observed in nature.

Epicurus, by contrast, was a materialist. All that really existed, he declared, were indestructible atoms – tiny mobile particles, invisible to the naked eye, with various shapes and sizes, but devoid of colour, odour, flavour and sound, and separated by void space. In combination, they gave rise to the physical world and all its phenomena, including thought and perception. The atoms had formed the world by themselves – originally sticking together just by chance and growing into larger stable complexes. If there were gods, they too were made of atoms. But there was no need to appeal to the gods to explain any happenings on Earth or in the sky – or for that matter in history or in anyone’s personal life. The soul was composed of atoms as well; it dissipated into the air at death, so there was no immortality, or resurrection, or transmigration of souls.

Their theory of nature had ethical consequences for the Epicureans. Prayer was useless, and, regardless of what the priests taught, there was no hell for the wicked. The life of eudaimonia was simply one in which pleasure dominated over pain. This required prudence, and the ability to tell the difference between experiences and occupations conventionally assumed to be pleasurable and those that were truly pleasurable.

The Epicureans had no patience for the Stoic claim that human beings are self-sufficient, without need for the approval, goodwill or assistance of others. They doubted that the mind could, or should try to repress or dissolve emotions. To be happy, they insisted, we need to be engaged with external things and with other people. When things go badly, we will suffer, and there is no real cure except time and distraction. So it’s essential to be aware of the most frequent external causes of misfortune and to steer clear of them before misfortunes happen. As the future is not predetermined, and as humans have free will, this is possible.

Political ambition and wealth-seeking almost always cause anxiety and disappointment. So does romantic love when unrequited, which sociologists tell us is most of the time. So try not to get or remain snared! (Obsession with someone unavailable will fade quicker with no contact, according to Epicurus, and, according to Lucretius, temporary diversion with just about any willing bystander can help.) Many painful illnesses can be avoided by prudent behaviour and correct choice of food and drink, and, when those befall us despite our best efforts, intense pains are short-lived and long-lived pains are mild.

Rather than aiming specifically to maximise pleasure, the Epicureans concentrated on minimising pains, the pains that arise from failures of ‘choice and avoidance’. They knew that immediate intuition about costs and benefits is unreliable. One must sometimes sacrifice appealing food and drink in the short term to avoid the long-term pains of addiction and poor health; and sacrifice sexual opportunity to avoid humiliation, anger or social or economic fallout. But there is nothing virtuous about poverty and deprivation, and no one’s misery is ever deserved. Martyrdom for a cause is pointless, and, if we punish wrongdoers, it should be only for reasons of deterrence, not for revenge; if punishment doesn’t work, it is morally wrong to punish.

But if life is limited to this life, and if virtues such as wisdom, moderation and justice are only abstract ideas in atomic minds, why be moral?

The Epicureans had two answers to this question. One was that the people around you resent stupidity, cowardice, self-indulgence and injustice – the opposites of the traditional virtues. So, if you habitually engage in them, you will find yourself socially excluded and perhaps even punished by the law. Nonconformity to morality brings pain.

The other answer was that it is possible to have an entirely pleasant life without causing injury to others through dishonesty, immoderation or other vices. The sources of innocent pleasure are all around us: in the sensory enjoyment of music, food, landscapes and artworks, and especially, Epicurus thought, in the study of nature and society, and in conversing with friends. Unlike Aristotle, who thought one’s friends should be chosen for their virtue (rather than for their advantage), Epicurus thought that friends were just people who thought more or less the same way you did, whom you just happened to like.

Although few of us want to drop out and join a residential philosophical cult in the suburbs, carrying the Epicurean perspective into daily life can be of personal value.

A first point of departure for thinking about Epicureanism in a contemporary context is the fact that competition for power, esteem and financial reward (none of which the Epicureans regarded as real goods) is built into every aspect of our society. We are urged to strive for promotions and better salaries, for the best GPAs, test scores and university places, for recognition and approval from colleagues, for the best possible mate in terms of looks and status. Advertisements on the New York subway urge me to get a diploma, bid for construction contracts, initiate and win lucrative lawsuits, and fix my face and figure. My glossy alumni magazine glorifies those faculty who discovered or invented something patentable, or who at least seem to be on track to do so, and its advertising urges me to invest my wealth with prestigious firms to acquire even more wealth. The bestselling self-help books advertised on Amazon, and lining the shelves in the airport newsvendors, promise to boost me to a top position where I can make all the decisions and boss others around, and to crush the self-defeating behaviour preventing me from finding lasting love.

This success-driven focus of contemporary life is complemented by a focus on the passive consumption of supposed comfort- and pleasure-inducing objects, such as speciality mattresses and bamboo-fibre socks. . .

Continue reading. There’s much more to read (and to ponder).

Two book recommendations (included in the list of books I find myself repeatedly recommending):

No Contest: The Case Against Competition

Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes

Written by LeisureGuy

5 November 2019 at 8:41 am

Posted in Books, Daily life, Mental Health, Shaving

Tagged with

Which is more fundamental: processes or things?

leave a comment »

My immediate response on reading the title is that processes are fundamental, because things are simply slow-motion processes. I think most people understand that they themselves are processes, as are trees and even rocks and mountains. Still, I thought this article was of interest.

Celso Vieira, who has a PhD in philosophy from the Federal University of Minas Gerais in Brazil and lives in Belo Horizonte (where he started the first Brazilian chapter of the effective altruist group The Life You Can Save), writes in Aeon:

Metaphysics is the attempt to understand how existence works by examining the building blocks of reality, the distinctions between mental and physical entities, and the fundamental questions of being and reality. But metaphysics is not only an arcane branch of philosophy: human beings use metaphysical assumptions to navigate the world. Assumptions about what exists and what is fundamental exert a powerful influence on our lives. Indeed, the less aware we are of our metaphysical assumptions, the more we are subject to them.

Western metaphysics tends to rely on the paradigm of substances. We often see the world as a world of things, composed of atomic molecules, natural kinds, galaxies. Objects are the paradigmatic mode of existence, the basic building blocks of the Universe. What exists exists as an object. That is to say, things are of a certain kind, they have some specific qualities and well-defined spatial and temporal limits. For instance: Fido is my dog, he is grey, and was born one year ago. (It’s worth noting that such a simple statement will give rise to a litany of metaphysical disputes within substance metaphysics: realists believe that universals, such as the natural kind ‘dogs’, exist while nominalists believe them to be only intellectual abstractions.)

Though substance metaphysics seems to undergird Western ‘common sense’, I think it is wrong. To see this, consider the cliché about the glass of water: is it half-empty or half-full? The question assumes a static arrangement of things serving as a basis for either an optimistic or a pessimistic interpretation. One can engage in interminable disputes about the correct description of the physical set-up, or about the legitimacy of the psychological evaluations. But what if the isolated frame ‘a glass of water’ fails to give the relevant information? Anyone would prefer an emptier glass that is getting full to a fuller one getting empty. Any analysis lacking information about change misses the point, which is just what substance metaphysics is missing. Process philosophers, meanwhile, think we should go beyond looking at the world as a set of static unrelated items, and instead examine the processes that make up the world. Processes, not objects, are fundamental.

Continue reading.

I’ve long been interested in process theology, which applies process thinking to the idea of God. It was initially developed by Charles Hartshorne, working in the light of Alfred North Whitehead’s Process and Reality. (And see also A Key to Whitehead’s Process and Reality.)

Written by LeisureGuy

30 September 2019 at 8:13 pm

Posted in Books

Tagged with

You don’t have a right to believe whatever you want to

leave a comment »

Daniel DeNicola, professor and chair of philosophy at Gettysburg College in Pennsylvania and author of Understanding Ignorance: The Surprising Impact of What We Don’t Know (which received the 2018 PROSE Award in Philosophy from the Association of American Publishers), writes in Aeon:

Do we have the right to believe whatever we want to believe? This supposed right is often claimed as the last resort of the wilfully ignorant, the person who is cornered by evidence and mounting opinion: ‘I believe climate change is a hoax whatever anyone else says, and I have a right to believe it!’ But is there such a right?

We do recognise the right to know certain things. I have a right to know the conditions of my employment, the physician’s diagnosis of my ailments, the grades I achieved at school, the name of my accuser and the nature of the charges, and so on. But belief is not knowledge.

Beliefs are factive: to believe is to take to be true. It would be absurd, as the analytic philosopher G E Moore observed in the 1940s, to say: ‘It is raining, but I don’t believe that it is raining.’ Beliefs aspire to truth – but they do not entail it. Beliefs can be false, unwarranted by evidence or reasoned consideration. They can also be morally repugnant. Among likely candidates: beliefs that are sexist, racist or homophobic; the belief that proper upbringing of a child requires ‘breaking the will’ and severe corporal punishment; the belief that the elderly should routinely be euthanised; the belief that ‘ethnic cleansing’ is a political solution, and so on. If we find these morally wrong, we condemn not only the potential acts that spring from such beliefs, but the content of the belief itself, the act of believing it, and thus the believer.

Such judgments can imply that believing is a voluntary act. But beliefs are often more like states of mind or attitudes than decisive actions. Some beliefs, such as personal values, are not deliberately chosen; they are ‘inherited’ from parents and ‘acquired’ from peers, acquired inadvertently, inculcated by institutions and authorities, or assumed from hearsay. For this reason, I think, it is not always the coming-to-hold-this-belief that is problematic; it is rather the sustaining of such beliefs, the refusal to disbelieve or discard them that can be voluntary and ethically wrong.

If the content of a belief is judged morally wrong, it is also thought to be false. The belief that one race is less than fully human is not only a morally repugnant, racist tenet; it is also thought to be a false claim – though not by the believer. The falsity of a belief is a necessary but not sufficient condition for a belief to be morally wrong; neither is the ugliness of the content sufficient for a belief to be morally wrong. Alas, there are indeed morally repugnant truths, but it is not the believing that makes them so. Their moral ugliness is embedded in the world, not in one’s belief about the world.

‘Who are you to tell me what to believe?’ replies the zealot. It is a misguided challenge: it implies that certifying one’s beliefs is a matter of someone’s authority. It ignores the role of reality. Believing has what philosophers call a ‘mind-to-world direction of fit’. Our beliefs are intended to reflect the real world – and it is on this point that beliefs can go haywire. There are irresponsible beliefs; more precisely, there are beliefs that are acquired and retained in an irresponsible way. One might disregard evidence; accept gossip, rumour, or testimony from dubious sources; ignore incoherence with one’s other beliefs; embrace wishful thinking; or display a predilection for conspiracy theories.

I do not mean to revert to the stern evidentialism of the 19th-century mathematical philosopher William K Clifford, who claimed: ‘It is wrong, always, everywhere, and for anyone, to believe anything upon insufficient evidence.’ Clifford was trying to prevent irresponsible ‘overbelief’, in which wishful thinking, blind faith or sentiment (rather than evidence) stimulate or justify belief. This is too restrictive. In any complex society, one has to rely on the testimony of reliable sources, expert judgment and the best available evidence. Moreover, as the psychologist William James responded in 1896, some of our most important beliefs about the world and the human prospect must be formed without the possibility of sufficient evidence. In such circumstances (which are sometimes defined narrowly, sometimes more broadly in James’s writings), one’s ‘will to believe’ entitles us to choose to believe the alternative that projects a better life.

In exploring the varieties of religious experience, James would remind us that the ‘right to believe’ can establish a climate of religious tolerance. Those religions that define themselves by required beliefs (creeds) have engaged in repression, torture and countless wars against non-believers that can cease only with recognition of a mutual ‘right to believe’. Yet, even in this context, extremely intolerant beliefs cannot be tolerated. Rights have limits and carry responsibilities.

Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, to deflect all challenges; to enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.

Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked:  . . .

Continue reading.

Written by LeisureGuy

9 February 2019 at 11:17 am

Posted in Books, Daily life

Tagged with

The why of reality

leave a comment »

Nathanael Stein, associate professor in the department of philosophy at Florida State University, writes in Aeon:

The easy question came first, a few months after my son turned four: ‘Are we real?’ It was abrupt, but not quite out of nowhere, and I was able to answer quickly. Yes, we’re real – but Elsa and Anna, dolls of two characters from Frozen, are not. Done. Then there was a follow-up a few weeks later that came just as abruptly, while splashing around a pool: ‘Daddy, why are we real?’

I don’t have a ready answer this time, partly because I don’t really understand the question. Four-year-olds ask Why? a lot – the stereotype is true, maybe even an understatement – and they use Why? ambiguously. Like little Aristotles with their legs dangling from their car seats, their Whys are ‘said in many different ways’. Sometimes these Whys even fall under neat, Aristotelian types: they might be asking what the point of something is, or how it’s made, or even asking for a criterion. Usually, you can feel your way by context.

But sometimes, like now, I have no idea what my son is asking me to explain. He’s learning about the world, and learning how to ask questions about it at the same time, so there are at least two moving targets. My only clue so far is that he previously wondered whether he was real, which made it sound like he was trying to sort things into real and not-real. So maybe the follow-up is a request for a definition: What makes something real? What distinguishes the real things from the unreal ones? If so, this could be a bit awkward. ‘Why’-questions at their most straightforward correspond to ‘Because’-answers, where the ‘because’ refers to something other than what we’re trying to explain. You’re cranky because you haven’t eaten; we’re driving because we need to get food; this food is healthy because it has the nutrients you need. But when the question is ‘Why am I real?’, what other thing is there to fill in the blank after ‘because’?

I have a professional interest in this query. The notion of reality is one of the most basic and most abstract ones we have. Raising questions about the very idea of what’s real has led to some of the most important, classic work in philosophy – from Parmenides to Aristotle to Avicenna to Aquinas to Immanuel Kant. It also, however, has a tendency to produce the kind of frustrating, easily caricatured work that leads people – including many philosophers – to wonder whether certain questions are simply pointless or even illegitimate, and to adopt a kind of skeptical stance towards abstract questions in general. That attitude can be helpfully critical, but it can also be facile and self-stultifying, and it likes to masquerade as pragmatic good sense.

So how does that kind of question get started? It’s easy enough to notice when a child starts confronting questions about good and bad, right and wrong. That’s one reason for thinking that these questions have good credentials. But when, if ever, does reality itself become an object of curiosity, or puzzlement, or wonder – and why?

Just as there are multiple ways of asking or answering the question Why?, there are surprisingly many ways of distinguishing what’s real from what isn’t. One of the most familiar contrasts we draw is between reality and appearance. The distinction is at least as old as philosophy, and often we say that’s about when it started – proto-philosophers and proto-scientists of the 6th and 5th century BCE such as Thales, Anaxagoras, Empedocles and Leucippus making assertions about the fundamental reality underlying natural phenomena: the basic elements, atoms, or even numbers, if we can trust what we’ve heard about Pythagoras. Here we get the origin of the thought that what things are made of is what’s real (or ‘really real’, as we say when we’re at a loss for how to describe it), whereas the familiar objects at the human scale are somehow of lesser status.

We just contrasted reality with appearance, but at least a table is thought to be derived from something real, insofar as it’s constituted of matter. At its most fundamental, matter might not really be the way it appears to us, but it’s really there, and it really does appear a certain way. Its appearance, we might say, is the way it manifests itself. In a different way, though, reality is sometimes contrasted with what we might call mere appearance, like the motion we create on screens: pixels are turning on and off, and changing colour, so there’s change going on, but nothing that seems to be moving really is. This is different again from the kind of illusion of motion we get from certain patterns: here there’s no change in the object at all, but still an appearance of motion. So there are at least three types of things that might be contrasted as appearance rather than reality: manifestations, mere appearances, and illusions.

We also distinguish the following: what’s real from what’s merely imagined or dreamt; the essential nature of things (what they ‘really’ are) from their contingent attributes; what has existed at least at some time from what never has. Dinosaurs and ancestors are real in this last sense, but unicorns aren’t.

There are even more ways we distinguish some things as real and others as unreal – at least a dozen by my count. Reality itself might be solid, but the word is mush. So far, though, none of these distinctions seem to get quite what might be bothering my son. In terms of how we grow up, maybe the one I’m being asked about is more basic. For a four-year-old, real is usually contrasted, not with what’s derivative, or with illusion, or non-existence, but with pretend or make-believe.

It’s a distinction that seems too obvious to be worth noticing for an adult – of course, a bed isn’t a boat, no matter how many pieces of string or scarves you drape around it. As Plato saw, though, it’s urgent for us to see how the distinction behaves in the presence of children. After all, children learn a lot by pretending, and they use make-believe to get a grip on what they experience. In turn, this sort of play is closely connected to the whole realm of stories we hear and watch, as opposed to the ones we act out ourselves. The philosopher Kendall Walton has even argued that we should understand our adult relationship to narrative artworks, such as novels and films, as a sophisticated version of childhood games of make-believe, so that the characters in the stories function as ‘props’. (The connection itself, as he notes, has been made before, going back at least to the Austrian-born art historian E H Gombrich.)

A huge amount of what children come to experience or notice in their first few years are things we introduce to them through pictures and stories – from objects and animals to things such as work, conflict, daydreaming, fear, safety and more. We use images and stories to prepare them for reality, and that preparation influences how they confront it. At least, so we imagine, so we hope, and so we worry.

Plato is worried (and hopeful) too. Perhaps surprisingly, he groups stories with music and the other arts for the purposes of education, and thinks that we need to be very careful which ones we tell the future rulers of his ideal city in the Republic:

Don’t you understand that we first tell stories to children? These are false, on the whole, though they have some truth in them … You know, don’t you, that the beginning of any process is most important, especially for anything young and tender? It’s at that time that it is most malleable and takes on any pattern one wishes to impress on it.

Plato thinks that the malleability in this case has a distinct source: the young can’t distinguish the allegorical or non-literal meaning of a story from its surface meaning. His solution – censorship – is notorious, and can sound radical, but in some degree or other almost everyone accepts his statement of the problem, and some version of his solution.

But it’s important to see why Plato thinks we need these early stories that are ‘false on the whole, but contain some truth’. The general goal of all this early exposure to the right kind of stories, music and art isn’t any literal transmission of information, but rather that ‘something of those fine works will strike their eyes and ears like a breeze that brings health from a good place, leading them unwittingly, from childhood on, to resemblance, friendship, and harmony with the beauty of reason’.

The result is that . . .

Continue reading.

I tend to view unicorns (for example) as real examples of cultural constructs. That is, there is physical reality (existing independently of human perception and understanding: the Ding an sich) and there is the reality of human cognizance.

Example: Is color real? That is, does color exist in objective physical reality? No, of course not. In objective reality we find light at various wavelengths, but “color” is a human perception, and if there is no perception, the “color” experience does not occur. “Color” is in here, not out there. A tree falling in a forest sets air waves in motion, but “sound” requires the perception of those waves.
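To make that distinction concrete, here is a minimal sketch of the idea in code (my illustration, not part of the quoted article or of the original post): the physics supplies only a wavelength in nanometers, and the color name is a label we attach by convention. The function name and the approximate band boundaries below are mine, chosen purely for illustration.

```python
# A minimal sketch: mapping a physical wavelength to a conventional color name.
# The wavelength is the physical datum; the name exists only in the human,
# cultural layer discussed above. Band boundaries are rough conventions.

CONVENTIONAL_BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def conventional_color_name(wavelength_nm: float) -> str:
    """Return the conventional English color name for a visible wavelength."""
    for low, high, name in CONVENTIONAL_BANDS:
        if low <= wavelength_nm < high:
            return name
    return "no color experience (outside the visible range)"

if __name__ == "__main__":
    for nm in (400, 530, 610, 1000):
        print(f"{nm} nm -> {conventional_color_name(nm)}")
```

Nothing in that table is dictated by the light itself; a different culture (or a different table) could slice the same spectrum differently, which is exactly the sense in which color is "in here" rather than "out there."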

Is the play “Hamlet” real? You can show me thin sheets of a light-colored material with dark marks on them. That, of course, is not “Hamlet” (the play), and if you did not actually know the appropriate language and conventions, it would just be an odd artifact. The reality of “Hamlet” is within human culture and not part of objective reality (the phenomena themselves).

From Bacteria to Bach and Back: The Evolution of Minds discusses such issues in a very interesting way. Recommended.

Written by LeisureGuy

8 February 2019 at 10:08 am

Posted in Books, Daily life

Tagged with

Believing without evidence is always morally wrong

with 6 comments

Francisco Mejia Uribe, an executive director at Goldman Sachs in Hong Kong, has degrees in philosophy and economics from the University of Los Andes in Bogotá, Colombia, blogs at The Philosopher Blog, and writes in Aeon:

You have probably never heard of William Kingdon Clifford. He is not in the pantheon of great philosophers – perhaps because his life was cut short at the age of 33 – but I cannot think of anyone whose ideas are more relevant for our interconnected, AI-driven, digital age. This might seem strange given that we are talking about a Victorian Briton whose most famous philosophical work is an essay written nearly 150 years ago. However, reality has caught up with Clifford. His once seemingly exaggerated claim that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ is no longer hyperbole but a technical reality.

In ‘The Ethics of Belief’ (1877), Clifford gives three arguments as to why we have a moral obligation to believe responsibly, that is, to believe only what we have sufficient evidence for, and what we have diligently investigated. His first argument starts with the simple observation that our beliefs influence our actions. Everyone would agree that our behaviour is shaped by what we take to be true about the world – which is to say, by what we believe. If I believe that it is raining outside, I’ll bring an umbrella. If I believe taxis don’t take credit cards, I make sure I have some cash before jumping into one. And if I believe that stealing is wrong, then I will pay for my goods before leaving the store.

What we believe is then of tremendous practical importance. False beliefs about physical or social facts lead us into poor habits of action that in the most extreme cases could threaten our survival. If the singer R Kelly genuinely believed the words of his song ‘I Believe I Can Fly’ (1996), I can guarantee you he would not be around by now.

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

The most natural objection to this first argument is that while it might be true that some of our beliefs do lead to actions that can be devastating for others, in reality most of what we believe is probably inconsequential for our fellow humans. As such, claiming as Clifford did that it is wrong in all cases to believe on insufficient evidence seems like a stretch. I think critics had a point – had – but that is no longer so. In a world in which just about everyone’s beliefs are instantly shareable, at minimal cost, to a global audience, every single belief has the capacity to be truly consequential in the way Clifford imagined. If you still believe this is an exaggeration, think about how beliefs fashioned in a cave in Afghanistan led to acts that ended lives in New York, Paris and London. Or consider how influential the ramblings pouring through your social media feeds have become in your very own daily behaviour. In the digital global village that we now inhabit, false beliefs cast a wider social net, hence Clifford’s argument might have been hyperbole when he first made it, but is no longer so today.

The second argument Clifford provides to back his claim that it is always wrong to believe on insufficient evidence is that poor practices of belief-formation turn us into careless, credulous believers. Clifford puts it nicely: ‘No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character.’ Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news pedlars, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, . . .

Continue reading.

Clifford’s position is a strong argument for a good education and training in critical thinking skills, which enable us to look critically at our beliefs and judge whether they are justified.

Written by LeisureGuy

5 November 2018 at 8:37 am

Posted in Daily life, Education

Tagged with

To be happier, focus on what’s within your control

leave a comment »

Massimo Pigliucci, a professor of philosophy at City College and at the Graduate Center of the City University of New York and author of How to Be a Stoic: Ancient Wisdom for Modern Living, writes in Aeon:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

This is the Serenity Prayer, originally written by the American theologian Reinhold Niebuhr around 1934, and commonly used by Alcoholics Anonymous and similar organisations. It is not just a key step toward recovery from addiction; it is a recipe for a happy life, meaning a life of serenity arrived at by consciously taking what life throws at us with equanimity.

The sentiment behind the prayer is very old, found in 8th-century Buddhist manuscripts, as well as in 11th-century Jewish philosophy. The oldest version I can think of, however, goes back to the Stoic philosopher Epictetus. Active in the 2nd century in Rome and then Nicopolis, in western Greece, Epictetus argued that:

We are responsible for some things, while there are others for which we cannot be held responsible. The former include our judgment, our impulse, our desire, aversion and our mental faculties in general; the latter include the body, material possessions, our reputation, status – in a word, anything not in our power to control. … [I]f you have the right idea about what really belongs to you and what does not, you will never be subject to force or hindrance, you will never blame or criticise anyone, and everything you do will be done willingly. You won’t have a single rival, no one to hurt you, because you will be proof against harm of any kind.

I call this Epictetus’ promise: if you truly understand the difference between what is and what is not under your control, and act accordingly, you will become psychologically invincible, impervious to the ups and downs of fortune.

Of course, this is far easier said than done. It requires a lot of mindful practice. But I can assure you from personal experience that it works. For instance, last year I was in Rome, working, as it happened, on a book on Stoicism. One late afternoon I headed to the subway stop near the Colosseum. As soon as I entered the crowded subway car, I felt an unusually strong resistance to moving forward. A young fellow right in front of me was blocking my way, and I couldn’t understand why. Then the realisation hit, a second too late. While my attention was focused on him, his confederate had slipped his hand in my left front pocket, seized my wallet, and was now stepping outside of the car, immediately followed by his accomplice. The doors closed, the train moved on, and I found myself with no cash, no driver’s licence, and a couple of credit cards to cancel and replace.

Before I started practising Stoicism, this would have been a pretty bad experience, and I would not have reacted well. I would have been upset, irritated and angry. This foul mood would have spilled over the rest of the evening. Moreover, the shock of the episode, as relatively mild as the attack had been, would have probably lasted for days, with a destructive alternation of anger and regret.

But I had been practising Stoicism for a couple of years. So my first thought was of Epictetus’ promise. I couldn’t control the thieves in Rome, and I couldn’t go back and change what had happened. I could, however, accept what had happened and file it away for future reference, focusing instead on having a nice time during the rest of my stay. After all, nothing tragic had happened. I thought about this. And it worked. I joined my evening company, related what happened, and proceeded to enjoy the movie, the dinner, and the conversation. My brother was amazed that I took things with such equanimity and that I was so calm about it. But that’s precisely the power of internalising the Stoic dichotomy of control.

And its efficacy is not limited to minor life inconveniences, as in the episode just described. James Stockdale, a fighter-jet pilot during the Vietnam War, was shot down and spent seven and a half years in Hoa Lo prison, where he was tortured and often put in isolation. He credits Epictetus for surviving the ordeal by immediately applying the dichotomy of control to his extreme situation as a captive, which not only saved his life, but also allowed him to coordinate the resistance from inside the prison, in his position as senior ranking officer.

Most of us don’t find ourselves in Stockdale’s predicament, but once you begin paying attention, the dichotomy of control has countless applications to everyday life, and all of them have to do with one crucial move: shifting your goals from external outcomes to internal achievements.

For example, let’s say that you are preparing your résumé for a possible job promotion. If your goal is to get the promotion, you are setting yourself up for a possible disappointment. There is no guarantee that you will get it, because the outcome is not (entirely) under your control. Sure, you can influence it, but it also depends on a number of variables that are independent of your efforts, including possible competition from other employees, or perhaps the fact that your boss, for whatever unfathomable reason, really doesn’t like you.

That’s why your goal should be internal: if you adopt the Stoic way, you would conscientiously put together the best résumé that you can, and then mentally prepare to accept whatever outcome with equanimity, knowing that sometimes the universe will favour you, and other times it will not. What do you gain by being anxious over something you don’t control? Or angry at a result that was not your doing? You are simply adding a self-inflicted injury to the situation, compromising your happiness and serenity.

This is no counsel for passive acceptance of whatever happens. After all, . . .

Continue reading.

Written by LeisureGuy

3 November 2018 at 10:11 am

Posted in Books, Daily life, Mental Health

Tagged with ,

Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science

leave a comment »

Interesting article that clarified for me the nature of the social constructionist view of reality. If I understand it correctly, a social constructionist would agree with the abstract statement that there is an objective physical reality that science works to discover, but would add that the social construction of theories and their acceptance evolves in the context of human culture, at one remove from physical reality; thus the socially accepted “facts” may or may not correspond to physical reality (which is why good science is important). Ava Kofman reports in the NY Times Magazine:

In the summer of 1996, during an international anthropology conference in southeastern Brazil, Bruno Latour, France’s most famous and misunderstood philosopher, was approached by an anxious-looking developmental psychologist. The psychologist had a delicate question, and for this reason he requested that Latour meet him in a secluded spot — beside a lake at the Swiss-style resort where they were staying. Removing from his pocket a piece of paper on which he’d scribbled some notes, the psychologist hesitated before asking, “Do you believe in reality?”

For a moment, Latour thought he was being set up for a joke. His early work, it was true, had done more than that of any other living thinker to unsettle the traditional understanding of how we acquire knowledge of what’s real. It had long been taken for granted, for example, that scientific facts and entities, like cells and quarks and prions, existed “out there” in the world before they were discovered by scientists. Latour turned this notion on its head. In a series of controversial books in the 1970s and 1980s, he argued that scientific facts should instead be seen as a product of scientific inquiry. Facts, Latour said, were “networked”; they stood or fell not on the strength of their inherent veracity but on the strength of the institutions and practices that produced them and made them intelligible. If this network broke down, the facts would go with them.

Still, Latour had never seen himself as doing anything so radical, or absurd, as calling into question the existence of reality. As a founder of the new academic discipline of science and technology studies, or S.T.S., Latour regarded himself and his colleagues as allies of science. Of course he believed in reality, he told the psychologist, convinced that the conversation was in jest. From the look of relief on the man’s face, however, Latour realized that the question had been posed in earnest. “I had to switch interpretations fast enough to comprehend both the monster he was seeing me as,” he later wrote of the encounter, “and his touching openness of mind in daring to address such a monster privately. It must have taken courage for him to meet with one of these creatures that threatened, in his view, the whole establishment of science.”

Latour’s interlocutor was not the only person who felt that the establishment of science was under attack. The mid-1990s were the years of the so-called science wars, a series of heated public debates between “realists,” who held that facts were objective and free-standing, and “social constructionists,” like Latour, who believed that such facts were created by scientific research. To hint at any of the contention and compromise that went on behind the scenes, the realists feared, would give succor to the enemies of progress: creationists, anti‐vaxxers, flat‐earthers and cranks of all stripes. If scientific knowledge was socially produced — and thus partial, fallible, contingent — how could that not weaken its claims on reality? At the height of the conflict, the physicist Alan Sokal, who was under the impression that Latour and his S.T.S. colleagues thought that “the laws of physics are mere social conventions,” invited them to jump out the window of his 21st-floor apartment.

At the time, the science wars struck most people outside the academy, if they noticed them at all, as an overheated scholastic squabble. Lately, however, these debates have begun to look more like a prelude to the post-truth era in which society as a whole is presently condemned to live. The past decade has seen a precipitous rise not just in anti-scientific thinking — last year, only 37 percent of conservative Republicans believed in the occurrence of global warming, down from 50 percent in 2008 — but in all manner of reactionary obscurantism, from online conspiracy theories to the much-discussed death of expertise. The election of Donald Trump, a president who invents the facts to suit his mood and goes after the credibility of anyone who contradicts him, would seem to represent the culmination of this epistemic rot. “Do you believe in reality?” is now the question that half of America wants to ask the president and his legion of supporters.

“I think we were so happy to develop all this critique because we were so sure of the authority of science,” Latour reflected this spring. “And that the authority of science would be shared because there was a common world.” We were seated at the dining-room table of his daughter’s apartment in the 19th Arrondissement of Paris, where Latour, who is 71, was babysitting for his 8-year-old grandson, Ulysse. The apartment, he told me proudly, was purchased with the money that came with the award of the 2013 Holberg Prize, known as the Nobel of the humanities, for what the jury heralded as his “reinterpretation of modernity.” He was wearing a purple turtleneck sweater, his favorite burgundy slacks and sensible black walking shoes. He has a full head of dark, disheveled hair, and his vigorously overgrown eyebrows sweep several unsettling centimeters up beyond the rim of his round spectacles, like a nun’s cornette. “Even this notion of a common world we didn’t have to articulate, because it was obvious,” he continued. “Now we have people who no longer share the idea that there is a common world. And that of course changes everything.”

Those who worried that Latour’s early work was opening a Pandora’s box may feel that their fears have been more than borne out. Indeed, commentators on the left and the right, possibly overstating the reach of French theory, have recently leveled blame for our current state of affairs at “postmodernists” like Latour. By showing that scientific facts are the product of all-too-human procedures, these critics charge, Latour — whether he intended to or not — gave license to a pernicious anything-goes relativism that cynical conservatives were only too happy to appropriate for their own ends. Latour himself has sometimes worried about the same thing. As early as 2004 he publicly expressed the fear that his critical “weapons,” or at least a grotesque caricature of them, were being “smuggled” to the other side, as corporate-funded climate skeptics used arguments about the constructed nature of knowledge to sow doubt around the scientific consensus on climate change.

But Latour believes that if the climate skeptics and other junk scientists have made anything clear, it’s that the traditional image of facts was never sustainable to begin with. “The way I see it, I was doing the same thing and saying the same thing,” he told me, removing his glasses. “Then the situation changed.” If anything, our current post-truth moment is less a product of Latour’s ideas than a validation of them. In the way that a person notices her body only once something goes wrong with it, we are becoming conscious of the role that Latourian networks play in producing and sustaining knowledge only now that those networks are under assault.

This, in essence, is the premise of Latour’s latest book, “Down to Earth,” an illuminating and counterintuitive analysis of the present post-truth moment, which will be published in the United States next month. What journalists, scientists and other experts fail to grasp, Latour argues, is that “facts remain robust only when they are supported by a common culture, by institutions that can be trusted, by a more or less decent public life, by more or less reliable media.” With the rise of alternative facts, it has become clear that whether or not a statement is believed depends far less on its veracity than on the conditions of its “construction” — that is, who is making it, to whom it’s being addressed and from which institutions it emerges and is made visible. A greater understanding of the circumstances out of which misinformation arises and the communities in which it takes root, Latour contends, will better equip us to combat it.

[Emphasis added—and I’ve recently experienced this in an on-line exchange with a conservative. I quoted a list of facts (which could easily be fact-checked), but the list was dismissed altogether because a) the facts put conservatives in a bad light, and b) the facts were published in a liberal publication. My interlocutor didn’t bother to check whether the facts I cited were true or not, but simply dismissed them because of the publication in which they appeared. I did point out that these facts (which exposed some of the idiocy of conservative positions) would never appear in a conservative publication (for obvious reasons), so their appearance in a liberal publication was natural enough. The important point—which my interlocutor was unable to grasp—was whether the facts reported were true or not. He had moved to a plane in which the “truth” of facts depended solely on who spoke them. – LG]

Philosophers have traditionally recognized a division between facts and values — between, say, scientific knowledge on one hand and human judgments on the other. Latour believes that this is specious. Many of his books are attempts to illuminate, as he has written, “both the history of humans’ involvement in the making of scientific facts and the sciences’ involvement in the making of human history.” In a formulation that was galling to both sociologists and scientists, he once argued that Louis Pasteur did not just, as is commonly accepted, discover microbes; rather, he collaborated with them.

Latour likes to say that he has been attuned from an early age to the ways in which human beings influence their natural environment. His affluent family, proprietors of the prominent winemaking business Maison Louis Latour, had been cultivating the same Burgundy vineyards for more than 150 years when Bruno, the youngest of eight children, was born there in 1947. An older brother was already being groomed to run the family firm, so Latour was encouraged to pursue a classical education. At 17, he was sent to Saint-Louis de Gonzague, one of the most prestigious schools in Paris, where he mingled with other young members of the French elite. Although he was a wealthy and well-read Catholic, he found himself completely unprepared for the virulent snobbery of the capital. He was made to feel like the proud, provincial hero of a Balzac novel who arrives in Paris and soon discovers how little he knows about the ways of the world. It was at Saint-Louis de Gonzague that he began to study philosophy, a compulsory subject in the final year of French high school. The first text he was assigned was Nietzsche’s “The Birth of Tragedy”; unlike “all the confusion of mathematics,” it immediately struck him as clear and perfectly rational. [! – LG]

In 1966, he began his undergraduate study at the University of Dijon, where he developed an interest in epistemology — the branch of philosophy concerned with how knowledge is made — but even then he had started to suspect that most of what he was learning was “probably wrong.” Philosophers talked about science as though it were a purely cognitive enterprise, a matter of sheer intellectual virtuosity, and about scientists (when they talked about them at all) as logical, objective, heroic.

These suspicions only deepened over the following years, which Latour spent in the Ivory Coast, serving in a sort of French Peace Corps in lieu of military service. As he wrote his doctoral dissertation, he taught philosophy at a technical school in Abidjan and volunteered to work on a study commissioned by the French government. His task was to find out why French companies, which still owned and operated many of the factories in postcolonial Abidjan, were having such difficulty recruiting “competent” black executives. It took less than a day for Latour to realize that the premise was flawed. “The question was absurd because they did everything not to have black executives,” he told me. In the French-run engineering schools, black students were taught abstract theories without receiving any practical exposure to the actual machinery they were expected to use. When they were subsequently unable to understand technical drawings, they were accused of having “premodern,” “African” minds. “It was clearly a racist situation,” he said, “which was hidden behind cognitive, pseudohistorical and cultural explanations.”

In Abidjan, Latour began to wonder what it would look like to study scientific knowledge not as a cognitive process but as an embodied cultural practice enabled by instruments, machinery and specific historical conditions. Would the mind of a scientist or an engineer from, say, California seem any more “modern” or “rational” than that of one from the Ivory Coast if it were studied independent of the education, the laboratory and the tools that shaped it and made its work possible?

Before leaving Dijon for Abidjan, Latour met Roger Guillemin, a biologist who would soon go on to win the Nobel Prize for his work on hormone production in the brain. Guillemin later invited him to study his laboratory at the Salk Institute in San Diego, and so beginning in 1975, Latour spent two years there as a sort of participant-observer, following scientists around as they went about their daily work. Part of Latour’s immersion in the lab involved conducting actual experiments, and his co-workers would often gather around to watch. They couldn’t believe that someone could be, as he put it, “so bad and clumsy.” He found pipetting especially difficult. Anytime the slightest thought crossed his mind, he would forget where he placed the instrument and have to start all over again. He later realized that it was precisely his lack of aptitude for lab work that led him to pay such close attention to the intricate, mundane labor involved in the manufacture of objectivity.

When he presented his early findings at the first meeting of the newly established Society for Social Studies of Science, in 1976, many of his colleagues were taken aback by a series of black-and-white photographic slides depicting scientists on the job, as though they were chimpanzees. It was felt that scientists were the only ones who could speak with authority on behalf of science; there was something blasphemous about subjecting the discipline, supposedly the apex of modern society, to the kind of cold scrutiny that anthropologists traditionally reserved for “premodern” peoples. Not everyone felt the same way, however. The previous year, in California, Latour met Steve Woolgar, a British sociologist, who was intrigued by his unorthodox approach. Woolgar turned Latour on to the work of other sociologists and anthropologists, like Michael Lynch, Sharon Traweek and Harold Garfinkel, who had also begun to study science as a social practice. Latour, in turn, invited Woolgar to spend a few weeks with him studying his primates at the Salk Institute.

The two men collaborated on “Laboratory Life,” which after its publication in 1979 became a founding text in the nascent field of science and technology studies and, by academic standards, a breakthrough success. The book continues to challenge some of our most deeply held notions about how knowledge is made. No one had ever contested that scientists were human beings, but most people believed that by following the scientific method, scientists were able to arrive at objective facts that transcended their human origins. A decade and a half earlier, in his best seller, “The Structure of Scientific Revolutions,” the physicist-turned-philosopher Thomas Kuhn had done much to weaken the Whig interpretation of science by showing how historical advances were governed by contingency and debate. What Latour observed firsthand in Guillemin’s lab made the traditional view of science look like little more than a self-serving fiction.

Day-to-day research — what he termed science in the making — appeared not so much as a stepwise progression toward rational truth as a disorderly mass of stray observations, inconclusive results and fledgling explanations. Far from simply discovering facts, scientists seemed to be, as Latour and Woolgar wrote in “Laboratory Life,” “in the business of being convinced and convincing others.” During the process of arguing over uncertain data, scientists foregrounded the reality that they were, in some essential sense, always speaking for the facts; and yet, as soon as their propositions were turned into indisputable statements and peer-reviewed papers — what Latour called ready-made science — they claimed that such facts had always spoken for themselves. That is, only once the scientific community accepted something as true were the all-too-human processes behind it effectively erased or, as Latour put it, black-boxed.

In the 1980s, Latour helped to develop and advocate for a new approach to sociological research called Actor-Network Theory. While controversial at the time, it has since been adopted as a methodological tool not just in sociology but also in a range of disciplines, like urban design and public health. From his studies of laboratories, Latour had seen how an apparently weak and isolated item — a scientific instrument, a scrap of paper, a photograph, a bacterial culture — could acquire enormous power because of the complicated network of other items, known as actors, that were mobilized around it. The more socially “networked” a fact was (the more people and things involved in its production), the more effectively it could refute its less-plausible alternatives. The medical revolution commonly attributed to the genius of Pasteur, he argued, should instead be seen as a result of an association between not just doctors, nurses and hygienists but also worms, milk, sputum, parasites, cows and farms. Science was “social,” then, not merely because it was performed by people (this, he thought, was a reductive misunderstanding of the word “social”); rather, science was social because it brought together a multitude of human and nonhuman entities and harnessed their collective power to act on and transform the world.

In the fall of 2016, the hottest year on record, Latour took a plane from Paris to Calgary, Canada, where he was due to deliver a lecture on “the now-obsolete notion of nature.” Several hours into the flight, above the Baffin ice sheets to the west of Greenland, he peered out the window. What he saw startled him. That year the North Pole was melting at an accelerated pace. The tundra below, rent with fissures, reminded him of the agonized face from Edvard Munch’s painting “The Scream.”

“It was as though the ice was sending me a message,” Latour recalled in March. Dressed in a striking suit (straw-colored tie, blue waistcoat), he was speaking to a sold-out theater of some 200 people in Strasbourg as part of the city’s biennial puppetry festival. Although Latour is a figure of international renown on the academic circuit, his lecture — a sort of anti-TED Talk on climate change featuring an array of surreal images and acoustical effects — was anything but a traditional conference paper. Throughout the performance, Latour’s looming figure was hidden behind images projected onto a screen, so that it seemed as though he were being swallowed by his own PowerPoint presentation. The effect was a bit like watching “An Inconvenient Truth,” if Al Gore had been a coltish French philosopher who said things like “Scientists, artists, and social scientists like myself are beginning to propose what we call — and maybe it’s too exaggerated — a new cosmology.”

The idea that we can stand back and behold nature at a distance, as something discrete from our actions, is an illusion, Latour says. This was the message that the melting ice sheets were sending him. “My activity in this plane going to Canada was actually having an effect on the very spectacle of nature that I was seeing,” he told his Strasbourg audience. “In that sense, there is no outside anymore.” Appropriately enough, the show, which he has performed in several cities across Europe and will bring to New York this week, is called “Inside.” In our current environmental crisis, he continued, a new image of the earth is needed — one that recognizes that there is no such thing as a view from nowhere and that we are always implicated in the creation of our view. With the advent of the Anthropocene, a word proposed by scientists around the turn of the century to designate a new epoch in which humanity has become tantamount to a geological force, Latour’s idea that humans and nonhumans are acting together — and that the earth reacts to those actions — now sounds a lot like common sense. “He is really the thinker of the Anthropocene,” Philippe Pignarre, Latour’s French publisher of 40 years, told me. “A lot of scientists in France didn’t like him originally because he treated them like other workers, and they believed in having a special relationship to the truth. But now they are using his work. He is at the center of people who want to think about the world.”

“Inside” draws heavily on “Down to Earth,” his new book, which has been highly praised in France since its release there last fall. Scientists, he writes, have largely looked at the problem of climate-change denial through the lens of rational empiricism that has governed their profession for centuries; many limit their domain to science, thinking it inappropriate to weigh in on political questions or to speak in an emotional register to communicate urgency. Even though the evidence in support of global warming has long been overwhelming, some scientists continue to believe that the problem of denialism can be solved through ever more data and greater public education. Political scientists, meanwhile, have shown that so-called “irrational” individuals, especially those who are highly educated, in some cases actually hold onto their opinions more strongly when faced with facts that contradict them. Instead of accusing Trump supporters and climate denialists of irrationality, Latour argues that it is untenable to talk about scientific facts as though their rightness alone will be persuasive. In this respect, “Down to Earth” extends the sociological analysis that he brought to bear on factory workers in Abidjan and scientists in California to the minds of anti-scientific voters, looking at the ways in which the reception of seemingly universal knowledge is shaped by the values and local circumstances of those to whom it is being communicated.

Latour believes that if scientists were transparent about how science really functions — as a process in which people, politics, institutions, peer review and so forth all play their parts — they would be in a stronger position to convince people of their claims. Climatologists, he says, must recognize that, as nature’s designated representatives, they have always been political actors, and that they are now combatants in a war whose outcome will have planetary ramifications. We would be in a much better situation, he has told scientists, if they stopped pretending that “the others” — the climate-change deniers — “are the ones engaged in politics and that you are engaged ‘only in science.’ ” In certain respects, new efforts like the March for Science, which has sought to underscore the indispensable role that science plays (or ought to play) in policy decisions, and groups like 314 Action, which are supporting the campaigns of scientists and engineers running for public office, represent an important if belated acknowledgment from today’s scientists that they need, as one of the March’s slogans put it, to step out of the lab and into the streets. (To this Latour might add that the lab has never been truly separate from the streets; that it seems to be is merely a result of scientific culture’s attempt to pass itself off as above the fray.)

Of course, the risk inherent in this embrace of politics is that climate deniers will seize on any acknowledgment of the social factors involved in science to discredit it even further. In a New York Times Op-Ed, a coastal geologist argued that the March for Science would “reinforce the narrative from skeptical conservatives that scientists are an interest group and politicize their data, research and findings for their own ends.” [This is exactly parallel to my on-line encounter with the conservative who denied any facts reported in a liberal publication. – LG] This was what happened in the infamous 2009 incident now known as Climategate, when emails to and from scientists at the University of East Anglia, a leading center for climate research in Britain, were hacked, revealing exactly the kinds of messy debates that Latour documented in “Laboratory Life.” Climate skeptics cited this as proof that the scientists weren’t really discovering climate change but simply massaging the data to fit their preconceptions. Certainly the incident did not, as scholars of science and technology studies might have hoped, lead the public to a deeper understanding of the controversy and negotiation that govern all good science in the making.

Some might see this discouraging episode as a reason to back away from a more openly pugnacious approach on the part of scientists. Latour does not. . .

Continue reading. There’s much more.

Later in the article:

At a meeting between French industrialists and a climatologist a few years ago, Latour was struck when he heard the scientist defend his results not on the basis of the unimpeachable authority of science but by laying out to his audience his manufacturing secrets: “the large number of researchers involved in climate analysis, the complex system for verifying data, the articles and reports, the principle of peer evaluation, the vast network of weather stations, floating weather buoys, satellites and computers that ensure the flow of information.” The climate denialists, by contrast, the scientist said, had none of this institutional architecture. Latour realized he was witnessing the beginnings of a seismic rhetorical shift: from scientists appealing to transcendent, capital-T Truth to touting the robust networks through which truth is, and has always been, established.

Written by LeisureGuy

25 October 2018 at 8:15 am

Posted in Books, Daily life, Politics

Tagged with

Unpublished and Untenured, a Philosopher Who Inspired a Cult Following

leave a comment »

James Ryerson writes in the NY Times Book Review:

Ever since completing his Ph.D. at the University of Pittsburgh in 1993, the Israeli philosopher Irad Kimhi has been building the résumé of an academic failure. After a six-year stint at Yale in the ’90s that did not lead to a permanent job, he has bounced around from school to school, stringing together a series of short-term lectureships and temporary teaching positions in the United States, Europe and Israel. As of June, his curriculum vitae listed no publications to date — not even a journal article. At 60, he remains unknown to most scholars in his field.

Among a circle of philosophers who have worked or interacted with Kimhi, however, he has a towering reputation. His dissertation adviser, Robert Brandom, describes him as “truly brilliant, a deep and original philosopher.” Jonathan Lear, who helped hire Kimhi at Yale, says that to hear Kimhi talk is to experience “living philosophy, the real thing.” The philosopher and physicist David Z. Albert, a close friend of Kimhi’s, calls him “the best and most energetic and most surprising conversationalist I have ever met, a volcano of theories and opinions and provocations about absolutely everything.” (Kimhi and Albert appear to have been inspirations for the two brainy protagonists of Rivka Galchen’s short story “The Region of Unlikeness.”)

To his admirers, Kimhi is a hidden giant, a profound thinker who, because of a personality at once madly undisciplined and obsessively perfectionistic, has been unable to commit his ideas to paper. As a result, he has not been able to share his insights — about logic, language, metaphysics, theology, psychoanalysis, aesthetics and literature — with the wider academic world.

This type of character, though unusual, is not unheard-of in philosophy. Unlike, say, history or sociology, philosophy has long reserved a place for the occasional talent who struggles or declines to publish. The tradition dates back to Socrates, who not only didn’t write but also disparaged writing as too rigid a medium to capture “the living, breathing discourse of the man who knows.” (Plato’s words, of course.) Even as recently as the second half of the 20th century, many philosophy departments still employed a resident Socratic figure — a nonpublishing legend like Sidney Morgenbesser of Columbia or Rogers Albritton of Harvard — as if to provide a daily reminder that the discipline’s founding virtues of intellectual spontaneity, dialectical responsiveness and lack of dogmatism did not lend themselves naturally to the settled view of a treatise.

Kimhi may be the last of this dying breed. Certainly the days of granting tenure to such people are gone. Which is why Kimhi’s supporters have held out hope that a book he was working on for many years would eventually be published. The philosopher Robert Pippin, who has helped secure positions for Kimhi at the University of Chicago, explains that drafts of the manuscript have circulated to great excitement, if among “a very curated audience.” Harvard University Press was interested in publishing the book as early as 2011, but Kimhi, ever the perfectionist, was reluctant to let it go, forever refining and refashioning. Perhaps his foot-dragging was an expression of doubt, too: Could any book live up to his reputation?

Now, at long last, with the publication in July of THINKING AND BEING (Harvard University, $39.95), the world can find out.

It is not easy to summarize Kimhi’s book. Though only 166 pages, it strives to do a lot in a short space, aiming to overthrow views about logic and metaphysics that have prevailed in philosophy for a century. And though characterized by a precision of expression, the book is not what you would call lucid. Reading it is less about working through a series of rigorous, detailed arguments — the dominant mode of contemporary Anglophone philosophy — and more about getting accustomed to a radically different way of looking at fundamental philosophical questions, including What is thinking? and What is the relationship between thinking and the world?

For a taste of the ambition of Kimhi’s project, consider his ideas about the nature of logic. For many decades, our understanding of logic has been defined by a distinction between the “force” and “content” of a proposition — that is, between the act of asserting something and what it is you are asserting. If we don’t draw this distinction, according to a standard view of logic, it is not clear why the following sort of inference is valid:

Premise 1: P → Q [e.g., “If it’s raining, then things are wet”]

Premise 2: P [“It’s raining”]

Conclusion: Q [“Things are wet”]

Note that this conclusion follows only if P (“it’s raining”) is unambiguously the same thing in each of the premises. But in the first premise, P is not asserted (“it’s raining” is entertained as a possibility), whereas in the second premise P is asserted (“it’s raining” is presented as fact). Therefore, according to this view, the assertion or “force” of P must be external to logic. An assertion is a psychological attitude (“I think … ”), a fact about what someone happens to believe. Logic, by contrast, concerns the abstract relations that hold among the “contents” — roughly, the meanings — of propositions.

In other words, logic provides us not with an empirical understanding of how our thinking actually works (that’s the purview of psychology), but with a normative understanding of how thinking should work. There is no “I” in logic.
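[Editorial sketch, not part of Ryerson’s review: in a proof assistant such as Lean, the inference above is accepted purely on the shape of the propositions. Nothing in the proof term records who asserted P, or with what attitude, which is the sense in which “force” stays outside logic on the standard view.]

-- Modus ponens in Lean: P and Q are bare propositions ("contents"), and the
-- proof term h1 h2 mentions no asserter and no psychological attitude.
example (P Q : Prop) (h1 : P → Q) (h2 : P) : Q := h1 h2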

Kimhi argues that this view is wrong, and that the distinction between psychology and logic has led our understanding of thinking astray. Consider that the following statement does not, according to the standard view, constitute a logical contradiction: “It’s raining, but I don’t believe it’s raining.” Why? Because the first part of the sentence concerns a state of affairs in the world (“it’s raining”), whereas the second part concerns someone’s state of mind (“I don’t believe it’s raining”).
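[Again an editorial sketch, not from the review: if “it’s raining” and “I believe it’s raining” are treated as two unrelated propositions, their combination (raining, yet not believed) is satisfiable, which is exactly the standard verdict that Kimhi wants to resist.]

-- A model for the Moore-style sentence under the standard view: take Raining
-- to be True and Believes to be False; then Raining ∧ ¬Believes is provable,
-- so no contradiction arises on this picture.
example : ∃ raining believes : Prop, raining ∧ ¬believes :=
  ⟨True, False, trivial, fun h => h⟩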

Kimhi wants to rescue the intuition that it is a logical contradiction to say, “It’s raining, but I don’t believe it’s raining.” But to do this, he has to reject the idea that when you assert a proposition, what you are doing is adding psychological force (“I think … ”) to abstract content (“it’s raining”). Instead, Kimhi argues that a self-conscious, first-person perspective — an “I” — is internal to logic. For him, to judge that “it’s raining” is the same as judging “I believe it’s raining,” which is the same as judging “it’s false that it’s not raining.” All are facets of a single act of mind.

One consequence of Kimhi’s view is that . . .

Continue reading.

Written by LeisureGuy

8 October 2018 at 3:55 pm

Posted in Books, Daily life

Tagged with
