Later On

A blog written for those whose interests more or less match mine.

Posts Tagged ‘philosophy’

You don’t have a right to believe whatever you want to

Daniel DeNicola, professor and chair of philosophy at Gettysburg College in Pennsylvania and author of Understanding Ignorance: The Surprising Impact of What We Don’t Know (which received the 2018 PROSE Award in Philosophy from the Association of American Publishers), writes in Aeon:

Do we have the right to believe whatever we want to believe? This supposed right is often claimed as the last resort of the wilfully ignorant, the person who is cornered by evidence and mounting opinion: ‘I believe climate change is a hoax whatever anyone else says, and I have a right to believe it!’ But is there such a right?

We do recognise the right to know certain things. I have a right to know the conditions of my employment, the physician’s diagnosis of my ailments, the grades I achieved at school, the name of my accuser and the nature of the charges, and so on. But belief is not knowledge.

Beliefs are factive: to believe is to take to be true. It would be absurd, as the analytic philosopher G E Moore observed in the 1940s, to say: ‘It is raining, but I don’t believe that it is raining.’ Beliefs aspire to truth – but they do not entail it. Beliefs can be false, unwarranted by evidence or reasoned consideration. They can also be morally repugnant. Among likely candidates: beliefs that are sexist, racist or homophobic; the belief that proper upbringing of a child requires ‘breaking the will’ and severe corporal punishment; the belief that the elderly should routinely be euthanised; the belief that ‘ethnic cleansing’ is a political solution, and so on. If we find these morally wrong, we condemn not only the potential acts that spring from such beliefs, but the content of the belief itself, the act of believing it, and thus the believer.

Such judgments can imply that believing is a voluntary act. But beliefs are often more like states of mind or attitudes than decisive actions. Some beliefs, such as personal values, are not deliberately chosen; they are ‘inherited’ from parents and ‘acquired’ from peers, acquired inadvertently, inculcated by institutions and authorities, or assumed from hearsay. For this reason, I think, it is not always the coming-to-hold-this-belief that is problematic; it is rather the sustaining of such beliefs, the refusal to disbelieve or discard them that can be voluntary and ethically wrong.

If the content of a belief is judged morally wrong, it is also thought to be false. The belief that one race is less than fully human is not only a morally repugnant, racist tenet; it is also thought to be a false claim – though not by the believer. The falsity of a belief is a necessary but not sufficient condition for a belief to be morally wrong; neither is the ugliness of the content sufficient for a belief to be morally wrong. Alas, there are indeed morally repugnant truths, but it is not the believing that makes them so. Their moral ugliness is embedded in the world, not in one’s belief about the world.

‘Who are you to tell me what to believe?’ replies the zealot. It is a misguided challenge: it implies that certifying one’s beliefs is a matter of someone’s authority. It ignores the role of reality. Believing has what philosophers call a ‘mind-to-world direction of fit’. Our beliefs are intended to reflect the real world – and it is on this point that beliefs can go haywire. There are irresponsible beliefs; more precisely, there are beliefs that are acquired and retained in an irresponsible way. One might disregard evidence; accept gossip, rumour, or testimony from dubious sources; ignore incoherence with one’s other beliefs; embrace wishful thinking; or display a predilection for conspiracy theories.

I do not mean to revert to the stern evidentialism of the 19th-century mathematical philosopher William K Clifford, who claimed: ‘It is wrong, always, everywhere, and for anyone, to believe anything upon insufficient evidence.’ Clifford was trying to prevent irresponsible ‘overbelief’, in which wishful thinking, blind faith or sentiment (rather than evidence) stimulate or justify belief. This is too restrictive. In any complex society, one has to rely on the testimony of reliable sources, expert judgment and the best available evidence. Moreover, as the psychologist William James responded in 1896, some of our most important beliefs about the world and the human prospect must be formed without the possibility of sufficient evidence. In such circumstances (which are sometimes defined narrowly, sometimes more broadly in James’s writings), one’s ‘will to believe’ entitles us to choose to believe the alternative that projects a better life.

In exploring the varieties of religious experience, James would remind us that the ‘right to believe’ can establish a climate of religious tolerance. Those religions that define themselves by required beliefs (creeds) have engaged in repression, torture and countless wars against non-believers that can cease only with recognition of a mutual ‘right to believe’. Yet, even in this context, extremely intolerant beliefs cannot be tolerated. Rights have limits and carry responsibilities.

Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, to deflect all challenges; to enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.

Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked:  . . .

Continue reading.

Written by LeisureGuy

9 February 2019 at 11:17 am

Posted in Books, Daily life

The why of reality

Nathanael Stein, associate professor in the department of philosophy at Florida State University, writes in Aeon:

The easy question came first, a few months after my son turned four: ‘Are we real?’ It was abrupt, but not quite out of nowhere, and I was able to answer quickly. Yes, we’re real – but Elsa and Anna, dolls of two characters from Frozen, are not. Done. Then there was a follow-up a few weeks later that came just as abruptly, while splashing around a pool: ‘Daddy, why are we real?’

I don’t have a ready answer this time, partly because I don’t really understand the question. Four-year-olds ask Why? a lot – the stereotype is true, maybe even an understatement – and they use Why? ambiguously. Like little Aristotles with their legs dangling from their car seats, their Whys are ‘said in many different ways’. Sometimes these Whys even fall under neat, Aristotelian types: they might be asking what the point of something is, or how it’s made, or even asking for a criterion. Usually, you can feel your way by context.

But sometimes, like now, I have no idea what my son is asking me to explain. He’s learning about the world, and learning how to ask questions about it at the same time, so there are at least two moving targets. My only clue so far is that he previously wondered whether he was real, which made it sound like he was trying to sort things into real and not-real. So maybe the follow-up is a request for a definition: What makes something real? What distinguishes the real things from the unreal ones? If so, this could be a bit awkward. ‘Why’-questions at their most straightforward correspond to ‘Because’-answers, where the ‘because’ refers to something other than what we’re trying to explain. You’re cranky because you haven’t eaten; we’re driving because we need to get food; this food is healthy because it has the nutrients you need. But when the question is ‘Why am I real?’, what other thing is there to fill in the blank after ‘because’?

I have a professional interest in this query. The notion of reality is one of the most basic and most abstract ones we have. Raising questions about the very idea of what’s real has led to some of the most important, classic work in philosophy – from Parmenides to Aristotle to Avicenna to Aquinas to Immanuel Kant. It also, however, has a tendency to produce the kind of frustrating, easily caricatured work that leads people – including many philosophers – to wonder whether certain questions are simply pointless or even illegitimate, and to adopt a kind of skeptical stance towards abstract questions in general. That attitude can be helpfully critical, but it can also be facile and self-stultifying, and it likes to masquerade as pragmatic good sense.

So how does that kind of question get started? It’s easy enough to notice when a child starts confronting questions about good and bad, right and wrong. That’s one reason for thinking that these questions have good credentials. But when, if ever, does reality itself become an object of curiosity, or puzzlement, or wonder – and why?

Just as there are multiple ways of asking or answering the question Why?, there are surprisingly many ways of distinguishing what’s real from what isn’t. One of the most familiar contrasts we draw is between reality and appearance. The distinction is at least as old as philosophy, and often we say that’s about when it started – proto-philosophers and proto-scientists of the 6th and 5th century BCE such as Thales, Anaxagoras, Empedocles and Leucippus making assertions about the fundamental reality underlying natural phenomena: the basic elements, atoms, or even numbers, if we can trust what we’ve heard about Pythagoras. Here we get the origin of the thought that what things are made of is what’s real (or ‘really real’, as we say when we’re at a loss for how to describe it), whereas the familiar objects at the human scale are somehow of lesser status.

We just contrasted reality with appearance, but at least a table is thought to be derived from something real, insofar as it’s constituted of matter. At its most fundamental, matter might not really be the way it appears to us, but it’s really there, and it really does appear a certain way. Its appearance, we might say, is the way it manifests itself. In a different way, though, reality is sometimes contrasted with what we might call mere appearance, like the motion we create on screens: pixels are turning on and off, and changing colour, so there’s change going on, but nothing that seems to be moving really is. This is different again from the kind of illusion of motion we get from certain patterns: here there’s no change in the object at all, but still an appearance of motion. So there are at least three types of things that might be contrasted as appearance rather than reality: manifestations, mere appearances, and illusions.

We also distinguish the following: what’s real from what’s merely imagined or dreamt; the essential nature of things (what they ‘really’ are) from their contingent attributes; what has existed at least at some time from what never has. Dinosaurs and ancestors are real in this last sense, but unicorns aren’t.

There are even more ways we distinguish some things as real and others as unreal – at least a dozen by my count. Reality itself might be solid, but the word is mush. So far, though, none of these distinctions seem to get quite what might be bothering my son. In terms of how we grow up, maybe the one I’m being asked about is more basic. For a four-year-old, real is usually contrasted, not with what’s derivative, or with illusion, or non-existence, but with pretend or make-believe.

It’s a distinction that seems too obvious to be worth noticing for an adult – of course, a bed isn’t a boat, no matter how many pieces of string or scarves you drape around it. As Plato saw, though, it’s urgent for us to see how the distinction behaves in the presence of children. After all, children learn a lot by pretending, and they use make-believe to get a grip on what they experience. In turn, this sort of play is closely connected to the whole realm of stories we hear and watch, as opposed to the ones we act out ourselves. The philosopher Kendall Walton has even argued that we should understand our adult relationship to narrative artworks, such as novels and films, as a sophisticated version of childhood games of make-believe, so that the characters in the stories function as ‘props’. (The connection itself, as he notes, has been made before, going back at least to the Austrian-born historian E H Gombrich.)

A huge amount of what children come to experience or notice in their first few years are things we introduce to them through pictures and stories – from objects and animals to things such as work, conflict, daydreaming, fear, safety and more. We use images and stories to prepare them for reality, and that preparation influences how they confront it. At least, so we imagine, so we hope, and so we worry.

Plato is worried (and hopeful) too. Perhaps surprisingly, he groups stories with music and the other arts for the purposes of education, and thinks that we need to be very careful which ones we tell the future rulers of his ideal city in the Republic:

Don’t you understand that we first tell stories to children? These are false, on the whole, though they have some truth in them … You know, don’t you, that the beginning of any process is most important, especially for anything young and tender? It’s at that time that it is most malleable and takes on any pattern one wishes to impress on it.

Plato thinks that the malleability in this case has a distinct source: the young can’t distinguish the allegorical or non-literal meaning of a story from its surface meaning. His solution – censorship – is notorious, and can sound radical, but in some degree or other almost everyone accepts his statement of the problem, and some version of his solution.

But it’s important to see why Plato thinks we need these early stories that are ‘false on the whole, but contain some truth’. The general goal of all this early exposure to the right kind of stories, music and art isn’t any literal transmission of information, but rather that ‘something of those fine works will strike their eyes and ears like a breeze that brings health from a good place, leading them unwittingly, from childhood on, to resemblance, friendship, and harmony with the beauty of reason’.

The result is that . . .

Continue reading.

I tend to view unicorns (for example) as real examples of cultural constructs. That is, there is physical reality (existing independently of human perception and understanding: the Ding an sich), and there is the reality of human cognizance.

Example: Is color real? That is, does color exist in objective physical reality? No, of course not. In objective reality we find light at various wavelengths, but “color” is a human perception, and if there is no perception, the “color” experience does not occur. “Color” is in here, not out there. A tree falling in a forest sets air waves in motion, but “sound” requires the perception of those waves.

Is the play “Hamlet” real? You can show me thin sheets of a light-colored material with dark marks on it. That, of course, is not “Hamlet” (the play), and if you did not actually know the appropriate language and conventions, it would just be an odd artefact. The reality of “Hamlet” is within human culture and not part of objective reality (the phenomena themselves).

From Bacteria to Bach and Back: The Evolution of Minds discusses such issues in a very interesting way. Recommended.

Written by LeisureGuy

8 February 2019 at 10:08 am

Posted in Books, Daily life

Believing without evidence is always morally wrong

Francisco Mejia Uribe, an executive director at Goldman Sachs in Hong Kong, has degrees in philosophy and economics from the University of Los Andes in Bogotá, Colombia, blogs at The Philosopher Blog, and writes in Aeon:

You have probably never heard of William Kingdon Clifford. He is not in the pantheon of great philosophers – perhaps because his life was cut short at the age of 33 – but I cannot think of anyone whose ideas are more relevant for our interconnected, AI-driven, digital age. This might seem strange given that we are talking about a Victorian Briton whose most famous philosophical work is an essay written nearly 150 years ago. However, reality has caught up with Clifford. His once seemingly exaggerated claim that ‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’ is no longer hyperbole but a technical reality.

In ‘The Ethics of Belief’ (1877), Clifford gives three arguments as to why we have a moral obligation to believe responsibly, that is, to believe only what we have sufficient evidence for, and what we have diligently investigated. His first argument starts with the simple observation that our beliefs influence our actions. Everyone would agree that our behaviour is shaped by what we take to be true about the world – which is to say, by what we believe. If I believe that it is raining outside, I’ll bring an umbrella. If I believe taxis don’t take credit cards, I make sure I have some cash before jumping into one. And if I believe that stealing is wrong, then I will pay for my goods before leaving the store.

What we believe is then of tremendous practical importance. False beliefs about physical or social facts lead us into poor habits of action that in the most extreme cases could threaten our survival. If the singer R Kelly genuinely believed the words of his song ‘I Believe I Can Fly’ (1996), I can guarantee you he would not be around by now.

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

The most natural objection to this first argument is that while it might be true that some of our beliefs do lead to actions that can be devastating for others, in reality most of what we believe is probably inconsequential for our fellow humans. As such, claiming, as Clifford did, that it is wrong in all cases to believe on insufficient evidence seems like a stretch. I think critics had a point – had – but that is no longer so. In a world in which just about everyone’s beliefs are instantly shareable, at minimal cost, to a global audience, every single belief has the capacity to be truly consequential in the way Clifford imagined. If you still believe this is an exaggeration, think about how beliefs fashioned in a cave in Afghanistan led to acts that ended lives in New York, Paris and London. Or consider how influential the ramblings pouring through your social media feeds have become in your very own daily behaviour. In the digital global village that we now inhabit, false beliefs cast a wider social net, hence Clifford’s argument might have been hyperbole when he first made it, but is no longer so today.

The second argument Clifford provides to back his claim that it is always wrong to believe on insufficient evidence is that poor practices of belief-formation turn us into careless, credulous believers. Clifford puts it nicely: ‘No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character.’ Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news pedlars, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, . . .

Continue reading.

Clifford’s position is a strong argument for a good education and training in critical thinking skills, which enable us to look critically at our beliefs and judge whether they are justified.

Written by LeisureGuy

5 November 2018 at 8:37 am

Posted in Daily life, Education

To be happier, focus on what’s within your control

Massimo Pigliucci, a professor of philosophy at City College and at the Graduate Center of the City University of New York and author of How to Be a Stoic: Ancient Wisdom for Modern Living, writes in Aeon:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

This is the Serenity Prayer, originally written by the American theologian Reinhold Niebuhr around 1934, and commonly used by Alcoholics Anonymous and similar organisations. It is not just a key step toward recovery from addiction, it is a recipe for a happy life, meaning a life of serenity arrived at by consciously taking what life throws at us with equanimity.

The sentiment behind the prayer is very old, found in 8th-century Buddhist manuscripts, as well as in 11th-century Jewish philosophy. The oldest version I can think of, however, goes back to the Stoic philosopher Epictetus. Active in the 2nd century in Rome and then Nicopolis, in western Greece, Epictetus argued that:

We are responsible for some things, while there are others for which we cannot be held responsible. The former include our judgment, our impulse, our desire, aversion and our mental faculties in general; the latter include the body, material possessions, our reputation, status – in a word, anything not in our power to control. … [I]f you have the right idea about what really belongs to you and what does not, you will never be subject to force or hindrance, you will never blame or criticise anyone, and everything you do will be done willingly. You won’t have a single rival, no one to hurt you, because you will be proof against harm of any kind.

I call this Epictetus’ promise: if you truly understand the difference between what is and what is not under your control, and act accordingly, you will become psychologically invincible, impervious to the ups and downs of fortune.

Of course, this is far easier said than done. It requires a lot of mindful practice. But I can assure you from personal experience that it works. For instance, last year I was in Rome, working, as it happened, on a book on Stoicism. One late afternoon I headed to the subway stop near the Colosseum. As soon as I entered the crowded subway car, I felt an unusually strong resistance to moving forward. A young fellow right in front of me was blocking my way, and I couldn’t understand why. Then the realisation hit, a second too late. While my attention was focused on him, his confederate had slipped his hand in my left front pocket, seized my wallet, and was now stepping outside of the car, immediately followed by his accomplice. The doors closed, the train moved on, and I found myself with no cash, no driver’s licence, and a couple of credit cards to cancel and replace.

Before I started practising Stoicism, this would have been a pretty bad experience, and I would not have reacted well. I would have been upset, irritated and angry. This foul mood would have spilled over the rest of the evening. Moreover, the shock of the episode, as relatively mild as the attack had been, would have probably lasted for days, with a destructive alternation of anger and regret.

But I had been practising Stoicism for a couple of years. So my first thought was of Epictetus’ promise. I couldn’t control the thieves in Rome, and I couldn’t go back and change what had happened. I could, however, accept what had happened and file it away for future reference, focusing instead on having a nice time during the rest of my stay. After all, nothing tragic had happened. I thought about this. And it worked. I joined my evening company, related what happened, and proceeded to enjoy the movie, the dinner, and the conversation. My brother was amazed that I took things with such equanimity and that I was so calm about it. But that’s precisely the power of internalising the Stoic dichotomy of control.

And its efficacy is not limited to minor life inconveniences, as in the episode just described. James Stockdale, a fighter-jet pilot during the Vietnam War, was shot down and spent seven and a half years in Hoa Lo prison, where he was tortured and often put in isolation. He credits Epictetus for surviving the ordeal by immediately applying the dichotomy of control to his extreme situation as a captive, which not only saved his life, but also allowed him to coordinate the resistance from inside the prison, in his position as senior ranking officer.

Most of us don’t find ourselves in Stockdale’s predicament, but once you begin paying attention, the dichotomy of control has countless applications to everyday life, and all of them have to do with one crucial move: shifting your goals from external outcomes to internal achievements.

For example, let’s say that you are preparing your résumé for a possible job promotion. If your goal is to get the promotion, you are setting yourself up for a possible disappointment. There is no guarantee that you will get it, because the outcome is not (entirely) under your control. Sure, you can influence it, but it also depends on a number of variables that are independent of your efforts, including possible competition from other employees, or perhaps the fact that your boss, for whatever unfathomable reason, really doesn’t like you.

That’s why your goal should be internal: if you adopt the Stoic way, you would conscientiously put together the best résumé that you can, and then mentally prepare to accept whatever outcome with equanimity, knowing that sometimes the universe will favour you, and other times it will not. What do you gain by being anxious over something you don’t control? Or angry at a result that was not your doing? You are simply adding a self-inflicted injury to the situation, compromising your happiness and serenity.

This is no counsel for passive acceptance of whatever happens. After all, . . .

Continue reading.

Written by LeisureGuy

3 November 2018 at 10:11 am

Posted in Books, Daily life, Mental Health

Bruno Latour, the Post-Truth Philosopher, Mounts a Defense of Science

Interesting article that clarified for me the nature of the social constructionist view of reality. If I understand it correctly, a social constructionist would agree with the abstract statement that there is an objective physical reality and that science works to discover what it is, but the social construction of theories and their acceptance evolves in the context of human culture, at one remove from physical reality, and thus the socially accepted “facts” may or may not correspond to physical reality (which is why good science is important). Ava Kofman reports in the NY Times Magazine:

In the summer of 1996, during an international anthropology conference in southeastern Brazil, Bruno Latour, France’s most famous and misunderstood philosopher, was approached by an anxious-looking developmental psychologist. The psychologist had a delicate question, and for this reason he requested that Latour meet him in a secluded spot — beside a lake at the Swiss-style resort where they were staying. Removing from his pocket a piece of paper on which he’d scribbled some notes, the psychologist hesitated before asking, “Do you believe in reality?”

For a moment, Latour thought he was being set up for a joke. His early work, it was true, had done more than that of any other living thinker to unsettle the traditional understanding of how we acquire knowledge of what’s real. It had long been taken for granted, for example, that scientific facts and entities, like cells and quarks and prions, existed “out there” in the world before they were discovered by scientists. Latour turned this notion on its head. In a series of controversial books in the 1970s and 1980s, he argued that scientific facts should instead be seen as a product of scientific inquiry. Facts, Latour said, were “networked”; they stood or fell not on the strength of their inherent veracity but on the strength of the institutions and practices that produced them and made them intelligible. If this network broke down, the facts would go with them.

Still, Latour had never seen himself as doing anything so radical, or absurd, as calling into question the existence of reality. As a founder of the new academic discipline of science and technology studies, or S.T.S., Latour regarded himself and his colleagues as allies of science. Of course he believed in reality, he told the psychologist, convinced that the conversation was in jest. From the look of relief on the man’s face, however, Latour realized that the question had been posed in earnest. “I had to switch interpretations fast enough to comprehend both the monster he was seeing me as,” he later wrote of the encounter, “and his touching openness of mind in daring to address such a monster privately. It must have taken courage for him to meet with one of these creatures that threatened, in his view, the whole establishment of science.”

Latour’s interlocutor was not the only person who felt that the establishment of science was under attack. The mid-1990s were the years of the so-called science wars, a series of heated public debates between “realists,” who held that facts were objective and free-standing, and “social constructionists,” like Latour, who believed that such facts were created by scientific research. To hint at any of the contention and compromise that went on behind the scenes, the realists feared, would give succor to the enemies of progress: creationists, anti‐vaxxers, flat‐earthers and cranks of all stripes. If scientific knowledge was socially produced — and thus partial, fallible, contingent — how could that not weaken its claims on reality? At the height of the conflict, the physicist Alan Sokal, who was under the impression that Latour and his S.T.S. colleagues thought that “the laws of physics are mere social conventions,” invited them to jump out the window of his 21st-floor apartment.

At the time, the science wars struck most people outside the academy, if they noticed them at all, as an overheated scholastic squabble. Lately, however, these debates have begun to look more like a prelude to the post-truth era in which society as a whole is presently condemned to live. The past decade has seen a precipitous rise not just in anti-scientific thinking — last year, only 37 percent of conservative Republicans believed in the occurrence of global warming, down from 50 percent in 2008 — but in all manner of reactionary obscurantism, from online conspiracy theories to the much-discussed death of expertise. The election of Donald Trump, a president who invents the facts to suit his mood and goes after the credibility of anyone who contradicts him, would seem to represent the culmination of this epistemic rot. “Do you believe in reality?” is now the question that half of America wants to ask the president and his legion of supporters.

“I think we were so happy to develop all this critique because we were so sure of the authority of science,” Latour reflected this spring. “And that the authority of science would be shared because there was a common world.” We were seated at the dining-room table of his daughter’s apartment in the 19th Arrondissement of Paris, where Latour, who is 71, was babysitting for his 8-year-old grandson, Ulysse. The apartment, he told me proudly, was purchased with the money that came with the award of the 2013 Holberg Prize, known as the Nobel of the humanities, for what the jury heralded as his “reinterpretation of modernity.” He was wearing a purple turtleneck sweater, his favorite burgundy slacks and sensible black walking shoes. He has a full head of dark, disheveled hair, and his vigorously overgrown eyebrows sweep several unsettling centimeters up beyond the rim of his round spectacles, like a nun’s cornette. “Even this notion of a common world we didn’t have to articulate, because it was obvious,” he continued. “Now we have people who no longer share the idea that there is a common world. And that of course changes everything.”

Those who worried that Latour’s early work was opening a Pandora’s box may feel that their fears have been more than borne out. Indeed, commentators on the left and the right, possibly overstating the reach of French theory, have recently leveled blame for our current state of affairs at “postmodernists” like Latour. By showing that scientific facts are the product of all-too-human procedures, these critics charge, Latour — whether he intended to or not — gave license to a pernicious anything-goes relativism that cynical conservatives were only too happy to appropriate for their own ends. Latour himself has sometimes worried about the same thing. As early as 2004 he publicly expressed the fear that his critical “weapons,” or at least a grotesque caricature of them, were being “smuggled” to the other side, as corporate-funded climate skeptics used arguments about the constructed nature of knowledge to sow doubt around the scientific consensus on climate change.

But Latour believes that if the climate skeptics and other junk scientists have made anything clear, it’s that the traditional image of facts was never sustainable to begin with. “The way I see it, I was doing the same thing and saying the same thing,” he told me, removing his glasses. “Then the situation changed.” If anything, our current post-truth moment is less a product of Latour’s ideas than a validation of them. In the way that a person notices her body only once something goes wrong with it, we are becoming conscious of the role that Latourian networks play in producing and sustaining knowledge only now that those networks are under assault.

This, in essence, is the premise of Latour’s latest book, “Down to Earth,” an illuminating and counterintuitive analysis of the present post-truth moment, which will be published in the United States next month. What journalists, scientists and other experts fail to grasp, Latour argues, is that “facts remain robust only when they are supported by a common culture, by institutions that can be trusted, by a more or less decent public life, by more or less reliable media.” With the rise of alternative facts, it has become clear that whether or not a statement is believed depends far less on its veracity than on the conditions of its “construction” — that is, who is making it, to whom it’s being addressed and from which institutions it emerges and is made visible. A greater understanding of the circumstances out of which misinformation arises and the communities in which it takes root, Latour contends, will better equip us to combat it.

[Emphasis added—and I’ve recently experienced this in an on-line exchange with a conservative. I quoted a list of facts (which could easily be fact-checked), but the list was dismissed altogether because a) the facts put conservatives in a bad light, and b) the facts were published in a liberal publication. My interlocutor didn’t bother to check whether the facts I cited were true or not, but simply dismissed them because of the publication in which they appeared. I did point out that these facts (which exposed some of the idiocy of conservative positions) would never appear in a conservative publication (for obvious reasons), so their appearance in a liberal publication was natural enough. The important point—which my interlocutor was unable to grasp—was whether the facts reported were true or not. He had moved to a plane in which the “truth” of facts depended solely on who spoke them. – LG]

Philosophers have traditionally recognized a division between facts and values — between, say, scientific knowledge on one hand and human judgments on the other. Latour believes that this is specious. Many of his books are attempts to illuminate, as he has written, “both the history of humans’ involvement in the making of scientific facts and the sciences’ involvement in the making of human history.” In a formulation that was galling to both sociologists and scientists, he once argued that Louis Pasteur did not just, as is commonly accepted, discover microbes; rather, he collaborated with them.

Latour likes to say that he has been attuned from an early age to the ways in which human beings influence their natural environment. His affluent family, proprietors of the prominent winemaking business Maison Louis Latour, had been cultivating the same Burgundy vineyards for more than 150 years when Bruno, the youngest of eight children, was born there in 1947. An older brother was already being groomed to run the family firm, so Latour was encouraged to pursue a classical education. At 17, he was sent to Saint-Louis de Gonzague, one of the most prestigious schools in Paris, where he mingled with other young members of the French elite. Although he was a wealthy and well-read Catholic, he found himself completely unprepared for the virulent snobbery of the capital. He was made to feel like the proud, provincial hero of a Balzac novel who arrives in Paris and soon discovers how little he knows about the ways of the world. It was at Saint-Louis de Gonzague that he began to study philosophy, a compulsory subject in the final year of French high school. The first text he was assigned was Nietzsche’s “The Birth of Tragedy”; unlike “all the confusion of mathematics,” it immediately struck him as clear and perfectly rational. [! – LG]

In 1966, he began his undergraduate study at the University of Dijon, where he developed an interest in epistemology — the branch of philosophy concerned with how knowledge is made — but even then he had started to suspect that most of what he was learning was “probably wrong.” Philosophers talked about science as though it were a purely cognitive enterprise, a matter of sheer intellectual virtuosity, and about scientists (when they talked about them at all) as logical, objective, heroic.

These suspicions only deepened over the following years, which Latour spent in the Ivory Coast, under the auspices of a sort of French Peace Corps to avoid military service. As he wrote his doctoral dissertation, he taught philosophy at a technical school in Abidjan and volunteered to work on a study commissioned by the French government. His task was to find out why French companies, which still owned and operated many of the factories in postcolonial Abidjan, were having such difficulty recruiting “competent” black executives. It took less than a day for Latour to realize that the premise was flawed. “The question was absurd because they did everything not to have black executives,” he told me. In the French-run engineering schools, black students were taught abstract theories without receiving any practical exposure to the actual machinery they were expected to use. When they were subsequently unable to understand technical drawings, they were accused of having “premodern,” “African” minds. “It was clearly a racist situation,” he said, “which was hidden behind cognitive, pseudohistorical and cultural explanations.”

In Abidjan, Latour began to wonder what it would look like to study scientific knowledge not as a cognitive process but as an embodied cultural practice enabled by instruments, machinery and specific historical conditions. Would the mind of a scientist or an engineer from, say, California seem any more “modern” or “rational” than that of one from the Ivory Coast if it were studied independent of the education, the laboratory and the tools that shaped it and made its work possible?

Before leaving Dijon for Abidjan, Latour met Roger Guillemin, a biologist who would soon go on to win the Nobel Prize for his work on hormone production in the brain. Guillemin later invited him to study his laboratory at the Salk Institute in San Diego, and so beginning in 1975, Latour spent two years there as a sort of participant-observer, following scientists around as they went about their daily work. Part of Latour’s immersion in the lab involved conducting actual experiments, and his co-workers would often gather around to watch. They couldn’t believe that someone could be, as he put it, “so bad and clumsy.” He found pipetting especially difficult. Anytime the slightest thought crossed his mind, he would forget where he placed the instrument and have to start all over again. He later realized that it was precisely his lack of aptitude for lab work that led him to pay such close attention to the intricate, mundane labor involved in the manufacture of objectivity.

When he presented his early findings at the first meeting of the newly established Society for Social Studies of Science, in 1976, many of his colleagues were taken aback by a series of black-and-white photographic slides depicting scientists on the job, as though they were chimpanzees. It was felt that scientists were the only ones who could speak with authority on behalf of science; there was something blasphemous about subjecting the discipline, supposedly the apex of modern society, to the kind of cold scrutiny that anthropologists traditionally reserved for “premodern” peoples. Not everyone felt the same way, however. The previous year, in California, Latour met Steve Woolgar, a British sociologist, who was intrigued by his unorthodox approach. Woolgar turned Latour on to the work of other sociologists and anthropologists, like Michael Lynch, Sharon Traweek and Harold Garfinkel, who had also begun to study science as a social practice. Latour, in turn, invited Woolgar to spend a few weeks with him studying his primates at the Salk Institute.

The two men collaborated on “Laboratory Life,” which after its publication in 1979 became a founding text in the nascent field of science and technology studies and, by academic standards, a breakthrough success. The book continues to challenge some of our most deeply held notions about how knowledge is made. No one had ever contested that scientists were human beings, but most people believed that by following the scientific method, scientists were able to arrive at objective facts that transcended their human origins. A decade and a half earlier, in his best seller, “The Structure of Scientific Revolutions,” the physicist-turned-philosopher Thomas Kuhn had done much to weaken the Whig interpretation of science by showing how historical advances were governed by contingency and debate. What Latour observed firsthand in Guillemin’s lab made the traditional view of science look like little more than a self-serving fiction.

Day-to-day research — what he termed science in the making — appeared not so much as a stepwise progression toward rational truth as a disorderly mass of stray observations, inconclusive results and fledgling explanations. Far from simply discovering facts, scientists seemed to be, as Latour and Woolgar wrote in “Laboratory Life,” “in the business of being convinced and convincing others.” During the process of arguing over uncertain data, scientists foregrounded the reality that they were, in some essential sense, always speaking for the facts; and yet, as soon as their propositions were turned into indisputable statements and peer-reviewed papers — what Latour called ready-made science — they claimed that such facts had always spoken for themselves. That is, only once the scientific community accepted something as true were the all-too-human processes behind it effectively erased or, as Latour put it, black-boxed.

In the 1980s, Latour helped to develop and advocate for a new approach to sociological research called Actor-Network Theory. While controversial at the time, it has since been adopted as a methodological tool not just in sociology but also in a range of disciplines, like urban design and public health. From his studies of laboratories, Latour had seen how an apparently weak and isolated item — a scientific instrument, a scrap of paper, a photograph, a bacterial culture — could acquire enormous power because of the complicated network of other items, known as actors, that were mobilized around it. The more socially “networked” a fact was (the more people and things involved in its production), the more effectively it could refute its less-plausible alternatives. The medical revolution commonly attributed to the genius of Pasteur, he argued, should instead be seen as a result of an association between not just doctors, nurses and hygienists but also worms, milk, sputum, parasites, cows and farms. Science was “social,” then, not merely because it was performed by people (this, he thought, was a reductive misunderstanding of the word “social”); rather, science was social because it brought together a multitude of human and nonhuman entities and harnessed their collective power to act on and transform the world.

In the fall of 2016, the hottest year on record, Latour took a plane from Paris to Calgary, Canada, where he was due to deliver a lecture on “the now-obsolete notion of nature.” Several hours into the flight, above the Baffin ice sheets to the west of Greenland, he peered out the window. What he saw startled him. That year the North Pole was melting at an accelerated pace. The tundra below, rent with fissures, reminded him of the agonized face from Edvard Munch’s painting “The Scream.”

“It was as though the ice was sending me a message,” Latour recalled in March. Dressed in a striking suit (straw-colored tie, blue waistcoat), he was speaking to a sold-out theater of some 200 people in Strasbourg as part of the city’s biennial puppetry festival. Although Latour is a figure of international renown on the academic circuit, his lecture — a sort of anti-TED Talk on climate change featuring an array of surreal images and acoustical effects — was anything but a traditional conference paper. Throughout the performance, Latour’s looming figure was hidden behind images projected onto a screen, so that it seemed as though he were being swallowed by his own PowerPoint presentation. The effect was a bit like watching “An Inconvenient Truth,” if Al Gore had been a coltish French philosopher who said things like “Scientists, artists, and social scientists like myself are beginning to propose what we call — and maybe it’s too exaggerated — a new cosmology.”

The idea that we can stand back and behold nature at a distance, as something discrete from our actions, is an illusion, Latour says. This was the message that the melting ice sheets were sending him. “My activity in this plane going to Canada was actually having an effect on the very spectacle of nature that I was seeing,” he told his Strasbourg audience. “In that sense, there is no outside anymore.” Appropriately enough, the show, which he has performed in several cities across Europe and will bring to New York this week, is called “Inside.” In our current environmental crisis, he continued, a new image of the earth is needed — one that recognizes that there is no such thing as a view from nowhere and that we are always implicated in the creation of our view. With the advent of the Anthropocene, a word proposed by scientists around the turn of the century to designate a new epoch in which humanity has become tantamount to a geological force, Latour’s idea that humans and nonhumans are acting together — and that the earth reacts to those actions — now sounds a lot like common sense. “He is really the thinker of the Anthropocene,” Philippe Pignarre, Latour’s French publisher of 40 years, told me. “A lot of scientists in France didn’t like him originally because he treated them like other workers, and they believed in having a special relationship to the truth. But now they are using his work. He is at the center of people who want to think about the world.”

“Inside” draws heavily on “Down to Earth,” his new book, which has been highly praised in France since its release there last fall. Scientists, he writes, have largely looked at the problem of climate-change denial through the lens of rational empiricism that has governed their profession for centuries; many limit their domain to science, thinking it inappropriate to weigh in on political questions or to speak in an emotional register to communicate urgency. Even though the evidence in support of global warming has long been overwhelming, some scientists continue to believe that the problem of denialism can be solved through ever more data and greater public education. Political scientists, meanwhile, have shown that so-called “irrational” individuals, especially those who are highly educated, in some cases actually hold onto their opinions more strongly when faced with facts that contradict them. Instead of accusing Trump supporters and climate denialists of irrationality, Latour argues that it is untenable to talk about scientific facts as though their rightness alone will be persuasive. In this respect, “Down to Earth” extends the sociological analysis that he brought to bear on factory workers in Abidjan and scientists in California to the minds of anti-scientific voters, looking at the ways in which the reception of seemingly universal knowledge is shaped by the values and local circumstances of those to whom it is being communicated.

Latour believes that if scientists were transparent about how science really functions — as a process in which people, politics, institutions, peer review and so forth all play their parts — they would be in a stronger position to convince people of their claims. Climatologists, he says, must recognize that, as nature’s designated representatives, they have always been political actors, and that they are now combatants in a war whose outcome will have planetary ramifications. We would be in a much better situation, he has told scientists, if they stopped pretending that “the others” — the climate-change deniers — “are the ones engaged in politics and that you are engaged ‘only in science.’ ” In certain respects, new efforts like the March for Science, which has sought to underscore the indispensable role that science plays (or ought to play) in policy decisions, and groups like 314 Action, which are supporting the campaigns of scientists and engineers running for public office, represent an important if belated acknowledgment from today’s scientists that they need, as one of the March’s slogans put it, to step out of the lab and into the streets. (To this Latour might add that the lab has never been truly separate from the streets; that it seems to be is merely a result of scientific culture’s attempt to pass itself off as above the fray.)

Of course, the risk inherent in this embrace of politics is that climate deniers will seize on any acknowledgment of the social factors involved in science to discredit it even further. In a New York Times Op-Ed, a coastal geologist argued that the March for Science would “reinforce the narrative from skeptical conservatives that scientists are an interest group and politicize their data, research and findings for their own ends.” [This is exactly parallel to my on-line encounter with the conservative who denied any facts reported in a liberal publication. – LG] This was what happened in the infamous 2009 incident now known as Climategate, when emails to and from scientists at the University of East Anglia, a leading center for climate research in Britain, were hacked, revealing exactly the kinds of messy debates that Latour documented in “Laboratory Life.” Climate skeptics cited this as proof that the scientists weren’t really discovering climate change but simply massaging the data to fit their preconceptions. Certainly the incident did not, as scholars of science and technology studies might have hoped, lead the public to a deeper understanding of the controversy and negotiation that govern all good science in the making.

Some might see this discouraging episode as a reason to back away from a more openly pugnacious approach on the part of scientists. Latour does not. . .

Continue reading. There’s much more.

Later in the article:

At a meeting between French industrialists and a climatologist a few years ago, Latour was struck when he heard the scientist defend his results not on the basis of the unimpeachable authority of science but by laying out to his audience his manufacturing secrets: “the large number of researchers involved in climate analysis, the complex system for verifying data, the articles and reports, the principle of peer evaluation, the vast network of weather stations, floating weather buoys, satellites and computers that ensure the flow of information.” The climate denialists, by contrast, the scientist said, had none of this institutional architecture. Latour realized he was witnessing the beginnings of a seismic rhetorical shift: from scientists appealing to transcendent, capital-T Truth to touting the robust networks through which truth is, and has always been, established.

Written by LeisureGuy

25 October 2018 at 8:15 am

Posted in Books, Daily life, Politics

Unpublished and Untenured, a Philosopher Who Inspired a Cult Following

James Ryerson writes in the NY Times Book Review:

Ever since completing his Ph.D. at the University of Pittsburgh in 1993, the Israeli philosopher Irad Kimhi has been building the résumé of an academic failure. After a six-year stint at Yale in the ’90s that did not lead to a permanent job, he has bounced around from school to school, stringing together a series of short-term lectureships and temporary teaching positions in the United States, Europe and Israel. As of June, his curriculum vitae listed no publications to date — not even a journal article. At 60, he remains unknown to most scholars in his field.

Among a circle of philosophers who have worked or interacted with Kimhi, however, he has a towering reputation. His dissertation adviser, Robert Brandom, describes him as “truly brilliant, a deep and original philosopher.” Jonathan Lear, who helped hire Kimhi at Yale, says that to hear Kimhi talk is to experience “living philosophy, the real thing.” The philosopher and physicist David Z. Albert, a close friend of Kimhi’s, calls him “the best and most energetic and most surprising conversationalist I have ever met, a volcano of theories and opinions and provocations about absolutely everything.” (Kimhi and Albert appear to have been inspirations for the two brainy protagonists of Rivka Galchen’s short story “The Region of Unlikeness.”)

To his admirers, Kimhi is a hidden giant, a profound thinker who, because of a personality at once madly undisciplined and obsessively perfectionistic, has been unable to commit his ideas to paper. As a result, he has not been able to share his insights — about logic, language, metaphysics, theology, psychoanalysis, aesthetics and literature — with the wider academic world.

This type of character, though unusual, is not unheard-of in philosophy. Unlike, say, history or sociology, philosophy has long reserved a place for the occasional talent who struggles or declines to publish. The tradition dates back to Socrates, who not only didn’t write but also disparaged writing as too rigid a medium to capture “the living, breathing discourse of the man who knows.” (Plato’s words, of course.) Even as recently as the second half of the 20th century, many philosophy departments still employed a resident Socratic figure — a nonpublishing legend like Sidney Morgenbesser of Columbia or Rogers Albritton of Harvard — as if to provide a daily reminder that the discipline’s founding virtues of intellectual spontaneity, dialectical responsiveness and lack of dogmatism did not lend themselves naturally to the settled view of a treatise.

Kimhi may be the last of this dying breed. Certainly the days of granting tenure to such people are gone. Which is why Kimhi’s supporters have held out hope that a book he was working on for many years would eventually be published. The philosopher Robert Pippin, who has helped secure positions for Kimhi at the University of Chicago, explains that drafts of the manuscript have circulated to great excitement, if among “a very curated audience.” Harvard University Press was interested in publishing the book as early as 2011, but Kimhi, ever the perfectionist, was reluctant to let it go, forever refining and refashioning. Perhaps his foot-dragging was an expression of doubt, too: Could any book live up to his reputation?

Now, at long last, with the publication in July of THINKING AND BEING (Harvard University, $39.95), the world can find out.

It is not easy to summarize Kimhi’s book. Though only 166 pages, it strives to do a lot in a short space, aiming to overthrow views about logic and metaphysics that have prevailed in philosophy for a century. And though characterized by a precision of expression, the book is not what you would call lucid. Reading it is less about working through a series of rigorous, detailed arguments — the dominant mode of contemporary Anglophone philosophy — and more about getting accustomed to a radically different way of looking at fundamental philosophical questions, including What is thinking? and What is the relationship between thinking and the world?

For a taste of the ambition of Kimhi’s project, consider his ideas about the nature of logic. For many decades, our understanding of logic has been defined by a distinction between the “force” and “content” of a proposition — that is, between the act of asserting something and what it is you are asserting. If we don’t draw this distinction, according to a standard view of logic, it is not clear why the following sort of inference is valid:

Premise 1: P → Q [e.g., "If it's raining, then things are wet"]

Premise 2: P [“It’s raining”]

Conclusion: Q [“Things are wet”]

Note that this conclusion follows only if P (“it’s raining”) is unambiguously the same thing in each of the premises. But in the first premise, P is not asserted (“it’s raining” is entertained as a possibility), whereas in the second premise P is asserted (“it’s raining” is presented as fact). Therefore, according to this view, the assertion or “force” of P must be external to logic. An assertion is a psychological attitude (“I think … ”), a fact about what someone happens to believe. Logic, by contrast, concerns the abstract relations that hold among the “contents” — roughly, the meanings — of propositions.
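
To make the structure of that inference concrete, here is a minimal sketch in the Lean proof assistant (my own illustration, not anything drawn from the review or from Kimhi's book), with P and Q left as abstract propositions:

-- Modus ponens as a theorem: given a proof that P implies Q and a proof of P,
-- applying the implication to the proof of P yields a proof of Q.
theorem modus_ponens (P Q : Prop) (h : P → Q) (hp : P) : Q :=
  h hp

Note that, just as in the standard view described above, nothing in this statement records who asserts P or Q; the premises are simply hypotheses. That is exactly the separation of "force" from "content" that Kimhi goes on to question.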

In other words, logic provides us not with an empirical understanding of how our thinking actually works (that’s the purview of psychology), but with a normative understanding of how thinking should work. There is no “I” in logic.

Kimhi argues that this view is wrong, and that the distinction between psychology and logic has led our understanding of thinking astray. Consider that the following statement does not, according to the standard view, constitute a logical contradiction: “It’s raining, but I don’t believe it’s raining.” Why? Because the first part of the sentence concerns a state of affairs in the world (“it’s raining”), whereas the second part concerns someone’s state of mind (“I don’t believe it’s raining”).

Kimhi wants to rescue the intuition that it is a logical contradiction to say, “It’s raining, but I don’t believe it’s raining.” But to do this, he has to reject the idea that when you assert a proposition, what you are doing is adding psychological force (“I think … ”) to abstract content (“it’s raining”). Instead, Kimhi argues that a self-conscious, first-person perspective — an “I” — is internal to logic. For him, to judge that “it’s raining” is the same as judging “I believe it’s raining,” which is the same as judging “it’s false that it’s not raining.” All are facets of a single act of mind.

One consequence of Kimhi’s view is that . . .

Continue reading.

Written by LeisureGuy

8 October 2018 at 3:55 pm

Posted in Books, Daily life


How a Huguenot philosopher realised that atheists could be virtuous

with 2 comments

That an atheist could be virtuous doesn't seem all that surprising today, when we see how vicious some Christians can be. (I'm looking at the evangelicals.) Michael W Hickson, assistant professor of philosophy at Trent University in Peterborough, Ontario, writes in Aeon:

For centuries in the West, the idea of a morally good atheist struck people as contradictory. Moral goodness was understood primarily in terms of possessing a good conscience, and good conscience was understood in terms of Christian theology. Being a good person meant hearing and intentionally following God’s voice (conscience). Since an atheist cannot knowingly recognise the voice of God, he is deaf to God’s moral commands, fundamentally and essentially lawless and immoral. But today, it is widely – if not completely – understood that an atheist can indeed be morally good. How did this assumption change? And who helped to change it?

One of the most important figures in this history is the Huguenot philosopher and historian Pierre Bayle (1647-1706). His Various Thoughts on the Occasion of a Comet (1682), nominally dedicated to taking down erroneous and popular opinions about comets, was a controversial bestseller, and a foundational work for the French Enlightenment. In it, Bayle launches a battery of arguments for the possibility of a virtuous atheist.

He begins his apology on behalf of atheists with a then-scandalous observation:

It is no stranger for an atheist to live virtuously than it is strange for a Christian to live criminally. We see the latter sort of monster all the time, so why should we think the former is impossible?

Bayle introduces his readers to virtuous atheists of past ages: Diagoras, Theodorus, Euhemerus, Nicanor, Hippo and Epicurus. He notes that the morals of these men were so highly regarded that Christians later were forced to deny that they were atheists in order to sustain the superstition that atheists were always immoral. From his own age, Bayle introduces the Italian philosopher Lucilio Vanini (1585-1619), who had his tongue cut out before being strangled and burned at the stake for denying the existence of God. Of course, those who killed Vanini in such a fine way were not atheists. The really pressing question, Bayle suggests, is whether religious believers ­– and not atheists – can ever be moral.

Bayle concedes that Christians possess true principles about the nature of God and morality (we’ll never know whether Bayle himself was an atheist). But, in our fallen world, people do not act on the basis of their principles. Moral action, which concerns outward behaviour and not inward belief, is motivated by passions, not theories. Pride, self-love, the desire for honour, the pursuit of a good reputation, the fear of punishment, and a thousand customs picked up in one’s family and country, are far more effective springs of action than any theoretical beliefs about a self-created being called God, or the First Cause argument. Bayle writes:

Thus we see that from the fact that a man has no religion it does not follow necessarily that he will be led to every sort of crime or to every sort of pleasure. It follows only that he will be led to the things to which his temperament and his turn of mind make him sensitive.

Left alone to act on the basis of their passions and habitual customs, who will act better: an atheist or a Christian? Bayle’s opinion is clear from the juxtaposition of chapters devoted to the crimes of Christians and chapters devoted to the virtues of atheists. The cause of the worst crimes of Christians is repeatedly identified as false zeal, a passion that masquerades as the love of God but that really amounts to politico-religious partisanship mixed with hatred of anyone who is different. Bayle’s survey of recent religious wars demonstrated in his mind that religious beliefs enflame our more violent tendencies:

We know the impression made on people’s minds by the idea that they are fighting for the preservation of their temples and altars … how courageous and bold we become when we fixate on the hope of conquering others by means of God’s protection, and when we are animated by the natural aversion we have for the enemies of our beliefs.

Atheists lack false religious zeal, so we can expect them to live quieter lives.

Yet Bayle does not fully establish the possibility of a virtuous atheist. The kind of behaviour that he focuses on is merely superficially good. In Bayle’s time, to be truly good was to have a conscience and to follow it. In the Various Thoughts, he doesn’t declare that atheists can have a good conscience. In fact, Bayle’s pessimism reaches its pinnacle in a thought experiment involving a visit from an alien species. Bayle claims that it would take these aliens less than 15 days to conclude that people do not conduct themselves according to the lights of conscience. In other words, very few people in the world are, properly speaking, morally good. So atheists are merely no worse than religious believers, and on the surface they might even appear morally superior. While this is less ambitious than claiming that atheists can be completely virtuous, it is still a milestone in the history of secularism.

Bayle expanded on his Various Thoughts twice in his career, once with Addition to the Various Thoughts on the Comet (1694) and again with Continuation of the Various Thoughts on the Comet (1705). In this latter work, Bayle established the foundations of a completely secular morality according to which atheists could be as morally virtuous as religious believers. He begins his discussion of atheism with the strongest objection he could muster against the possibility of a virtuous atheist:

Because [atheists] do not believe that an infinitely holy Intelligence commanded or prohibited anything, they must be persuaded that, considered in itself, no action is either good or bad, and that what we call moral goodness or moral fault depends only on the opinions of men; from which it follows that, by its nature, virtue is not preferable to vice.

The challenge Bayle undertakes is to explain how atheists, who do not recognise a moral cause of the Universe, can nevertheless recognise any kind of objective morality.

He offers an analogy with mathematics. Atheists and Christians will disagree about the foundation of mathematical truths. Christians believe that God is the source of all truth, while atheists do not. However, metaphysical disagreements over the source of the truth of triangle theorems make no difference when it comes to proving triangle theorems. Christians and atheists all come to the conclusion that the sum of the angles inside every triangle is equal to two right angles. For the purposes of mathematics, theological views are irrelevant. Similarly for morality: whether one believes that the nature of justice is grounded in the nature of God or in the nature of a godless Nature makes no difference. Everyone agrees that justice requires that we keep our promises and return items that we have borrowed.

Bayle’s most surprising argument is that Christians and atheists are in agreement about the source of the truths of morality. The vast majority of Christians believe that God is the source of moral truths, and that moral truth is grounded in God’s nature, not in God’s will or choice. God cannot make killing innocent people a morally good action. Respecting innocent life is a good thing that reflects part of God’s very nature. Furthermore, according to Christians, God did not create God’s nature: it has always been and always will be what it is.

At bottom, these Christian views do not differ from what atheists believe about the foundation of morality. They believe that the natures of justice, kindness, generosity, courage, prudence and so on are grounded in the nature of the Universe. They are brute objective facts that everyone recognises by means of conscience. The only difference between Christians and atheists is the kind of ‘nature’ in which moral truths inhere: Christians say it is a divine nature, while atheists say it is a physical nature. Bayle imagines critics objecting: how can moral truths arise from a merely physical nature? This is indeed a great mystery – but Christians are the first to declare that God’s nature is infinitely more mysterious than any physical nature, so they are in no better position to clarify the mysterious origins of morality! . . .

Continue reading.

Written by LeisureGuy

18 September 2018 at 4:46 pm

Posted in Daily life, Religion

