I read Harry Frankfurt’s “On Bullshit” shortly after it was published some eleven years ago. I’ve thought now and then about how I might incorporate it into one of my philosophy courses, but I’ve never found an acceptably seamless way of fitting it in (given everything else I wanted to discuss). Although it does not demonstrate Frankfurt’s meticulous analytic method, this video might be a second-best choice for getting discussion going next time a student wonders aloud whether philosophy itself is bullshit. My answer is: no, it’s not; some philosophers do bullshit, but one of the main purposes of contemporary philosophy is to not let them get away with it for very long.
At the end of a recent PBS NewsHour interview, Jeffrey Brown asked actress Juliette Binoche how she felt about the sorts of roles she can expect to be offered, now that she is older than 50. She gave one of the most positive responses about aging that I’ve ever heard. Here it is, rendered as verse-
I have experience.
And so it’s not as if
I’m not facing it…
but it’s not a fear.
‘Cause time is a tool to grow!
If you don’t have that tool,
how can you grow?
How can you transform?
So, you have to believe that
time is your best friend!
Imagine if you had to die
when you’re young,
you’d feel like, wow!
You know, what I’ve learned
Here is the whole (6 minute) interview. The verse above occurs at about 5 min. 30 sec.-
There is a very good cover story in Harper’s Magazine this month (September issue) by William Deresiewicz entitled “How College Sold Its Soul… and surrendered to the market.” This story is especially relevant here in Wisconsin, where Governor Walker and the Republican-controlled legislature recently slashed the UW system budget by $250,000,000 while freezing tuition, and “the search for truth” came close to being excised from the UW’s mission statement. Although many students are under the misapprehension that eschewing liberal arts programs in favor of business and professional ones is likely to improve their financial position over the long run, pointing that out isn’t Deresiewicz’s main concern; rather, he’s arguing that college should not be viewed in economic terms at all. Here’s a brief excerpt from the article:
It is not the humanities per se that are under attack. It is learning: learning for its own sake, curiosity for its own sake, ideas for their own sake. It is the liberal arts, but understood in their true meaning, as all of those fields in which knowledge is pursued as an end in itself, the sciences and social sciences included. History, sociology, and political-science majors endure the same kind of ritual hazing (“Oh, so you decided to go for the big bucks”) as do people who major in French or philosophy. Governor Rick Scott of Florida has singled out anthropology majors as something that his state does not need more of. Everybody talks about the STEM fields – science, technology, engineering, and math – but no one’s really interested in science, and no one’s really interested in math: interested in funding them, interested in having their kids or their constituents pursue careers in them. That leaves technology and engineering, which means (since the second is a subset of the first) it leaves technology.
Deresiewicz locates the origin of the problem in the ascendancy of “neo-liberalism”, by which he means “an ideology that reduces all values to money values.” Corporate and other business interests would prefer that colleges act as vocational schools rather than train students to reason critically and creatively. He points out that it is not in the interests of economic elites to have students conceiving of alternatives to the status quo, or at least to have them gaining the skills that would allow them to do so. Whether you agree with his diagnosis or not, his critique of current attitudes towards higher education (even on college campuses themselves) is well worth reading.
If you have trouble finding the article, Kathleen Dunn of WPR interviewed Deresiewicz on Monday 8/31, and they covered many issues not discussed in the article, including Wisconsin-related ones. You can listen to or download the segment here. You can also find the podcast on iTunes.
It’s been fascinating to read the news stories on Sandra, the orangutan who an Argentine court decided has a right to freedom as a “non-human person”. Reporting it, UPI made one of the most revealing blunders, declaring-
On Sunday the court agreed with AFADA attorneys’ argument that Sandra was denied her freedom as a “non-human person” — a distinction that places Sandra as a human in a philosophical sense, rather than physical.
Well, no: the distinction doesn’t “place Sandra as a human” in any sense, and especially not “in a philosophical sense”. Rather, the court is implying that non-human animals have rights, not as honorary members of our species, but in virtue of their own cognitive abilities. Some animal rights activists might even take offense at this sort of “discrimination” by cognitive class (at what degree of cognitive impairment does a human cease to have rights?), but at least it avoids the – probably unconscious – speciesism that seems to lie behind the UPI comment.
That’s not to say it is philosophically easy to decide who has rights, and on what basis, partly because there are so many views of what a “right” is. What seems clear is that granting all and only humans rights (on the basis of their species alone) is objectionably arbitrary. An alternative approach is to argue that any sentient creature deserves moral consideration on the basis of its ability to feel pleasure or pain, but such a view has its own complications. While all of the philosophical kinks are being worked out (a process that is notoriously slow), it seems safest to “err” on the side of maximal compassion, which we can hope to be also the side of maximal impartial rationality.
Over the years I’ve posted several audio excerpts from Alan Watts’ talks, but I hadn’t seen any animated illustrations of his suggestions or parables until today. Here are a couple of short ones that draw from his lectures on Zen and Daoism (thanks to Tom via Berry for finding them)-
The second one, in particular, raises some interesting questions: is Watts – or the parable – suggesting that one should never judge an event to be good or bad, just because one can never know all of the event’s long-term consequences, and one can never be certain even of its immediate, short-term consequences? In the case of each of the events in the parable, instead of the farmer’s saying “Maybe”, could he not have said something just a little stronger: “Probably”? True, he would have been wrong about the improbable consequences of the events in the story, but if he made a habit of saying “probably”, wouldn’t he be right at least most of time? And wouldn’t that be enough to allow for the usefulness of at least some value judgments (the ones past experience teaches us we can be most confident about)?
I guess my point is this: yes, nature is very complex, and our minds are very limited, as are the data we use when we judge some event to be good or bad. But our minds are also part of the complexity of nature, and the somewhat predictable patterns of nature can and should inform our minds. I think the best lesson to be learned from the parable is not that one should never make value judgments, but that one should be very humble when making them, and act only on those judgments one has good reason to believe are true.
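For what it’s worth, the arithmetic behind the farmer’s hypothetical “Probably” can be sketched in a toy simulation. All the numbers here are my own invented assumptions, not anything from the parable: if we suppose that events which look bad really do turn out badly, say, 80% of the time, then a habit of judging “probably bad” is right far more often than not, which is all my argument above needs.

```python
import random

random.seed(42)

# Invented assumption: an apparently bad event actually turns out
# badly 80% of the time. The parable itself supplies no such number.
P_LOOKS_BAD_IS_BAD = 0.8
TRIALS = 10_000

correct = 0
for _ in range(TRIALS):
    # The farmer sees an apparently bad event and judges "probably bad".
    actually_bad = random.random() < P_LOOKS_BAD_IS_BAD
    if actually_bad:
        correct += 1  # the "probably" judgment was vindicated

accuracy = correct / TRIALS
print(f"'Probably' is right about {accuracy:.0%} of the time")
```

The occasional surprise (the horse that runs off and returns with companions) shows up here only as the 20% of trials the heuristic gets wrong; being wrong that often is compatible with the judgment still being useful.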
I’ve railed against Facebook many times on this blog, and in 2010’s “Facebook: Beyond The Last Straw“ I promised I would stop. I managed to keep that promise for nearly four years, but I’ve been roused to rail once again by the confluence of four different interests I happen to have: emotion research (one of my philosophical activities), ethics (a subject I teach), federal regulations covering university research (which I help to administer by serving on my university’s Institutional Review Board), and the internet (which, of course, I constantly use).
In case you haven’t yet heard, what Facebook did was to manipulate the “news feeds” users received from their friends, eliminating items with words associated with either positive emotions or negative emotions, and then observing the degree to which the manipulated users subsequently employed such positive or negative vocabulary in their own posts. Facebook’s main goal was to disconfirm a hypothesis suggested by previous researchers that users would be placed in a negative mood by their friends’ positive news items, or in a positive mood by their friends’ negative news items. As I understand it, the results did disconfirm that hypothesis, and confirmed the opposite one (namely, that users would be placed in congruent rather than incongruent mood states by reading their friends’ positive or negative news items), but just barely.
Although I find this methodology questionable on a number of grounds, apparently peer-reviewers did not. The research was published in a reputable journal. More interesting to me are the ethical implications of Facebook’s having used their users as guinea pigs this way.
The best article I’ve found on the net about the ethical issues raised by this experiment was written as an opinion piece on Wired by Michelle N. Meyer, Director of Bioethics Policy in the Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program. Meyer is writing specifically about the question of whether the research, which involved faculty from several universities whose human-subject research is federally regulated, could have (and should have) been approved under the relevant regulations. Ultimately, she argues that it both could have and should have, assuming that the manipulation posed minimal risk (relative to other manipulations users regularly undergo on Facebook and other sites). Her only caveat is that more specific consent should have been obtained from the subjects (without giving away the manipulation involved), and some debriefing should have occurred afterward. If you’re interested in her reasoning, which at first glance I find basically sound, I encourage you to read the whole article. Meyer’s bottom line is this-
We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities to engage freely in these practices, we ought not unduly restrain academics trying to determine their effects. Recall those fear appeals I mentioned above. As one social psychology doctoral candidate noted on Twitter, IRBs make it impossible to study the effects of appeals that carry the same intensity of fear as real-world appeals to which people are exposed routinely, and on a mass scale, with unknown consequences. That doesn’t make a lot of sense. What corporations can do at will to serve their bottom line, and non-profits can do to serve their cause, we shouldn’t make (even) harder—or impossible—for those seeking to produce generalizable knowledge to do.
My only gripe with this is that it doesn’t push strongly enough for the sort of “conversation” mentioned in the first line. The ways in which social media sites – and other internet sites – can legally manipulate their users without their specific consent are, as far as I can tell, entirely unregulated. Yes, the net should be open and free, but manipulation of the sort Facebook engaged in undermines rather than enhances user freedom. We shouldn’t expect to be able to avoid every attempt to influence our emotions, but there is an important difference between (for instance) being exposed to an ad as a price of admission, and having the information your friends intended you to see edited, unbeknownst to you or your friends, for some third party’s ulterior purpose.
File this under “odd confluences of marketing and philosophy”…
As both a fan of the BK Veggie (at least when starving and passing through a small town with only fast food restaurants and no Subway) and a philosophy professor, I found this news item almost as interesting as it is just plain weird: Burger King, in its infinite corporate wisdom, has decided to change its catch-phrase from “Have It Your Way” to “Be Your Way”. BurgerBusiness.com apparently got the scoop–
Fernando Machado, SVP, Global Brand Management, told BurgerBusiness.com that the new tagline is the result of a company reexamination of its brand and its relationship with its customers. “Burger King is a look-you-in-the-eyes brand, a relaxed and a friendly brand. It is approachable and welcoming,” he said. “So we wanted the positioning to reflect that closeness. We elevated ‘Have It Your Way’ to ‘Be Your Way’ because it is a richer expression of the relationship between our brand and our customers. We’ll still make it your way, but the relationship is deeper than that.”
Sure, Be Your Way: be obese, be diabetic, be wasteful, be oblivious (except, of course, when you order the Veggie). We’ll take your money, however you are. Of course, “Have It Your Way” has its own share of unfortunate associations: have a heart attack, have a stroke, have gastric distress… But what seems to be moving the advertisers here is rather this: since being indicates a “deeper relationship” than having, and since what you are is likely to be more important to you than merely what you have, emphasizing being over having should lead you to desire a Whopper more than you would were you still stumbling into one of their establishments under the less efficacious spell of their traditional catch-phrase. However, the relationship between desiring, being, and having can be tricky, as Jean-Paul Sartre made abundantly clear in his epic Existentialist tome, Being and Nothingness. Here’s a quick summary of his view on this, courtesy of the Internet Encyclopedia of Philosophy–
For Sartre, the lover seeks to possess the loved one [or the loved burger – ed.] and thus integrate her into his being: this is the satisfaction of desire. He simultaneously wishes the loved one nevertheless remain beyond his being as the other he desires, i.e. he wishes to remain in the state of desiring. These are incompatible aspects of desire: the being of desire is therefore incompatible with its satisfaction.
So… do the advertisers really want to short-circuit the desiring process, and prematurely emphasize being over having? But wait… the plot thickens-
In the lengthier discussion on the topic “Being and Having,” Sartre differentiates between three relations to an object that can be projected in desiring. These are being, doing and having. Sartre argues that relations of desire aimed at doing are reducible to one of the other two types. His examination of these two types can be summarised as follows. Desiring expressed in terms of being is aimed at the self. And desiring expressed in terms of having is aimed at possession. But an object is possessed insofar as it is related to me by an internal ontological bond… Through that bond, the object is represented as my creation. The possessed object is represented both as part of me and as my creation. With respect to this object, I am therefore viewed both as an in-itself [an inert, untroubled thing – ed.] and as endowed with freedom. The object is thus a symbol of the subject’s being, which presents it in a way that conforms with the aims of the fundamental project [that is, the impossible project of being God, who alone can be conscious of something without being alienated from it – ed.]. Sartre can therefore subsume the case of desiring to have under that of desiring to be, and we are thus left with a single type of desire, that for being.
So, ultimately, if desiring to have is reducible to desiring to be, the advertisers might be wasting their time – much ado about nothing. Or is that much ado about nothingness?
As a rule, musicals tend to strike me as amusing at best (Passing Strange aside), and only time will tell whether Sting and his cohort of Broadway pros can pull off the rare feat of successfully marrying rock, pop, or folk songs to an emotionally resonant and theatrically stageable story. But the more I listen to the numbers Sting has written for his The Last Ship project, the more they grow on me. You can listen to many of those songs, performed live by Sting and several cast members, on this American Masters episode. Meanwhile, here’s one of the more thought-provoking and suggestive songs from the album (not included in the Great Performances episode), one that demonstrates how a talented – and well-read – songwriter (or two) can relate an interpretation of quantum physics to a theme with a lot of poetic and dramatic potential: how choices create universes, and how those universes might be related to parallel universes not only physically, but – more humanly – by relief, or regret, or resignation, or…
“It’s Not The Same Moon”
by Sting and Rob Mathes
Did you ever hear the theory of the universe?
Where every time you make a choice,
A brand new planet gets created?
Did you ever hear that theory?
Does it carry any sense?
That a choice can split the world in two,
Or is it all just too immense for you?
That they all exist in parallel,
Each one separate from the other,
And every subsequent decision,
Makes a new world then another,
And they all stretch out towards infinity,
Getting further and further away.
Now, were a man to reconsider his position,
And try to spin the world back to its original state?
It’s not a scientific proposition,
And relatively speaking…you’re late.
It’s not the same moon in the sky,
And these are different stars,
And these are different constellations,
From the ones that you’ve described.
Different rules of navigation,
Strange coordinates and lines,
A completely different zodiac,
Of unfamiliar signs.
It’s not the same moon in the sky,
And those planets are misleading,
I wouldn’t even try to take a bearing or a reading,
Just accept that things are different,
You’ve no choice but to comply,
When smarter men have failed to see,
The logic as to why.
It’s not the same moon,
It’s not the same moon,
In the sky.
I usually teach two books in my “Contemporary Philosophy” class: A. J. Ayer’s Language, Truth and Logic, and Saul Kripke’s Naming and Necessity. Ayer’s book nicely illustrates the limits of verificationist semantics, the problems with phenomenalism, and the futility of trying to eliminate metaphysics from philosophy. Kripke’s book shows how metaphysics survived – and ultimately exploited – the “linguistic turn” taken by 20th century analytic philosophy. One thing that both books have in common, however, is at least a passing concern with unicorns.
Ayer uses the sentence “Unicorns are fictitious” to illustrate how surface grammar can systematically mislead philosophers into spouting metaphysical nonsense (e.g., that since ‘unicorns’ seems to be the subject of this sentence, they must “have a mode of real being which is different from the mode of existing things”). Kripke, on the other hand, uses his scientific essentialism to argue that unicorns not only do not actually exist; they could not even possibly exist.
Well, we were talking about Ayer’s discussion of unicorns in class today, and Shannon, one of my sharpest students, later tweeted me that “‘Back to the unicorns’ is something one only hears in Harry Potter or philosophy classes”, to which I responded with “Indeed…”, followed by the title of this post.
This got me thinking, though: just how extensively are unicorns used in the philosophical literature? (There’s a book to be written here, if it hasn’t already been published). To get a rough idea, I did a quick search of the Stanford Encyclopedia of Philosophy (one of my favorite resources), and found that the mythological creatures trot onto that particular stage in no less than twenty-nine – count ’em, 29 – different topics! Here’s a link to the list, for all of you unicorn junkies out there.
I found “Her”, Spike Jonze’s new movie, somewhat difficult to sit through. It feels too long (so little happens), and it treads a very thin line between a psychologically rich character study and a Saturday Night Live parody of a cliché romance. Also, the overall look of the film is bland, as if it were covered with a gray filter, and it’s too dimly lit in many scenes. In fact, one key scene happens entirely in the dark, a stylistic choice I couldn’t help but see as a sign of Jonze’s embarrassment with the scene’s content. No doubt the somberness of much of the indoor photography is meant to underscore protagonist Theodore’s extremely introverted personality. But it’s overkill: Joaquin Phoenix’s spot-on Theodore needs no extra help.
Yet, despite these problems, “Her” is, to my mind, perhaps the most thought-provoking Hollywood film released this year, with the possible exception of 12 Years A Slave (which I blogged about here). I say this even though in most ways Her is the opposite of my favorite Jonze film, 2002’s Adaptation. That movie had an almost frenetic energy; it was saturated by the sub-tropical colors of South Florida, it had a very complex structure (thanks to screenwriter Charlie Kaufman), and it centered on two (or three?) eccentric protagonists, played with all the requisite bravado by Nicolas Cage and Chris Cooper. Her, on the other hand, just plods along, without much to look at (a Scarlett-Johansson-shaped video image of Samantha might have helped a lot in that respect), with the simplest possible structure, and only two substantial characters, one of which is invisible, the other of which is rarely expressive.
But what I like about “Her” is its heartfelt exploration of intimacy, an exploration that goes deeper than what is generally found in your standard relationship flick (which, I admit, is not saying much). The film raises the question of whether it would be possible to be really intimate with the user interface of an operating system (to get some sense of Johansson’s silicon-based Samantha, just imagine Apple’s Siri on both intellectual- and emotional-IQ steroids). But the film is more centrally concerned with the loss of human intimacy in our ever more technologically-mediated world, and that is an even worthier subject. Samantha and Theodore’s dialog reminds us, somewhat poignantly, of what a genuinely intimate relationship at least sounds like – something that’s sorely lacking not only from most other films, but also from many lives. The only thing missing from Theodore and Samantha’s relationship (besides a body, of course) seems at first to be any element of danger. For surely nothing could be less dangerous than a relationship with an entity pre-programmed to satisfy one’s every need. Theodore apparently need not fear that Samantha will ever leave him like his ex-wife did, but there’s the rub: how could such an apparently “failsafe” relationship ever really be fulfilling?
It’s the particular way in which the film first raises and then answers (or subverts) that question that makes it worth watching, and helps to excuse its weaknesses. Here’s the trailer-
Near the end of the film there’s an important reference to Alan Watts, the mid-20th century intellectual, ex-theologian, and pre-New Age disseminator of Asian religious traditions and metaphysical views. For those unfamiliar with Watts’ work, the brief description of him given in the film might suffice for the script’s purposes (though I doubt it). But for those at least passingly familiar with his life and work, the reference will have all sorts of rich resonances, and suggest several different levels on which to interpret the ending. The most obvious level has to do with Watts’ charismatic charm, which seems to have been accompanied by a (no doubt philosophically motivated) lack of shame. The second, slightly less obvious level rides on Watts’ trenchant criticisms of Western Culture, which he viewed as both a cause and effect of its average member’s confusion and neurosis. His prescription was, quite simply, to become enlightened in the down-to-earth, Zen sense he himself clearly sought. Finally, a third level of interpretation rides on the similarity between Samantha and Theodore that Samantha at one point says comforts her. To jump aboard this train of thought, you need to focus on Watts’ thesis that reality is, ultimately, One (a “monistic” worldview that a Buddhist need not accept). To avoid falling into didactic mode, I’ll just add that these three levels of interpretation are, I think, complementary. They leave Theodore with much more to mull over beyond the picture’s ending than just the promises and pitfalls of romantic attachments. The only problem is that the reference to Watts and the relevance of his personality and worldview are such “inside baseball” that the resonances that finally sold me on the film will probably not occur to most of the film’s audience. I’m not sure that they even occurred to the filmmakers.
If you’ve never heard an Alan Watts talk, here’s a 10-minute audio excerpt I once used in an adult enrichment class that focused on his fusion of Eastern and Western perspectives. At one point he mentions “the ceramic myth” and “the fully automatic myth” – ideas he explains earlier in the talk. By the first he just means the monotheistic story that God created the universe (much as people create ceramics). By the second he’s referring to the Newtonian view of the universe as a dumb, fully automatic machine, devoid of consciousness. In this excerpt, the two main themes he riffed on throughout his career – the mental illness of Western culture, and the metaphysical monism (supported by ecology and post-Newtonian physics) that could be part of the cure – are on full display.
Much more Watts is available here. For my own previous posts on Watts, just search for his name using the Search box above.
I’ve blogged before about Stew (AKA Mark Stewart), the Tony award-winning playwright for his rock musical Passing Strange and accomplished singer-songwriter (check out his latest album, Making It), but his lecture/performance at UW Oshkosh the night before last gave me another opportunity to share him with you.
The answer he gave to the question – is art necessary? – was, as you might have expected, yes… but the reasons he gave were not the usual ones. For instance, it wasn’t that cultures require art to flourish, or that art is needed to civilize the heathen soul. Rather, Stew riffed on three main themes, and I’ll just state the gist of them here, along with some of my own elaboration I don’t think he’d object to.
First, art is what people do, as people. You simply can’t be a person unless you create art, even if the only art you create is yourself. When you step into your grandma’s house, you notice – if you have any eye for it at all – that she has carefully placed keepsakes and photos on the coffee table, the shelves, etc. Her whole life is (or at least those aspects of it she cares to remember are) on display, if not for others, at least for herself. Then there’s the annual holiday card, letter, or now email that many of us send to our friends and family, updating them on our “true stories”. This is a creative act. It is art. Similarly, we’re all playwrights. Every day we choose our own costumes and dabble with our sets; we also write most of our own lines. I would add that, unlike the days when radio ruled, we’re now our own music supervisors as well, as we carry our music libraries on our phones. But – and here I’m developing Stew’s theme in a way with which he might not entirely approve – for better or worse we’re not entirely in control of the final product. We’re not the sole producers of our art, after all. Our parents, and everyone who came before us, and for that matter the entire universe, also have that honor (or should I say dubious distinction?). Nor, even if we are self-directors, do we contractually have control over the final cut. We all wander onto each other’s stages, often in the middle of productions we have nothing – or nearly nothing – to do with. Narratively this should result in relative chaos, and sometimes it does, but usually we manage to muddle through. It is, as Stew said, what we do.
Secondly, art is necessary in the sense that, paradoxical as this might sound, it keeps life real. It always, though often unintentionally, offers a critique of the status quo: the one-dimensional, black and white, reductive Grand Narratives proffered by politicians, religious leaders, and mass media marketeers. Art does this merely by reminding us of the particular, the personal, and the idiosyncratic. Impoverished art – and here’s my somewhat more Aeolian take on Stew’s relatively Ionian melody – is little more than some permutation of the status quo that the artist has perhaps unconsciously internalized and regurgitated. Impoverished art merely reflects the status quo by being overly simplistic, stereotypical, shallow, sentimental and/or sensationalistic… Sartre would call such art “inauthentic”. When impoverished art is intentionally produced, and therefore bad in addition to impoverished, there might be a temptation to write it off as prostitution – it is often done just for money, and it does similarly satisfy a consumer’s need (so perhaps even bad art is “necessary”, in a sense). But artists who intentionally produce impoverished art invest less of themselves in their work than even the most jaded prostitutes, who at least have to use their own bodies. Such artists merely pretend, without taking any chances, without revealing anything about their actual selves. More “authentic” artists also pretend, but never merely. Their pretending is not deceptive; it’s not pretense.
Not that I have anything against the occasional “guilty pleasure”… For instance, I confess to regularly watching the latest version of “Hawaii 5-0”, mainly for the scenery and, since I grew up in the Islands, its nostalgic value. Sometimes, serendipitously and for purely personal, idiosyncratic reasons, even impoverished art resonates.
Finally, art provides us with at least one half of a real friendship in a world where real friends are always rare, but grow even rarer as we age. Poets, novelists, singer-songwriters, filmmakers, and others put the best of themselves into their works; they represent themselves – or at least how they see the world – as honestly as they can. What more could you ask of true friends, except perhaps that they also show some interest in you? And these friends, unlike the flesh-and-blood kind, are never far away. There they are, under a layer of dust on your bookshelf, in your rarely opened music and movie files, undemanding, patiently waiting to be discovered or re-discovered when you most need them. Of course, just like the flesh-and-blood variety, such friends might fail to live up to expectations, or lose their attractiveness over time. But to co-opt and re-purpose Matthew 7:16- By their fruits you shall know them… not to mention yourself.
Speaking of fruits (or, less metaphorically, works), it seems fitting to end this post with the opening lines of T.S. Eliot’s “Burnt Norton”, the first of his “Four Quartets” – which Stew mentioned as being a very old friend of his, but one that he’s just now really getting to know:
Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.
What might have been is an abstraction
Remaining a perpetual possibility
Only in a world of speculation.
What might have been and what has been
Point to one end, which is always present.
Footfalls echo in the memory
Down the passage which we did not take
Towards the door we never opened
Into the rose-garden. My words echo
Thus, in your mind.
If you’re looking for one last book to read this summer, and you’re the type who likes to indulge in grand speculations without sacrificing critical reasoning, I’d like to recommend Thomas Nagel’s “Mind and Cosmos”. Like some other philosophers who have reached a certain age, Nagel seems more than willing to set aside the hair-splitting rigor required for first-rate academic work, and to suggest tentative answers to the truly mind-boggling, age-old problems: here, how we should try to adequately explain the origin of the universe, life, consciousness, cognition, and value. In a scant 128 pages, Nagel takes on this apparently intractable problem as simply and directly as a self-respecting analytic philosopher can, mainly by pursuing a negative goal: to cast doubt on the sufficiency of the usual materialist explanation of the universe, as well as on the contemporary neo-Darwinian explanation of life and its mental dimensions. In this he can’t help but share, with obvious discomfort, common ground with Intelligent Design proponents. But Nagel pays little attention to the Creationist alternative, dismissing it as insufficient, implausible, and at least as ideological as its neo-Darwinian competitor. Instead, he wants to shore up the credentials of an ancient view that goes all the way back to Aristotle: a naturalistic teleology that holds that we can adequately explain the universe only with a theory that includes laws that work, in some sense, in reverse. According to such a view, the universe exists, in part, in order to bring about the existence of conscious, thinking creatures with the ability to recognize objective truths about physics, biology, psychology, and value (particularly morality). That is, the universe is determined to develop as it does at least partly in order to recognize itself. This is not an entirely original idea, but rarely has a philosopher of Nagel’s stature been brave enough to actually advocate it, at least publicly.
If Nagel is right (and he realizes that his argument is based on little more than quite tentative epistemological intuitions), our current science is not necessarily wrong, but it is radically incomplete, and the hope that by merely adding further causal principles of the same type it can eventually provide an adequate “theory of everything” – or even a “theory of everything that we currently know of” – has to be abandoned. To sum up this negative point and hint at the positive alternative, here’s part of the book’s last two paragraphs-
…I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world. It would be an advance if the secular theoretical establishment, and the contemporary enlightened culture which it dominates, could wean itself of the materialism and Darwinism of the gaps – to adapt one of its own pejorative tags. I have tried to show that this approach is incapable of providing an adequate account, either constitutive or historical, of our universe.
However… [a]n understanding of the universe as basically prone to generate life and mind will probably require a much more radical departure from the familiar forms of naturalistic explanation than I am at present able to conceive. Specifically, in attempting to understand consciousness as a biological phenomenon, it is too easy to forget how radical is the difference between the subjective and the objective, and to fall into the error of thinking about the mental in terms taken from our ideas of physical events and processes…
It is perfectly possible that the truth is beyond our reach, in virtue of our intrinsic cognitive limitations, and not merely beyond our grasp in humanity’s present stage of intellectual development. But I believe that we cannot know this, and that it makes sense to go on seeking a systematic understanding of how we and other living things fit into the world. …The empirical evidence can be interpreted to accommodate different comprehensive theories, but [in the case of reductive materialism and its neo-Darwinian extension], the cost in conceptual and probabilistic contortions is prohibitive. I would be willing to bet that the present right-thinking consensus will come to seem laughable in a generation or two – though of course it may be replaced by a new consensus that is just as invalid. The human will to believe is inexhaustible.
Of course, Nagel’s book has raised the ire of many of his fellow philosophers who accept “the present right-thinking consensus”. For an informative article on the criticisms, read this essay from a few months ago in the Chronicle: “Where Nagel Went Wrong”.
…Or is he just ignorant of the founders’ theological influences?
If you’re a regular visitor to this blog, you might have noticed that I haven’t been posting much about the current Presidential race. That’s because I tend to get interested in political races – in a writerly way – only when there is some sort of logical or philosophical issue to discuss, and, let’s face it, this campaign hasn’t exactly been rich in philosophical content. Lately, however, Paul Ryan’s standard stump speech has often included the following sort of statement, which he also has made on the floor of the House:
“Our founders got it right when they wrote in the Declaration of Independence that our rights come from nature and nature’s God, not from government.”
Ryan is certainly correct that the founders used the phrase “the Laws of Nature and of Nature’s God” in the first paragraph of the Declaration-
“When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.”
But it was Ryan’s own emphasis on nature and “nature’s God” that got my attention. In Jefferson’s day, the phrase was strongly associated with the Enlightenment doctrine of Deism. Here’s a brief outline of the view from the (always enlightening) Stanford Encyclopedia of Philosophy–
Deism is the form of religion most associated with the Enlightenment. According to deism, we can know by the natural light of reason that the universe is created and governed by a supreme intelligence; however, although this supreme being has a plan for creation from the beginning, the being does not interfere with creation; the deist typically rejects miracles and reliance on special revelation as a source of religious doctrine and belief, in favor of the natural light of reason. Thus, a deist typically rejects the divinity of Christ, as repugnant to reason; the deist typically demotes the figure of Jesus from agent of miraculous redemption to extraordinary moral teacher.
That Jefferson himself was a Deist is pretty clearly stated in a letter he wrote to his friend, William Short-
“…it is not to be understood that I am with him [Jesus] in all his doctrines. I am a Materialist; he takes the side of Spiritualism; he preaches the efficacy of repentance toward forgiveness of sin; I require a counterpoise of good works to redeem it. Among the sayings and discourses imputed to him by his biographers, I find many passages of fine imagination, correct morality, and of the most lovely benevolence; and others, again, of so much ignorance, of so much absurdity, so much untruth and imposture, as to pronounce it impossible that such contradictions should have proceeded from the same being.” [“Letter to William Short, 13 April 1820” The Writings of Thomas Jefferson. Ed. Andrew Lipscomb. Hershey: Pennsylvania State University, 1907. p. 244.]
Now, I’m no historian (and if I’ve gotten anything wrong here, please let me know), but I would expect someone in Ryan’s position to be more of one. Could it be that he is unaware of the Deism inherent in the phrase he’s constantly trotting out on the campaign trail? Or, despite his professed Catholicism, could he be a “secret Deist” himself (a possibility that seems far less unlikely – thanks to the lack of any necessary outward manifestations of the theology – than the “Sekrit Muslim” charge made against President Obama)? Deism is not currently widespread, partly because it turns out to be surprisingly hard (impossible?) to prove God’s existence by “the light of reason” alone. So there’s a natural tendency for Deism to evolve or devolve either into Fideism (which rejects the role of rationality in religion in favor of non-rational faith) or Atheism. Ryan’s perhaps inadvertent endorsement of Deism is inconsistent with both Catholicism and Atheism (the view of his intellectual heroine, Ayn Rand). But since logical inconsistency would be more troubling than simple ignorance, perhaps it would be most charitable to charge Ryan only with the latter.
If you’re a Boltzmann brain, then I’m likely a figment of your imagination as you float around in otherwise empty (or at least high-entropy) space, a minimum assemblage of whatever matter or energy is required to generate your thoughts and images. You emerged as a “quantum fluctuation” of particles out of the quantum fields that underlie space itself – your mother was the vacuum (no offense intended). Yes, you were an unlikely fluctuation, but given enough time – and an eternity is more than enough time – you were bound to happen at some point. In fact, at least absent assumptions far more speculative and untested than those of statistical mechanics and quantum physics, it was far more likely that you would emerge as an isolated brain – or whatever assemblage of particles you really are – in infinite space than that the Big Bang would have occurred with just the right properties to give rise to the universe as we observe it.
The idea that you could be mistaken about everything except the fact of your own bare existence as a conscious mind is nothing new. In his Meditations, Descartes developed such a scenario on his way to convincing himself that his own mind certainly existed, and hence (along with several controversial assumptions) that a benevolent, omnipotent God must exist, and therefore that our everyday beliefs about the physical world are highly likely to be true (as long as we form them carefully). To make his skeptical scenario psychologically vivid and a worthy antagonist to defeat, Descartes imagined that a malevolent demon might be deceiving him in every possible way. Of course, Descartes recognized that his demon scenario was utterly improbable, but since in his view knowledge had to be built on an absolutely certain foundation, he thought that the mere possibility of such a demon could undermine his previously uncritical faith in his common sense beliefs, and that showing that such a demon could not cause him to reasonably doubt his own existence would go a long way towards establishing a firm foundation for math, physics, and the other sciences. Critics, of course, love to point out that a mere possibility is insufficient to justify a reasonable doubt. It is possible that a mountain of gold will soon emerge in my back yard, but that mere possibility gives me no reason to quit my day job just yet. The possibility of a demon similarly can provide no reasonable ground for doubting my common sense beliefs. By contrast, the disturbing aspect of the Boltzmann brain scenario is that our best-tested physical theories actually suggest that being a Boltzmann brain is not only possible, it’s actually more likely – much more likely – than the situation in which we believe ourselves to be.
To explain why we observe a relatively orderly, amenable universe around us, even though a higher-entropy, less amenable sort of universe is far more likely to emerge from the cosmos on purely statistical grounds, we naturalists often appeal to an “anthropic principle”: in an infinite universe, some regions are likely to be more amenable to life than others, and life will quite predictably exist only in those regions where its evolution is possible. But the statistical reasoning that supports the probability of your being a Boltzmann brain also undercuts such appeals to anthropic principles. Sean Carroll puts this nicely in his book, “From Eternity To Here”-
… Maybe, we might reason [in accordance with an anthropic principle], in order for an advanced scientific civilization such as ours to arise, we require a “support system” in the form of an entire universe filled with stars and galaxies, originating in some sort of super-low-entropy early condition. Maybe that could explain why we find such a profligate universe around us.
No. Here is how the game should be played: You tell me the particular thing you insist must exist in the universe, for anthropic reasons. A solar system, a planet, a particular ecosystem, … whatever you like. And then we ask, “Given that requirement, what is the most likely state of the rest of the universe [given statistical mechanics and quantum theory], in addition to the particular thing we are asking for?”
And the answer is always the same: The most likely state of the rest of the universe is to be in equilibrium. If we ask, “What is the most likely way for an infinite box of gas in equilibrium to fluctuate into a state containing a pumpkin pie?,” the answer is “By fluctuating into a state that consists of a pumpkin pie floating by itself in an otherwise homogeneous box of gas.” Adding anything else to the picture, either in space or in time – an oven, a baker, a previously existing pumpkin patch – only makes the scenario less likely, because the entropy would have to dip lower to make that happen.
It’s important to emphasize that Carroll’s point here isn’t to argue that we should in fact believe that we are Boltzmann brains, but rather to provide a sort of reductio ad absurdum of the limited set of assumptions and theories that lead us to that conclusion. Still, upon finishing Carroll’s book, which avoids the Boltzmann brain conclusion only by indulging in some extremely tentative cosmological speculations, it’s hard to simply dismiss the possibility that we are, in fact, Boltzmann brains.
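For readers who want to see the shape of the entropy argument in Carroll’s pumpkin-pie passage, here is a minimal sketch. It assumes only Boltzmann’s relation, on which the relative probability of a spontaneous fluctuation out of equilibrium falls off exponentially with the size of the entropy dip (measured in units of Boltzmann’s constant). The entropy figures below are made-up placeholders chosen purely to illustrate relative orders of magnitude – they are not real cosmological estimates.

```python
# Sketch of the "entropy dip" argument: by Boltzmann's relation,
# a fluctuation that lowers entropy by dS (in units of k_B) has
# relative probability ~ exp(-dS). The exponentials are far too
# small to compute directly, so we compare log-probabilities.

def log_fluctuation_probability(entropy_dip):
    """Natural log of the relative probability of a spontaneous
    fluctuation whose entropy dip is `entropy_dip` (units of k_B)."""
    return -entropy_dip

# Hypothetical, illustrative entropy dips (NOT real estimates):
brain_dip = 1e50       # a lone brain fluctuating out of equilibrium
universe_dip = 1e100   # a whole low-entropy early universe

# A lone brain requires a vastly smaller dip, so its log-probability
# is enormously higher: adding the "support system" of a whole
# universe only makes the fluctuation less likely.
log_ratio = (log_fluctuation_probability(brain_dip)
             - log_fluctuation_probability(universe_dip))
print(log_ratio > 0)
```

The point of the sketch is just the comparison in the last lines: whatever the true numbers, any scenario that demands a deeper entropy dip (an oven, a baker, a whole cosmos) is exponentially disfavored relative to the minimal fluctuation, which is exactly why the anthropic appeal backfires.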