Reading Mrs. Dalloway

Posted: July 24, 2017 in Uncategorized

Like any good modernist, Virginia Woolf expects her readers to work for their bread, that is for their enjoyment and understanding.

This is not an entirely new idea, not new at all really. Milton, who was well aware his poem was difficult, said he wrote Paradise Lost for the “fit audience . . . though few.” But in the century prior to Modernism, i.e., the nineteenth century, it was more common to consider the reader’s comfort, and, in fact, the term “dear reader” was common in both Romantic and Victorian literature. I often joke that the Modernists, by contrast, said “F-You reader,” but that is only half true, and, of course, they never really said it so straightforwardly.

Modernists, I’m sorry to say, were snobs (and so are some of their PoMo descendants). They weren’t interested in reaching the vast middle class. On the contrary, they considered the typical consumer of popular books to be “bourgeois”: complacent readers who didn’t want much demanded of them and, therefore, didn’t deserve the artist’s attention. Thus the Modernists were typically either indifferent to the needs of this audience or actively hostile to it. They wrote for the intellectual and artistic “avant-garde”: people who valued experimentation and innovation over tradition, form over content, who came to art to be challenged, not coddled, who were willing, in short, to work for their pleasure.

(Note: Mark Morrison, a far more accomplished scholar than I, with whom I attended grad school, would, I think, disagree with the previous paragraph. In his book The Public Face of Modernism, he argues that, on the contrary, modernists were optimistic about the potential of their avant-garde efforts to reach mass audiences. And I do recall once hearing that James Joyce hoped Ulysses would be a best seller (I might have heard it from Mark). If that’s the case, and they really hoped to reach the same audience as, say, H.G. Wells with their experimental and challenging works, then I guess they weren’t snobs. They were delusional.)

This is the time during which there arose a distinction between what we now call “literary” and “genre” fiction—though the Modernists wouldn’t have used those terms. Sci-fi, Fantasy, Horror, etc. are all examples of genre fiction. They tend to be exciting, plot-driven works that make use of suspense and action and mystery and magic to enthrall their readers. Literary fiction, by contrast, is character-driven, “realistic,” psychologically oriented, and sees language not as simply a means of communication but as an integral part of literary art, valuable in and of itself.

In this sense, Mrs. Dalloway could be the poster child for literary fiction, though it was by no means the first example. But what distinguishes it from earlier literary fiction, such as the stories of Anton Chekhov, is the unconventional nature of its storytelling methods and of the story itself.

Take, for example, the plot. There isn’t one. It’s simply a day in the life of several characters in some way connected to a party being thrown by the main figure, Mrs. Dalloway.[1] Each one of them has a story, of sorts, but their conflicts are often subtle (like Mr. Dalloway’s inability to tell his wife that he loves her). It’s a bit like the 1991 film, Slacker, in which we follow various characters, always switching perspective, as they go about their day in Austin, TX.

At the level of language, the sentences can be long and sometimes hard to follow, as Woolf is as interested in the beauty of her sentences as in their meaning, sometimes more so. Take for instance this example:

There was a breath of tenderness; her severity, her prudery, her woodenness were all warmed through now, and she had about her as she said good-bye to the thick gold-laced man who was doing his best, and good luck to him, to look important, an inexpressible dignity; an exquisite cordiality; as if she wished the whole world well, and must now, being on the very verge and rim of things, take her leave. (174)

This is part of a description of Clarissa Dalloway as seen by her old friend Peter Walsh. It’s 73 words long, full of clauses and parenthetical expressions, even parenthetical expressions within parenthetical expressions, none of them marked off with punctuation, as in this part that I have re-punctuated: [she] . . . had about her {as she said good-bye to the thick, gold-laced man (who was doing his best [and good luck to him] to look important)}, an inexpressible dignity. . . .

So to understand and appreciate Mrs. Dalloway, you have to slow down and pay attention, almost be ready to diagram sentences in your head, and be willing sometimes simply not to understand what VW is writing.

But, to try to make things a little simpler, maybe, I’m going to give you some heads-ups and tips for reading this novel.

First, the Plot: As I’ve already said, there isn’t one, but there are two or three candidates for what we might call a plot. “Plot 1”: Mrs. Dalloway is throwing a party in the evening. She loves throwing parties because they celebrate life, but some people, like her husband and her old friend Peter Walsh, think she’s superficial in this respect, so she’s both worried about the party succeeding and self-conscious about caring for it at all. “Plot 2”: Septimus Warren Smith and his Italian wife, Rezia, are spending the day in London, where they will eventually meet Dr. Bradshaw, a psychiatrist who, Rezia hopes, will help her husband after their previous physician, Dr. Holmes, failed to do so. Septimus has gone insane after losing his best friend in WWI, has messianic delusions, and has threatened to kill himself.

These are the two big “plots.” A third, lesser, plot is the story of Peter Walsh who has just arrived, unexpectedly, from India. Walsh left England after Clarissa (not yet Mrs. Dalloway) rejected him, and now has come back some thirty years later to help a woman get a divorce so he can marry her. But that’s just why he’s here. What the novel is really interested in is how people see him and how he sees other people. Perpetually unlucky in love, he is sensitive, intelligent, and somewhat pathetic. His return brings back a lot of memories for Clarissa and others about their youth.

This story is, in fact, always more concerned with what people are thinking during the day than what they are actually doing.

Other important characters and “plots” include, but are not limited to, Richard Dalloway’s aforementioned inability to express his love for Clarissa; Mrs. D’s hatred of Miss Kilman, a religious woman who has befriended her daughter; Clarissa’s feeling excluded because she was not invited to lunch by the famous Lady Bruton; Lady Bruton’s desire to exert political power by using powerful men to help her achieve her agendas (such as encouraging emigration); and the unexpected appearance at the party of Sally Seton, the “wild girl” of their youth, who grew up to marry a rich manufacturer and brags of having five boys (despite the fact that she and Clarissa once shared a passionate kiss).

Setting: London, four years after the end of WWI, i.e. the early 20s. The Dalloways are rich and somewhat aristocratic. Mrs. Dalloway has been sick before the start of the novel, possibly related to the influenza epidemic that killed 20 million people after the War; she and her friends, though almost all well-to-do, were radicals—at least in principle—in their youth, favoring socialism, though none of them ever went beyond talking and theory in that regard.

Shifting Perspective: This is one of the most difficult things about the novel, because we are used to novels staying in one perspective, that is, showing us the world through one set of eyes, or, if there are going to be multiple perspectives (as in, say, the novel A Game of Thrones), giving each person their own chapter. In Mrs. Dalloway, however, the perspective can shift from paragraph to paragraph with no warning at all. One minute you’re in the mind of Mr. Dalloway, the next in the mind of some vagrant woman who plays a very minor role in the story. Woolf gives us almost no cues to alert us to the switch other than the shift in the voice of the character, that is, the way they express themselves to themselves (since a lot of this story is about what people are thinking to themselves).

Ambiguous Pronouns: This one kind of annoys me, because it is a common problem in student writing. Often you can’t be sure at first what the antecedent of a given pronoun is. In other words, Woolf uses the word “she” in a sentence, but it might refer to more than one person, and so you have to go back and reread to make sure you’re getting it right.

Here’s an example: in the previous paragraph we’re told that Mrs. Dalloway has entered the room where her maid Lucy is getting stuff ready for the party. Then we get:

“Oh Lucy,” she said, “the silver does look nice!”

“And how,” she said, turning the crystal dolphin to stand straight, “did you enjoy the play last night?” “Oh, they had to go before the end!” she said. “They had to be back at ten!” she said. “So they don’t know what happened,” she said. “That does seem hard luck,” she said (for her servants stayed later if they asked her). “That does seem rather a shame,” she said . . . (38)

I don’t know about you, but that paragraph with all those “she saids” messed with my head. Is it Lucy or Clarissa being referred to when VW writes “she said”? I couldn’t be sure until I had read the sentences a couple of times and realized it was Lucy who had been to the theater. All this is made more difficult by the fact that VW uses quotation marks when she is not really quoting.

What do I mean? “They had to be back at ten!” she said. Think about it. That’s Lucy talking about her and her friends. She must have said to Mrs. Dalloway “We had to be back at ten!” but we’re really not hearing Lucy talk directly, we’re getting her speech indirectly as processed by Mrs. Dalloway—or the narrator.

This type of maneuver is called “free indirect” style and is usually confined to what people think, not what they say, as in Darn, they would all have to leave early, Lucy thought. It looks like direct thought, but it’s not; it’s indirect, as cued by the “they” instead of “we.” But why use that technique in dialogue?

If a student of mine in creative writing made these kinds of “mistakes,” I’d call them out on it, but we have to assume that VW knew what she was doing. So what’s the point? One is tempted to say, to make life more difficult for the reader. And that might not be entirely wrong. Remember, she wants us to work for our art. But maybe she’s also saying something about the way we process other people’s speech in our own minds, so that what you are saying to me is actually experienced by me not really as you speaking but as my mind registering what you are saying, a subtle, but maybe important, difference.

The other major difficulty you should be prepared for is that there are No Chapters. The best you get to divide this novel up are occasional extra blank lines between paragraphs to mark off a section, as on page 29. But there are precious few even of these. I counted seven, and one of them came after 90-plus pages of no breaks.

So, you’ll have to figure out for yourself when it’s time to take a break, so to speak. I’d suggest, though, never spending less than 20 minutes at a time reading this novel, or you will never keep track of what’s going on. This is not Facebook or Twitter. This is some serious reading that requires a serious attention span, and that’s not something we get trained for anymore.

But here’s your chance to practice the skill. . . .

[1] Framing an entire novel in the course of one day was an idea VW borrowed from her fellow Modernist, James Joyce, author of Ulysses. Be grateful I didn’t assign that novel, as it is four times as long and ten times as difficult as Mrs. Dalloway.

No Respect

Posted: July 10, 2017 in Uncategorized

Sigmund Freud is the Rodney Dangerfield of Western intellectuals. He doesn’t get any respect. Frequently, for example, you will hear students echoing their psychology professors or high school teachers, asserting that Freud is obsolete, that his theories have been discredited and are only of historical interest.[1] Psychology professors, theorists, and high school teachers should know better. Perhaps they even do know better, but there is something about Freud that bothers them, that disturbs them, so they defensively dismiss him as un-empirical and sex-obsessed.

But the thing is, modern psychology, at least the clinical sort, would be a vastly different enterprise if it weren’t for Freud, and, moreover, Freudian psychoanalytic language has become so much a part of our common parlance that we are barely aware of it. For example, the use of the word “defensively” in the last sentence of the previous paragraph is a Freudian term, used to describe the way we unconsciously reject or avoid truths that make us uncomfortable. This concept remains today a basic premise in virtually any form of clinical therapy and is a concept that even lay people are likely to use in conversation, as in “don’t get defensive.”

The notion of the unconscious, too, though not invented by Freud, was theorized and popularized by him, and both culture and practical psychology are indebted to him for the notion that our behaviors are often driven by thoughts and desires of which we are not consciously aware.

For example, though orthodox behaviorism rejected such an idea, today’s cognitive-behavioral therapy, which is often considered the most effective form of “talk therapy,” relies on the notion that we have stored negative thoughts that, unknown to us, drive unwanted behaviors. Cognitive-behaviorists may not use the term “the unconscious” in precisely the way Freud did, but they are indebted to him.

And speaking of “talk therapy,” who do you suppose is responsible for that? It was Freud who popularized the idea of the “talking cure”—though he himself adopted it from a physician named Josef Breuer. Prior to Freud, treatment for mental illness was comparatively barbaric and consisted mostly of isolating “mad” people rather than treating them. The whole notion, endemic to our culture, that we must talk about our feelings, that much of our personal pain derives from childhood experience, is a direct legacy of Freud.

And, yes, the importance of sex in defining human character is a Freudian concept as is the notion that we should view sex in a less moralistic way. Freud was not a hedonist, but he did want us to overcome our shame surrounding sexuality, and much of the modern liberal attitude (for better or for worse) towards sexuality is indebted to him.

Freud’s concepts of the Id, the Ego, and the Superego, though no longer recognized as genuine structures of the mind, nonetheless, remain powerful metaphors that still help make sense of human behavior. Likewise, Freud’s concept of the pleasure-principle remains a compelling way of understanding human motivation.

The death-drive, which was never widely accepted, is a notion that still cannot, at least in its metaphorical form, be utterly dismissed. There does appear to be something self-defeating both in individuals and in the species generally, and Freud’s concept helps us to recognize the power of this commonly shared impulse to self-destruct.

And, moreover, you should know that psychoanalysis as a method has not died, though it has evolved and, in some cases at least, become more evidence-based. There are psychoanalytic institutes in nearly every major city of the Western world, and there is nearly a century’s worth of psychoanalytic research that followed in the wake of Freud.[2]

And yet, all we remember him for is the Oedipus complex, the notion that boys want to sleep with their mothers and kill their fathers (which, BTW, is a vast oversimplification of the concept). All we remember are phallic symbols and a Viennese accent. Some of you may remember a scene from Bill and Ted’s Excellent Adventure in which the famous analyst is portrayed as a laughable geek clumsily intruding on Billy the Kid and Socrates as they flirt with a couple of mall girls.


Freud with Socrates and Billy the Kid

But let’s give Freud his due. He belongs right up there with Darwin and Marx as one of the great intellects of modernity.

Freud, by the way, who was no more modest than either Marx or Darwin, recognized his own importance. In Introduction to Psychoanalysis, he compares the psychoanalytic discovery that man is not in the driver’s seat of his own behavioral car, that he is driven by unconscious notions of which, by definition, he is not even aware, to the Heliocentric Theory and the Theory of Evolution:

Humanity has in the course of time had to endure from the hands of science two great outrages upon its naive self-love. The first was when it realized that our earth was not the center of the universe, but only a tiny speck in a world-system of a magnitude hardly conceivable; this is associated in our minds with the name of Copernicus, although Alexandrian doctrines taught something very similar. The second was when biological research robbed man of his peculiar privilege of having been specially created, and relegated him to a descent from the animal world, implying an ineradicable animal nature in him: this transvaluation has been accomplished in our own time upon the instigation of Charles Darwin, Wallace, and their predecessors, and not without the most violent opposition from their contemporaries. But man’s craving for grandiosity is now suffering the third and most bitter blow from present-day psychological research which is endeavoring to prove to the ego of each one of us that he is not even master in his own house, but that he must remain content with the veriest scraps of information about what is going on unconsciously in his own mind. We psycho-analysts were neither the first nor the only ones to propose to mankind that they should look inward; but it appears to be our lot to advocate it most insistently and to support it by empirical evidence which touches every man closely.

(Quoted from Goodreads, retrieved 11/5/13.)

Freud’s assertion here is that the notion of the unconscious—like the notion that we are not the center of the universe, and like the notion that we are not fundamentally any different from apes—was a great wound to human pride.

In fact, he says, it is a deeper insult to our dignity, and maybe that’s why we have never forgiven him. Maybe that’s why we’d rather dismiss him as outdated, obsolete, as a geek. Maybe that’s why he gets no respect.

[1] It’s funny: by contrast, biology students may point out that Darwin has been updated and that there was much he didn’t know, but they still credit him as a major figure in their field.

[2] There are also many off-shoots of psychoanalysis, such as Jungian psychoanalysis, still in practice today.



Posted: June 28, 2017 in Uncategorized

Dr. Morbius enhances his brain using the Plastic Educator in the 1956 film, Forbidden Planet.

Welcome to the Plastic Educator, the official course blog for English 115 Online Western Humanities II. My goal here is to raise the electromagnetic waves of your brain, not through any mechanical device, however, but through words.  I’ll use this blog to introduce and discuss texts we’re reading, to expand upon PowerPoint presentations and class discussions, and to share whatever comes to mind with regard to our explorations in the Western Humanities.

You say you want a revolution
Well, you know
We all want to change the world
You tell me that it’s evolution
Well, you know
We all want to change the world

But when you talk about destruction
Don’t you know that you can count me out

–Lennon and McCartney

Though no doubt they were considered radical in their day, the essays by Kant and Montaigne strike me as examples of what we might today call “classical liberalism,” which is a liberalism that is so moderate by comparison with what we call “liberal” today that many on the political left would call it conservative.

Certainly, coming as they did on the heels of the Middle Ages and during a time when both the Catholic and the newly emerged Protestant churches still wielded immense power, both politically and psychologically, the independence of thought modeled by Montaigne and proposed by Kant must have seemed bold, if not revolutionary.

Montaigne, who is the earlier of the two writers by more than a century, articulates a kind of cultural relativism that would test the tolerance of all but the most radical thinkers today. He writes with a sanguine equanimity of foreign practices ranging from orgiastic marriage ceremonies to the ritual eating of the dead to incest between parents and children, blithely observing that “Barbarians are no more a wonder to us than we are to them” (6).[1] He offers no moral judgment whatsoever on these practices but, on the contrary, seems to assert that morality itself is entirely determined by culture.

“The laws of conscience,” he writes, “which we pretend to be derived of nature, proceed from custom” (9).

From such a vantage point, one cannot critique a culture where, for example, the eunuchs who guard the “sacred women” have their noses and lips cut off so they will not be attractive to their charges (7). From this point of view, one cannot argue that either severe bodily mutilation or ritual prostitution (“sacred women”) is “wrong.” It is simply a matter of custom.

This was, of course, a time in the world’s history when Europeans were “discovering” new countries and new cultures and were coming face to face with societies where practices such as cannibalism were ingrained or where Western notions of sexual modesty and propriety were turned upside down. Montaigne could have chosen to see these cultures as “backwards” or “primitive”—or “barbaric” in the usual sense of the term—but instead, he seems simply to see them as different.

Kant was also radical in his own way. His assertion that “Laziness and cowardice” are the reasons people turn to books or pastors or physicians for guidance must have been challenging, if not outright offensive, to many people of his day. His injunction, “Have courage to use your own understanding!” is a kind of manifesto that might be seen as the starting point for a motto that has been popular in our own time: “Question Authority!”

Moreover, he articulates a bold political ideology: the notion that one generation cannot impose limitations on future generations, that attempts to do so would be justifiably considered “unauthorized and criminal” by the generations that followed. This means, in effect, that no law is universal across time and that no generation is obliged to follow the rules laid down by a previous one. Consider what this means for religious ideology. From this perspective, the Ten Commandments, for example, are actually an immoral and unlawful imposition of one generation upon another.


But it’s not just religion. Kant’s notion of the rights of subsequent generations even contravenes political ideologies of our own day. For example, the notion of “original intent” as it applies to the U.S. Constitution is utterly meaningless from a Kantian perspective, at least as I am understanding it here. The founding fathers, in such a view, have no right whatsoever to impose limitations on people living more than two centuries after them. The fact that Thomas Jefferson or Alexander Hamilton may have thought one way about government should have no bearing whatsoever on how we run government today, if we come to different conclusions from them about what is right and what is wrong, or even efficacious.[2]

And yet, for all their seemingly radical ideologies, both Montaigne and Kant back away from revolution and seem, on some level, to advocate the status quo.

Kant makes an important distinction between public and private thought. Public thought, for Kant, is the realm of the scholar, who must always be free to critique whatever it is he feels compelled to critique. But, he says, “private use of reason may, however, often be narrowly restricted without otherwise hindering the progress of enlightenment” (2).

This may be a little confusing to us because we tend to think of private thought as sacred and public displays as subject to regulation, but Kant isn’t so much writing about private thought as about what we might call “contracted behavior.” In other words, when you work for someone or are part of an organization, you are obliged to follow its “private” laws.

If, for example, one is a soldier, one must obey one’s officer. If one is a priest, one must uphold Church dogma. “The citizen cannot refuse to pay the taxes imposed on him,” Kant writes (2).

Kant, like Montaigne, seems to approve of the Socratic notion, articulated in the Socratic dialogue “Crito,” that, while one may critique one’s culture, one is obliged to follow its laws to the letter. Socrates was so convinced of this idea that he allowed himself to be executed rather than escape prison and, thus, break a law of his country.

That is certainly a far cry from the rationalistic revolutionary fervor that overturned the French government at the end of the eighteenth century, and it is even at odds with the notion of civil disobedience that Henry David Thoreau (who was jailed for not paying his taxes in protest of government policy) would articulate in the mid-nineteenth century and which would inspire Gandhi and Martin Luther King, Jr.

“It is a very great doubt, whether any so manifest benefit can accrue from the alteration of a law received, let it be what it will, as there is danger and inconvenience in altering it,” Montaigne writes (11). He quotes with approval one critic who suggests that anyone so arrogant as to institute a social innovation ought to keep a halter around his neck so that, in case his innovation proves pernicious, he may be hanged.

This is a form of what might be called “classical liberalism,” a kind of free thinking that, while it may encourage change and progress, works incrementally and through, not against, the system. Both Montaigne and Kant were aware of the terrible price that civil unrest, not to mention civil war, could exact (Montaigne had lived through some four decades of religious wars), and so they were naturally cautious about advocating revolutionary behaviors, as opposed to revolutionary ideas.

Today, I think, such reticence to commit to social action and social change would be seen as conservative rather than liberal, but it wasn’t always that way.

In the 1960s, for example, John Lennon wrote a song called “Revolution” in which he praised the desire to change the world but said that if it meant “destruction” you could “count him out.” In another recording of the song made that same year, however, he sang that you could “count him in,” thus moving from liberal to radical.

And, of course, maybe if Montaigne and Kant lived today, they would also see things differently and promote concrete, as well as philosophical, social change, would embrace a more revolutionary ethic. Almost assuredly they would be, in some fundamental sense, different men if you believe along with Montaigne that our values are determined not by nature but by our environment.

[1] Initially, Montaigne’s use of the term “barbarians” would seem to imply a bias against non-Western cultures, but, in practice, as we will see, his radical moral equivalency undermines, if not eviscerates, notions of Western moral superiority.

[2] To some extent, I suppose, the system for amending the Constitution was set up to address this human right of future generations to self-determination, but what if future generations deem the system for amending the system to be too cumbersome and want to jettison it? From Kant’s perspective, it seems, they ought to be able to.

I’ll begin by observing that, though I may not agree with Blake that Milton was of the devil’s party without knowing it, I do consider his theodicy, as a theodicy, to be a failure, like Descartes’ “proof” of God.

This is not to say that the poem is a failure, far from it (though Voltaire, as we’ll soon see, did think it a poor effort). Milton successfully, in my view, brings new light to the Genesis story, and his creative interpretation is so strong that it is difficult, after reading Paradise Lost, ever to see the story of Adam and Eve in the same way. He has made it his story, so that the characters of Satan, Adam, and Eve that we will encounter in art, literature, music, and film for the next 400 years invariably owe as much to Milton as to the Bible. Could the Rolling Stones have produced “Sympathy for the Devil” without Milton?

But as a defense of God, the poem is, in my view, a noble failure, if for no other reason than that the character of God, Himself, comes across as so stiff and unsympathetic. When He calls man an “Ingrate” who “had of me/ All he could have” (3.97-98), he sounds like a petulant overlord. When He asks “what proof could they have given sincere/ Of true allegiance, constant faith, or love” without being tested (3.103-4), He sounds like a jealous husband, and when He declares “Die he or justice must, unless for him / Some other, able and as willing, pay / The rigid satisfaction, death for death” (3.210-12), He sounds to me more like some sort of cosmic loan shark or gangster than a beneficent God.

And what about the Son? Doesn’t he mitigate God’s justice with his sacrifice? Well, maybe, but, at least as Milton depicts it, how much of a sacrifice is it really? “On me let death wreak all his rage,” says the Son. “Under his gloomy power I shall not long / Lie vanquished. . . . Thou wilt not leave me in the loathsome grave / His prey” (3.241-48). What is the Son saying here? Sure, I’ll die for Man, because I know You (God) won’t leave me dead. So then, really, he’s not dying at all. He may suffer, of course, but he knows with certainty—unlike humans—that his death is not real, that it won’t last more than a few days. That doesn’t seem like such a great sacrifice to me.

Am I being blasphemous? Maybe a little, but my point is that—to an objective observer—I don’t think Milton makes a strong case. Yes, the choir may appreciate his sermon, but they are not the ones who need God to be justified.

So it is, in the end, in my view, another example of the Enlightenment’s reach exceeding its grasp. A bold, if failed, attempt to use reason where reason cannot be of much help.

Indeed, the poem, itself, seems to understand, even if Milton does not, the limits of reason. All the way back in Book 2, the poem takes a stance on reason that is, at the very least, ambivalent. What do I mean by “ambivalent”? I use the word in the psychoanalytic sense of the term, meaning not “unsure” so much as holding in one’s heart two diametrically opposed ideas or feelings about the same object. To feel ambivalent about someone in this sense is not to feel like you sort of like or don’t like them but to feel you hate and love them at the same time.[1] Milton’s poem, I would argue, demonstrates powerful ambivalence toward reason.

On the one hand, Milton seeks to “justify the ways of God to men” (1.26). What does “justify” here imply if not some sort of rational defense? And Milton’s poem does attempt to articulate, often through the mouth of God Himself, a justification based on reason, most especially through the argument of Man’s free will alluded to earlier (see 3.96-125) where God explicitly states that if He had not left Man free to fall, Man’s obedience and fealty would mean nothing.

And yet, in Book 2, who else do we find “in thoughts more elevate. . . reason[ing] high / Of providence, foreknowledge, will, and fate, / Fixed fate, free will, foreknowledge absolute” and who “found no end, in wandering mazes lost” (2.558-61)? Who? Fallen angels, that’s who, demons whose philosophical speculations seem not very far off from those of God in Book 3, who excuses himself from responsibility for Man’s fall on the grounds that “If I foreknew, / Foreknowledge had no influence on their fault, / Which had not proved less certain unforeknown” (3.117-19). How is God’s philosophizing here any more elevated than that of the fallen angels Milton mocks in Book 2?

Adam praises reason to Eve in Book 9 but also warns of its susceptibility to fraud:

But God left free the will, for what obeys
Reason is free, and reason he made right
But bid her well beware, and still erect,
Lest by some fair-appearing good surprised
She dictate false and misinform the will
To do what God expressly hath forbid. (9.351-7)

Eve, when debating with the serpent, declares that outside the one commandment given by God to Adam and herself, “our reason is our law” (9.654). But it is the evidence of speech and reason in the serpent (the two capacities Descartes said were denied animals) that ensnares Eve, that and a speech by Satan worthy of the greatest of courtroom lawyers, a speech based as much on rationality as on lies, so that his words seem to her “impregned with reason” (9.737). She, herself, uses what seems like reason to justify her action. “How dies the serpent?” she asks. “He hath eaten and lives” (9.764); if the serpent didn’t die from eating the apple, then neither should she. It’s almost a mathematical equation.

Of course, maybe it’s not so much reason Eve uses as rationalization. We “rationalize” when we appear, or even pretend, to use reason to justify an act that is actually based on some other motivation, lust for example. But is that really the case here? What prompts Eve to break the commandment as Milton depicts it? Is it simple lust for godhead? Or has her reason betrayed her, as Adam warned?

I’d suggest the latter, that what Milton’s poem seems to be saying is that, ultimately, we can’t trust reason. And, in that sense, the failure of reason accounts for the failure of the theodicy. Reason has its limits. It cannot save us from sin; it cannot bring us to God.

In this sense, the poem seems to take arms against Descartes, who looked to reason and reason alone as the ultimate arbiter. Milton seems to be pointing out that, just as we cannot trust our senses, which can, of course, be fooled, we also cannot trust reason, since that is as imperfect as eyesight or hearing. And yet, even after the Fall, Milton refers to the operation in man of “sovereign reason” (9.1130), according it a kind of kingship in the soul of man.

It’s classic ambivalence, but, again, must be distinguished from Descartes in whom, I would argue, there is no ambivalence with regard to reason. Descartes puts all his hope and faith and belief in rationality; Milton seems to say reason is all well and good, but, in the end, it is as likely to betray you as anything else in our fallen world.

[1] I was under the impression that Freud originated this concept, but, apparently, it was a contemporary of his, Eugen Bleuler, who first articulated this idea of ambivalence.

There’s so much to say about Paradise Lost that I never get to half of what I’d like to address in class, and I doubt it will be any different online. I am tempted to journal on what is the most interesting part of the poem, the depictions of God and Satan, but I’m going to leave that for you all to discuss in the forum. I’ll just ask you to think about this famous observation about Paradise Lost by the poet William Blake. He said of Milton, with regard to the epic poem, that

he was a True Poet and [so] was of the devil’s party without knowing it.

(“The Marriage of Heaven and Hell”)

Now that raises some interesting issues, both about Paradise Lost and about poets, that you might think about.[1]

But what I thought I’d spend the most time on today was a different sort of religious issue, namely the pagan elements of Milton’s poem. Maybe some of you have wondered why a Christian poem contains so many references to Greek, Roman, Egyptian, and other pagan mythologies. Of course, in some sense, Milton is doing nothing new here. In fact, he’s following the lead of Dante, who likewise incorporates pagan myth into an ostensibly Christian epic poem—and for many of the same reasons. But Milton does something somewhat different, in emphasis if not altogether in method.

Now, why would either Milton or Dante incorporate so much pagan mythology? One answer is simple: because they are writing in the epic tradition, and the greatest practitioners of that tradition, whom they were emulating—Homer and Virgil—were pagans, and infused their poems with tales of gods, goddesses, and monsters. Poets like Dante and Milton, who are trying not only to imitate but to outdo their predecessors, need to demonstrate that they have as great a command of the mythological pantheon as any pagan poet, indeed, maybe a better one. It’s a way of demonstrating that they have the “chops” for the job. It may also be that, as readers of pagan literature, they shared a certain love of these old pagan tales and wanted to, in some way, include them.

But it may also have been necessary, theologically speaking. Both Dante and Milton were aware, as were all the great Christian thinkers, that other religions preceded their own, that once the world was dominated by religious notions and deities quite different from the Christian and even Jewish ones. If the Judeo-Christian religion is the true one, how does one account not only for the existence, but for the predominance of these other religions?

Dante paved the way in The Divine Comedy by suggesting that these pagan “gods” were in fact demons, though he is inconsistent in doing so, sometimes referring to God as “Jove.” And, at least as far as I remember, he doesn’t explain how these demons came to be thought of as gods. He simply absorbs some of the pagan gods and monsters into his infernal system: Minos, the judge of the underworld in Homer and Virgil, is also the judge in The Inferno, and Cerberus, the three-headed dog who guards Hades in the pagan poets, guards the infernal circle of the gluttons in Dante’s poem, a slightly more specialized role. But Dante doesn’t explain how Cerberus came to be.

Milton takes it a step further. The pagan gods, as Milton depicts them, were, in fact, fallen angels, demons, pretending to be gods, furthering the seduction and corruption of mankind by posing as deities. We know them as Moloch or Belial or Jove, Isis, or Osiris, but these are names “the sons of Eve” gave them when

By falsities and lies the greatest part
Of mankind they corrupted to forsake
God their creator. (I.364-9)

They are devils adored as deities (I.373). It’s a clever move that explains how a race of gods could precede the Judeo-Christian one. But it’s also a common one that cultures often use to account for—or appropriate—religious systems that precede their own. They take the preceding system and demonize it.

There is even evidence within Ancient Greek mythology itself that the gods we know, Zeus, Hera, Athena, were preceded by an earlier system of deities that was later discredited: the Titans. In the Greek system the Titans are evil, but some people speculate these villains were once themselves worshipped as gods by a culture that was overtaken by Greek culture. That cultural change was then rewritten as a war against the evil Titans, which teaches us something not only about the evolution of religions but also about how history works.

Milton is especially clever in how he constructs this narrative retcon.[2] Take, for example, the minor character Mulciber, the demon architect of Book 1. Milton explains how he was “fabled” to have been “thrown by angry Jove / Sheer o’er the crystal battlements” of Mount Olympus (I.741-2), but the Greeks who told his story were “Erring; for he with this rebellious rout / Fell long before” (I.747-8). The story Milton is referring to is more commonly known as that of Hephaestus (or Vulcan), the misshapen artisan-god whom Zeus hurled from Mount Olympus “in drunken rage” (Kastan, footnote, 37). Milton would have us believe that this story was a distortion of the real one, the expulsion of the rebellious angels from Heaven by God.

We see an even bolder example of such a retcon with the birth of Sin. In Book 2, she explains her birth to Satan, telling him that, when once he began to plot in his mind against God, she

shining heavenly fair, a goddess armed
Out of thy head . . . sprung. (2.757-8)

What is this but a retconning of the birth of Athena from the forehead of Zeus? And what does such retconning achieve?

One, it discredits the old religion, showing that what was meant to be the heroic birth of a goddess was in fact the degraded and degrading conception of Sin in the mind of Satan. Two, it allows Milton to retain a powerful image from the pagan pantheon and turn it to his own philosophical and aesthetic devices. He is at once discrediting the Greeks and Romans and paying homage to them.

This may also be an example of what the literary critic Harold Bloom calls “the anxiety of influence.” Bloom argued that writers are plagued by the anxiety that everything worth saying has already been said by greater and earlier writers. There is nothing left for them to say. So what must they do? They must assert their creative power by intentionally misconstruing their predecessors’ work. They, essentially, write new work by rewriting the old, claiming their masters “got it wrong,” so they can get it right. This is particularly interesting for Milton because he, himself, was to cast a powerful shadow over generations of writers to come after him. In fact, Bloom argued that Blake’s reading of Milton—he was of the devil’s party without knowing it—was just such a “misreading” motivated by the anxiety of influence. Bloom, himself, was influenced in this theory by his great predecessor, Freud, whom we’ll study later in the semester. So you see, it all hangs together. . . .

[1] Blake, by the way, was a Romantic poet and he will crop up over and over again as a counter-weight to the Enlightenment—though Romanticism is also a product of the Enlightenment. We may get a chance to read him later in the semester.

[2] “Retcon,” short for “retroactive continuity,” is a term used in the comic book world and elsewhere to describe a method in which new writers rewrite older versions of a hero’s biography to fit present-day realities. For example, in the Marvel Universe, Iron Man, who originally was a product of the Vietnam War, is “retconned” to have come out of the Gulf War, so that he is more a product of the current generation of readers. Sometimes these retcons are used to explain an otherwise unexplainable contradiction in a character’s ongoing story; perhaps there are two different versions of the hero’s origins. Eventually, some writer figures out a way to make them work together, and voila, he’s been retconned. DC Comics retconned Superman by explaining that the Superman of the 1930s was a product of an alternate universe from the Superman of the 1980s (see the Wikipedia article). Comics usually retcon for commercial reasons, but there can be ideological ones as well.

Rational Optimism and Lame Proofs

Posted: September 12, 2011 in Uncategorized

Part I: Descartes and rational optimism

I don’t know if “rational optimism” is a common phrase in philosophy. A quick Google shows me the term exists, but not in the sense that I mean it. Writer Frank Robinson, author of a book by that name, suggests that “humans are fundamentally cooperative, the world is becoming increasingly peaceful, and the causes for it are growing ever stronger.”[1] Now that just seems silly to me and not at all what I mean—though to be fair I haven’t read his book.

No, what I mean is an optimism about rationality itself, a belief that reason—aided perhaps by science—can uncover all mysteries if not solve all problems. It is the philosophical or intellectual version of the Disney motto “All dreams can come true if we have the courage to pursue them.” Maybe I should call it optimistic rationality?[2]

Anyway, this sort of optimism is very much in evidence in Descartes, and it’s one of the things I took especial note of in re-reading Discourse. In Part 2, for example, Descartes maintains that if he sticks closely to his method for investigating truth, “there cannot be anything so remote that it cannot eventually be reached nor anything so hidden that it cannot be uncovered” (16). Likewise, in Part 6, his conclusion, he writes about the possibilities of advancing medicine through research, perhaps even of curing old age, and that he would “infallibly find such knowledge if it were not impeded by the brevity of life or by a lack of experiences” (44-5) (in other words, if he could just live long enough, he could stop aging).

This is a kind of faith in reason and science essential to Descartes’s thought experiment, and one that is still very much with us today, though it has come under fire in the postmodern world. You might think of this idea as the Detective Formula. If you read detective fiction or watch detective films—at least traditional ones, like, say, Sherlock Holmes or “The Purloined Letter”—you see in the writer and his or her protagonist an unflinching faith that all mysteries can be solved if you use the proper method (“simple deduction, elementary my dear Watson”). As a culture, we tend to believe this about nearly all physical, if not metaphysical, phenomena. Whether it’s unlocking the mysteries of the atom or curing cancer, we, Western Culture that is, have an abiding faith that given enough time and resources any such investigations will be rewarded with answers. Or at least we used to.

We have been able to maintain such faith because we have believed, even in the absence of religious belief, that answers exist, that two plus two always equals four even if an individual doesn’t yet know how to add. Descartes, thus, points out that

since there is only one truth about each thing [and] whoever discovers it knows as much as it is possible to know about it, and that, for example, a child who has been taught arithmetic and has done an addition in accordance with its rules, can be sure of having found everything that the human mind could find about the sum in question. (17-8)

Once a child has added two plus two she knows as much about it as Einstein. Right?

Ah, it must have been nice to have lived in the seventeenth century. You might not have had indoor plumbing but you had the possibility of certainty. It was possible then to imagine things might be “clear and distinct.” It’s not always so simple for us today.

There are, for example, alternative math systems. Now I’m no mathematician, and I can’t tell you if some of those systems allow two plus two to equal five, but I know that in some mathematical systems parallel lines meet. So we can no longer say there is only one thing to know about parallel lines (that they never intersect). Modern observations about the nature of light are another example. We now know that light is both a particle and a wave. There is not simply one thing to know about light.
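For readers who want the “parallel lines meet” claim made concrete, here is a brief sketch (my gloss, not part of the original post) of how Euclid’s parallel postulate fares in three standard geometries:

```latex
Given a line $\ell$ and a point $P \notin \ell$:
\begin{itemize}
  \item \textbf{Euclidean plane:} exactly one line through $P$ never
        meets $\ell$ (the classical parallel postulate).
  \item \textbf{Spherical geometry:} ``lines'' are great circles, and
        any two great circles intersect; there are no parallels at all,
        so in this sense ``parallel lines meet.''
  \item \textbf{Hyperbolic geometry:} infinitely many lines through $P$
        never meet $\ell$.
\end{itemize}
```

All three systems are internally consistent, which is the point at issue: there is no longer just one true thing to know about parallel lines, only truths relative to a chosen set of axioms.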

Descartes repeatedly uses the terms “clear and distinct” to refer to truths he discovers, but nothing is clear and distinct in the postmodern world. Ever hear of the Heisenberg uncertainty principle? It maintains that any object we observe is affected by our observation, so that it’s impossible to conduct a neutral experiment. Now, Heisenberg, himself, was talking about measuring the speed of electrons and was pointing out that to measure an electron’s speed you must interfere with its progress, and hence, you can never be fully certain of what its real speed was without interference. But his principle, rightly or wrongly, has been extended beyond physics to nearly every realm of thought in the postmodern world, so that I find as I reread Discourse I am nostalgic for the pure Enlightenment faith in certainty—that truth, one truth, exists even if we cannot discover it, but that we probably can discover it if we employ the right methods and give ourselves enough time.

Many of us still believe that. I certainly used to. I’m not so sure anymore. I’m sometimes afraid if I were to engage in Descartes’ thought experiment, I would become like one of those folks he says would lose their path and “remain lost all their lives” (14). Indeed, I sometimes think that’s exactly what’s happened to our society. It has embraced Descartes’ thought experiment, cast everything—God, religion, politics, even math and physics—into radical doubt; but unlike Descartes it has not imposed on itself strict rules for advancing truth or conducting itself during the period of reconstruction following the demolition of old truths and values. Kind of feels sometimes like we’ve been left out in the cold.

Part II: Lame Proofs

I always meet Part IV of Discourse with admiration and disappointment. Admiration for the simplicity and self-evident truth of “the cogito.” Descartes’ success in discovering at least one thing that cannot be disputed in this world is not to be taken lightly. Though it has its limitations—it proves to me that I exist, but not that you do—I don’t think anyone yet has been able to disprove Descartes here. He has demonstrated that at least something is clear and distinct, something I think even postmodernists can’t dispute. And that’s a relief.

But he follows this piece of brilliant simplicity with one of the most obscure/obtuse passages in the book:

I knew from this that I was a substance the whole essence or nature of which was to think and which, in order to exist, has no need of any place and does not depend on anything material. Thus the self—that is, the soul by which I am what I am—is completely distinct from the body and is even easier to know than it, and even if the body did not exist the soul would still be everything that it is. (25)

Huh? How does Descartes go from very reasonably proving (to himself) that he exists to proving that he is “a substance the whole essence or nature of which [is] to think” and which “has no need of any place and does not depend on anything material?” I understand Descartes must have known relatively little about the operations of the brain, but still, by what leap of the imagination did he arrive at the notion that the mind is independent of the body? Where is the mathematics here? What is the logical train of thought? He doesn’t spell it out, even a little.

Of course, I can’t help but read all this through the lens of modern science, which tells me, for example, that personality traits can be radically altered by modifications to the brain (see the famous case of Phineas Gage[3]). And even so, I don’t necessarily agree with a friend of mine, who has studied brain chemistry and consciousness, who tells me that every thought we have, every feeling, every emotion is reducible to an electro-chemical reaction. But if I were to follow strict logic, strict mathematics, it seems to me much more sensible to suggest that the body can exist without the mind than vice versa (as in ants, say).

Don’t get me wrong, part of me agrees with Descartes that there is something distinct from the body (call it a soul if you like, like he does). But I can’t prove it, and it seems to me he can’t either, or at least doesn’t explicitly do so. It’s a startling misstep, but not quite as serious a flaw as his “proof” of God.

Some people speculate he inserted these (lame) “proofs” of the soul and God’s existence to satisfy the Catholic Church, which had punished Galileo for undermining religion. But I don’t get that sense here. What I sense is a kind of desperate urgency to prove what would be too frightening to live without, a need so strong to hold onto these two pillars of existence—God and the soul—that Descartes (unconsciously) overrode his own investigative principles.[4] That’s understandable, but, again, disappointing coming so shortly, immediately really, after the cogito.

Then again, maybe there’s just something here I’m not getting?[5]

[1] Robinson, Frank S. “The Case for Rational Optimism.” Web. 17 January 2011.

[2] A more recent version of this optimistic view of reason, especially with regard to technology, may be seen in the memoir The Boy Who Harnessed the Wind. There William Kamkwamba, who hopes to improve life in his native Malawi by building windmills to provide electrical power, says, “In science we invent and create. . . We make new things that can benefit our situation” (249) and later, echoing Disney, opines about “all the things made possible when your dreams are powered by your heart” (280).

[3] Actually, Descartes does realize this on some level. In Part 6, he writes: “even the mind depends so much on the temperament and the disposition of one’s bodily organs that, if it is possible to find a way to make people generally more wise and more skilful than they have been in the past, I believe we should look for it in medicine” (44).

[4] Kind of like Dr. Zaius overriding Cornelius’s archeological discoveries, only this time Dr. Zaius is “internalized.” Dr. Z is the part of Descartes’ unconscious that simply can’t tolerate the inconsistencies between science and religion. Freud, as we will see later in the course, would have something to say about all this.

[5] Descartes does say that he explains in another (until then) unpublished work how “the rational soul . . . could not in any way be drawn from the potentiality of matter . . . but that it has to be specially created” (42). The book in question, as our notes point out, is The World. Maybe someone out there wants to read it and report back as to whether Descartes satisfactorily explains there the mind/body split?