All posts by Aaron.Ottinger

The Day After Payday: Graduate Students, Gleaning, and Apocalypse

[Image: Jean-François Millet, The Gleaners]

After a long, cold summer, payday finally arrived! It was yesterday, the tenth of October. The frost has melted and the money has blossomed. For some, it is the first payday since school ended in June. Sure, it was a glorious summer, sitting every day in a library, reading and writing. After all, as long as you "do what you love," the conditions in which you live do not matter. So I've been told.

One point of reference for this frugal summer has been Agnès Varda's 2000 documentary, The Gleaners and I (Les glaneurs et la glaneuse). Varda's film covers the history and contemporary practice of gleaning in France. Gleaning is the agricultural practice of gathering scraps left over from the harvest, such as grain, potatoes, or whatever is available. It is a practice largely reserved for indigent peoples. While I initially picked up this film for the short interview with the psychoanalyst Jean Laplanche, since viewing the whole documentary I have thought much more about gathering scraps.

Historically, gleaning has been considered a common practice, recorded as far back as the Bible, at least. It was often conducted by women and in groups.  But in 1788, gleaning was criminalized in England (collecting dead wood on the property of someone else was made illegal in the same year, which informs the plot of William Wordsworth’s “Goody Blake and Harry Gill”). In many court cases involving gleaning, it was actually the land-owning farmers who were the accused, namely for assaulting gleaners.[i] But regardless of who brought charges against whom, according to the “law,” there was one less way to survive.

Today—or at least thirteen years ago—Varda observes that gleaning has become somewhat of a solitary practice. Not only do gleaners search farmland for leftovers, but also the markets, garbage bins, and alleyways. Varda discovers that today's gleaners are largely urban dwellers, and that a good gleaner knows which grocers, bakers, and even which fishmongers throw away food before it has spoiled. It is striking how many of the people Varda meets glean out of revulsion at capitalist culture's insistence that consumers continuously purchase commodities. For some, their commitment to glean is very much a moral issue.

In Seattle, where I live, it is illegal to forage in public parks, another form of gleaning. Over the summer I heard a news broadcast about how foraging is illegal here, but that the Seattle Parks department is becoming more tolerant and is even teaching people how to forage for things like nettles without destroying ecosystems.

I am simultaneously pleased and troubled by this decision. I am pleased because it seems wise to use these spaces to also grow and harvest food so that urban dwellers are not limited only to imported products, which cost more money and require more fuel for distribution than locally grown products.

But if gleaning is given a bit of a (neoliberal, hip, west coast) shine, in the same move we become complacent about the very real things that cause some people to glean out of necessity: for instance, corporations and governments that rely on interns, or universities that rely on adjuncts and graduate student teaching assistants.

In a "roundabout" way, such complacency is already recognized. Shifting attitudes with respect to foraging in public parks (in other major cities, as well) follow from fears about an "uncertain future," namely: "Climate change, extreme weather events, rising fuel prices, terrorist activity."[ii] Cities are not softening up on gleaning because the poor have suddenly found a place in the proverbial hearts of middle-class Americans. Rather, gleaning needs to be appropriated by the so-called "creative class" in order to survive the next 9/11, tsunami, or cosmic collision.

Here I am reminded of Slavoj Žižek’s now well-circulated quote concerning apocalypse: “we are obsessed with cosmic catastrophes: the whole life on earth disintegrating, because of some virus, because of an asteroid hitting the earth, and so on…it’s much easier to imagine the end of all life on earth than a much more modest radical change in capitalism.”[iii]

In other words, rather than make it so that a real majority in the world has access to basic health care, clean water, safe food, warm shelter, and a quality education—and thus possibly diminishing the desires of some to destroy the planet or large parts of it—we are learning how to identify and cook nettles, and openly admitting that we are doing so in preparation for the next big catastrophe.

Perhaps there is no solution. Perhaps the damage is too great. But too great for what? Yes, climate change is real, its current trajectory is being driven primarily by human actions, and its effects will be profound, and most likely, profoundly bad.

And yet, this past weekend at the Society for Literature, Science, and the Arts’ annual conference (this year’s theme was the “Postnatural”), I heard about another approach. One of the keynote speakers, Subhankar Banerjee, an artist and environmental activist, spoke about “long environmentalism.”

The concept itself is still being worked out, but I would contrast it to the assumption that technological innovation is going to suddenly fix that whole climate change problem. Likewise, governments, corporations, and universities are not going to suddenly care about the needs and welfare of the displaced, the underpaid, and the overworked. I would say that these two seemingly disparate issues both require a similar "long" solution. For any problem humans and other species face today, the solutions require drastic changes to our ways of living: no quick turnaround is to be had. It's great that cities are legalizing foraging and colleges are starting recycling programs. But these are paper towels on a massive oil spill.

I do not promise organic unity in the conclusion of this post. That would be perverse. Instead I conclude with an anecdote:

Walking through campus after the English department’s annual reception during the first week of classes (that is three weeks ago), a number of my fellow graduate students and I came across a box of cookies left on top of a trashcan. One of us grabbed the box, to the horror of some and the ecstatic glee of others. As hands reached into the assortment of cheap, sugary treats, I announced to my cohort, “We’re gleaners!” At least one of them looked at me and understood my meaning. We smiled our intoxicating smiles and forgot for a second that we were really gleaning.

 

 


[i] King, Peter. Crime and Law in England, 1750-1840: Remaking Justice from the Margins. Cambridge: Cambridge UP, 2006. 281-338. Print.

[ii] McNichols, Joshua. “Urban Food Foraging Goes Mainstream In Seattle.” KUOW.ORG. KUOW News and Information, 1 Aug. 2013. Web. 10 Sept. 2013.

[iii] ŽIŽEK! Dir. Astra Taylor. Zeitgeist, 2005.

Digital Humanities: My Introduction 1.3

This post is part of a three-part series charting my introduction to the digital humanities. My entrance largely follows from attending a seminar that meets twice a quarter on Saturday mornings entitled, “Demystifying the Digital Humanities” (#dmdh). Paige Morgan and Sarah Kremen-Hicks organize the seminar and it is sponsored through the University of Washington’s Simpson Center for the Humanities.

As the spring term ends for the 2012-2013 school year, I want to conclude this series of posts with some reflections on introducing the digital humanities into my pedagogical practice.

Digital Humanities or Multimodal Composition Class?

The course I designed in March differs greatly from the class I ended with this week. My assignment was English 111. As the course catalog describes it, 111 teaches the "study and practice of good writing; topics derived from reading and discussing stories, poems, essays, and plays." The catalog says nothing about the digital humanities, but my assumption was that, so long as we accomplished the departmental outcomes, a digital humanities (DH) component would only provide us with new tools for thinking through literature and writing.

It was an innocent assumption.

The main issue was scope. For my theme I chose "precarity," which Judith Butler describes as that "politically induced condition" wherein select groups of people are especially vulnerable to "injury, violence, and death."[i] Because there are so many "precarious characters" in Wordsworth and Coleridge's Lyrical Ballads, I used this collection as my primary literary text. In addition to investigating precarity, students could use the Ballads to explore multiple genres and to consider how the re-arrangement of poems alters the reading experience. Third, I wanted to use a digital humanities approach, by which I mean that I would encourage digital humanities values with regard to writing (e.g. collaboration, affirming failure), using digital tools, and learning transferable skills.

By midterm it was clear that the students were confused about the concept, unhappy with the text, and struggling to understand the purpose of the values, tools, and skills. During the second half of the course, I lost hope for my big collaboration project and dropped the emphasis on the Ballads, focusing instead on rhetorical analyses of blogs and news sites addressing issues of precarious peoples and working conditions, which was especially timely after the recent tragedy in Bangladesh.

Without the literature component, students began to feel more comfortable with the tools and the concept, which led to greater motivation and better papers. On the downside, these students had signed up for a literature class, which I basically eliminated. The triad of concept, literature, and method should work. But I found that if all three areas are equally difficult, you risk blocking success in all of them.

The "transferable skills" were perhaps the most successful part of the course. My classes taught transferable skills before my digital humanities emphasis, but as Brian Croxall has emphasized, we can teach more of them. As far as the digital humanist is concerned, more "skills" is tantamount to learning how to use more tools, which I translated (perhaps erroneously) as more media. So this term, all of my students built websites and blogs.

From building blogs and websites, students learned firsthand how medium shapes what we can write and how "writing" might necessarily include design and management. Rather than give a tutorial on how to build these sites, I showed students how they could use Google to search for help on their own. The transferable skills were thus twofold: building an online platform to host your work (which alters what you can present and how), and learning where and how to find answers to your building questions (and rather than "good" sources, I stressed more of them). While initially these sites were less than satisfactory, by the end of the class students began to realize the potential and implications of the medium, which prompted several of them to re-build their sites during revision phases, taking more time with the organization of pages, images, background colors, and hyperlinks, and then explaining why these changes were important.

The websites and blogs showed signs of success with regards to “building skills,” but these platforms might belong less to the digital humanities and more to “multimodal scholarship.” As the organizers of the Demystifying the Digital Humanities seminar stressed during the April 14th session, digital humanists use their tools to “produce” scholarship, while multimodal scholarship means using tools to “display and disseminate” traditional research. These differences are a bit blurry for me still, but the blurriness might be accounted for by the fact that some of us are “trickster figures” occupying multiple regions on the plane of digital scholarship, as Alan Liu explains in the most recent PMLA (410).[ii]

But Liu adds greater clarity to these distinctions when he explains how a digital humanities project uses “algorithmic methods to play with texts experimentally, generatively, or ‘deformatively’ to discover alternative ways of meaning” (414). The algorithms may be out of reach for English 111 (and me!), but by using Google Sites, Blogger, and Ngram many students were cracking the digital ice and playing. In other words, these basic multimodal tools might be a useful first step towards transferring to a more involved and complicated DH project.

For such a class to be really successful it will require much more planning. For the fall, I am refining what I have rather than adding more tools to the mix. Until I do some serious text mining of my own, it might be safer to design a “writing with digital media” course. But now that Pandora’s (tool) box is open, I don’t see it closing in the future.

 

After attending the Demystifying the Digital Humanities seminars and writing these posts, I wonder if my introduction has actually led me to media studies instead. My suspicion is that I will touch both areas, because it is ultimately the task or problem that will determine the approach. However, and I believe Liu also demonstrates this point, the digital humanities as a method might prove to be a problem or task generator. With these tools we will become like Darwin returning from the Galapagos with all those varieties of finches sitting on his desk, asking what all these birds have to do with one another. Perhaps the moral should be: the more materials, the bigger the questions.


[i] Butler, Judith. “Performativity, Precarity, and Sexual Politics.” AIBR. Revista de Antropología Iberoamericana 4.3 (2009): 1-13. Print.

[ii] Liu, Alan. “The Meaning of the Digital Humanities.” PMLA 128.2 (2013): 409-423. Print.

Digital Humanities: My Introduction 1.2

This post is part two of a three-part series charting my introduction to the digital humanities. My entrance largely follows from attending a seminar that meets twice a quarter on Saturday mornings entitled, “Demystifying the Digital Humanities” (#dmdh). Paige Morgan and Sarah Kremen-Hicks organize the seminar and it is sponsored through the University of Washington’s Simpson Center for the Humanities.

The first post in this series attempted to define the digital humanities by considering some of its values. Today I want to make two points regarding what a digital humanist is and does. First, a digital humanist is not the same thing as a scholar. While the same person may occupy both roles, these roles nevertheless perform distinct tasks. Second, the digital humanist is distinguished by the tool set, and those tools are primarily for the purposes of visualization. So let’s explore these two points in greater detail, and I’ll conclude by looking at one of the many tools you can use in your own introduction to the digital humanities.


Tools, Tools, Tools!

On the last day of our Demystifying the Digital Humanities seminar (May 4, 2013), the organizers drew our attention to something surprising with regard to digital humanities scholarship: it may not be scholarship at all. Many of those coming to the digital humanities already know how to conduct research, build and organize an archive, and employ "critical thinking" in order to arrive at some conclusions. The final step is often a presentation of these conclusions in the form of a written essay or a book.

Rather than adding data and conclusions in the scholar’s process, the digital humanist multiplies the perspectives and the media. The digital humanist uses tools in order to view and present collected data in the form of a diagram, graph, word cloud, map, tree, or timeline (or whatever you invent). Because a visual image allows us to see the “same” object or data set in a different way, the tool increases the scholar’s range of conclusions. So the scholar must demonstrate significance, but it is the tool that functions as a “bridge” for the sake of achieving that end.

Given the literary scholar's tendency toward close reading, one might object that an abstract diagram of the work(s) will surely lead to a less insightful reading. But that objection operates as if the tool provides a conclusion, which is the wrong assumption. The tool does not provide conclusions. The tool only allows us to see more at once.

My close reading of a romantic poem might be the most accurate, interesting, or revealing, but if I can see the same information in relation to more texts, across spatial and temporal fields, my tools will make conclusions regarding historical time periods outside my area of specialization. Wrong again! The map or graph only demonstrates correlations, intersections, and divergences. It is then up to the scholar to investigate those areas.

As the historian Mills Kelly says in his contribution to Debates in the Digital Humanities, “instead of an answer, a graph…is a doorway that leads to a room filled with questions, each of which must be answered by the historian [or literary scholar] before he or she knows something worth knowing.”[i] In this sense, the diagram functions like a treasure map that makes the X’s more explicit. And while that map will tell a scholar where to dig, it cannot tell us why the artifacts matter, what they mean, or how they are useful.

If the burden of the conclusion falls on the scholar, the digital humanist has aesthetic and logistic responsibilities. The digital humanist might ask questions like, "What kind of visualization most effectively represents my data?" It will also be important to consider financial issues like cost and maintenance. Often, visualization software is free. But when depending on others for your tools, there are risks, like the issue of ongoing support. If I use an online tool made by a company that suddenly "disappears," I may have to go shopping. And let's not forget the attachment people feel for an accustomed piece of equipment. Whatever tool one chooses, the old rule applies: back up your files. If you lose a tool, you have only lost the medium through which you represent your information. Lose your information, and—well…

But everything we do comes with risks. To help you weigh whether or not you want to use these tools, I suggest having some fun with them first. An easy and fast way to see the benefits yourself is through IBM's Many Eyes, a website devoted to free visualization software. The disadvantage is that Many Eyes' visualizations must remain online; on the other hand, the site is so easy to use that you can test the water within minutes.

Below is a screenshot of a word tree I made from the Lyrical Ballads. In order to generate the tree, I first browse the "data sets" to find the Ballads, which someone had already uploaded. Then I click the "visualize" button and select the first diagram option, "word tree." From here I can enter any word from the Ballads that I want to explore. The 1800 edition begins with an "old grey stone," so I enter "old," which returns 47 hits. A diagram appears illustrating all the instances of "old" and how each connects to the words around it. Now imagine doing this with hundreds or thousands of texts. Many Eyes won't tell you what all those connections mean; rather, it allows you to see them in the first place.
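For readers curious about what a word tree is actually doing under the hood, here is a minimal sketch of the same operation in plain Python. It is emphatically not the Many Eyes tool, and the filename lyrical_ballads.txt is a placeholder I have invented for a plain-text copy of the poems.

```python
# A rough, homemade version of the "word tree" idea: find every occurrence
# of a root word and group the short phrases that follow it. This is NOT
# the Many Eyes tool, just a sketch of the underlying operation.
# Assumes a plain-text copy of the poems saved as "lyrical_ballads.txt".
from collections import Counter

ROOT = "old"      # the word to branch from
BRANCH = 3        # how many following words to keep

with open("lyrical_ballads.txt", encoding="utf-8") as f:
    words = f.read().lower().split()

# Collect the phrase that follows each occurrence of the root word.
branches = Counter(
    " ".join(words[i + 1 : i + 1 + BRANCH])
    for i, w in enumerate(words)
    if w.strip('.,;:!?"()') == ROOT
)

print(f'"{ROOT}" occurs {sum(branches.values())} times')
for phrase, count in branches.most_common(10):
    print(f"  {ROOT} {phrase}  ({count})")
```

The tokenization here is crude, so the counts will not match Many Eyes exactly; the point is only that a word tree groups the phrases that branch off a chosen root word, leaving the interpretation to the reader.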

[Screenshot: word tree of "old" in the Lyrical Ballads, generated in Many Eyes, 10 May 2013]


Rather than “new,” the word that best describes the advantage of digital tools is “more.” A Concordance to the Poems of William Wordsworth does something very similar to my word tree above because the book also supplies all the instances of “old” in Wordsworth’s poetry. But with digital tools, I could add the concordances to Virgil, Spenser, and Milton, as well as those writing manuals, law documents, and political pamphlets. Then all of these texts can be incorporated into the same visualization. In a way, these possibilities make me less nervous about the future of scholarship. Now I can see more ways of lengthening the narratives I was already generating, and find more to explore.
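To make "more" concrete, here is another hedged sketch that runs the same count over several texts at once—the kind of aggregate a visualization tool would then turn into a single chart. The filenames are invented placeholders, not actual data sets.

```python
# Sketch only: count one word across several texts so the results could
# feed one shared visualization. Filenames are invented placeholders.
from pathlib import Path

WORD = "old"
texts = ["lyrical_ballads.txt", "faerie_queene.txt", "paradise_lost.txt"]

counts = {}
for name in texts:
    tokens = Path(name).read_text(encoding="utf-8").lower().split()
    counts[name] = sum(1 for t in tokens if t.strip('.,;:!?"()') == WORD)

for name, n in counts.items():
    print(f"{name}: {n} occurrences of {WORD!r}")
```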

Beyond aiding our own scholarship, these visualizations help communicate what we do as scholars to a broader audience. The thing to remember is that the tool is not a justification in itself, and it does not make one's role as a scholar more relevant. But with these tools we can better demonstrate to others the power of the media we study, using a medium held in common across disciplinary lines. Equally important, by working with these tools, we are in a better position to illustrate the necessity of the scholarship that actually makes these images meaningful.


The Demystifying the Digital Humanities seminar ended last week, but I hope that Paige and Sarah are able to continue these valuable workshops in one form or another in the years to come. For my final post in this series, I will discuss how I have attempted to incorporate the digital humanities into the course I am teaching this term, some of my successes, as well as my failures.

 


[i] Kelly, Mills. "Visualizing Millions of Words." Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 402-03. Print.

Digital Humanities: My Introduction 1.1

[Image: "appropriation," by Christopher Ottinger]

For those who have yet to drink the digital humanities "Kool-Aid" (it's the blue stuff they drink in Tron), for the next three posts I will chart my own introduction. My entrance largely follows from attending a seminar that meets twice a quarter on Saturday mornings entitled, "Demystifying the Digital Humanities" (#dmdh). Paige Morgan and Sarah Kremen-Hicks organize the seminar and it is sponsored through the University of Washington's Simpson Center for the Humanities.

In this post I want to outline a brief definition of the digital humanities, and I will conclude by suggesting some things that you can do to advance your own understanding. Because these posts stem from my own introduction, they might be too basic for those already immersed in DH studies. Rather than an in-depth exploration, consider this post as an enthusiastic sharing of information.

Defining the Digital Humanities

During the first session of the seminar we attempted to define the digital humanities. A typical strategy towards definition might ask what a concept "is." But the organizers challenged us to think about what this concept "does" and what "values" it embodies. The next two installments of this series will cover what you can "do" in the digital humanities. Today, I want to look at some values.

Collaboration is one of the main values espoused in the digital humanities. “Instead of working on a project alone,” as Lisa Spiro says, “a digital humanist will typically participate as part of a team, learning from others and contributing to an ongoing dialogue.”[i]

In that case, a digital humanist might post his or her most recent progress, research, or problem on a blog or Twitter feed. Others can then add comments, suggestions, and criticisms. There is also a push toward finding people with the resources to do the job you have in mind (knowing he had the skills, I asked my brother to make the image above for this post). Overall, there is a common avowal among digital humanists that works ought to receive input and support from others before reaching the final product, and that this feedback can come from more people across different disciplines.

Making works more available, as Paige and Sarah stressed, also means a greater willingness to be "open," even with regard to "failure." By being more open, scholars can overcome the erroneous belief that every "success" equals "positive results." As in the physical sciences, in the humanities there is little sense in reproducing the same bad experiment more than once. Sharing failures might ultimately lead to less repetition, and potentially more success.

It would be impossible to offer a full definition in this short space, but my conclusion so far is that, without knowing it, many young scholars are already invested in the digital humanities. For instance, writing for the NASSR Graduate Student Caucus blog qualifies as a digital humanist platform and method. I am writing in a public domain, making my interests more open for sharing and criticism, taking risks on what kinds of content I post, and focusing on producing more products more consistently, all of which embodies a DH ethos. During the first seminar in October, learning that I already shared many digital humanist values encouraged me to familiarize myself with some of the tools, which I will now discuss.

Getting Started in the Digital Humanities

While not every university hosts a seminar like the one I attended, there are traveling alternatives such as THATCamp. According to the THATCamp homepage, it is "an open, inexpensive meeting where humanists and technologists of all skill levels learn and build together in sessions proposed on the spot." These camps take place in cities all over the world and anyone can organize one. Or if you want something more intense, try the Digital Humanities Summer Institute at the University of Victoria (see Lindsey Eckert's post on this site for an overview).

If you really want to jump into the digital humanities fast (this might sound self-indulgent in this context), I think the best method is reading blogs. The problem with blogs is the sheer quantity. But once you find a blog that works, it usually provides a blogroll listing the author's (or authors') own preferences. At the bottom of this page I provide three blogs with three different emphases regarding the digital humanities for you to try (and please respond below if you have others to suggest).

The last thing is coding. It seems scary, but with simple (and free) online tutorials, learning how to code is like getting started with any foreign language: the first day is always the easiest. You learn "hello," "please," "thank you," und so weiter. The difficulties arise later. But anyone who has travelled abroad knows that a small handful of phrases can actually satisfy a large range of interactions. For instance, it takes only a few minutes on w3schools.com to learn how to make "headings" in your blog post (like the bold titles above). Headings actually allow search engines like Google to more easily recognize your key words and phrases, which I didn't realize until I started learning a little code. Ultimately, learning how to code can help you appreciate the rules that govern your online experience.

Last, I think it's important to divulge why I became interested in the digital humanities. Because my dissertation started to focus more on tools, geometry, and the imagination in the eighteenth century, I found myself on the historical end of digital space. It made good sense then to start exploring current trajectories. But as I hope to show in the next two entries, "doing" digital humanities does not necessitate digital humanities "content." Your introduction might be more about method, pedagogy, or even values. That said, it is worth having a good reason to invest your time in DH studies. For graduate students, time is always in short supply. But if it's the right conversation for you, be open, be willing to fail, and enjoy the Kool-Aid.

Some Suggested DH Blogs:

If our blog is the only one you are reading with any frequency, perhaps the next place to go is The Chronicle of Higher Education’s ProfHacker. This blog features a number of authors writing on the latest trends in technology, teaching, and the humanities. For starters, try Adeline Koh’s work on academic publishing.

Ted Underwood teaches eighteenth- and nineteenth-century literature at the University of Illinois. His blog, The Stone and the Shell, tends to explain DH tools, values, and protocols for "distant reading."

For a more advanced blog, in terms of tools and issues, I have found Scott Weingart's the scottbot irregular resourceful and interesting, and it is also a great example of how to raise the aesthetic stakes of your own blog.

 


[i] Spiro, Lisa. "This Is Why We Fight." Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 16-35. Print.

Toward a Map of the International Conference on Romanticism 2012: “Catastrophes”

Precatastrophe:

“[The] most common catastrophe, the end of life, may have already happened without our knowing it”

–Brian McGrath (Clemson U)

Two weeks prior to “Catastrophes,” the International Conference on Romanticism’s 2012 session, a hurricane had formed and began moving through the Caribbean with an East Coast trajectory:

10/25/2012 2:33 AM EDT, Updated: 10/26/2012 5:05 PM EDT

“Could a Hurricane Sandy, winter storm hybrid worse than the “Perfect Storm” of 1991 slam the East Coast just in time to ruin both Halloween and Election Day?”

Huffington Post

A catastrophe does not start.  Its beginning is not a fixed point in time and space.  A catastrophic event develops, unfolds, and emerges.  While the catastrophe eventually becomes identifiable, its obscurity is not suddenly contained.  The causes and effects of a catastrophe are impossible to register entirely:

10/27/12 11:10 PM ET EDT

“‘We’re looking at impact of greater than 50 to 60 million people,’” said Louis Uccellini…The rare hybrid storm that follows will cause havoc over 800 miles from the East Coast to the Great Lakes.”

Wayne Parry and Allen G. Breed, "Hurricane Sandy, Approaching Megastorm, Threatens East Coast"

So how do we measure catastrophe?  Does the number of people involved determine an event’s ontological status?  Even when a catastrophe appears to impact a single person only, seemingly infinite multiplicities are required beforehand in order to arrive at the individual’s loss:

11/7/12 5:13 PM MST

Roger Whitson@rogerwhitson

Spilled my coffee in the airport. #dumb


Catastrophe By the Numbers:

In Eighteen Hundred and Eleven: A Poem, Barbauld examines “a national loss that can only conceal the individuals who bear that loss themselves.”

—Erin Goss (Clemson U)

Catastrophes are events that can be experienced but only through limited means. Different representational systems, from language to infrared technology and from maps to numbers, supply the conditions for making manifest that which an individual human cannot readily “see.”


This image shows ocean surface winds for Hurricane Sandy observed at 9:00 p.m. PDT Oct. 28 (12:00 a.m. EDT Oct. 29) (from NASA.gov).

With the aid of a representation, humans convert an event into something it is not, something containable, accountable, and meaningful:

ICR 2012:  175 attendees, 147 papers, 5 plenary speakers, 2 absentees due to weather.

Weather for Tempe, AZ: November 8-11, 2012

Average Temperature: 79/55 °F.

Average Precipitation: 0.02 in.

Containment:  Once a catastrophe is converted, by way of a numerical system for instance, it becomes a representational thing over which humans can exert control:

11/9/12 12:53 PM MST

Bruce Matsunaga@BruceMatsunaga

@ICR2012 Please ask the MU to lower the thermostat in the Gold room!! #icr2012


Accountability:  When a catastrophe is quantified, that conversion provides another way to represent the expenditure for an event.  It allows us to ask who—or what—pays the cost:

Conference registration fee: $140

Discounted fee for students & independent scholars: $80

Banquet: $50 (with cash bar)

Hotel Fee at the Twin Palms: $331.96 total ($80 per evening plus tax)

Plane Ticket: $365 round trip

CO2 Impact: 1,928 lbs.

Meaning:  For decades, literary criticism has dismissed the numbers.  But like words, numbers are representations, and they express meaning.  Yet when either words or numbers are used to represent a catastrophe and those involved, both can equally exclude the individuals represented in favor of their own proliferation.

After Catastrophes

“But when a scrap survives, disciplines come ‘limping back.’”

—Elizabeth Effinger (U of Western Ontario)

“A ghostly language can grow back over the damage.”

—Tristram Wolff (UC Berkeley)

Because catastrophes lack clear beginnings as well as endpoints, they cannot be represented by lines.  Lines, by definition, require two endpoints.  When winds gather together they form a storm, and when they scatter they leave artifacts in their wake.  The manifold tendencies of these artifacts presuppose the catastrophe that initially altered their courses.  Rather than reach an endpoint, a catastrophe transforms:

11/07/12 11:16 PM ET EST

“A nor’easter blustered into New York and New Jersey on Wednesday with rain and wet snow…inflicting another round of misery on thousands of people still reeling from Superstorm Sandy’s blow more than a week ago…Under ordinary circumstances, a storm of this sort wouldn’t be a big deal, but large swaths of the landscape were still an open wound.”

—Colleen Long and Frank Eltman, Huffington Post

So will a map of catastrophe look significantly different from a conference's?  An old storm is embedded in the winds of a new one, much like a conference picks up the conversations from the last.  The drift of arguments changes and new topics gain emphasis, and yet our function as scholars to preserve texts demands that the old data limp back into the dialogue, pending an apocalypse.  Events like conferences are not entirely cut off from one another despite being punctuated by seasons, locations, and all the infinitesimal bits for which we cannot account.  Perhaps on a map, neither conferences nor catastrophes are lines with endpoints, but waves.

Many thanks to ASU and the conference organizers, Mark Lussier and Ron Broglio.

Congratulations to the graduate student essay winners:

First Place: Rebecca Nesvet (U. of North Carolina Chapel Hill), “Patagonian Giants, Frankenstein’s Creature, and Contact Zone Catastrophe.”

Second Place: Tristram Wolff (U. of California Berkeley), “Etymology and Slow Catastrophe: Tooke to Coleridge to Wordsworth.”


A Meditation on the One-Year Anniversary of Occupy Wall Street: Fear, Silence, and Participation

First, an admission: before this evening I had never taken part in a political or social demonstration.  But as a romanticist, I feel very close to revolution, social movements, and political protest.  So where is the disjunction?  There were numerous excuses I gave for not attending Occupy Wall Street events last year, chief among them writing a prospectus.  But I know I avoided the Occupy movement out of fear.  Fear of falling behind on my dissertation; fear of losing funding as a consequence; fear of being pepper sprayed by police; and fear of a stylistic change.  How do you go from pumping elbow patches to pumping fists?

Given my trepidation, tonight was perhaps the best introduction to protest.  In celebration of the one-year anniversary of OWS, Occupy Seattle held a silent demonstration.  For someone averse to large crowds, yelling, and subjective forms of violence, a silent march was, on the face of it, a painless excursion.  Regardless, my legs shook the entire time.

In a silent protest, is there anything to really fear?  By and large the demonstration was one of the most innocuous experiences I have undergone with strangers.  On a scale of one to ten, I think the march ranked at a 1.  The American Nightmare concert I attended during college was a 7.  But—when you're a graduate student—it is not often that you are the primary object of police attention.  It is a vulnerable feeling to have a dozen or more armed officers trailing you through city streets.  Of course, nothing is going to happen, you assure yourself.  No transgressions actually engender this fear, but the conditions of the situation do.  Structurally, we were surrounded.

With diminished levels of violence, it is questionable how effective a protest can be. Did not the group appear to be a bunch of lackluster whiners blocking traffic, hardly moving through the streets in silence?  And yet, the silence produced an eeriness.  Recall UC Davis’ Chancellor walking through a silent student protest last year.  There was a similar feeling tonight, but the structure was reversed.  The silence “emitted” outward from a center and arrested spectators.  Passersby stopped and observed; some took photos; some gawked; some didn’t notice.  One man howled out, “Occupy!”, then apologized to the crowd for his irreverence.  Eerie, yes—but without throwing bricks, engaging police, or detonating bombs it is difficult to make the front page.

But really, silence might be the most violent medium.  Academics enacting silence might benefit from Lenin’s example, as Slavoj Žižek describes it: “after the catastrophe of 1914…[Lenin] withdrew to a lonely place in Switzerland, where he ‘learned, learned, and learned’…And this is what we should do today when we find ourselves bombarded with mediatic images of violence” (8).[i]  Perhaps, but romanticists are a little touchy when it comes to withdrawing to a secluded place in the face of war and corruption.  Rather, we might translate silence to mean neglect.  Corporations need the average consumer.  They are not cancerous but infantile—neglect corporations and their power withers.

In a way, by studying romantic literature, romanticists have all been taking part in political demonstrations.  At the end of the evening, a representative from New York shouted out a “thank you” to New Yorkers for inaugurating Occupy.  A young man to my left replied in a low voice, “New York didn’t start Occupy.”  Agreed.  Forms of protest have a long history, each one particular in its own way, but a history nevertheless with which students of romanticism are familiar.  Familiar—but is reading about protest and revolution enough?  We lose something when we restrict “reading” to the page.  At the same time, it is not as if one marches in a demonstration in 2012 and suddenly “gets” the French Revolution, abolition, or women’s suffrage.  However, because revolutions do not die but decompose and scatter informational bits to be picked up and transformed, it is possible to connect to these historical and contemporary events through various media. So let’s make another admission: learning about revolution through study can be a form of protest, in fact, but if your legs never shake you have at least two limbs left uneducated.


[i] Žižek, Slavoj.  Violence.  New York: Picador, 2008.  Print.

 

Back to School: Time to Learn

School season is here!  Many of us are returning to the classroom in the next few weeks.  Some already have.  Freshmen will start their first classes right out of high school.  College seniors are prepping for the working world.  Businesses cash in on the hype, as well, having “back to school” sales.  And it will become impossible to find an apartment.  For Americans, education starts in the fall.  The season runs until spring.

But those poles say very little about when we learn.  The larger epistemological questions I'm thinking of are, "when do we learn what we learn?" and "when do we know what we know?"  There are multiple variations on these questions, for instance, when should we know what we know; how long should we take to learn what we learn; or even, when is it best to admit we don't know?  These questions steer us away from those that focus exclusively on identification ("what do I know?"), and they are modifications of the epistemological standard, "how do I know what I know?"  I like thinking about "how" in terms of "when" and "how long" because it allows us to critique established and perhaps arbitrary temporal designations.  For instance, why do most students begin college at eighteen, or why does college last four years?  For some, these designations feel like law.  For others, they were meant to be broken.

King James I, despite being the most powerful person in the country, still had more to learn, at least according to his brightest servant, Francis Bacon.  If dedicating his The Advancement of Learning (1605) to the sovereign was not a big enough clue, mid-chapter Bacon nudges his audience by inserting an apostrophe to the chief, claiming that even kings need to strive for ever more learning.[i]  He warns his royal highness of learning's various diseases (not to be confused with our contemporary "crises" of education).  One disease concerns knowing how to discern old, worthy information from new, transient information.  But Bacon also wants good kings and princes to know when modern thought has simply superseded the available knowledge of previous generations.  When knowledge loses its flavor, it must be thrown out and trampled on.  Perhaps most interesting is Bacon's insistence that knowledge is at its most profound at the aphoristic stage—when it is confusing, disorganized, turbulent, and can shoot in manifold directions.  The observation comes off in this context more as a suggestion.  You want to be a brilliant king, James?  Enter a re-birth: Write aphorisms!

Perhaps you can teach an old king new tricks, but according to Rousseau’s Emile (1762), education begins as soon as someone wraps the infant in a blanket.[ii]  The slightest imposition on the child’s temperature teaches the human body to rely on prosthetic implements rather than its natural resistance to inclement weather.  No blankets, caps, or swaddling (60).  Let the child’s body adapt to the cold air: “It has a powerful effect on these newborn bodies; it makes on them impressions which are never effaced” (59).  Exposure to air is its own kind of learning.  It is difficult to leave the child exposed when the nurse insists on its being “well-garroted.” The nurse must then be ordered to let the child be, because “where education begins with life, the child is at birth already a disciple, not of the governor, but of nature” (61).  So if you want to educate your children right, Rousseau says begin from day one, pick the right teacher, and just let the children play-ay-ay.

Organized according to themes rather than a chronological sequence like Rousseau's, Mary Wollstonecraft's Thoughts on the Education of Daughters (1787) presents an arbitrary sequence in girls' learning.[iii]  There seems to be no priority over when girls should learn about "Benevolence" or "Card Playing."  That is, of course, with the exception of the main event, "Matrimony."  In Austen's novels, weddings appear at the beginning and the end; in Wollstonecraft they are dead center (chapter 11 out of 21).  It is as if marriage engenders the gravity holding the rest of the woman's life in order.  However, form is deceptive.  Wollstonecraft opines, "Early marriages are…a stop to improvement" (31).  If the girl has not already had a thorough education she will forgo it on account of how much work marriage requires.  And quite frankly, Wollstonecraft says, "many women…marry a man before they are twenty, whom they would have rejected some years after."  If anything, Wollstonecraft's organization, or brilliant lack thereof, says that learning can happen in isolated bursts and need not follow any necessary sequence.

I do not know if anyone would disagree that learning is a productive process, but our false notions about "when" we learn result in some serious runoff.  While working on my teaching philosophy this spring, I kept pushing this idea of "learning as a mode of living."  Part of this mode means doing what you always do but looking at one's daily activity as a subject for thought.  Too often I have heard phrases like, "when I come home I just want to watch something I don't have to think about."  But it is not the object that requires no thinking; the viewer merely judges the object as a thing for which no thought is required.  What I do not understand is why as humans we are so impatient with things that waste our time, but so willing to dedicate our time to things we find unworthy of our thoughts.  Anything can be a subject for thought.

Learning as a mode of living also means that learning does not end.  Learning does not end after class, when we arrive home; in some sense, learning does not sleep, or wait until we've had our coffee.  The body takes in information nonstop.  The question is what we are going to do with that information.  The more conscious I have become of thought, the more I realize that the brain produces an infinite quantity of images, movements, feelings, ideas, colors, memories and so on throughout the course of a day.  Part of the challenge is to resign ourselves to them.  Admit to the idea.  Give it room or space.  Record it in some way.  Then forget it.  They come back, anyway (who knows when?).  But now you have the first bit of an idea, and it is ready to shoot in another direction.  The trick is to admit that learning can happen anywhere and at any time.

So in answer to the question, "when do we learn what we learn" or "know what we know," there is no designated time for learning and knowing.  Knowing is not an identifiable position from which one can declare his or her knowledge.  Knowledge is stretchy, turbulent stuff, like the time in which we declare it.  Stretch it far enough and suddenly we don't know what we thought we did.  In the classroom, then, it is perfectly acceptable that students feel confused about the subject matter, because when are they not confused?  Confusion ends only when we choose to cease thinking about an object, a world, or ourselves.  Confusion is the process of thinking; comfort is its absence.  Learn to be uncomfortable!  I tell my students that by the end of the term, they still might not understand some of the concepts we will have discussed.  Rather, as my high school English teacher, Mr. Weiss, used to say (I've tweaked the phrasing): we're planting seeds in class and there is no way to know when they will sprout, bloom, dehisce, scatter, and so on.


[i] Bacon, Francis.  The Advancement of Learning.  Ed. Michael Kiernan.  Oxford: Clarendon, 2000.  Print.

[ii] Rousseau, Jean-Jacques.  Emile or On Education.  Trans. Allan Bloom.  New York: Basic Books, 1979.  Print.

[iii] Wollstonecraft, Mary.  The Works Of Mary Wollstonecraft.  Ed. Janet Todd and Marilyn Butler.  Vol. 4.  London: Pickering, 1989.  Print.

The Painful Pleasures of Romantic Feet

In early July 1797, Sara Coleridge spilled hot milk on her husband's foot, prompting one of the finest romantic poems, "This Lime-tree Bower My Prison."  The preface to the poem reads: "some long-expected Friends paid a visit to the Author's cottage; and on the morning of their arrival, he met with an accident, which disabled him from walking during the whole time of their stay."[i]  As Coleridge's preface and the rest of the poem demonstrate, feet and walking were an important aspect of the romantic experience, given the exceptional tendency to stroll, pace, and hike.  Thomas De Quincey calculated that Wordsworth had walked an estimated 180,000 English miles.  Coleridge's great decade of walking culminated in his 1802 ascent of Scafell, often credited as the first recorded climb of the peak.  And despite—or in spite of—a clubfoot, Byron swam four miles to cross the Hellespont on 3 May 1810.  It is difficult to imagine British romanticism without feet.

But it is precisely the romantic imagination that displaces the foot.  Following his accident, Coleridge laments not joining his party of friends.  He finds relief in imagining their journey, substituting the mental representation for the actual, physical experience of walking.  Was it merely a coincidence, then, that romantic poets frequently walked and on occasion mentioned their feet?  For Robin Jarvis, the wounded foot provides an opportunity for the poet to "[trace] the path of his friends" with his imagination, providing a view of an "uneven progress through a landscape which…offers locomotive as well as visual obstructions."[ii]  These obstructions are then imitated in the poem's uneven rhythms.  So the imagination might elide the physical, but the mind relies on the information the feet have gathered about the rhythm of walking in order to construct its elision.

For romantics, walking supplemented writing, but they also required supplements for their walking.  In the tradition of the Romans adding shoes (called "krepis") to the Olympics,[iii] the romantics had special clothes made to accommodate walks, and De Quincey was the "first to go on a walking tour with a tent."  According to Rebecca Solnit, the introduction of these tools marks the beginnings of the "outdoor equipment industry."[iv]  Some of us might not think of them as equipment, but animals also mediate the walking experience.  Wordsworth walked compulsively in order to compose, and sure enough, he brought along his dog.  The boyhood companion would warn Wordsworth of oncoming pedestrians so that the poet might cease his compositional "murmuring" before being mistaken for a madman.[v]

If romantic poets were such innovators and advocates in the world of walking, why would they ever pass over these things in favor of a different focus?  From a phenomenological perspective, passing up the foot for another issue (actual or imagined) might be inevitable.  It is typical to identify a thing when it breaks, or in Coleridge’s case, when it is scalded.  The foot suddenly becomes conspicuously present because it ceases to be what it normally is—a functional foot.  Only when noticed does a foot become the starting point of a poem.  But the poem quickly moves past the foot and onto the image of the poet’s friends.  In this example, Coleridge’s recognition of the thing (his foot) is negative: the foot’s conspicuousness never allows the observer to know the foot itself, only a deferral of the foot.

But it makes good sense that if a thing can attract attention when it breaks, it will attract attention when it works, as well.  The phenomenological response a la Heidegger would say that because the thing works we take it for granted and so the thing goes unnoticed.[vi]  For instance, I don’t think about my feet so long as they get me to work in the morning, just as the cabinetmaker doesn’t think about his hammer so long as it still drives nails.  However, if something suddenly works differently it might also attract attention.  This difference may signal that the thing was, in fact, not working beforehand.  We may have only grown accustomed to what has been broken for as long as we can remember.  So in the case that the thing is repaired, would I actually be experiencing a deferral of the thing, or would I finally gain access to the thing itself?  Probably not the latter, but the fact that the closest thing to one’s person could suddenly attract attention while working without flaw is exactly the realization I had one morning while running barefoot.

Rather than adding more equipment to my running, recently I decided I would try it with less.  It was about five thirty in the morning when I made the somewhat uncharacteristic decision (romantic mornings tend to follow romantic evenings, I find).  My usual place to run is the Olympic Sculpture Park.  The park faces Puget Sound, a large body of water punctuated with sailboats and ferries, framed by the Olympic Mountains.  I would like to say the scene was sublime or awe-inspiring.  But my attention was on the ground.  I attempted to jog lightly at first, but, beginning on a gravel path, I felt much more pain than pleasure.  Approaching the grass, I thought the softness would mitigate the discomfort, but the grass was wet and cold.  At one point in my youth, it was common to run through the yard or the nearby woods without shoes.  Half my life has passed since my feet braved the earth.  It was shocking to have limbs so near and so unacquainted with exposure suddenly stung by what otherwise felt like a perfectly temperate morning.  The grass and my feet had become alien.

Our feet have become restricted to a heavily mediated form of touching.  If I could ask my feet what the world feels like, they would describe a hot and itchy place: moist, confining, argyle.  The fact is, due to socks, rubber, and plastic, I hardly ever touch the ground beneath me, to say nothing of unconstructed ground.  But that first morning I ran unshod, my dainty jog eventually became a full run, my feet enjoying the various textures of the ground.  Skin rubbed against concrete and woodchips, mud and grass, gravel and puddles.  At one point I stepped into a mixture of grainy rocks and water covering the footpath.  I felt tiny air-filled cells densely packed together burst.  The sensation was not unfamiliar.  It reminded me of roe I had recently tasted at a sushi restaurant.  Finally, my mouth and feet had something to talk about.

Having abandoned the daily prosthetics designed for feet, I felt elated.  The experience was painful, but also it changed the way I relate to the park I routinely visit.  The landscape did not suddenly become sublime but more various and diverse, characteristics Wordsworth constantly praises in his Guide to the Lakes.  But where he praises visual diversity, my feet explored a tactile dimension of textures and temperatures.  Where the eye looks for contrasting colors, my feet were contrasting the hardness and softness of things.  These differences cancelled out most of the pain in the end; instead, it felt good to be feeling.  Running unshod reminded me of how little I actually know about this familiar place, and equally important, about my own body.

While Coleridge might not remove his shoes in order to sharpen his focus on feet, the scalded foot still manages to open new points of access to the body and its surroundings.  Recall that Coleridge sits in a lime-tree bower.  He could have depicted himself lying in bed or sitting by a fire, but he chooses to situate himself on the ground in the garden.  Such a position is important because, although Coleridge seems to displace the physical for the imaginary, he immerses himself in the ground by eschewing a chair.  Tim Ingold has recently pointed out the modern belief that stationary rest was a prerequisite for thinking.[vii]  One must cease to move or walk in order to think, and enhancing such thinking requires its own prosthetic: the armchair.  In this particular case, Coleridge does not celebrate the relationship between walking, the feeling of walking, and its correlation with thinking; rather, with other regions of the body spread across a plane of dirt, grass, roots, and rocks, the poet espouses a less regulated form of sitting, which might provide the conditions for a different way of thinking altogether.  Given the variety of furniture, shoes, and constructed ground surfaces, it seems as though the body has access to unlimited experience.  However, if the body is forever wrapped, comforted, and secured, then how little of the world we actually know.

[i] Coleridge, Samuel Taylor.  Poetical Works. Vol 1. Ed. J.C.C. Mays. Princeton: Princeton UP, 2001.  Print.

[ii] Jarvis, Robin.  Romantic Writing and Pedestrian Travel.  New York: St. Martin’s, 1997.  Print.  149.

[iii] Tenner, Edward.  Our Own Devices: How Technology Remakes Humanity.  New York: Knopf, 2003.  Print.  78-79.

[iv] Solnit, Rebecca.  Wanderlust: A History of Walking.  New York: Viking, 2000.  Print.  115-116.

[v] Wordsworth, William.  The Prelude: 1799, 1805, 1850.  Eds. Jonathan Wordsworth and M.H. Abrams.  New York: Norton, 1979.  Print.  130-1.

[vi] Heidegger, Martin.  Being and Time.  Trans. Joan Stambaugh.  Albany: SUNY P, 1996.  Print.  67-71.

[vii] Ingold, Tim.  “Culture on the Ground: The World Perceived Through the Feet.”  Being Alive: Essays on Movement, Knowledge, and Description.  New York: Routledge, 2011.  Print.  33-50.

 

The Sublimity of “2001: A Space Odyssey” (1968)


Stanley Kubrick's 2001: A Space Odyssey is a sublime film.  Tracing the evolution of humanity from prehistoric hominids to space age explorers immersed in Cold War politics, the film considers the telos or final aim of the human: a sentient computer.  The film is sublime in plot and theme, but especially when it's big.  Kubrick's movie comes back to the theater this week as part of Seattle's first Science Fiction Film Festival, using a 70mm print, which basically means the resolution is higher than a standard 35mm print.  But very few films have been shot on 70mm, and the Cinerama, where 2001 will be screened, is one of only three theaters in the world with the capacity to project one.  For everyone else, the DVD will have to suffice (at least you get the extras!).  While I always thought aesthetic theories of the sublime had much to contribute to a conversation about Kubrick's futuristic journey, is a big screen really a prerequisite for such a discourse?

It doesn’t hurt.

Kubrick’s film opens with “The Dawn of Man.”  A group of apes scavenges for sustenance, fighting with other clans of apes over a nearby waterhole.  By today’s standards, the apes resemble Homo erectus, bipeds prior to the use of tools.  This stage in their development is important because one morning Moon-Watcher (as he’s called in the script) awakes to find a large, black, symmetrical object: the monolith.  Geometrical form, par excellence.  Following from the encounter, Moon-Watcher creates what amounts to the first tool, thus inaugurating the next step in human evolution: he sees a bone and anticipates its use as a weapon.  The film presents viewers with a radical notion: that an external object determines brain capacity.  In other words, the encounter with the monolith animates Moon-Watcher’s imagination, but, as the German Enlightenment philosopher Kant would say, the monolith itself does nothing.

For Kant, writing on aesthetics in his Critique of Judgment (Berlin, 1790)—a foundational text for studies of the sublime—sublime experience occurs only in the mind.[i]  A sublime experience follows from the “might” exhibited in nature, which causes a feeling of “respect” in the viewer.  A truly sublime effect turns its subject into a “brave” and “noble” character with a newfound sense of moral purpose (§§28-9.99-106).  However, Kant disavows any purpose within the sublime object itself.  If it’s an ocean, it’s only an ocean; if it’s a volcano, it’s only a volcano (§29.110).[ii]  So according to Kant, the monolith could be anything because, for the human, it is the mind that determines the object.

From the inauguration of the first tool, time is compressed.  Kubrick now jumps almost two million years into the future as the camera follows Moon-Watcher’s hurled weapon through the air.  In a vicissitudinous cut Kubrick links two tools at the limits of technology: from Early Pleistocene bone to a twenty-first-century military vessel orbiting Earth.  The gesture forces us to ask: what’s the difference?  As Adrian Mackenzie might say, the bone is local while the spaceship is global.[iii]  But how local are bones?  Like the monolith, these objects seem to traverse time and geographic location.  Furthermore, despite the film’s apparent innocuousness, the accompanying evil (or banality) of the monolith reveals itself in the fact that the inauguration of the imagination ushers in, first and foremost, weapons of war.

For the film’s third section, Kubrick introduces a different kind of sublimity.  If the military spaceship doubles Moon-Watcher’s bone, the monolith’s double is the HAL 9000 computer.  Faceless and seemingly indifferent, HAL is “the most reliable computer ever made.”  On the mission to Jupiter, the crew comprises HAL, scientists in hibernation, and two conscious scientists, Dr. Poole and Dr. Bowman.  Next to his human counterparts, HAL appears fragmented, without an actual body, restricted by the cameras that determine his sight.  On the other hand, HAL acts as the ship’s nervous system; that is to say, he is totally mobile, ubiquitous, and dubiously inescapable.  If the sublime requires safe distance, as it did for Edmund Burke in 1757, HAL creates the illusion of distance while in fact he is closer than anything else.[iv]  Kubrick zeroes in on a sublime object that cannot be measured in terms of physical distance.  The object is remote in appearance but near in personality, distant in body but near in omnipresence.  In this sense Burke is wrong while Kant and Kubrick are right: measuring, identifying, and containing the sublime says nothing about sublimity.

Maybe a good reviewer would explain the film’s end, but in the spirit of the sublime I will not enact that violence.  To be fair, the end should be experienced on the big screen, which is why, should the opportunity arise, any fan of the sublime or of science fiction ought to see the film in the theater.  But what does one gain from bigness?  If in the end we admit that size alters experience, have we not undone the whole point of this article?  To admit that proportion is part of the sublime experience is only to admit exactly what these various thinkers ultimately gesture toward: the sublime cannot be contained within a single criterion or a tedious list of criteria.

The Seattle Science Fiction Film Festival runs from 4/19 to 5/2.  Among others, the films include Metropolis, Dune, and Barbarella (of course), but sadly not Blade Runner.

 


[i] Kant, Immanuel. Critique of Judgment.  Trans. J.H. Bernard.  New York: Hafner Press, 1951.  Print.

[ii] On this point see Paul de Man’s “Kant’s Materialism” in Aesthetic Ideology.  Ed. Andrzej Warminski.  Minneapolis: University of Minnesota Press, 1996.  Print.

[iii] For an interesting commentary on the limits of technology, comparing Paleolithic hand-axes to thermonuclear devices (57-86), see Adrian Mackenzie’s Transductions: Bodies and Machines at Speed.  London: Continuum, 2002.  Print.

[iv] Burke, Edmund.  A Philosophical Enquiry into the Origin of our Ideas of the Sublime and the Beautiful.  Ed. James T. Boulton.  Notre Dame: University of Notre Dame Press, 1958.  Print.

 

The Speculative Turn and Studies in Romanticism

It might be fair to say that where philosophy goes, literary criticism follows, but the current destination is a little unclear.  Today’s graduate students of romanticism work with professors who rose up in academia when philosophical camps presented themselves in plain sight; one was either “influenced” by Derrida’s phenomenology, Foucault’s genealogies, Lacan’s brand of psychoanalysis, or some other wing of continental philosophy.  At this year’s MLA conference in Seattle I listened for hints of literary criticism’s current trajectory.  Mostly I heard Fredric Jameson’s name, but not so much with regard to a future direction.  However, peeking over the disciplinary line reveals a philosophical shift that has gained momentum in the last five years, commonly referred to as the “speculative turn.”

The speculative turn is a turn in the sense that the conversation has moved away from the linguistic turn.  Speculative philosophy is generally metaphysical and systematic, and it works outside the domain of the hard sciences.  The most recent emergence of speculative philosophy is interesting because of its investment in materialism and realism, and its engagement with the hard sciences.  Steering away from idealism (commonly associated with Kant and his successors), it suggests that reality exists independent of human agency.  For many literature students, to declare one’s work materialist in 2012 will sound redundant, because materialist accounts of history have been prevalent in English departments for decades.  But that work was materialism without metaphysics, a discussion devoid of the Absolute or the thing-in-itself, for better or worse.

What distinguishes the speculative turn is its posited problem, what Quentin Meillassoux calls “correlationism” (5).[i]  In short, correlationism is the insistence on the relationship between the concept of a thing and the thing itself, and it is this relationship that prohibits access to either term on its own.  Romanticists studying Kant and company know this story well; this “relationship” is what Kant refers to as the “transcendental schema,” a mediator between the object and the mind’s concept of that object (B 177, 181).[ii]  But in “Concepts and Objects,” included in the who’s who of continental materialism and realism, The Speculative Turn (2011), Ray Brassier says it is taken for granted that the “difference,” or relationship, between the thing and its concept is anything but conceptual (64).[iii]  To assume the difference is conceptual delimits the relationship to a strictly human imposition.

From the example of how one might interrogate a correlationist situation (Brassier uses George Berkeley to illustrate his point), it is clear that continental materialism and realism pursue further ways of engaging with the world without positioning the human as somehow detached from or above the world engaged with.  Such an anti-anthropocentric line has been re-charged of late by Deleuze, especially in his critique of representation.  But if the critique of correlationism reinforces the abandonment of the linguistic turn, and hence the abandonment of representation, that does not necessarily mean “language is dead.”  The death of language is of special concern to romanticists because English departments carry the burden of such a potential death.  Rather, the turn suggests that if an inquiry is to access anything immediately, to begin and end the investigation with language is to never even start.

So how does the speculative turn impact literary studies, and studies in romanticism in particular?  To be clear, philosophy and literary criticism are not the same.  Many books of literary criticism from the 1980s and 90s were “influenced” by deconstruction, but these books merely used a method to approach literary texts, which, initially, was not the method’s aim.  In some sense, then, the new philosophical turn is quite remote from literary studies.  On the other hand, when the giant of philosophy moves, its gravity impacts the academic milieu in general.  The fact that the speculative turn reasserts materialist and realist philosophy undoubtedly encourages a similar embrace in literary fields.  Especially for romanticists, this re-emphasis is historically significant because Rousseau (our man!) largely marks the turn away from his hard-line materialist predecessors.  But Rousseau is a signpost, not a gravestone.

The theories Rousseau sought to overturn did not so much die as fall from view, as criticism preferred to focus on less thingly topics.  Materialist readings of romanticism have been lost for years, traded in for borderline idealist, dialectical ones.  For instance, almost no critical reading of Wordsworth appears without citing the excellent and comprehensive Wordsworth’s Poetry by Geoffrey Hartman (1964), whose bibliography just so happens to dismiss H.W. Piper’s pantheistic materialist account of the romantic imagination, The Active Universe (1962).[iv]  Current studies will not merely return to Piper’s history of ideas, though.  Taking an object-oriented approach, a term coined by Graham Harman in his Tool-Being (2002), romantic studies might zero in on the object itself, independent of any relationship at all.[v]

In some sense, this “new” move is as much a return to the old as any new move is.  At the same time, it’s a return with a difference.  Derrida is back on the scene, but it is Martin Hägglund’s atheist Derrida.  Schelling has a starring role, but thanks to an increase in translations and to Iain Hamilton Grant’s focus, the emphasis lands on Naturphilosophie.  In romantic studies, I suspect, given the recent emphasis on the more scientifically inclined Erasmus Darwin (e.g. Dahlia Porter’s work), a renewed interest in Newton and Locke will follow—hopefully along with some “minor” figures who have gone overlooked.  In other words, if the speculative turn signals anything to us, it’s that we can do more.


[i] Meillassoux, Quentin.  After Finitude.  Trans. Ray Brassier.  London: Continuum, 2008. Print.

[ii] Kant, Immanuel.  Critique of Pure Reason.  Trans. Norman Kemp Smith.  New York: St. Martin’s Press, 1965.  Print.

[iii] Brassier, Ray.  “Concepts and Objects.”  The Speculative Turn: Continental Materialism and Realism.  Eds.  Levi Bryant, Nick Srnicek, and Graham Harman.  Melbourne: re.press, 2011. Print.

[iv] I was pleased to see an endorsement—not a ringing one—of Piper in Paul Fry’s excellent Wordsworth and the Poetry of What We Are (2009).

[v] Harman, Graham.  Tool-Being: Heidegger and the Metaphysics of the Object.  Chicago: Open Court, 2002. Print.