February 2011 Archives

Sobriquet 70.2: Snow Day

Thanks to the half-foot of snow left by the storm that swept through much of the northeastern part of the country last night and this morning, I did not have to go to work today. At 33, I rarely find that snow days excite me the way they once did, when I would quite literally leap with joy upon hearing the name of my school among the list of closings dutifully read by the local disc jockey, but, I have to admit, I retain enough of that sublime childhood pleasure to have gleefully shouted "Snow Day!!" to myself this morning. I even tweeted it.

But as much as I loved snow days as a child and as much as I continue to enjoy them as an educator, I react differently to them today than I once did. For example, as a child, I would eagerly get out of bed at precisely the time I would normally hem and haw about waking up for school, pick up my snow shovel (often running down my family's un-shoveled driveway), call my friend, and begin walking door to door offering to clear neighborhood driveways for money. I could usually earn a solid thirty or forty dollars for my troubles (which, of course, meant that I could buy a new Ramones, Sonic Youth, or Clash album). Today, I fret over digging my car out of the snow. Likewise, as a child, I rarely gave a thought to how a snow day might affect my schooling; today, I worry about having to make changes to the syllabus I worked so hard to put together. In other words, there's something about snow days that reminds me -- more poignantly than pretty much anything else -- that I'm a grown-up now.

In the end, though, a snow day is a snow day, and as one of the most beloved of life's little gifts, it's nothing to take for granted. So I blogged about it. I guess that's another change: as a child, I would never have passed up the opportunity for some late-night snowball tossing to sit down and blog about snow days...
One of the major lessons I try to impress upon the students in my Apocalyptic Literature class each semester is that much of the speculative fiction we read is less about the future depicted in a given text than it is about the fears and anxieties we experience in the present day. This tendency is perhaps most evident in Cold War-era novels like Nevil Shute's On the Beach, Pat Frank's Alas, Babylon, and David Brin's The Postman, each of which presents readers with precisely the sort of post-nuclear holocaust landscape most feared by the generations of people for whom duck-and-cover drills were as much a part of elementary education as basic reading and mathematics. For many of my students, however, the anxieties of Cold War brinkmanship have never felt relevant. After all, these are college students born after the fall of the Berlin Wall, young men and women who were barely ten years old when radicals based in Afghanistan -- beneficiaries of the American support provided to fight Soviet forces there in the 1980s -- mounted the terrorist attacks of September 11, 2001. Thus, for quite a few of my students, the genetic modifications, threats of bio-terrorism, and transhumanistic strivings of the people living in the quasi-dystopian technocracy of Margaret Atwood's Oryx and Crake are much more real and much more frightening than The Bomb.

Indeed, the world Atwood imagines in Oryx and Crake is hardly that far-fetched, especially online. The exhibitionistic website At Home With Anna K, for instance, is almost certainly a reference to Ana Voog's AnaCam and the lifecasting movement pioneered by Jennifer Ringley and her now-defunct JenniCam website. Likewise, many of the other fictional websites Jimmy and Crake visit in the novel have real-life analogues: Felicia's Frog Squash is essentially a crush porn portal, the premise of dirtysockpuppets.com recalls ITV's Spitting Image program, Queek Geek sounds an awful lot like Fear Factor, and the concept of watching assisted suicides on nitee-nite.com was actualized in our world when Craig Ewert allowed his death in Switzerland to be documented by Sky TV for their controversial Right to Die documentary. Even the seemingly far-fetched idea of broadcasting live executions (which Jimmy and Crake watch on shortcircuit.com, brainfrizz.com, and deathrowlive.com) has already been discussed, with an alarmingly high percentage of the U.S. population receptive to the concept.

And the similarities between Atwood's future and our present are hardly limited to the Internet. In laboratories, for example, scientists have been developing "soggy pork," artificially engineered meat that is eerily similar to the ChickieNobs Jimmy initially finds so repulsive in Oryx and Crake.

It is no surprise to me, therefore, that a bit of classroom discussion in which students debated the potential merits and possible pitfalls of genetic engineering and modification as described by Atwood said more about our present-day anxieties about the moral implications of the rapid pace of technological development than it did about what we can expect to see in our future. What the discussion did do, however, was further attune me to the vicissitudes of current debates about transhumanism. Thus, I couldn't help but pick up the most recent issue of Time Magazine when I saw that the cover story deals with the Singularity.

In the article, Lev Grossman provides readers with a relatively jargon-free (though somewhat sensationalist) discussion of the Singularity, the hypothetical point in time when Artificial Intelligence will exceed human intelligence and (presumably) change the world as we know it in ways we cannot even predict. Using Raymond Kurzweil's calculations (based on his theory that technology develops in exponential rather than linear fashion), the article pinpoints 2045 as the year when humanity merges with machines (unless, of course, Apophis collides with the planet before then). Some of the article's more interesting speculations:

We can use technology to extend human life and conquer disease

According to the article:

"it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger."


We may be able to scan our consciousnesses into computers and thus achieve a form of immortality

This is, to me, the most interesting idea presented in the article, though I have heard it discussed on several previous occasions. In this scenario, human beings will be able to hook their brains up to computers and "download" memories, feelings, thoughts, and other components of the self. In an ideal world, such a possibility would allow an exact replica of a human being -- everything, that is, except the physical body -- to be preserved externally. Barring such perfection, we have to ask ourselves some questions:

  • Lacking the biological structures of the human mind, will the new cloned self have consciousness, or will it merely be an unthinking record of one who has lived?
  • If its new housing does allow thought, will the new self continue to develop as an individual and, if so, will it then be considered a separate entity from its original incarnation?
  • Will it be possible to transplant these saved consciousnesses into new human bodies?
  • Can two selves with identical memories exist together? What will their relationship be to one another? Will the new consciousness be aware of its cloned status?

Technology will turn on humanity and destroy us

Welcome to The Matrix! Long a staple of sci-fi literature and film, cybernetic revolt is hardly a new idea. It relies on three basic assumptions:

1) that AI will eventually be able to think creatively and analyze data rather than merely calculate things;

2) that the evolution of machines will follow a Darwinian path, with machines emerging as a virtual species; and

3) that machines will develop a malignant attitude towards other life forms, particularly human beings.

It is the third assumption that I find most intriguing. It seems to take for granted that aggression is inevitable. Of course, that may prove true, but it is also possible that we are anthropomorphizing theoretical superhuman machines, endowing them with decidedly human vices when, in fact, they may not behave anything like us.

In the end, though, it comes down to one major Manichean dilemma: Will we be destroyed by machines competing with us for resources, or will we create compassionate oracles capable of solving our dilemmas (imagine AI fixing global climate change and helping us escape the destruction of the world upon the sun's death!)?
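Incidentally, the sheer force of Kurzweil's exponential premise mentioned above is easy to underestimate. A toy back-of-the-envelope sketch (my own illustration with arbitrary numbers -- not Kurzweil's actual model) shows how quickly doubling outruns steady, linear improvement:

```python
# A toy comparison of exponential versus linear growth in "computing
# capability" (arbitrary units). The 2-year doubling period and the linear
# increment are illustrative assumptions, not figures from Kurzweil or Time.

def exponential_growth(years, doubling_period=2):
    """Capability that doubles every `doubling_period` years, starting at 1."""
    return 2 ** (years / doubling_period)

def linear_growth(years, increment=1, period=2):
    """Capability that gains a fixed `increment` every `period` years."""
    return 1 + increment * (years / period)

for year in (2021, 2031, 2045):
    elapsed = year - 2011
    print(f"{year}: exponential = {exponential_growth(elapsed):,.0f}, "
          f"linear = {linear_growth(elapsed):,.0f}")
# 2021: exponential = 32, linear = 6
# 2031: exponential = 1,024, linear = 11
# 2045: exponential = 131,072, linear = 18
```

By 2045 the exponential curve sits at more than 130,000 times its 2011 starting value, while the linear curve has not even reached twenty times -- which is the intuition behind Kurzweil's startlingly near-term date.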

It should be an interesting century, this one!


About this Archive

This page is an archive of entries from February 2011 listed from newest to oldest.
