Wednesday, November 07, 2012

Electoral Maps and Visual Rhetoric

Maps were very important Tuesday night. A ubiquitous feature of network election coverage is the large digital map, coded in the now-definitive red and blue. As networks reported races and results, they invariably used touch-screen maps to zoom in on states and examine county-by-county results. As the evening took shape, analysts used maps to explain scenarios that would get each candidate to the magical 270. Given how important maps have become in these discussions, it is worth examining them closely. They are, despite appearing to be objective graphs of voter behavior, discursive in nature, and thus worthy of analysis.

The rhetorical nature of these maps is evident in the fact that, on the morning after the election, my Facebook news feed was full of this map:

This map seems to be an objective graph of how each county voted. Surely there's no way to spin this map; it simply is what it is. Yet it's not surprising that this map was being shared by my Republican friends rather than my Democratic friends. The attractiveness of this map to my Republican friends is simple to explain: it contains a lot of red. The implicit argument, then, is that a huge section of this country is conservative, or at least Republican, and that last night's results are a product of those slightly-less-American coasts. One friend even commented that "ninety percent of the country is red." But of course, square miles don't get to vote: people do. In this way, the map, as a graphic representation of the electorate, is a bit misleading.

Here is a different picture of the country:
This Rorschach-test-looking map is a map of the United States, distorted to account for population. It is a county map, like the one above, except that the size of each county is adjusted to reflect population rather than land area. The colors on this map reflect the election results, by county, of the 2004 election (here is the county map from 2004; it looks nearly identical to 2012). This map is demographic rather than strictly geographic.
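To make the distortion concrete, here is a minimal sketch of the idea behind such a population cartogram. The county names and figures below are invented for illustration, not real census data; a real cartogram algorithm also has to preserve county adjacency, which this sketch ignores.

```python
# Idea behind a population cartogram: instead of drawing each county at its
# geographic size, scale its drawn area so that area on the page is
# proportional to population, not to square miles.
# (Counties and numbers below are hypothetical, for illustration only.)

def cartogram_areas(counties, total_drawn_area=100.0):
    """Return the drawn area each county should occupy so that
    drawn area is proportional to population share."""
    total_pop = sum(c["population"] for c in counties)
    return {
        c["name"]: total_drawn_area * c["population"] / total_pop
        for c in counties
    }

counties = [
    {"name": "Rural County", "population": 20_000, "sq_miles": 2_000},
    {"name": "Urban County", "population": 980_000, "sq_miles": 500},
]

drawn = cartogram_areas(counties)
# On a geographic map, Rural County is four times the size of Urban County;
# on the cartogram, Urban County dominates, because people, not acreage, vote.
```

The whole rhetorical difference between the two maps lives in that one substitution: area proportional to population instead of area proportional to land.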

On this map, the red isn't nearly so overwhelming. And, obviously, the blue is much more prominent. As one of my students said of this map, "this map is a lot more magenta." The highly divided nature of the country is much more evident here, and it is easy to see that the population centers of the country are solidly blue.

The point here is that no piece of information, even a seemingly objective map, is devoid of rhetorical construction. The way a map, as a graphic representation, is constructed, presented, and passed around must be interpreted. These two maps contain different assumptions about the information they present, and thus imply different arguments about the political picture of the United States. Recognizing these assumptions can greatly inform our understanding of the nature of the electorate and of last night's results.

Here's one more, just for fun. What are the [barely] implicit arguments embedded in this graphic?

Monday, September 03, 2012

Astronauts, Cosmonauts, Webster's, and Opa: A Literacy Narrative

As an academic, I have long been interested in issues of access to literacy. Much of my recent research has focused on students' ability to successfully navigate the different rhetorical situations of online social network writing and more formal in-school writing. I've found (as have several other researchers) that, despite the anecdotal complaints of teachers and parents that online writing is destroying student writing ability, students are remarkably adept at changing their writing style to fit the rhetorical needs of the situation. One question that remains open, however, is to what extent access plays a role in this ability. Most of our research has involved students at four-year universities or high-performing, college-bound high school students. Because this demographic is, to be frank, rather homogeneous, most of the subjects of our research have had ready access to technology in the home, and thus have substantial practice writing in digital formats outside of school. Because of this, we don't really know whether people with less access are less able to negotiate these different rhetorical situations. This is an important research gap, because we know that access to the basic tools of literacy is very important to a student's ultimate literate ability. To demonstrate this, I will recount a literacy narrative of my own (an assignment I give students) in order to show how important access to books and literacy sponsorship were to my own development.

In our house, we always had books. I can remember having collections of Little Golden Books, a staple of childhood when I was growing up. We were also once given an entire box of hand-me-down books that had belonged to my aunt but that my empty-nester grandmother was cleaning out of her house. We held these dear, and many of these books remained in our house throughout our childhood, and in fact, many are now in my parents' attic. But, to me, these books were old and outdated. Beyond this, my two sisters and I shared them, so, in a sense, they belonged to everyone and to no one.

Then, one Christmas, my Opa (which is what we called my grandfather; he's from Maine, but my grandmother was German, so we used the German Oma and Opa for them) gave me two books. One was a paperback Webster's Collegiate Dictionary. I remember being especially pleased that he had given me something meant for college students (it was called Collegiate, after all), and I took this to mean that my Opa believed me to be especially intelligent. The other book was a thin, hardback picture book about the space race between the US and the USSR. The book went back and forth between the innovations of both space programs, and ended with a picture of an American astronaut and a Soviet cosmonaut shaking hands aboard a space station.

I was incredibly proud of these books. They were the first books I had ever had that were exclusively mine. I didn't have to share them with my sisters, and on top of this, they were brand new. Ownership seems to have been very important to me. The fact that these books were mine made them prized possessions. I attribute to this gift the beginning of my love of books. In fact, even now, I check very few books out of the library, because ownership of the book still feels important to me. I often explain to people that the book itself is the trophy for having read it. My bookshelves are trophy cases. There was also, as there is in many literacy narratives, an emotional component to this story.

At the risk of performing amateur psychology, I would point out here that I was raised by my mother and my step-father. My Opa, who was the grandfather I spent the most time with, was technically my step-grandfather. Because of this, I suspect that beyond the wonderful newness of the books I was given, I may have also loved the books because they were, to me, symbols of acceptance. I was being given books by a man who famously loved books (he had a room in his house that was gradually filling floor to ceiling with books of his own). So his gift of books seemed to say "you're one of us." In this way, my sense of ownership extended beyond the books to ownership of a family, and gave me a sense of identity. I am a book lover, just like my Opa.

My own literacy narrative, if my experience is acceptably representative, suggests a couple of things about literacy. The first is not surprising at all: sponsorship is vitally important to literacy. My relationship with a book-loving grandfather was the catalyst for my own love of reading and writing. The less explored feature of my narrative is the extent to which the materiality of literacy may be important. My own story suggests that having access to materials that I personally owned and controlled was an important aspect of my early literacy. This understanding has important implications for issues of access and the literacy divide, especially in the digital era, when the most powerful learning tools are not books but computers--a relatively expensive investment. To what extent will personally owning and controlling books, computers, iPods, and e-readers put some students in a position to outperform students with less access to these technologies?

Tuesday, August 14, 2012

Gotye and Digital [Pop] Culture

This post is intended to argue that the Gotye mash up above is evidence of Gotye's understanding of the mindset of digital culture. But instead of jumping right in, I intend to enter the conversation through the back door, in order to provide some theoretical background for what I am going to argue about Gotye's piece.

A couple of years ago, Steven Hopkins, a graduate school colleague of mine, wrote and presented a paper for a graduate seminar in which he presented the Gregory Brothers as an example of Web 2.0 success (I hope I don't misstate his argument; if I do, I look for him to correct me in the comments section). The Gregory Brothers, a musical group made up of a set of brothers and one of their wives, are now known for Autotune the News, the Obama Kick Ass Song, and of course the Bed Intruder Song (a phenomenon that I've addressed in the past).

Understanding the Gregory Brothers' success relies on two concepts important to theorists of digital culture and literacy. The first is Alexander Reid's concept of Rip/Mix/Burn. Reid cites Lawrence Lessig as the originator of the idea, which states, basically, that participatory digital culture relies on the ability of participants to rip material from other sources and to combine these artifacts until a new artifact is produced through these combinations. Reid argues that this is, in fact, how cognition works. If this is the case, all cultural artifacts (whether art, discourse, or any other intellectual endeavor) are culminations of the artifacts, attitudes, ethics, and tropes that influenced them. This understanding, for Reid, problematizes our understanding of issues like copyright, since nothing can truly be the product of one author or artist. All intellectual work is communal.

In the digital era, this process of Rip/Mix/Burn is exemplified in artifacts like fanzines, YouTube mash ups, and so on.  This is, of course, what the Gregory Brothers do on their websites. In order to make their videos, they bring together news clips, soundbites, and images and set these to music. So, they start with other people's copyrighted material, mix it up, add their own creativity, and produce something new.

The second important concept in understanding Web 2.0 success is Michele Knobel and Colin Lankshear's idea of a Web 2.0 "mindset." For Lankshear and Knobel, print culture was built around a "scarcity model." Hemingway was valuable because there was only one of him, and his success relied on his having been signed by Scribner. To become a successful writer, one had to do so through the professional mediator of a publishing house. To become a successful musician, one had to be signed to a record deal. In this mindset, value came from an artifact's rarity, and the dissemination of that artifact was carefully controlled by professionals who supposedly knew what was good and what would sell.

Web 2.0, on the other hand, functions according to a proliferation model. One becomes successful in digital culture by "going viral." And this relies not on the professional wisdom of publishers and recording studios, but on the mouse clicks of viewers who like what they see and hit the share button. The Gregory Brothers, a band, became popular not when they were noticed by a studio for their "original" pieces, but when they were noticed by Internet users for their mash ups. Culture, according to Lankshear and Knobel, is shifting in such a way that this second mindset will become dominant.

The entertainment industry proper, however, has been resistant to the changes this second mindset calls for. The entertainment industry, and the music industry in particular, has in fact engaged in open warfare with the second mindset through anti-piracy movements and PSAs, and lawmakers have responded with bills like SOPA. The recording industry, including many musicians, has often militantly protected copyright. In this way, the industry has been rather retrograde in its response to digital culture.

Because of this, I was a bit surprised, and also pleased, by this video by the recording artist Gotye. To produce this video, Gotye (who signs the video with the shortened form of his given name, "Wally") trudged through the numerous parodies and covers of his song "Somebody That I Used to Know" that had been posted on YouTube and pieced together a new rendition of the song using these clips. Rather than circling the legal wagons and going after all of these video makers for copyright violation, he has instead himself ripped these samples, mixed them, and produced something new from them. He has helped write his own fanzine.

This suggests that Gotye has adopted what Lankshear and Knobel call the "second mindset." He hasn't just allowed the remixing of his song, but has in fact participated in the remixing process himself. In this way, he shares authorship with his audience, in much the same way a blogger does when he enables the commenting function on a blog and then responds to commenters. He's allowed his song to become an artistic wiki. Furthermore, he acknowledges that the form of the remix was itself "inspired" by [ripped from] a Kutiman YouTube video. He also provides a link list to all the "original videos" of the "Somebody" covers he has used, and admits (in a tone that reads like apology) that he could not include all of the covers he found.

What Gotye seems to understand is what the rest of the industry seems to have missed with regard to the new mindset: that the parodies and samples of his song did not harm him by violating his copyright. Rather, they brought attention to the song and they added to the conversation about its value. He seems to understand that art is a communal process, in which artists (and consumers) inspire and react to one another. The image of a solitary genius is a myth. Authorship is always shared. And the result of this shared authorship, in this case, is a haunting and aesthetically beautiful piece in its own right.

Wednesday, August 01, 2012

Looks Like Mad: Imagery and Mediation

Last week, at a training class for public officials I attended, someone brought a newspaper with a montage of four pictures (like the one above) of James Holmes, the accused gunman in the Aurora, CO movie theater massacre. Holmes's appearance in court had been the conversation du jour for the twenty-four hours or so before. The media's descriptions of these images and video clips included words like "strange," "detached," "bizarre," and so on. These reports seemed to lead inevitably to speculation about whether the images suggested that Holmes was mentally ill or whether he was, perhaps, acting in order to prepare for an insanity defense.

During a break in our class, one of my fellow attendees showed me the front page, pointed out the photos, and said something like, "Look at him. Do you think he's crazy?" Always one for pedantry, I said, "Well, I think that the editors of this paper were able to choose four photos that made him look crazy." I then added, "I think these photos say more about how the editors wish for us to see Holmes than they do about Holmes himself." I don't intend to argue about whether Holmes is mentally ill (to tell you the truth, I don't watch the news much and don't have any interest in engaging in those kinds of discussions anyway); rather, I want to use the conversation to discuss the reliance on imagery as evidence, and to interrogate it a bit.

What's interesting to me about the discussion in my training class, and about similar discussions in the media, is that we (viewers/discussants) were being asked to decide whether Holmes was mentally ill based on the evidence of how he "looks" in photos selected for us by editors, rather than on the immensely anti-social nature of the crimes of which he's accused. Our impressions of the way he "looks" seem to carry more rhetorical weight with regard to the question of his sanity than does his actual behavior. In a way, we were being asked whether the way he looks might serve to explain, or even excuse, the way he actually behaved.

For me, this is a testament to the power of visual evidence in our culture. We readily assume that visual evidence, in the form of photographs and especially in the form of video, is an unimpeachable and unmediated form of proof. After all, if we can see something, we can assess its truth ourselves. If one wanted to prove that a particular person broke into a store, for instance, a copy of the surveillance tape settles it. We know the actions of police in the Rodney King assault were abusive because we saw the assault on tape. And with the proliferation of cameras in modern public space in the forms of camcorders, media crews, police car dash cams, traffic light cameras, surveillance cameras, and cell phones, we have been acculturated into the understanding that anything can be recorded, and that recordings tell us the truth. Seeing is, as we say, believing.

We therefore tend to accept visual proof of any concept, event, or phenomenon as unmediated truth. Thus, we fail to interrogate visual imagery in any significant way (except maybe to argue about whether an image was staged or Photoshopped). We must learn to be careful, though, when considering visual evidence, just as we would be with any other kind of evidence. We must learn to ask who recorded this image. What was their purpose in doing so? What did they include, and what might they have left out? How was it edited, and why was it edited this way? What is the context of this image? These kinds of questions help us determine just what a visual image is actually evidence of. They allow us to determine how much of the truth a visual image actually shows us. It is not enough simply to accept that, because we see an image, we are privy to the whole truth of any situation.

One of the readers I have used to teach composition contains an interview by textbook authors David Rosenwasser and Jill Stephen with photographer Joseph Elliott. In this interview, Elliott refers to a distinction he makes between artistic photographers as "Stagers" and "Recorders." Recorders, like Andreas Gursky, attempt to capture reality as they see it in front of them. Their photographs are somewhat journalistic--one might say "slice of life." Stagers, such as Gregory Crewdson, on the other hand, well, stage their photographs. They plan them ahead of time, set lighting, build sets, use posing actors, and so on.

Stagers, according to Elliott, see themselves as "more honest" than Recorders because they are willing to show their hand in their photographs. Elliott explains, "any photographer is clearly implicated by the process of taking photographs." He goes on to say, "a photographer is inevitably selecting what to shoot and thus what not to shoot, and he or she is always framing the shot in various ways." In other words, the photographer necessarily acts as a mediator between the subject and the viewer, simply by choosing what to photograph. For instance, in the examples I've provided in the hyperlinks above, Crewdson's photographs are obviously staged in order to capture his artistic vision. But even Gursky, a Recorder, has framed the photo in order to capture a certain geometry, chosen a time of day that best suited the lighting he wished to capture, chosen from some number of prints the one that best fit his aesthetic sense of composition, and so on. So, though his photo may provide a "slice of life," it is necessarily a carefully chosen slice.

This concept applies to non-artistic images (those we may use as evidence) as well. A newspaper photographer, a crime scene detective, or a cell phone user photographing a police officer on a traffic stop all, like the photographer/artist, choose what to photograph, and by necessary extension, what not to photograph. There is always something going on in the margins, outside the frame. The photographer is always deciding that this is the thing he wants to photograph, and this is when. Even the most objective photographer is choosing what to photograph and when to snap the shutter, and in this way mediates the "truth" that the image captures.

This holds true of video as well. This is an important point, because we give video even greater evidentiary weight than we do still images. After all, video is real time. If I wanted to test whether the photo editor who put together the four photos of James Holmes had fairly represented his demeanor for the whole of his court appearance, I might watch the video of the entire proceeding.

But in video, as in still photos, mediation occurs in the simple act of pointing the camera. Then the act of editing the video adds another very important level of mediation. For instance, look at the effect editing had on the video below.

This is a chilling example of the ways in which editors of video can more or less create a truth that may or may not fairly represent the "reality" that the camera filmed. Of course, most camera operators and editors are not so unfair as the editor of this video, but the process of mediation remains the same even when an editor truly wishes to present images fairly. She still must edit the video based on what she believes is important, based on how she interprets the event.

Even an unedited video is subject to the limitations of framing, camera angle, video quality, and so on. For instance, footage from a police dash cam might lead to an entirely different understanding of an event when viewed from one camera angle than when viewed from another. What if you were asked to decide whether the shooting filmed in these hyperlinks was justified based on the first video alone? You would believe you had seen the event as it really unfolded. After all, you saw it happen--from the officer's own camera. The first video seems to clearly show officers shooting a man who was simply walking away. But of course, when you watch the second video, the limitation of the first becomes clear.

It is, therefore, important when judging the value of an image to ask why this photographer took this particular image at this particular time and in this particular way. Was this taken by a journalist who wishes to sell copies? An advocate of some particular cause hoping to make a certain emotional impression? Might this photographer have left something out, and if so, what and to what effect? What context might we be missing when viewing the images alone?

By posing these questions, I don't mean to turn readers into conspiracy theorists who assume that photographers are trying to trick us. The point is that all photographers, videographers, and editors must make decisions about what the viewer will see and how they will see it, simply because of the limited nature of what they are able to show. Furthermore, these choices are rhetorical. They take specific images, at specific times, and for specific reasons. An image is not an unmediated and inherently objective window into truth, but rather a limited and controlled argument. Visual images are certainly very powerful pieces of evidence, but they must be treated as what they are--pieces.

Remember this next time you're presented with an image that is said to be indisputable truth of something.

Monday, July 30, 2012

Digital Divide and Public Outcry

This photo by Rusty Costanza appeared on the front page of the Times-Picayune on July 17th, accompanying an article about a hotel implosion and concerns among residents of the nearby Iberville housing project that particulate dust from the implosion might be hazardous. Interestingly, when readers called and e-mailed author Katy Reckdahl to complain, it wasn't about the implosion itself, but about the photograph. Readers were outraged that the eight-year-old child in the photograph was playing with an iPad, an expensive "luxury" item. The response was enough that Times columnist Jarvis DeBerry wrote about the outcry days later. His column sparked more debate on the issue, as commenters and commentators began to argue about what people on government assistance should be allowed to possess, presumably on the public dime. His column, as I write this, has 451 online comments, in addition to the phone calls and e-mails that spurred the column in the first place. The issue has also gained the attention of more serious writers, academics, and think tanks.

The predominant debate has been whether people who are poor deserve to have things that are not necessities (an issue handled quite well by Jane Devin, whose blog led me to this story in the first place). I think that at the heart of this debate is the feeling, which many in the middle class seem to hold, that because they foot the bill for those on subsidized food and housing, they exert a "sense of ownership" (Baker) over them, or at least over their activities. After all, if it's my money that they are using, I should have a say in how they spend it. Of course, as Devin points out, this sense of ownership and control seems only to extend to poor beneficiaries of government assistance, not to the wealthy (if I buy a Chevy tonight, I still expect to pay for it, despite the fact that "my" tax dollars saved the company). Many in the middle class, then, seem to have what Devin calls "a kind of backwards jealousy" toward the poor. Though the poor may live in 800-square-foot apartments in often dangerous and neglected housing projects, they didn't have to pay [much] for them. Meanwhile, I've had to pay for every bit of my 1700-square-foot house. (That is to say, I'm slowly paying back a mortgage company that trusted me enough to buy a house for me, in no small part because I have a job, for which I am qualified by way of an expensive education paid for by federally subsidized loans, offered to me because Sallie Mae trusted the co-signature of my solidly middle-class, hard-working parents.) Certainly, I shouldn't see these people "playing" with "luxury" items bought with "my" money.

But of course, we're not talking about "fancy rims," "gold teeth," or "Air Jordans" (DeBerry). Instead, the luxury item held by a child (who, I guess, we think is supposed to be working to earn it) is a powerful literacy tool. As DeBerry explains:
The sight of a kid in public housing with an iPad doesn't offend me. Actually it gives me hope. So many poor people have no access to the digital world. They fall behind in school because of it. They miss the opportunity to apply for certain jobs. Yes an iPad is an expensive gadget, but we can't deny its usefulness. As computers go, an iPad comes cheaper than most laptops and desktops.
Most of us in the middle class would not, even for a moment, consider the digital devices in our own lives to be unnecessary luxuries. After all, we balance our checking accounts online and pay bills online. Our cell phones have replaced our landlines and are thus our basic tool of communication. We use our household computers, laptops, and tablets for work, for school, and for important social interactions.
 
In my household, which consists of two adults and two toddlers, we have four laptop computers (only one of which is currently functioning), two smartphones (each of which is more powerful than my family's first DOS-based 286), and an e-reader. And that's not even all. I have two jobs, one with city government and one with a private university. Each of these employers has issued me a laptop computer and provides me with access to desktop computers in several locations.
 
It is, therefore, not a stretch to say that computers are, in fact, an indispensable part of modern life. Certainly, computer technology and the literacies attached to it, are necessary for anyone who is to compete for good jobs--the kind of jobs that allow for upward mobility-- in Information Age America. Beyond this, Palfrey and Gasser (and many others) have stressed the importance of digital communities to those born after about 1980. Indeed, for this generation, computer mediated communication is an important part of identity formation. In other words, with regard to both our professional and personal lives, computer technology is extremely important.
 
With this in mind, arguments that the poor do not deserve and should not have access to what are, for the rest of us, indispensable technologies are especially insidious. In an age when many of our most important and influential literacy tools have shifted to digital media, such an argument is like claiming that children in underperforming inner-city schools ought not have access to books.
 
The assumptions behind such a claim are frightening. If I'm giving the people who make such an argument the benefit of the doubt, I will suspect that they are not thinking of the iPad as a powerful literacy tool, but are probably assuming that it is being used primarily as a gaming device or social networking tool (which may in fact say much about their own habits). These arguers may simply be ignorant about issues of literacy and digital access. Certainly, many people have not accepted that anything beyond what they got in the golden age of their own education is necessary for learning (we didn't have all that in my day, and I turned out okay). They may not purposefully be saying that poor children shouldn't have the opportunities that their own children have.
 
On the other hand, Sam Fulwood, a senior fellow at the Center for American Progress, argues that:
Anyone alarmed by the sight of that photo surely must believe the poor aren’t deserving of anything save the barest of survival necessities—if that much. What else could explain their anger at the sight of an 8-year-old black boy learning about a world beyond his immediate community with an iPad in his hands?
Certainly, these arguments seem to involve the assumption that those living on government assistance deserve only the bare minimum: tattered books in inner-city schools, limited and timed computer access in public libraries, and absolutely no frivolous computer games or social media use. If these are indeed the assumptions of these arguments, then, as Courtney Baker suggests, "This thinking...mistakes education and technology as solely the domain of the entitled."

Certainly, there is a whole history of literacy education suggesting that those in privileged positions (often those who are only tenuously in those positions) tend to block those in the classes below them from access to literacy tools. It is as if there is an instinct among the middle classes to protect themselves from those who would be their competition if they were afforded the same educational opportunities.

For me, this idea opens up new research questions: To what extent might this actually be happening today? Are there those who, whether knowingly or not, actively work to maintain the digital divide? Certainly, there are many who benefit from this access gap. These beneficiaries without a doubt have an interest in maintaining the status quo. So, to what extent, if any, does public policy reflect a desire of the middle class (which makes up the bulk of the voting public) to maintain and benefit from the digital divide?

Tuesday, June 19, 2012

Posthuman Collective Composer

The video above is from one of my son Aodan's favorite DVDs. It's an Animusic DVD that Charissa, my music-teacher wife, brought home for my boys to watch. The Animusic series is a collection of cartoon music videos featuring machinery of various types playing music. Some of the machines are better-mousetrap-style contraptions; others are like giant wind-up toys. The one above features robots on a spaceship playing percussion instruments. As we watched the DVD yesterday (and Aodan drummed along on a toy drum), I remarked to Charissa that I was uncomfortable with the possibilities this particular video explored. I am uncomfortable with the extreme posthuman (or maybe dystopian) theme of machines playing musical instruments without apparent human involvement (this is the kind of conversation I routinely subject my poor wife to). Despite my interest in digital literacies and cultures and in computer mediation of human literacy habits, I want artistic sensibility and aesthetics to belong to humans. Humans make art. They may use technology to do so (of course they do; musical staffs and symbols are a technology, after all), but it is humans who control it in order to turn sound into art.

Then today, a friend and colleague of mine posted a Gizmodo article about a scholarly journal article by Imperial College London researchers, published yesterday in PNAS, called "Evolution of music by public choice." The study traced the evolution of sound into music in order to compare it to models of evolutionary biology. In it, researchers basically (I'm skipping important steps for the sake of space. Go read the article) started with clips of randomized noise and allowed people to rate the clips according to musical quality. The top-rated clips were then combined in a semi-randomized "genetic" style, creating new clips, and the process repeated itself. Over time, the clips began to sound like recognizable beats, then melodies, and finally they became relatively complex and interesting musical strains (I listened to all the published examples. I encourage readers to at least listen to the commentary and overview offered in the Gizmodo article. It's pretty amazing).

The idea of the study was to study consumer input in the evolution of musical aesthetic by isolating it. In other words, there was no human writing the music: no experimental artistry by a person trying to play with old conventions, no producer looking for a hook, no band members wanting a solo. In removing these factors so they could look at data relating only to the issue they were studying, researchers also removed the people that are typically associated with making music--that is, the composer, lyricist, and producer. Instead, these processes were automated.
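For readers curious about the mechanics, the selection-and-recombination loop the researchers describe works roughly like any standard genetic algorithm. The sketch below is my own illustration, not the DarwinTunes code: each "clip" is reduced to a list of numbers standing in for audio parameters, and the `rate` function stands in for the thousands of human listeners who did the actual judging.

```python
import random

def evolve(population, rate, generations=100, mutation_rate=0.05):
    """Illustrative DarwinTunes-style loop (not the researchers' code):
    rate each clip, keep the top half, and breed replacements by
    crossover plus occasional mutation."""
    for _ in range(generations):
        # Sort clips by listener rating, best first
        population.sort(key=rate, reverse=True)
        survivors = population[: len(population) // 2]
        children = []
        while len(survivors) + len(children) < len(population):
            mom, dad = random.sample(survivors, 2)
            # Uniform crossover: each "gene" comes from one parent
            child = [random.choice(pair) for pair in zip(mom, dad)]
            # Random mutation keeps fresh variation in the pool
            child = [g if random.random() > mutation_rate else random.random()
                     for g in child]
            children.append(child)
        population = survivors + children
    return population
```

With any toy rating function plugged in for the human voters, the same loop steadily pushes the population toward whatever the raters reward, which is the phenomenon the study isolates.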

Historically, music is considered a humanistic art form, not because it touches a human audience, but because it is created by a human composer. This music, then, is remarkable because it doesn't have that singular person we typically associate with musical composition. But I think the Gizmodo article oversimplifies when it says that "it's possible for digital music to evolve by itself, without creative input from a composer." It is certainly the case that there is not a single human composer. By getting rid of this figure, this experiment abolishes the concept of the genius artist, individually achieving a transcendent artistic artifact. Instead, there is something decidedly posthuman in the composition of this piece.

When I speak of the posthuman here (a term which involves a kind of spectrum of thought), I'm thinking along the lines of Donna Haraway's "cyborg." This involves the idea that the separation between our selves and the objects we make/use is, as it turns out, rather blurry. As we evolve our instruments, those instruments evolve us as well (this idea of "man as ongoing process" is the central theme of posthumanist theories). It is this interplay between our selves and our technology that makes us "natural born cyborgs" (Clark). We are, by our nature, part human, part tool.

The DarwinTunes are composed through the interplay between tool (a computer designed by humans running an algorithm programmed by human researchers) and active human beings (the people voting on which clips move on).

Not only is the music produced by DarwinTunes posthuman, but it is also an example of another important element of digital culture in that the human half of the composition process is completely collaborative. Just as the singular artistic genius is replaced by a computer program, so also is he replaced by, not one musician, but thousands of consumers, all of whom bring to the process their own cultural histories (in the form of chord progressions, dissonances, etc. which seem "natural"), personal aesthetic sensibilities, and so on.

Perhaps, then, the most surprising thing about the DarwinTunes is that, after about 500 generations, they start to sound pretty good. Pieces of music composed, not by an individual or small collaboration of talented artists, but by a process of negotiation between a piece of technology and collective intelligence, may, after enough generations, turn out to be as complex and sophisticated as any experimental piece by Philip Glass. This brings into focus one of the fundamental questions we begin to ask when studying digital culture: Just how important is the "expert/genius/author" after all?

Saturday, June 16, 2012

Being Really Useful: A (sort-of) tongue-in-cheek analysis of the Thomas and Friends series

As a father of twin sons, I have been subjected to hours upon hours of Thomas and Friends episodes. My sons also have Thomas the Tank Engine models (two of which actually propel themselves and pull wooden train cars), and Thomas blankets. In fact, my wife discovered a potty training breakthrough for our son Beckett by buying him Thomas underwear. She explained to him, "Thomas is our friend. We don't pee on our friends." After this, Beckett, who is innately logical, tried very hard not to wet his pants and, when he had an accident, he would sulk and say "we don't pee on our friends."

Their Thomas craze has subsided some in favor of Veggie Tales and Elmo, but during the height of their Thomas fandom, when we were watching the show constantly, I would make fun of the phrase "really useful engine," which is repeated ad nauseam in the Thomas and Friends series. I, somewhat jokingly, insisted that the show was designed to brainwash children into accepting the ethos of being "really useful" as sacrosanct. Of course, I made these assertions in jest, mostly because there is nothing so particularly insidious about teaching children to be useful.

Then, yesterday, I was picking up the horrifying mess my boys hourly make when I came across their "Thomas & His Friends Help Out" DVD. While I walked the DVD back to the living room (I found it in the bathroom, of course), I haphazardly read the back of the sleeve. The description of the stories contains a quote ascribed to the railway director, Sir Topham Hatt: "Helping out is one of the best ways to show that you're a Really Useful Engine." I instantly noticed the divine capitalization used in the phrase Really Useful Engine. This capitalization, usually reserved in modern English for proper nouns and pronouns for the deity, suggests that the ethos of being Really Useful is, in a very interesting way, treated as sacred within the context of these stories. Since this is the case, I thought, perhaps this ethic deserves a little attention. What does it mean to be Really Useful? What makes this ethos culturally appropriate, and what cultural systems does it privilege?

It may be tempting to think it silly to analyze a children's television program, that it is, on its face, an over-analysis, a pedantic intellectual exercise. I admit that this is true. I am, in fact, writing this blog mostly in jest. But, I would also argue that our children's television shows are indeed worthy of scholarly analysis. This is because, considering how much time our children spend watching television, educational and otherwise, it's reasonable to argue that these shows are one of the important activities through which we enculturate our children. In a way that probably ought to make us uncomfortable, it makes perfect sense that the nation that produced the ghastly Struwwelpeter stories also produced the Holocaust (see Katz, Steven B. "The Ethic of Expediency: Classical Rhetoric, Technology, and the Holocaust." College English 54.3. 1992). This is because a society's cultural productions, especially those considered didactic, inculcate its members into the society's organization, ethics, and systems. So, what ethics and what systems are being privileged when we teach our children that it's important to be "Really Useful Engines?"

First of all, understand that within the context of the Thomas the Tank Engine TV shows (I confess, I haven't looked at the books, so I cannot speak for them), the goal of every character is to be regarded as "Really Useful," especially by Sir Topham Hatt, the Director of the Railroad. It is, indeed, their only form of payment. The phrase "Really Useful" appears in nearly every episode and is, in fact, part of the theme song of the show. An engine becomes Really Useful in the ways one might expect: by being on time, by working hard, and by performing well at difficult, work-centered tasks.

This is a decidedly industrial ethic. Work hard, and you shall be regarded as Really Useful. And, of course, to be called Really Useful is the ultimate compliment in these stories. Thomas and Friends, then, privileges an ethos that is undoubtedly industrial rather than, say, one that is relational. In the quote I gave from Topham Hatt earlier in this post, the reader may notice that one "help[s] out" (relational) in order to show that we are "Really Useful" (industrial). This quote, which succinctly provides the moral of these stories, seems to privilege the industrial over the relational. We do the work of relationship (helping out) in order to become industrious (really useful). The relational serves the purposes of the industrial, rather than the other way around.

This seems to fit well with the cultural ethic of the industrialized West, specifically the UK and the US, where these shows are produced. In many Eastern cultures (especially Middle Eastern), which are more relational, one might expect an inversion of this moral. One tries to be "really useful" (industrious) in order to "help out" (foster relationships and cohesion). In the West, we tend to privilege industriousness in an extraordinary way. We place great personal value on industrial value. That is to say, we often judge others by their ability to contribute through work. We introduce ourselves to strangers by telling them what we "do," which always means what we do to make money. Indeed, a good puritan work ethic is part of the fabric of our society, a society that counts on its members' ability and willingness to work. Industriousness is an important ethic with regard to the maintenance of the military-industrial complex, which is, after all, the core of a capitalist society.

What we learn from this (which will not really be a surprise) is that we begin inculcating cultural values at a very early age, and through seemingly innocuous, even wholesome, media. As an individualistic, capitalist society, we teach our children from a very early age the importance of individual achievement and industriousness. We want them to want to be Really Useful, because our society requires useful members--that is, members who do good work. If these lessons aren't exactly purposeful (I abjure the thought that Reverend Awdry was purposefully brainwashing children into good little corporate slaves), it is only because these ethics are so naturalized that these lessons happen automatically any time we write something we believe to be educational. These lessons just seem right and good in and of themselves. The ethics they teach are, for better or worse, sacrosanct.


I hope you've found this analysis Really Useful. Even better if you're a publisher who finds it industrious.

Monday, April 23, 2012

Ron Artest and the Language of [non]Apology

Yesterday, perennial basketball villain and recidivist Ron Artest (who I refuse to call Metta World Peace) made the bad kind of news again with his flagrant foul against OKC Thunder guard James Harden, a vicious and unprovoked elbow to the head of a player not even involved in the play. Artest, who has attempted to change his image in part with a ridiculous name change, issued a pretty shallow apology in the locker room. He's also offered some further apologies/excuses via twitter since the incident (how people apologize in 140 characters or less is a subject worthy of study in its own right).

As a language and rhetoric person, I find Artest's apology, as a discursive act, very interesting. As a Thunder fan and Artest hater, I find it rather hollow. Here is the apology:

During that play, you know, I just dunked on, you know, Durant and Ibaka, and I got real emotional, real excited and it was unfortunate that James, you know, had to get hit, you know, with an unintentional elbow. And I hope he’s okay, you know. The Thunder, they’re playing for a championship this year, you know, so I really hope that he’s okay. And, you know, I apologize to the Thunder, you know, and to James Harden. It was such a great game, and it was unfortunate, you know. So much emotion was going, going on at that time so. . .That’s it for today [smiles, apparently having been directed not to answer any questions].
There are some interesting rhetorical moves embedded in this statement--moves which rather remind me of the spontaneous utterances (a legal term) offered by people I've arrested as a police officer. Here's what I notice.

1) "I got real emotional, real excited."
Here, Artest offers the excuse for his actions. His implicit argument here is that the incident was simply an expression of Artest's emotions, which are of course universal and involuntary human reactions. Again, this mirrors the excuses offered up by many who have just committed crimes and who wish to minimize their culpability by blaming their own emotions with statements like "he was talking noise and I just lost control." I'd be interested in a study of whether such use of emotion as excuse is a feature of the language of apology in other cultures. I have a sense that such moves are made possible by our culture's privileging of emotion.

Our culture views emotion as central to decision making, and in fact we assume that such a view is natural common sense. We base our decisions about who to marry (and divorce) on how we "feel" about another person. We judge whether or not we are in the right career based on how the job makes us "feel," and we assume that the "right" job carries with it the intrinsic reward of feeling good during and after the work. It is, then, not a surprise that appeals to emotion appear in our expressions of apology and excuse as well. Somehow, we are bad people if our trespasses are logic based (think of the concept of "premeditation"). But if they are emotion based, we are just people who make mistakes. We didn't behave out of evil, but rather, we just over-reacted to our own emotions. Artest implies this here as well. He minimizes what he has done by making sure we understand that he did not draw up a plan on the bench to strike Harden and take out the sixth-man of the year. Rather, his action was an unplanned emotional outburst--a natural reaction to just having out-performed the likely league MVP (KD) and one of the best interior defenders in the league (Ibaka).

2) "It was unfortunate that James, you know, had to get hit. . ."
There are a couple things going on in this sentence. One is embedded in the word "unfortunate." Here, Artest transfers agency away from himself and onto the concept of fortune--fate, destiny, Moira. The elbow wasn't the action of a person but rather of plain old bad luck.

The second salient feature of this sentence is the use of passive voice. That is, in the structure of this sentence, no one does any hitting. Harden simply was hit. In fact, he "had to get" hit. Artest conspicuously leaves himself out of the action of this sentence. My sister Tina, a Spanish translator for an insurance company with a highly sophisticated and nuanced understanding of the language, once talked about her occasional frustration in dealing with insurance claims based on a cultural reluctance in many Spanish-speaking countries to admit wrongdoing (as we all know, in auto-insurance claims, "fault" is a big deal after an accident). In Spanish, she says, it would be very uncommon for a person to say "I did such and such," but rather "such and such was done [by me]." So, the syntax of the language is structured in a way that actually forces speakers to distance themselves from action.

Since she told me about this feature, I've been more sensitive to its occurrence in English, a language that privileges agency and active-voice verb constructions. In English, if someone leaves themselves out of a sentence, it is conspicuous and telling. And this is exactly what Artest does here. No one hit Harden. Rather, it is simply something that happened. In fact, it is something that "had to" happen--again, an act of fortune, or misfortune, as the case may be.

And all this happens even before he finally simply claims that it was "unintentional." All the features I've pointed out here function to distance Artest from his own actions. He minimizes his own agency, deferring instead to emotion and to fortune (twice each, as it turns out). It's an "apology" that, again, shares the features of confessions made by criminals who make admissions with their defense still in mind. So, to those of us who were already critical of Artest, his statement hardly seems like an apology at all. These are not the words of a contrite man who had been trying to change his image only to have this setback. Rather, they are the words of a man who had really not changed that much at all.

Wednesday, March 28, 2012

The Joyful/Happy Dichotomy and Semantic Theology

In our Wednesday night Bible class, we are looking at a video series based on the Dallas Willard book The Divine Conspiracy. In the series, Willard is "interviewed" about his concepts by author and preacher John Ortberg. At one point in the conversation, Ortberg asks Willard about a statement in his book in which Willard describes looking at a beautiful beach in South Africa, realizing that God sees every beautiful thing in his creation at once, and suddenly feeling very happy for God. Ortberg points out that we don't often think about God as "happy," maybe joyful, but not happy. Willard replies by saying "what else would you expect joy to look like?" He goes on to suggest that perhaps joy and happiness aren't the same thing (a common modern Christian teaching) but that God is certainly happy with what he has made, even if there are parts of it (humans) that don't behave as he would like them to.

Of course, this opened the door for me to rant about how much I hate the teaching that "joy and happiness are not the same thing." We ran out of time and I didn't get to explain it (which I intend to do here), so I opened a can of worms that we didn't get to explore. I only got to explain that I can't stand this oft repeated Christian phraseology because it doesn't mean anything. These two words are in fact synonymous and our separation of "joy" and "happiness" into separate concepts is a semantic trick. Here's why I think so.

In popular Christianity, we have developed and taught the idea that joy and happiness are separate things. We base this on the [well grounded] assumption that "happiness" is an emotion and thus contingent upon circumstance. This is true, as far as I can tell. The issue, then, is what we do with the concept of joy. What is our working definition of this concept? The answer to the question "what is joy?" is always much more nebulous. It's apparently something more permanent, based on our relationship with God, and much deeper than fickle emotions. Thus, we can have "joy" even when we are not "happy." I am not at all satisfied with this definition. It's meaningless. And so, in my opinion, is a realistic distinction between these words. So where does such a distinction come from?

Our problem with equating "joy" with "happiness," and our subsequent desire to invent a distinction between the terms, comes from our cognitive dissonance regarding verses like James 1:2. Here, James writes to the Jewish Christians scattered throughout the empire and tells them, "consider it pure joy, my brothers, whenever you face trials of many kinds, because you know the testing of your faith develops perseverance." Paul writes some similar things in his letters. We look at verses like this one, where we are being told to be joyful about trials, mistreatment, and suffering, and such commandments don't make sense to us. How could God expect us to be "happy" about suffering? So, rather than acknowledge that God asks us to do things which are hard, we invent this dichotomy. He tells us, not to be "happy," but to have "joy," and these are different things--even if we can't quite really describe how they're different.

In fact, this dichotomous understanding of these two concepts does not extend into any other context. That is to say, in none of our conversations, save for this topic alone, do we use these two words as if they are different things. When we refer to a joyous occasion, we are always talking about a happy occasion. We mean a wedding, not a funeral. When we think of "making a joyful noise," no one thinks about Barber's Adagio for Strings. Even scripture does not bear out this separation of concepts. John compares his feelings about the beginning of Jesus's ministry to a friend who is "full of joy when he hears the bridegroom's voice" (John 3.29). When the pregnant Mary enters Elizabeth's house, an unborn John "leaps for joy" (Luke 1.44). The Book of Acts describes Samaria as being full of "great joy" (8.8) at the healing of paralytics and cripples. All of these examples describe happiness. Such examples go on and on. So, even within the New Testament, happiness and joy are usually synonymous.

In fact, if we think about the people who we think of as being "filled with joy," we think of the people who always have a smile or encouraging word--in other words, people who appear to be happy, even when bad things are going on in their lives. So, happiness may not be joy, necessarily, but happiness is indeed the mark of joy.

The explanation for James's teaching, to me, is that indeed Christians can be happy, even in the midst of tribulation. Certainly, this seems unnatural and maybe even unfair. But remember that everything about Christianity asks us to behave differently than how the world would expect us to. But how can such a thing be possible? Emotions are natural reactions, and cannot therefore be inherently wrong. How can we be commanded to be "happy" about terrible things?

I think the answer, perhaps, lies in the person of Paul, a person who often talks about joy in hardship. In the Second Letter to the Corinthian church, Paul says to the church:
I have great confidence in you; I take great pride in you. I am greatly encouraged: in all our troubles, my joy knows no bounds. For when we came into Macedonia, this body of ours had no rest, but we were harassed at every turn--conflicts on the outside, fears within. But God, who comforts the downcast, comforted us by the coming of Titus, and not only by his coming but also by the comfort you had given him. He told us about your longing for me, your deep sorrow, your ardent concern for me, so that my joy was greater than ever. (2 Cor 7.4-7)
When Paul tells them that "in all our troubles, my joy knows no bounds," I do not think he intends to say that he is happy about his troubles, but I do think he intends to say that even during these troubles he remains happy. He is indeed referring to his emotional state. He goes on to explain how such a thing can be possible. Certainly, nothing about being harassed, being involved in conflicts, and being afraid makes one "happy." But when the church at Corinth heard about these problems, their love for him spurred them into action. First they loved Titus. Then they sent Titus with ovations of love for Paul. Paul learned that they hurt for him, longed for him, loved him. And their expressions of love made his "joy greater than ever." I do believe that when Paul refers to "joy" here, he is expressing a feeling of happiness--the emotional stuff that's supposed to be separate from and different than "joy." See, the trials themselves did not make Paul happy, but in the midst of these trials, Paul found something to be happy about.

The reason we as Christians should be able to find happiness even during struggles is that we are to see the world in fundamentally different ways than the world does. So we can indeed be "happy" when James tells us to be joyful in our trials, because these trials mean that God is planning to use us for something. And that's pretty cool--something to be actually emotionally moved by. Something to be happy about. And the love that our brothers and sisters show for us in our struggles is something to be happy about. Certainly, responding to trials with happiness is unnatural. Such responses take training. And I don't think this training comes from giving ourselves permission to be unhappy even while giving ourselves credit for being joyful. Instead, such training comes through intentionally looking at our trials and searching for what God is doing through them.

Inasmuch as we often use the word "joy" to mean something more like "peace" when we espouse the teaching I'm critiquing here, I am comfortable in saying that there may indeed be a slight difference in these concepts of "happiness" and "joy." When I say that I hate this teaching and that I think it's nonsense, I am admittedly being a bit hyperbolical. But it is true, and vitally important to understand, that if these are different things, then one is necessarily the fruit of the other. Those who possess joy will be happy, and they will often show happiness when the world would not expect them to, when they would in fact have permission to be unhappy. So, if joy and happiness are different, they nevertheless belong together.

Monday, February 27, 2012

Because I'm (sort of) a Rhetorician Who Was Once a Music Major

Several weeks ago, this digital poster made the rounds especially among my artsy, hipster friends. Of course, graduate school ruined me so that I can't look at anything without performing a miniature Toulmin analysis, so I quickly analyzed this poster, questioned its assumptions, and filed it away in my brain to deal with it later. Well, it's later. I won't do a line-by-line Toulmin analysis here, because it wouldn't be particularly interesting, but I will address the implicit argument of this digital poster and the assumptions on which such an argument relies.

The argument itself is relatively straightforward. The poster first shows lyrics from the song "The Way You Look Tonight," which, if the adjacent photo denotes authorship, the poster mistakenly attributes to Frank Sinatra (the song was written by Jerome Kern and Dorothy Fields and originally performed by Fred Astaire for the 1936 film Swing Time. Sinatra recorded a version in 1964, by which time the song was considered a standard). Underneath these lyrics, the poster lists the much less interesting, highly reductive (dare I say, stupid) lyrics of Justin Bieber's "Baby." Underneath these lyrics, in the popular style of the (de)motivational poster, is the line, "Music, w..what HAPPENED!?"

The argument here is easy to discern. Music (judged by lyrics) was at one time rich, complex, and good. Now, it's simplistic, stupid, and bad.

This claim presents as its grounds the lyrics of these two songs. Of course, this is a digital poster, so it has little time for a nuanced portrayal of the music of these two eras (either the 30s or the 60s, depending on whether the poster intends the original song or the Sinatra remake, and the early 21st century). A complete historical picture of these two eras obviously cannot fit onto a digital poster. The claim of this poster, then, relies on the assumption that each of these songs is representative of its era. The quality of each era can, therefore, be judged based on the quality of each of these songs.

This assumption is relatively easy to attack. After all, it relies on the related assumption that ridiculous lyrics, like those of the Bieber song, did not appear in songs of the earlier era. Such a claim disregards lyrics like these from the Johnny Mercer song "I'm an Old Cowhand" (also remade by Sinatra):
I know all the songs that the cowboys know
'bout the big corral where the dogies go
'cause I learned them all on the rad-ee-o
Hey, yippie-yi-yo-ki-yay
Yippie-yi-yo-ki-yay

Furthermore, this assumption suggests that our current era does not include lyrics of more sophistication than the Bieber song, thus discounting lyrics like those of Anna Nalick:
Cause you can't jump the track, we're like cars on a cable
Life's like an hourglass glued to the table
No one can find the rewind button, girl
So cradle your head in your hands

In other words, in order to support the claim of this poster, the author relies on a biased sample, a logical fallacy in which the arguer draws generalizations from evidence that has been purposefully selected to support them.

Furthermore, claims like the one made by this digital poster tend to discount one very important aspect of history--that is, history will ultimately remember the things that were the best of the era. Every church in the Baroque era large enough to hire one had a church composer and organist. But they didn't all become Johann Sebastian Bach. Indeed, an extraordinary majority of them, though they may have been popular at the time, have been forgotten by history. We remember Bach because he was the best of the era. The same has been true of the music of the 1930s and will be true of current music as well. It is, therefore, inappropriate to compare the relatively banal lyrics of one particular Justin Bieber song to the songs that have survived because they held some quality that made them classic, and use this comparison as evidence of some kind of devolution of music.

Claims like the one made in this poster ultimately belong to the narrative of the "golden age." Such narratives are often presented for rhetorical reasons by folks who wish to show that their own generation had it right and that "kids these days" have it all wrong, or by people who wish to show that they are aware that things used to be better and that their awareness of this fact makes them more sophisticated than others of their own generation. In other words, there is an element of pop-culture elitism inherent in these arguments. To the careful observer, however, these arguments do not show the sophistication of the arguer, but rather, the arguer's historical ignorance, or at the very least, the arguer's abuse of rhetorical strategy.

Friday, February 10, 2012

Parenting 2.0



In case you haven't seen it, the video above was posted on February 8th by an angry father in an attempt to discipline his daughter for posting a nasty "letter" to her parents on Facebook, and his video has quickly gone viral. A dozen or so of my friends have shared it on Facebook, it's been blogged half to death, and it even appeared on my MSN home page for much of the day. It features a father reading his daughter's ranting letter (which she thought she had blocked from her family and church), then shooting the laptop computer the family owns for her to use.

As the video has gone viral, many have congratulated this father for punishing an ungrateful child, while others have been vocally angry about his public punishment. I choose, here, to theorize about the situation, and what it means to parent in the digital age (and in digital format).

Though most (well, all) of my friends who have shared this video were congratulatory toward this father and his innovative punishment, I was initially worried (though, to disclaim, I'll say: keep reading). I have serious concerns about the highly public nature of this father's punishment.

When young people post videos about each other, we classify it as cyber-bullying. We know that one of the aspects of cyber-bullying that makes it so dangerous is its highly public nature. Information placed on the internet can spread very quickly, and the written/recorded nature of this information gives it an appearance of permanence. And though, realistically, what is said about or to another person on the internet may be quickly lost in the fast-paced, instantaneous flow of information on the web, for an adolescent, it feels as though everyone has seen it and it will never go away.

For these reasons, I have grave concerns over the appropriateness of this father disciplining his daughter in such a public way. As adolescents, we all said things and behaved in ways that we would later realize were immature. In fact, the things that she says in her letter are exactly the things many of us were in trouble for saying when we were adolescents. And, looking back, we are embarrassed by our own behavior. In this case, this daughter's embarrassing mistakes were made incredibly public. Millions of people she does not know are calling her ungrateful, citing her as an example of what's wrong with "kids these days" and congratulating her father for disciplining her.

Of course, embarrassment can be a powerful disciplinary tool. My dad used to tell us a story about how his father punished him when he was caught shoplifting by shaving his head. He hated it, and his girlfriend, who loved his longish flowing hair, broke up with him over it. Of course, we have all seen the pictures of kids whose parents make them hold signs proclaiming themselves as shoplifters. My favorite is of a kid who "wants to go to prison to be with daddy."

In fact, in a response to people questioning why he would punish his daughter so publicly, the father said that he was raised this way:
If I did something embarrassing to my parents in public (such as a grocery store) I got my tail tore up right there in front of God and everyone, right there in the store.

The difference, though, is that these describe very local forms of embarrassment. Most of the people who saw my dad's haircut knew him. In this case, this daughter's punishment is on display to millions. For millions she has become an example of a typical, ungrateful teenager. The folks sharing links to this video, arguing about it in comments sections and on blogs (like this one. . .) do not know this girl--they do not know her other weaknesses or her strengths. Instead, she is a dehumanized and disembodied "example" of what's wrong with this generation.

Of course, to disclaim this entire conversation, she is reportedly good-natured about the whole situation. Her father was surprised that the video went viral (he expected only her facebook friends to see it) and, when it did, he had to talk with her about the attention it had received and what they should expect now. According to her father, they have read many of the responses to the video, laughing at the predictions that she would "commit suicide, commit a gun-related crime, become a drug addict, drop out of school, get pregnant on purpose, and become a stripper because she’s too emotionally damaged now to be a productive member of society." Her surprisingly good attitude probably says less about the validity or invalidity of this form of punishment than it does about the relationship between this man and his daughter.

So, a case like this one ought to cause us to begin to theorize what it means to parent in the digital age. Since digital environments have become as important in our lives as physical environments, the policing of rules of etiquette, social norms, and appropriate behavior is also important. This means that parenting will happen in, or at least regarding, these environments. So how do we do this? There has been a great deal of conversation about what we must teach our children about the internet, but little thought about what and whether we teach them on the internet. Where are the lines of appropriate behavior when it comes to parenting and interacting with our children in digital spaces? Is the digital world a place that offers us new innovative ways to parent, or just another dimension in which to screw them up?

Monday, January 30, 2012

A Cop Reviews a Cop Play

It rained all night in Oklahoma City the night after Thanksgiving. This was bad news to me because my partner and I had agreed to do a ride-along for Ben Hall and Mike Waugh, two local actors performing in Carpenter Square's production of "A Steady Rain," a play about two Chicago cops. We regretfully warned Ben and Mike that rain usually slows us down. I had hoped we would spend the shift beating the bushes, talking to the prostitutes and pimps who frequent our sector, and looking into the dilapidated low-rent housing of the inner city (in neighborhoods remarkably similar to south Chicago). Along the way, I thought we would have plenty of time to tell Mike and Ben about both the triumphs and the frustrations of big city police work. So I was disappointed that the rain threatened to derail that.

We did manage to show the guys a few things. Mike (who rode with me) got to talk to a couple of meth addicts and, through talking about, and then hearing references to, a notable ghetto figure called "Mama K," he got a picture of the interesting close-knit networking of the city's underbelly. He also watched a creative arrest. Perhaps most importantly, since the play takes place in a summer when it rained non-stop in Chicago, the guys got to think about what unrelenting rain might be like for guys who spend entire nights in a car.

I don't know how much the ride-alongs helped Ben and Mike. But I hope they were able to take away some lessons about how cops live, think, and act in a world defined by grey. I've always thought that the hallmark of inner-city police work is the often fuzzy lines between the good guys and bad guys, legal and illegal, aggressive and abusive. The officer's primary struggle comes in negotiating this grey. And whether one ends up as a good cop or a bad one depends in large part on where he ends up when he's passed through a grey deeper than the rainiest Chicago twilight.

It is this grey that Keith Huff captures so well in his very rich script. Both characters, Joey and Denny, are quite good and also very bad. As Linda McDonald, the director and my former teacher, mentor, and friend, explained to me after I saw the show last Friday, "both characters are likable in their own way. But one of them loses his soul and the other finds it."

Indeed, as a city cop myself (a good one, I hope), I saw myself in both these characters. I appreciated Joey's (Ben Hall) heart. He is a cop who wants to do it right, despite a crippling addiction. And when Denny (Mike Waugh) railed against the seemingly illogical unfairness of the police department, I found it hard to keep from shouting out loud, "damn right!" even when I knew that he had brought his situation on himself. These characters were both good cops and bad cops--they were also both incredibly human characters whose stories were tragic and heartfelt. Of course, as anyone would when watching a play about their profession, I thought the script got some things wrong. But it rose well above the typical cop-movie stereotypes that Huff intentionally subverts.

Mike and Ben handled these characters with remarkable aplomb. Ben's narrative delivery is often lyrical and always empathetic. Mike (whom I had seen in The Goat, or Who Is Sylvia?) was superb playing Denny, a character who can be hateful, but whom the audience must ultimately love. They also work well together, a difficult task when one considers that they must play characters who have been "best friends since kindergarten."

I am a former professional actor, failed playwright, and professional police officer and, thus, likely the most difficult audience this play could have had. And I was very impressed. I empathized and commiserated with these characters, and I have spent the three days since thinking about the script. And that's what I like in a play.

"A Steady Rain" runs at CST through February 4th. Go see it.

Tuesday, January 03, 2012

What Toothbrushes Taught Me About Ways of Seeing


Since our boys were brand new, we have organized things by color in order to keep straight what belongs to which kid. The phrase "Blue is for Beckett" has been a mantra in our house for the last two and a half years. When they were newborns, we would dress Beckett in blue so that others knew which kid was which. Though we have pretty much dropped color-coding their clothing (Beckett, as it turns out, likes much brighter colors), we still use this system to organize items that they should not share.

Beckett's milk cup is still blue because Aodan needs lactose-free milk, so the coloring helps us keep the two types of milk separate. We also keep their toothbrushes separated by color. Blue is for Beckett. But we recently learned that even this simple system is not foolproof. Even something as seemingly straightforward as a color scheme is, as it turns out, subject to interpretation.

I learned this when Charissa and I were both in the bathroom while I was brushing the boys' teeth. Charissa said something like, "Oh, you switched their toothbrushes. I guess it doesn't matter." Of course, I hadn't. I was using the blue toothbrush to brush Beckett's teeth, so I replied, "blue is for Beckett." I then learned that she thought the other toothbrush was the blue one. We had been using opposite toothbrushes the entire time we've had this set. She is not, by the way, color blind. Nor am I. But we saw these two toothbrushes very differently, obviously.

Every geeky kid with an existential streak will remember the moment when he began to wonder whether or not people really do see colors the same way. What if what I see as blue, you see as red? We would never know that what we saw was different because we would both always call what we were seeing blue. This is the kind of question that is interesting from a theoretical perspective but that really doesn't matter much. As long as we consistently call that color blue, it doesn't really matter what it looks like to us, we can still communicate about the color consistently. But what was happening here was something different.

When I asked Charissa what she saw when she looked at these toothbrushes, she said that the one on the left was a green toothbrush with blue trim, and the one on the right was a blue toothbrush with purple trim. This is because the bases and the very tops of these toothbrushes are green and blue, respectively.

But I see these completely differently. I see the long necks on these toothbrushes and the fat parts on the handles, so I see them as blue (on the left) and purple (on the right). So, though we both see the same colors, we define which color is predominant, and thus definitive, in different ways.

This hints at a fundamental difference in the way my wife and I see things. What I see as trim, or background noise, she sees as defining characteristics. From my perspective, it seems like she sees a negative image of the same world I see. What's unimportant to me is definitive to her.

Of course, to what extent our way of seeing toothbrushes is analogous to our ways of seeing the rest of the world is not settled. But the lesson here is still an interesting one. My wife and I, despite sharing our lives together, and despite the fact that we agree on most things and have an extraordinary number of things in common, see the world through different eyes, and may perceive it completely differently.

And ultimately, we name things according to how we see them. We see things according to how we define their characteristics. So, which characteristics we see as important--as definitive--has everything to do with how we name the world.