Everything Rhetorical and the Rhetoric of Everything

Rhetoric, Composition, Politics, Society, Culture, Etc.

Tuesday, March 15, 2011

Context for Final Project


In 1986, Stephen A. Bernhardt published an article in College Composition and Communication titled “Seeing the Text,” calling for composition teachers to recognize the visual nature of texts and to begin teaching students to consider the rhetoric of visual design in the texts they produce. Since then, an ever-growing number of researchers have urged the incorporation of visual rhetoric and multimodal composition into classrooms (Clark, Hill, Lanham, Selfe). Ignoring for the moment arguments over the need to teach students to create and edit video or audio texts, or web-sites, wikis, and blogs, it is becoming increasingly clear that Bernhardt’s vision of a future in which, “[i]nfluenced especially by the growth of electronic media, strategies of rhetorical organization will move increasingly toward visual patterns presented on screens and interpreted through visual as well as verbal syntax,” has been realized (103–5). Indeed, it is hard to argue with Charles A. Hill when he notes that “it would be difficult to deny the importance of electronic and other visual media in today’s society” (107). Today a large percentage of the texts produced outside our classrooms, if not most of them, require the use of at least some visual rhetoric, and, in fact, texts have always been visual (Elkins 91, Mitchell 5–6).
Bernhardt and others (Trimbur; Wysocki, “Opening New Media,” “What Should Be”) have long urged us to recognize the materiality of texts. Hill asserts that it is “misguided” to think “that we could ever draw a distinct line between the visual and the verbal, or that concentrating on one can or should require ignoring the other” (109). Despite the fact that all texts communicate visually as well as symbolically through language, many composition classrooms still require students to produce texts that ignore visual design. Bernhardt’s 1986 criticism that “[i]nstead of helping students learn to analyze a situation and determine an appropriate form, given a certain audience and purpose, many writing assignments merely exercise the same sort of writing week after week, introducing only topical variation” is as true now as it was 25 years ago. Students need to be able to think rhetorically about design, but the texts they are asked to write in composition courses, which, as Hill notes, may be “the only real exposure to rhetorical theory and principles that [students] will have,” mostly ignore visual rhetoric. Some composition textbooks, if they include visual rhetoric at all, still treat the visual only as something to be analyzed and then written about in purely alphabetic texts (Ramage, Bean, & Johnson; Kennedy, Kennedy, & Muth).
This is a case where our pedagogy may be completely and unnecessarily out of synch with the needs of our students. Perhaps Hill says it best when he argues:
What many people fail to understand is that visual elements are powerful and essential features of almost any written text. Even when all of the propositional content is expressed in verbal form, the design of the page or the screen on which the text resides, the relative location and proximity of textual elements, and even the font used can not only enhance readability, but be part of the message that is conveyed. Overall, the visual aspects of writing can have as much to do with the effectiveness of one’s message as choosing an appropriate tone or sentence structure. (122)
The material nature of all texts and the effects of visual elements on rhetoric are an important part of composition. While much work in new media has focused on digital texts, video, and audio (see Selfe), the need to include visual rhetoric in our classes goes beyond arguments over the need to incorporate new assignments that ask students to compose videos, webpages, audio texts, etc. Scholars like Bernhardt, Hill, and Wysocki have demonstrated the need to pay attention to the visual rhetoric and material nature of the texts our students already produce.
As Wysocki argues, the “results of digitality ought to encourage us to consider not only the potentialities of material choices for digital texts but for any text we make, and that we ought to use the range of choices digital technologies seem to give us” (“Opening New Media” 10). Some of those digital technologies are the ones students now use to compose nearly every text they produce. Our students are using computers to write, so why are they limited to creating texts that might as well have been written on a typewriter? Hill notes that “general-education writing courses pay almost no attention to issues of page design. By specifying a particular format and font…in their assignments, instructors control issues of design, and therefore pretend that these issues don’t matter” (122). Rather than ignoring design and the visual, material reality of texts, our students need us to help them learn to make effective design choices when composing, including the incorporation of visual elements.
Hill correctly notes that “by leaving design elements a nonissue in our courses, we leave students unprepared to analyze visual elements as readers and to use them effectively as writers” (122). He goes on to note that this situation is not only unacceptable but also unnecessary: “[n]ow that digital technologies have given all writers the ability to easily manipulate design elements in their texts, it is past time for teachers of writing to begin to pay serious attention to the communicative and rhetorical aspects of page and screen design” (122–3). It is with the intent of helping us and our students begin to give this “serious attention” to visual rhetoric that I prepared the following video tutorial. While many attempts to include new media in the classroom are met with constraints on access to technology and ignorance regarding software, incorporating visual rhetoric into the classroom can be done with only some instruction in visual rhetoric and design, the assignments students already complete, and a little training with the software most of us already use to create our texts. The following tutorial demonstrates how to turn the word-processing software Microsoft Word, available on nearly every computer in every computer lab on every campus and on most home computers, into a powerful design tool. This should provide a helpful tool for both teachers and students as we begin to pay more attention to the visual rhetoric of design and the interfaces (Wysocki, “What Should Be”) of the texts students already produce.

Thursday, March 3, 2011

Story Board


Set up a wiki page with scholarly conversation about integrating new media instruction into composition classrooms:

Discuss:
·      access
·      technology
·      software
·      teacher resistance
·      using visual design capabilities to teach students to start seeing texts (Bernhardt, Wysocki) and designing the assignments they are already producing instead of just writing them
·      How this small shift toward document design represents an available step towards teaching new media because many of the same principles students learn by designing for print apply to designing in other modes and for digital media

Create the video:

Teaching new media with old texts: enabling student designers

Establishing Shot: Me working at my iMac

Turn to camera; open with greeting and reference to the need for simple and accessible ways to make the assignments teachers already use sync with principles of new media and our digital environment. Discuss how nearly all texts are composed digitally, but teachers and students still treat computers like typewriters when computers allow for texts to be designed rather than simply typed.

With minimal effort, we can help students learn to think rhetorically about the materiality of their texts.

What a text looks like affects its rhetoric just as much as what it says, and the standard school paper doesn’t say good things. (Discuss interfaces: Wysocki.) Interfaces, like genres or like texts themselves, establish relationships and subjectivities: the school paper interface establishes students as perpetually erring trainees whose work is prepared for correction, and teachers as correctors and evaluators. How can we begin to change that?

Discuss: most designing is done through high-powered desktop publishing programs like Adobe InDesign or Quark, but few students have access to these large, expensive programs. Word-processing programs like Microsoft Word, however, have come a long way since the days when they were essentially digital typewriters. I’m going to show how Word’s publishing view feature can be used to design any standard assignment.

Video adjustments to MLA formatted school paper, going through the following subjects:

Typography—type as image, rhetoric of type, principles of choosing typefaces
Layout—Focal point, hierarchy, emphasis (size, weight, style, color, indent, outdent), whitespace, grids, margins, chunking, line-length, etc.
Adding Visuals
Visuals that explain type, 
Visuals explained by type,
Visuals that work with type to create something more than the sum of their parts.
Does size matter?
Thinking in Spreads/Making a magazine.

Thursday, February 24, 2011

Conference proposal revision (same title)


For fifteen years many scholars (see Lanham, Selfe, Sirc) have advocated incorporating new media literacies into composition. But alphabetic literacy still hasn’t gone the way of the floppy disk, nor is there yet indication that students’ ability to succeed in life might depend on their video editing skills. However, even the most stalwart defender of the alphabetic tradition has begun to feel pressure to address new media literacies. New media literacy has come to mean any number of things, from the use of technologies like blogs and wikis as pedagogical tools to the ability to produce and edit digital videos. While it is becoming clearer that composition will need to address digital literacy, it is far from clear which literacies are becoming truly vital for composition and which remain the domains of specialists. Technology changes at a rapid pace; the emergence of vital literacies moves much more slowly. Rather than devote limited and valuable time with students teaching them the next new technology for producing a video, perhaps we should start by acknowledging how principles of new media and digital literacy are already part of any composition (see Wysocki). Nearly all texts are produced as digital texts, so all texts can be tools for demonstrating properties of digital literacies and new media. This presentation will discuss how instructors can begin to teach visual design and rhetoric, Lev Manovich’s concepts of modularity and variability (30–45), and other principles or affordances of new media using the assignments they already teach and minimal instruction about ubiquitous software programs like Microsoft Office. By teaching students to design texts rather than just write them, instructors can teach important principles relevant to whatever digital literacies students may have to develop without having to devote the majority of a semester or quarter to teaching software and technology.

Tuesday, February 22, 2011

Teach New Media with Old Texts: Introducing New Media Principles Through Current Assignments


For more than fifteen years now many scholars have been advocating the incorporation of new media literacies into composition classrooms—if not going so far as to prognosticate the discipline’s destruction should we fail to do so. At this point alphabetic literacy hasn’t yet gone the way of the eight-track tape or the floppy disk, nor is there yet indication that our students’ ability to succeed in life might depend on their video editing skills. However, even the most stalwart defender of the alphabetic tradition has begun to feel pressure to address digital literacy. Digital literacy has come to mean any number of things, as have other terms like new media, including anything from the use of technologies like blogs and wikis as pedagogical tools to the ability to produce and edit digital videos. While it is becoming clearer that composition will need to address digital literacy, it is far from clear which literacies are becoming truly vital for composition and which remain the domains of specialists and hobbyists. Technology changes at a rapid pace; the emergence of vital literacies moves much more slowly (Who still uses DOS commands?). Rather than devote limited and valuable time with students teaching them the next new technology for producing a viral video, perhaps we should start by acknowledging how principles of new media and digital literacy are already part of any composition. Nearly all texts are produced as digital texts, so all texts can be tools for demonstrating properties of digital literacies and new media. This presentation will discuss how instructors can begin to teach visual design and rhetoric, modularity, variability, and other principles or affordances of new media with the texts they already assign through minimal instruction in the desktop design capabilities of the ubiquitous Microsoft Office. By teaching students to design texts rather than just write them, instructors can teach important principles relevant to whatever digital literacies students may have to develop without having to devote the majority of a semester or quarter to teaching or providing access to software and technology, some of which may become obsolete before the end of the decade.

Saturday, February 12, 2011

Blogging: A Change Of Mind

Ok, so I have, in the past, expressed a fairly dim view of forced blogging as a pedagogical tool. Admittedly, however, a lot of that came as hold-over from miserable experiences being forced to participate in online discussion board threads, which are a different animal entirely. Blogging, somewhat similarly to discussion boards, does not necessarily lead to productive interpersonal conversation. It is really easy for students, including me, to see blogs as busywork or annoying inconveniences to be gotten out of the way as quickly and efficiently as possible, which only requires meeting the minimum requirements for posting and commenting. This view is unlikely to lead to productive conversations about classroom topics. I still feel this is the case, but I have begun to see that a few strategies can help mitigate this tendency.


As opposed to previous experiences with blogging as a student, for Computers and Composition I tried to finish the readings with enough time not only to write up my own response, but to then read the blogs of the rest of the class and comment while the readings were fresh in my mind. Having done this right before class, not only were my own thoughts on the readings fairly fresh in my mind, but so were some of the ideas raised by others in the class. This often helped me notice when many of us were thinking along the same lines, which, I feel, influenced class discussion. In the last few weeks, I was also able to participate in some ongoing conversations in comments because I could respond to others' comments on Thursday, having responded to blog posts already.

Now, my positive outlook about blogging for this course might be influenced by the fact that my schedule gives me a large span of time to blog and comment before class, but I think that it is always within my power to get the reading and blogging done early enough to read others' blogs and comment on them before class.

In the past, I did not find commenting very useful. I do think that part of that was because I would comment a day or so after we had blogged about and discussed the readings in class, so commenting on blogs seemed superfluous to class discussion. I did make an effort this time to comment before class while the readings were fresh, partly because I had had some success having my students read and comment on each other's blogs at the beginning of class last quarter when I was teaching in a computer lab. This quarter, my students' commenting isn't as productive because we can't spend that time reading and commenting.

As a means of ensuring that students do readings and attempt to think critically and reflectively about them, blogs are a great eco-friendly alternative to response papers, but they have the potential to be more than that. Pedagogically, the benefits of getting students not only to post their responses to the readings but also to comment thoughtfully on their peers' responses are pretty huge. Students can help each other understand difficult readings by essentially collaborating on interpretation. The difficulty, however, is getting them to do that, and I've found using the first ten to twenty minutes of class to help further conversation is one helpful way, though it would be better if students would engage actively in reading and commenting as soon as they finish their own posts.

I found the posts of my peers thoughtful and interesting, often focusing on aspects of the readings that I did not. It was nice to see the directions others are taking in their learning and research and to have the opportunity to participate through comments. I think that the blogging aspect of the class worked well for me this quarter.

Friday, February 4, 2011

Making my case, and asking hard questions.


I’m going to do my blog post early, and probably slightly off-topic, this week. Our somewhat contentious end of discussion Thursday left me thinking about what I want to get done in a writing class (focusing for the sake of argument not on FYC but on, say, a 308J class in which I want to address what I see as essential parts of alphabetic literacy that students need but have likely not mastered in or since their 151 course). That is fairly easy for me to imagine since I taught a 308J course last year and found that my students still had a great deal of stuff to learn about reading and writing. I had a fairly successful course that focused on having the students do ethnographic research on the places, genres, scholars, and discourse ecologies of their majors or expected professions. Since then, however, I’ve decided that the approach I took was probably a little too advanced for what is, for many students, only the second college course they’ve taken that has asked them to write extensively, let alone tried to teach them anything about writing. My intention, should I teach a 308J next quarter, is to take a Writing about Writing approach using Elizabeth Wardle and Doug Downs’s (whom I had the pleasure of working with for a couple of years) new reader Writing About Writing, along with some additional Rhet/Comp articles not included in that text, Graff and Birkenstein’s They Say/I Say, and Norton’s Little Seagull Handbook. The proposed ten weeks would work as follows:

Week 1: Intro, syllabus, conversation about what makes writing “good”/successful, introduce the concept of writing as an emergent phenomenon within complex ecological systems using articles by Porter and Cooper.
Week 2: Genre, using articles by Bawarshi and Mauk and exercises in genre analysis
Week 3: Rhetorical Reading using articles by Haas and Flower, Kantz, Tierney and Pearson, and Penrose & Geisler
Week 4: Class workshop, small group and/or individual peer reviews of Paper 1
Week 5: From reading to writing articles by Swales, Greene, Kleine, and Casanave, They Say/I Say chapters.
Week 6: Workshop/peer review activities paper 2
Week 7: Students in the conversation/ rhetoric: Articles by Wardle, McCarthy, Harris, and Grant-Davie, They Say/I Say Chapters
Week 8: The rest of the process, articles by Nelson, Perl, Tomlinson, Lamott
Week 9: Individual Consultations with students
Week 10: Workshop etc. Paper 4, conclusions and reflections.

Final Portfolios due by scheduled final.

This is, of course, very rough and lists more, really, than I’ll actually be able to do. I will have to make choices about which articles will be most beneficial and which I can do without. I’ll have to balance discussion of articles with exercises and activities that help students practice skills related to the knowledge about writing we’re discussing. So reality will not be this optimistic. However, this isn’t as optimistic as I could be. I’d really like to take some time to discuss grammar, style, proofreading, and editing. I’d like to take a day to go over print design (ha, there’s some multimodal stuff I’ve taught before but ended up cutting because other things seemed more important), and I’d like to meet individually with students more often (I probably will force them to come in after the second paper, but I’ll have to do that outside of class time). I’ll also require responses to readings that students will post to the course blog, to which I’ll require them to respond in comments. Paper 3 will be an annotated bibliography of their research for Paper 4, which I want to try building via wiki with separate pages for the various aspects of writing the students choose to write on. The final paper will be a researched academic argument. The first two papers will be selected from a list of genres I find useful:

Rhetorical Analysis
Critical Analysis
Genre Analysis
Literacy Narrative
Discourse Ecology Ethnography
Autoethnography of their writing process
Etc.

So, where in this schedule is there room to teach students the affordances of other modes, the knowledge needed to make effective rhetorical choices using them, and the skills (especially technologies/software) to make one of these assignments a required multimodal project? I have in the past allowed students to make certain projects multimodal if they wanted and could. I always encourage students to make their papers look as good as they can to encourage thinking about design, but I don’t have room to teach principles of design, or software programs, or the terminology and affordances of video, audio, etc.

Except for workshop, peer review, peer response, and individual consultations, most of my students’ work on projects happens outside of class. With alphabetic genres, I don’t worry about students knowing how to type or use the basic functions of word processing software.

I do use technologies in my class (blogs, wikis, Blackboard). I don’t have students turn in hard copies anymore. I comment on papers using track-changes/mark-up, but these technologies are so simple and ubiquitous that it doesn’t take much, if anything, to get students to where they can use these technologies. I don’t require them to put visuals in their blogs, and I wouldn’t require visuals on wikis, just as I don’t require visual design of type or visual images in their research projects. I encourage all these things, but I leave it up to students to decide if that is something they are comfortable doing since they are on their own to figure out how if they don’t already know.

Now, I view 308J as a writing class, not a writing intensive course. I know that others teach it as a writing intensive course where writing is secondary to other knowledge. I’m not comfortable with that since the 308J is listed as “Writing and Rhetoric II” and because it is only the second writing course students here take unless they are in a writing-related program. If I were teaching a writing intensive course where I felt like learning writing itself was a secondary rather than a primary goal for the course, I might feel comfortable including a multimodal project and taking a class or two to instruct students in the affordances of video/audio as well as some principles of design. I would be most comfortable doing that if it was part of the content of the course, but I could see doing it even when it wasn’t. In fact, as I’ve mentioned, my 284 students this quarter will be doing a collaborative multimodal composition as their third short project. The way I conceived it probably isn’t as rigorous as the authors of Multimodal Composition would like. I originally planned to give students the option of what multimodal technology they want to use (to allow for the fact that some students may not be comfortable designing a web-site, video, etc.), and I conceived collaboration as a way of pairing up those who are already comfortable composing in other modes with those who aren’t. I wasn’t planning on teaching much about technologies because that isn’t really one of my purposes. I wanted to do a multimodal project in order to let students experiment with multimodal composition within their comfort zones, not to push them out of their comfort zones and teach them how to compose in multiple modes. Pardon my ignorance. If I do it again, I will probably plan a day to teach some basic competency with wikis and require the students to construct multimodal wiki sites for their projects. I still, however, don’t know how much time I would spend teaching visual rhetoric, etc., when the course content is writing about culture, which I view as a course meant to focus less on the general affordances of writing and more on the discourse features specific to the ecology of cultural studies.

All this is why I feel that the only proper place to teach all the things Multimodal Composition is telling us we need to teach is in the context of a class specifically on multimodal composing, which I would be very interested in teaching. I just don’t think it is realistically feasible to teach these things to an extent that makes the effort worthwhile, especially within the context of a writing course, but even in a writing intensive course. Yes, I can see giving multimodal assignments that require only some basic instruction in certain applications, but that is not the same as taking the time to teach the affordances of video, still image, audio, how image and text interact, etc., etc. There is quite a difference between teaching students how to put together a video and teaching them all about how video communicates, its affordances, and the rhetorical choices involved. We can always remind them to think about audience, purpose, and genre, but that isn’t the same as teaching them how those rhetorical principles function differently in different modes.

So, do I think I can or should incorporate multimodal composition into my planned 308J course? No. Do I think I can design pedagogically better multimodal assignments for courses like 284? Sure, but I don’t think I can take enough time to teach multimodal composing itself beyond what the experience of doing the project will teach. If multimodal composing is, or ever will be, an urgently needed essential skill for students to have, as so many texts have argued, I think the only way to teach it is to teach it in a class designed for that purpose.

On the other hand, I think that everything I teach in my planned 308J will help students compose better multimodal texts. Everything I teach about discourse ecologies, writing in conversation, using sources, rhetoric, and process applies to multimodal composition just as much as it does to alphabetic composition. Is the reverse true? Can everything we would take the time and effort to teach in a class on multimodal composition be equally applicable to alphabetic composition? I don’t think so, but I could be wrong.

Is it okay to assign multimodal assignments without taking the time to teach the affordances of video etc.?
Is Multimodal Composition making the inclusion of other modes out to be more of a thorough time-consuming effort than it needs to be?
Are we incorporating other modes into our work or taking on the responsibility to make students literate in those modes? What is the difference?
Is it enough to include discussion on reading other modes when we teach reading rhetorically/critically?
In Chapter 9 of Multimodal Composition, Alexander repeats previously stated ideas about multimodal composition not necessarily involving digital texts with video/audio, but would those urging us to teach new media literacy really think asking students to drop some visuals into their Word documents, with an effective tag line and some reference to the image in the text, was good enough?

Tuesday, February 1, 2011

Why again?

I'm at a bit of a loss for what to blog about with this particular set of readings. It doesn't help that I'm still stuck on why I should include multimodal assignments. The various authors of the chapters in Multimodal Composition give us great advice for creating assignments that work and considering all the problems and affordances, but I am simply still unconvinced that doing such assignments is the best use of class time. The CC online Wiki page argues

"To keep pace with advancing technology, writing courses at NMSU must move beyond traditional alphabetic texts. Even though such texts, with solitary and individual writers are still the most common choice at the university level, the rest of the world is rapidly changing. Multimodal tools, such as wikiboards, facebook,blogs, twitter, and more, are communicative media, both in and out of classrooms and employment settings. Multimodal composition assignments provide students the skills necessary for creating and interpreting the many different contexts of reading and writing taking place within our technology-based world beyond the university."


Notice, however, that the specific examples they give (wikis, facebook, blogs, twitter) are all, largely, alphabetically dominated technologies. Instead of typing in a word processor, I'm typing in a Blogger publishing window. Yes, I could include images, videos, etc., but they aren't essential. There is a big difference between incorporating technologies like blogs and wikis as pedagogical tools and asking students to design websites/web videos/etc. Students could easily, if encouraged, incorporate images into their printed essays. None of the texts we've looked at so far seem overly concerned with whether or not students design GOOD multimodal compositions, so we wouldn't really have to worry about whether their use of images is particularly aesthetically successful.


For these reasons, Dickie Selfe's article seemed the most interesting and useful since its focus was on using technologies in the classroom rather than necessarily assigning multimodal compositions. On the other hand, I would suggest that technology only be incorporated once it has been around long enough to reach a certain level of user accessibility/stability.


I've been around long enough to see certain patterns in technology development. I first used computers in the mid-1980s when my elementary school class learned some very basic computer literacy using DOS-based computers (mostly we got to play early computer games like Oregon Trail and Where in the World is Carmen Sandiego, back when the games were big green pixel graphics if they had any graphics at all). Back then there was the idea that everyone would need to learn DOS commands because computers were becoming a part of life. Then, of course, Steve Jobs developed Macintosh/Apple computers with user-friendly interfaces, and the need to learn DOS commands went out the window—or Windows, after Bill Gates stole the Macintosh interface and wrote the DOS programming to reproduce it on PCs. This pattern seems to repeat itself: by the time new technology reaches the point where it is NECESSARY for people to use it, user-friendly interfaces have developed to make it fairly simple to use—I don't think we'd be blogging if we had to learn to code first, but now we have these nifty publishing windows.


So, why then develop assignments that force us and students to use technologies that are, as yet, the realm of specialists? Sure, people are composing videos with text and sound and posting them on YouTube, or even producing them for work, but those who do have developed knowledge of the technologies needed as either a hobby or part of their job and had no need of composition course assignments to teach them how to do it. Everyone else still seems to get along just fine without in-depth knowledge of the finer points of iMovie.


Photoshop has created a world where anyone can produce professional-quality photography, but most people still just point, click, download, and upload/print without even basic literacy in Photoshop or similar programs. Can these Photoshop-illiterate people really be compared to those who can't write a decent report, letter, or proposal because of inadequate alphabetic literacy?


The more we read about multimodal assignments and the challenges/affordances, the more convinced I am that there are better ways to use class time in composition courses (unless it is a writing for digital media class, which I would expect to tackle such composing).


All of the advice from these readings, however, applies equally to alphabetic assignments. Also, all my talk of user-accessible interfaces for blogs and wikis reminds me of Wysocki's assertion that we pay attention to the interfaces of all texts, which is something I do think I can and should incorporate into my classes, even freshman composition. I don't think students should be required to write in ugly MLA double-spaced formats when no discourse ecology outside academia would produce texts with such a terrible interface. I tend to encourage students to think about design, incorporate visuals, etc.

Tuesday, January 25, 2011

E-expressionists and neo-platonists, now I see why this is so important...to them.

I have to credit Jon Holmes for first bringing my attention to the latent expressivism that has been underlying a majority of the texts we have studied. It is certainly interesting how much writers and student examples seem to focus on creativity, artistic expression, and allowing students to direct their own work. Geoffrey Sirc's article makes the expressivism here very explicit.

Sirc states early on that he is "most interested in composition that has an ultimate poetic effect" (114). This poetic effect seems applicable to all the new media texts we've looked at, both the texts we've read and the student examples those texts have pointed us to. All these texts seem to offer much more of an artistic experience than any substantial information or argument. What arguments they do make are severely restricted by their designs.

Is new media just the latest excuse for people who don't really like or want to teach pragmatic print literacy to teach something else? Forty years ago, teachers of Freshman Composition decided that they'd much rather teach students to write poems, personal stories, and creative non-fiction than academic papers; now, those same teachers, or their descendants, find it much more interesting to teach their students to make web-videos, web-sites, collages, and other multi-modal texts than academic papers. Only now, those teachers can go on and on about how these are the texts/genres of the future and how everyone is communicating this way, or will be soon, so teaching these things is as vital as, if not more vital than, teaching students to write print-based academic papers. Plus, students like playing around with digital technologies and constructing artistic digital texts, and they've never liked writing academic papers. Why would we want to try and teach students things they don't find fun?

All of the arguments against expressivism from the eighties seem to apply here. Sirc seems just as obsessive about the heroic individuality of the designer as early expressivists were about the individual heroism of the writer, and just like those of old, he holds up artists as the examples to follow—not scholars, not professionals in common careers, not anyone whom large numbers of students might have a chance, or might actually want, to be like.

Sirc really loses me when he makes the absurd claim that students' "compositional future is assured if they can take an art stance to the everyday, suffusing the materiality of daily life with an aesthetic" (117). Wow, I'd like my "compositional future" to be assured, but I've been around the block a few too many times to be naive enough to think that all readers appreciate "an art stance to the everyday." Some audiences absolutely hate such stances—any audience would probably hate some "art stances," depending on the designer's idea of "art." This claim is the same expressivist idea that all any text needs is a good voice to be successful. There's just more to writing than that.

On the other hand, new media scholars also seem to exhibit a bit of neo-platonism, at least in the sense that they forward new media texts as being somehow more "real" or "true." Granted, this camp is much smaller than the digital-expressivists, and few of them would actually openly argue any such thing, but there is an underlying sense that many new media scholars see new media texts as somehow more than alphabetic texts. If not more real/true, then more empowering, expressive (there it is again), enabling, free, independent, resistant, etc. Much of what we have read seems to hold to the strange idea that composing such texts is somehow more liberated and liberating when it comes to social norms and ideology than composing alphabetic texts. Our experience and understanding of ideology since Althusser and Gramsci, however, tells us that we are never more under the influence of ideology and hegemony than when we think we aren't, when we think we have found ways to break out and be subversive. Interpretation is co-constitutive with/by socio-political forces within ecologies, and new media in no way does away with that.

So why did I think to label this a type of neo-platonism? Well, it was mostly because of this quote from Brooke's article:


According to Elkins, "we have largely forgotten perspective as practice." For better or worse, this was Plato’s fear of writing, that it would divorce language from its immediate context. Electronic writing restores that context in ways that exceed those of the spoken word, without entirely doing away with the durability that we associate with print.

http://enculturation.gmu.edu/4_1/style/

So, electronic writing is more real because it provides more "context" to help us interpret it—more likely, more context becomes a way of hiding what is meant not to be seen. Once again we have a strange animosity toward writing (or at least alphabetic texts) that seems ultimately to be rooted in Plato (a prolific writer). Why?

I myself am way too much of a sophist to buy into this kind of neo-platonism. I'm more likely to be swayed by digital expressivism—I do have some sympathies for the expressivists—than by these prevalent assertions about the superiority of multimodal texts, or the urgency of teaching them.

Tuesday, January 18, 2011

Oops, that had the opposite effect

I was much more open to arguments for teaching "new media" texts before I read these three texts. For some reason I just am not persuaded that students' lives will be irreparably damaged if I don't teach them to create and upload crappy videos.

I might have been more persuaded by the Ball and Moeller article if I hadn't kept having this irresistible urge to go out and buy a Big Mac... ;)

Despite all the apocalyptic rhetoric of these texts covering seven years of imminent danger to composition if it doesn't realize that composition students' futures absolutely depend on their ability to learn how to impose text over images while music plays in the background because, well, just because that's what everyone's doing, or will be doing some day...someday when I won't be able to write this blog entirely in alphabetic text because...well because...

So, I think I'm a bit skeptical about the need for everyone to learn how to compose multimodal texts. I really don't see a time when not knowing how to compose with Flash will severely limit anyone who isn't going into a career where knowledge of Flash is essential.

I also don't fully accept the assumption that new media texts construct meaning in radically different and new ways. We had "multimodal" texts before computers came along. Television and movies have been mixing visual, audio, and textual modes for over a century. That we can now do this so much more easily doesn't create a radical shift in how those modes interact to make meaning. The more things change, the more they stay the same. I don't deny that things are changing, but it is more a matter of style than substance.

All this skepticism isn't aided by the fact that these texts themselves are such terrible examples of new media texts. Their designs are horrible. Sorapure's text is itself at least visually appealing though awkward and ironically linear (at least if you don't intentionally fail to follow all the cues designed to help you read it linearly), but the examples of student texts it holds up as models of academically rigorous texts that we should replace alphabetic assignments like research papers with seem mediocre at best. They certainly fail to convey as much information as a good academic essay, even those written by freshmen. The effect of these student examples is to destroy the ethos of these new media advocates, who are making huge claims and then supporting them with these compositions that can't help but underwhelm anyone who has taken undergraduate graphic design classes.

Why teach students to write alphabetic texts when they could be making really bad social action commercials? That's as important as helping them learn to write academic essays and articles, isn't it?

Sorapure's text is, at least, fairly well designed itself, something I can't say about Ball and Moeller's, which presents itself as the epitome of bad design: from the McDonald's colors to the division of the screen directly in half, with a box screen making the text marginally more readable, the visuals of this text are cringeworthy. Add to that the fact that, once again, for all its pretenses, this is a linear alphabetic text (with a few opportunities to diverge if you want), and the persuasiveness of the text is destroyed. This is what we should be producing? Teaching? This is more important than some aspects of print literacy?

Admittedly I've been sick and sleepless for days, so maybe I'm overly critical, but these texts persuade me that the fervor over new media is largely just frivolous distraction.

Tuesday, January 11, 2011

Thinking about (teaching) design

I've taught visual design in freshman composition classes before, and I always encourage my students to be creative about the materiality of their papers and to think about what that materiality itself communicates. The experience of teaching some basic principles of designing texts visually complicates my views on teaching students to produce multi-modal texts.

Every time I took part of a semester (I haven't tried this on a quarter system) to discuss visual design and have students produce multimodal texts, I found I couldn't take enough time to really be effective. Just the most basic principles of producing visual texts (some basic typography, issues of layout, focal points, creating emphasis, avoiding distracting visual problems, white space, variety) take considerable time to cover because, with most students, I would have to start from scratch. It is comparable to having to teach students basic reading and writing before teaching them composition.

I am not really surprised that a lot of those who advocate for having students construct multimodal texts seem to advocate just asking students to produce these texts rather than attempting to help them do so skillfully. This, however, conflicts with what Wysocki advocates in Writing New Media. If we are only asking students to produce multimodal texts without taking the time to address how those texts communicate materially, we are certainly not "bring[ing] to new media texts a humane and thoughtful attention to materiality, production, and consumption, which is currently missing" (7). We are simply teaching abstract concepts about rhetoric and the communication of ideas through an invisible medium.

Add to this, of course, the whole complication of technology (which many of my students were largely ignorant about) and we find ourselves dedicating a great deal of class time just laying the groundwork for teaching students to produce successful texts. I find it telling that Takayoshi and Selfe avoid rather than answer the question: "When you add a focus on multimodality to a composition class, what do you give up?"

I, of course, have been discussing just teaching the visual design of printed texts, which is a minor step beyond teaching traditional alphabetic writing. What happens when we try to add video composition to a class? If we want to teach how to use images effectively, shouldn't we teach students to skillfully manipulate photographs?

So, what does all this have to do with discussing Wysocki's approach to defining new media texts as "those that have been made by composers who are aware of the range of materialities of texts and who then highlight the materiality"? I think that one of the implications of this definition is an attempt on Wysocki's part to downplay the difficulty of introducing the production of new media texts into the classroom. If writing new media simply means producing texts that are self-consciously material, then that might not seem like such a big endeavor as trying to teach students to create their own website—at least one that isn't ridiculously tacky and obviously uncaring of its materiality. My experience, however, suggests that even spending the time to call attention to the materiality of the essays students write and help them to incorporate images, even type as image, requires a dedication of time and resources that necessarily replaces something else.

I may be unfairly attributing such an implication to Wysocki's definition, but it is something that seemed implied in the rhetoric of the reading. Perhaps Wysocki is not trying to downplay the time, resources, and effort necessary to address new media in the classroom. If so, I rather like her definition of new media. I think it counteracts some major problems in the way new media is often presented—that is as if no one ever thought about, or needed to think about, the material appearance of texts before computers came along.

I agree that our attention to new media should embrace a broader attention to the materiality of texts and how we are situated by and through those various materialities—though "new media" does become somewhat meaningless here. After all, I didn't try to do units on design and layout just for kicks. I think they are important parts of writing that are significantly neglected, and often significantly abused, by writing instructors. I try, even when I don't attempt to teach visual design, to present texts that depart from the traditional materiality of texts, starting with my syllabus. But I think that there are many other aspects of writing that are more important—so many, in fact, that I can't cover them all in a quarter or a semester. As of now, writing and new media is a great focus for an upper-division class, maybe even a 308J class, but not something that I feel inclined to devote significant time to in a 151 class.

Of course, I think that students should take more than just two required writing classes, and I could easily get behind an effort to at least provide, if not require, more writing classes, including writing and new media—ah, the dreams of a world without business-oriented administration.

Thursday, January 6, 2011

Process, Post-Process, Complexity Theory/thinking, and the splattering of my brain

John H. Whicker is a PhD candidate in Rhetoric and Composition at Ohio University currently working on an amalgamation and investigation of all things process through the lens of complexity theory in an attempt to revive and re/imagine a conceptualization of process that includes the rest of the process and makes the language of process available and useful—not to mention interesting—to twenty-first century composition.

When scholarly work in combination with teaching (English 284 "Writing About Culture" currently) and taking graduate seminars hasn't turned me into an office hermit, I like to go home to my wife (Juliann) and three children: Greyson 7, Charlotte 5, Colin 3.