Tuesday, April 27, 2010

Number 13: for the week of April 29, 2010

My dear readers,

The main problem at play in this week's readings is the idea of writing assessment. How should writing quality be assessed? As with most issues dealing with composition, there are a number of different viewpoints. Huot, in his article "Toward a New Theory of Writing Assessment," contends that in order for multiple-choice tests to have become as prevalent in assessing writing as they have, instructors must have put a great deal of faith in the testing technology. Well, that sounds to me like the same sort of thing McGee talks about with Microsoft Word and the "Invisible Grammarian." Both cases involve people placing possibly unfounded trust in a technology that may not be reliable. What I mean is, Microsoft Word's grammar checker is not always correct. Unfortunately, my students think it is infallible, which leads to numerous corrections on their papers that they are not expecting. If one piece of technology has limitations, why should we assume another does not? I understand the desire to take subjectivity out of the equation, but are multiple-choice assessments, which are still subjective because the person writing the exam will privilege what they feel is most important and ignore everything else, really the best answer?

In his other article for this week, "The Literature of Direct Writing Assessment," Huot discusses the three major approaches currently in use for assessing writing: single trait, analytic, and holistic. Single trait is just what it sounds like: one facet of the piece of writing, such as grammatical correctness, is considered above all others, and its quality or lack thereof is the total basis for assessment. Analytic scoring takes several factors, such as grammatical correctness and persuasiveness, into account. Holistic grading, which I use, takes everything into account. Of course, not everyone feels that way.

Edward White's article "An Apologia for the Timed Impromptu Essay Test" reminds us that "for most of our colleagues outside the English department and for almost all administrators, assessment means multiple-choice testing; evaluation of actual writing, whether on impromptu essay tests, term papers, or portfolios, is still generally seen as hopelessly subjective, unreliable, and arbitrary." While I would argue that a lot of the supposed subjectivity could be removed by more clearly defined rubrics, I can see where they are coming from.

In his other article for this week, "The Scoring of Writing Portfolios," White makes a claim that I think is a nice wish. He says "one particular strength of portfolio assessment is its capacity to include reflection about the portfolio contents by the students submitting portfolios." My senior year of high school, the Lyon County School District instituted mandatory portfolios as a graduation requirement beginning with that year's seniors. To be blunt, nobody in my school had a clue how to handle the new requirement. Yerington High School had never used portfolios before, so the administration improvised. No reflection went into the portfolio I submitted for graduation, at least no reflection from me. Every item in it was selected by one or more of my teachers, most of them without ever asking me whether I wanted that specific item in my portfolio or not. Honestly, about one quarter of what was in my graduation portfolio was entirely a mystery to me until about 20 minutes before my final presentation for graduation. Why? Because the teachers advising me had not made up their minds about what the last few items should be until that moment. And for those of you wondering how our portfolios were assessed: we got up in front of the school board members, talked for about 15 minutes each, were told "great job on your portfolio," and then were told to exit quickly to make way for the next graduate. To this day I still don't fully understand why most of the things that went into my portfolio were there at all.

As you might suspect, I'm pretty sour on portfolios, but that's only because I still don't really understand the point of doing them, and the article really did not clear anything up for me. I enjoyed "Constructs of writing proficiency in US state and national writing assessments: Exploring variability" much more. Just thinking about the variability in exams and standards across the country drives home for me the idea that, no matter what anyone may say, there is not one single way of judging good writing. Therefore, those who think that writing is a simple skill that can be mastered one way in the lower grades are off base.

I also enjoyed "Directed Self-Placement" by Royer and Gilles because, to me, it reflects Peter Elbow's idea of letting students have a strong voice in directing their own learning. After all, Elbow argues that students will be more engaged and ultimately perform better if you let them choose their own tasks, and choosing a writing class is the next logical step from choosing writing tasks. Royer and Gilles also make the point that many universities place students the way they do only as a means toward efficiency, and that students, ideally, know the strengths and weaknesses of their own writing best. That is a point with which I think Elbow would agree, and so do I. Mind you, I was a little surprised that only 22% of students placed themselves in the lower writing class. Unfortunately, that could point to a weakness in the system, in that maybe not every student is as knowledgeable about their own writing ability, or as honest with themselves about it, as the system assumes.

Overall, I found the explanation for the addition of a timed essay portion to the SAT satisfactory. I think that combining multiple-choice and essay formats on the same test will give students and administrators alike a better idea of each individual's ability.

Thank you so much for your time,

James Altman
