Evaluation of Student Writing
Evaluation of student writing is a critical aspect of teaching that involves assessing students' progress towards defined learning outcomes. It can be one of the most challenging tasks for educators, yet it is vital for fostering student development. The evaluation process encompasses both informal and formal assessment methods. Informal assessments may include teacher observations, comments on student papers, peer evaluations, and portfolio assessments, while formal assessments often take the form of tests, exams, and timed essays. Each method serves to provide insights into a student's writing abilities and areas for improvement.
Educators utilize a range of tools, such as rubrics and checklists, to guide their evaluations and to clarify expectations for students. The goal of effective evaluation is not only to assign grades but also to encourage reflection and growth in students' writing practices. By actively involving students in the evaluation process—through self-assessment and peer feedback—teachers can create a more engaging and supportive learning environment. Ultimately, a well-rounded approach to evaluating student writing helps students understand their strengths and areas for growth while promoting a deeper engagement with the writing process.
On this Page
- Overview
- Informal Assessments
- Teacher Observation
- Teacher Comments
- Contract Grading
- Checklists
- Rubrics
- Self-Evaluation
- Peer Evaluation
- Writing Conferences
- Portfolios
- Holistic Scoring
- Formal Assessment
- Multiple Choice Tests
- Tests & Exams
- Timed Essays
- Applications
- Advantages to Using Portfolio Assessment
- Possible Uses of Portfolios
- Scoring Procedures for Portfolios
- Assessment Instruments (Rubrics)
- Commenting on Papers
- Strategies for Evaluating Writing
- Types of Peer Evaluation Groups
- Viewpoints
- Contract Grading
- Making Comparisons
- Use of Impromptu Timed Essays
- Use of Multiple-Choice Tests
- Commenting on Papers
- Grading
- Holistic Scoring
- Portfolios
- Rubrics
- Terms & Concepts
- Bibliography
- Suggested Reading
Keywords: Assessment; Evaluation; Formal Assessment; Global Errors; Grades; Informal Assessment; Pre-established Standards; Reflection-in-Action; Surface-level Errors
Overview
The evaluation of student writing can be "the single-most difficult task required of teachers, yet it may be the most important part of a teacher's job insofar as helping individual students" (Julian, 1999, p. 56). While assessment refers to "a collection of data, or information that can enlighten the teacher and the learner, as well as drive instruction," evaluation is the product of assessment (Burke, 1999, p. 168). After gathering the data, teachers must take assessment a step further and "evaluate the product of their efforts and the progress of their students" (p. 169). Teachers then grade by "reducing different information on student performance down to a letter or score" (p. 169).
Evaluation is defined as "the judgments [teachers] make about students and their progress toward achieving learning outcomes on the basis of assessment information" (Williams, 2003, p. 297). When applied to writing, the judgments that teachers make can be based on standards that are commonly applied to writing instruction. Teachers may compare their students' papers with pre-established standards of what good writing is, or they may compare one student's writing with another's (Williams, 2003). Atwell (1998) states that "whatever system [a teacher] uses has to take into account the range of abilities that come when anyone writes or reads" (p. 313). White (2004) suggests that evaluating writing is "a natural outgrowth of the process theories of composition teaching that have dominated the field for the last generation" (p. 110).
Informal Assessments
Evaluating writing can result from informal assessments (which include authentic assessments) and formal assessments. Informal assessments used in evaluating writing include:
• Teacher observation,
• Teacher comments,
• Contract grading,
• Checklists,
• Rubrics,
• Self- or peer-evaluation,
• Writing conferences,
• Portfolio assessments, and
• Holistic scoring.
Teacher Observation
Teachers can evaluate writing through careful observation and recordkeeping, reviewing patterns of writing behaviors over time. These records may include anecdotal records and folders of work samples (Graves, 1983; Newkirk & Atwell, 1988; Dyson & Freedman, 2003). Through careful evaluation, teachers come to understand how students work through the writing process, and they adjust instruction accordingly.
Teacher Comments
Commenting on papers is an informal evaluation of writing. Through comments, teachers can advance students' writing abilities. According to White (1999), the overriding goal of commenting on papers is for "students to see what works best and least well in the draft so that revision can take place" (p. 123). Comments can come in the form of marginal or terminal comments. Marginal comments are made in the margin and can be comments about content or mechanics. Terminal comments are final comments issued at the end of a paper that direct students in how to improve their drafts or their next papers (Connors & Glenn, 1999).
Contract Grading
Contract grading is a contract between the student and the teacher about what tasks the student will complete. Contracts "spell out exactly what proficiency level a student must reach and the amount of work which will show that level in order to receive a specific letter grade for the terms" (Julian, 1999, p. 58). Contracts allow students to enter into a dialogue about evaluation and learn to commit to a process in which they are actively involved.
Checklists
Checklists are written descriptive categories, with checkboxes, that clearly organize what students need to do to complete their writing tasks (Burke, 1999). Students can evaluate their own writing processes by comparing the categories in the checklist with their own progress.
Rubrics
Rubrics are tools that "explain what students need to do, what they will be graded on, and how they will be evaluated" (Burke, 1999, p. 174). To build a rubric, teachers prepare a matrix that lists criteria and standards, typically four to six criteria spanning both higher-level concerns, such as content, logic, presentation, description, and depth, and lower-level concerns, such as formatting, organization, and mechanics (Mabry, 1999). Teachers tabulate the rubric scores and work with students to improve their writing based on the results.
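To make the tabulation step concrete, here is a minimal sketch, in Python, of how weighted rubric scores might be rolled up into a single result. The criteria names, the 1-4 point scale, and the weights are hypothetical illustrations rather than a standardized instrument.

```python
# A minimal sketch of rubric tabulation. The criteria, the 1-4 point scale,
# and the weights below are hypothetical illustrations, not a standard rubric.

# Higher-level criteria (content, logic, depth) are weighted more heavily
# than lower-level criteria (organization, mechanics).
RUBRIC_WEIGHTS = {
    "content": 3,
    "logic": 3,
    "depth": 2,
    "organization": 1,
    "mechanics": 1,
}

def tabulate(scores: dict) -> float:
    """Convert per-criterion scores (1-4) into a weighted percentage."""
    earned = sum(RUBRIC_WEIGHTS[criterion] * score
                 for criterion, score in scores.items())
    possible = 4 * sum(RUBRIC_WEIGHTS.values())
    return round(100 * earned / possible, 1)

# Example: a paper strong in content but weaker in mechanics.
print(tabulate({"content": 4, "logic": 3, "depth": 3,
                "organization": 2, "mechanics": 2}))  # 77.5
```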
Self-Evaluation
Self-evaluation occurs when students use rubrics or some other assessment tool (such as checklists) to evaluate their own writing. Through self-evaluation, students pay closer attention to the standards and apply them to their own writing efforts. They also begin to internalize the habits important to good writing by evaluating their own process and progress (Burke, 1999). As Beaven (1977) mentions, students who self-reflect move beyond the notion that they are writing to please the teacher.
Peer Evaluation
Through peer evaluation, students listen to or review one another's writing (Burke, 1999). Zinn (1998) states that peer evaluation should be considered for every writing classroom. Beaven (1977) states that students enjoy working with their peers in a collaborative effort. They also learn "how to handle language better as a result of well-structured meaningful group assessment and interaction" (Zinn, 1998, p. 3).
Writing Conferences
Writing conferences occur when teachers evaluate writing by responding orally to student writing through individual conferences. They make suggestions to students about their writing while encouraging students to become more independent and more effective in discussing their writing and assessing their process (Newkirk, 1989; Zinn, 1998). Through conferencing, teachers help students evaluate one or two major concerns in their writing that can be "realistically addressed in a single conference" (Zinn, 1998, p. 4). Zinn (1998) asserts that an instructor must resist the impulse to deliver a lecture or force students to defend their writing process.
Portfolios
Portfolios are also a form of informal assessment used in evaluating writing. A portfolio is a folder or a binder containing examples of student work (White, 1999). Portfolios became an alternative form of writing assessment in the mid-1980s, providing a more accurate picture of individual writers (Elbow, 1986; Elbow & Belanoff, 1986). They are considered an authentic form of evaluation, as they provide students with "real-life writing experience and require preparation that is natural and broad-based" (Zinn, 1998, p. 6). White (1999) suggests that the greatest advantage of using portfolios in evaluating writing is their inclusion of numerous examples of student writing, produced over time and under a variety of conditions. White (1999) states:
Unlike multiple choice tests, [portfolios] can show a student's actual writing performance. Unlike essay tests, portfolios can showcase several kinds of writing and rewriting, without time constraints and without test anxiety (p. 149).
For students, the preparation and self-assessment of portfolios are "inherently meaningful and worthwhile as a record of work they have done and want to keep" (White, 1999, p. 150).
Portfolios include reflective pieces in which students discuss their writing process and their final products. Through reflective evaluation, students formulate concepts about what good writing is and how writing changes depending upon audience and purpose (Murphy & Underwood, 2000). Students evaluate their own portfolio pieces, selecting the pieces they are most proud of and explaining why they chose them (Graves, 1983; Dyson & Freedman, 2003; Murphy & Underwood, 2000).
Atwell (1998) states that portfolios focus on the "big picture - who a student is becoming and who he or she might become - as a writer and reader" (p. 311). Through portfolios, students become aware of the practice of gathering evidence of themselves as writers, including self-reflective pieces. Estrem (2004) asserts that "portfolios give increasing power to students and to teachers if used in interesting and rich ways" (p. 127).
Holistic Scoring
Another form of informal evaluation is holistic scoring. Oftentimes, holistic scoring is used in large-scale assessment, as teachers evaluate large numbers of papers. Holistic scoring practices were common from the early 1970s through the 1980s. Students write on assigned topics within a timed period in a testing situation. Teachers discuss scoring procedures and work toward reliability by comparing scores (White, 1985; Dyson & Freedman, 2003).
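As a simple illustration of what working toward reliability by comparing scores can look like in practice, the following Python sketch computes two common agreement figures for a pair of readers. The 1-6 scale and the within-one-point convention are assumptions for illustration, not a prescribed procedure.

```python
# A minimal sketch of checking inter-reader agreement during a holistic
# scoring session. The 1-6 scale and the "within one point" convention
# are illustrative assumptions, not a prescribed standard.

def agreement_rates(reader_a, reader_b):
    """Return (exact, within-one-point) agreement between two readers."""
    pairs = list(zip(reader_a, reader_b))
    exact = sum(a == b for a, b in pairs) / len(pairs)
    adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
    return exact, adjacent

# Example: two readers score the same ten essays on a 1-6 scale.
a = [4, 3, 5, 2, 6, 4, 3, 5, 4, 2]
b = [4, 4, 5, 2, 5, 3, 3, 6, 4, 2]
exact, adjacent = agreement_rates(a, b)
print(f"exact: {exact:.0%}, within one point: {adjacent:.0%}")
# exact: 60%, within one point: 100%
```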
Formal Assessment
Formal assessment used in the evaluation of writing includes such measures as:
• Multiple-choice tests,
• Tests/exams, and
• Impromptu timed essays.
Multiple Choice Tests
Multiple-choice tests have often been used to evaluate writing. According to Zinn (1998), multiple-choice tests are the least popular with composition specialists. Often these tests are used to measure understanding of grammar, as students are asked to identify grammatical errors. These tests are considered highly consistent, although they measure only editing skills (Cooper & Odell, 1977).
Tests & Exams
Tests and exams are considered formal assessments and often include opportunities for students to write. Exams can be effective in evaluating writing in that they invite what Burke (1999) considers to be a "powerful and appropriate intellectual performance that is linked to standards" (p. 179). Teachers must consider what purpose the writing portion of a test or exam may serve and what exactly the test/exam is testing. Exams or tests may come at the end of a unit of study, at the end of a grading period or course, or after materials are presented that a teacher deems important enough for a student to learn.
Timed Essays
Use of impromptu timed essays is another form of formal assessment for evaluating writing. White (1995) asserts that 70 percent of English faculty use some form of timed impromptu essay because they see its inclusion in the evaluation process as validation that the student writing the essay is indeed the one sitting at the desk. Because these essays are written in class, they can eliminate the problem of plagiarism. Students also learn to structure short essays, a common feature of tests and exams.
No matter what evaluations are used to assess student writing, evaluation should be an integral part of teaching and learning about writing. Students should play a central role in negotiating criteria and setting goals so that they can succeed in getting their message across to the readers of their writing (Townsend & Fu, 1997; Hart, 1994; Turbill, 1989). As Zinn (1998) points out, evaluation of writing is necessary to help students improve their writing. As noted, teachers have many avenues for evaluating writing, through both informal and formal assessment.
Applications
Advantages to Using Portfolio Assessment
Zinn (1998) outlines the advantages of using portfolios in evaluating writing, stating that portfolios:
• Provide a variety of writing samples, written for different audiences over a period of time, giving a more realistic picture of a writer's actual abilities than most other methods of writing evaluation.
• Emphasize process over product.
• Encourage the sharing of ideas.
• Encourage collaborative cohesion among group members.
• Limit plagiarism.
• Reduce anxiety, as students carry on their natural writing process and pace.
• Offer a valid evaluation that represents a more accurate picture of writing (p. 6).
Possible Uses of Portfolios
Portfolios can be used as:
• A showcase for the student's best work, as chosen by the student or the teacher.
• A showcase for the student's interests.
• A showcase for the student's growth.
• Evidence of self-assessment and self-adjustment.
• Evidence enabling professional assessment of student performance, based on a technically sound sample of work.
• A complete collection of student work for documentation and archiving.
• A constantly changing sample of work chosen by the student, reflecting different job applications and exhibitions over time. (Wiggins, 1998; Burke, 1999, p. 175)
Scoring Procedures for Portfolios
Portfolios are often scored in bulk, resulting in the need for procedures that will make scoring realistic and reliable. White (1999) states that reliable reading can take place best in controlled sessions with "all readers reading at the same time and place, under the direction of a chief reader." Readers read each portfolio twice, with any discrepancies in scores given what White (1999) calls "a resolution reading" (p. 158).
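As a concrete sketch of this double-reading procedure, the Python fragment below models one plausible way scores might be combined. The 1-6 scale, the one-point discrepancy rule, and the tie-breaking choice are assumptions for illustration, not White's (1999) exact specification.

```python
# A minimal sketch of a double-reading procedure with a "resolution reading"
# (White, 1999). The 1-6 scale, the one-point discrepancy rule, and the
# rule for combining scores are illustrative assumptions, not a standard.

def portfolio_score(first: int, second: int, resolution_reader) -> int:
    """Sum two readings; if they disagree by more than one point,
    obtain a third reading and keep the two closest scores."""
    if abs(first - second) <= 1:          # adjacent scores count as agreement
        return first + second
    third = resolution_reader()           # the resolution reading
    # Discard whichever original reading is farther from the third.
    closer = min(first, second, key=lambda s: abs(s - third))
    return closer + third

# Example: readers give 2 and 5; the resolution reader gives 4, so the
# outlying score of 2 is discarded and the portfolio scores 5 + 4 = 9.
print(portfolio_score(2, 5, resolution_reader=lambda: 4))  # -> 9
```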
Assessment Instruments (Rubrics)
Assessment instruments, such as rubrics, allow teachers to evaluate papers for quality of writing and are helpful for teachers who need guidance in doing so. They are useful in identifying the important skills of writing, providing a framework for evaluation, and enhancing consistency in evaluating and grading writing (Flateby & Metzger, 2001). Flateby and Metzger (2001) suggest that assessment instruments be given to students and discussed in detail prior to beginning writing tasks so that students are aware of expectations and have a better understanding of what the criteria for good writing are. However, opponents of rubrics assert that rubrics restrict the flexibility that students need in order to become unique and creative writers.
Commenting on Papers
Commenting on papers is an informal evaluation of student writing in which teachers balance advice and criticism with praise in order to advance students' writing. Comments can be either marginal or terminal. Marginal comments are short single words, sentences, or phrases that call attention to the strengths and weaknesses in a paper; the comments are marked in the margins at the place where the evidence occurs. White (1994) suggests that questions should be used as opposed to sentences, as questions inspire students to think about what they know and are learning about writing. Connors and Glenn (1999) suggest that teachers focus marginal comments on substantive matters that "deal with arrangement, tone, support and style" (p. 102). Beaven (1977) outlines three sorts of comments, those that:
• Ask for more information on a point that the student has made.
• Mirror, reflect, or rephrase the student's ideas, perceptions, or feelings in a non-judgmental way.
• Share personal information about times when you, the evaluator, have felt or thought similarly. (p. 139)
Terminal comments are made at the end of a paper and are considered to be the most important message to a student about his or her writing. Within a short space, teachers must:
• Document the strengths and weaknesses of a paper.
• Let students know whether they have responded well to the task or assignment.
• Help create a psychological environment in which the students are willing to revise or rewrite.
• Encourage or discourage specific writing behaviors.
• Set specific goals that the teacher thinks the student can meet. (Connors & Glenn, 1999, p. 104)
Connors and Lunsford (1993) suggest that teachers open with praise of some portion of the paper that is effective. Teachers should then continue with comments about the rhetoric of the paper, how the content, global organization, and general effectiveness can be revised. Finally, surface-level issues should be addressed, including discussion about form and mechanics. These surface-level issues can be summarized by looking for patterns of error that can be corrected with additional instruction.
White (1999) offers guidelines for commenting on papers:
• A grade on a paper with no comment or only a cryptic phrase or two will not add much to student learning.
• Sarcastic or harsh comments will force the students to displace dissatisfaction with the paper with a dislike of the teacher and thereby short-circuit learning.
• Steady red-marking of all possible errors will bewilder and frustrate students.
• Puzzling abbreviations used to mark such points as awkward writing are abstruse to the student.
• Generalized comments with little meaning (such as "Nice job!") frustrate students, who are left wanting to know what the teacher found "nice" and what made reading the paper enjoyable (p. 123).
Strategies for Evaluating Writing
Burke (1999) states that there are several strategies that may be useful in evaluating writing performance. They include:
• Beginning with descriptors of the performances we expect of our students, the outcomes that will actually be assessed.
• Giving students as much information as possible so that they understand the requirements, the standards, and the criteria.
• Ensuring that the means of evaluation are fair and appropriate for all, by providing a variety of ways to demonstrate knowledge and skills.
• Keeping in mind the purpose of assessment and evaluation.
• Developing the habit of critical reflection-in-action in both the student and the teacher (p. 169).
Types of Peer Evaluation Groups
White (1994) outlines peer evaluation groups that can be used for evaluating writing. Collaborative groups work together to complete a finished written task, and all group members receive one grade. Reader-response groups read one another's papers and react, making notes the writer can use to improve his or her paper (Zinn, 1998; White, 1994).
Viewpoints
Contract Grading
While contract grading has many advantages in that students are actively involved in the evaluation process, there are several disadvantages to using contract grading in the classroom. Contract grading can be time-consuming, as each assignment may be valued differently (i.e., a different percentage for each assignment) from student to student. Teachers also must deal with students who want to change their contract at mid-point (Julian, 1999).
Making Comparisons
Teachers make comparisons all the time when they are evaluating writing. They compare student papers to standards or to other papers. However, Williams (2003) states that comparisons never really give an overall view of student abilities because teachers are relating their evaluation to a performance on a specific task at a given time and not "to a broader concept of ability." He asserts that students don't merely learn how to write, but that they must learn "how to write very particular texts for particular audiences" (p. 300). He suggests that portfolios be used to evaluate writing, as opposed to other measures.
Use of Impromptu Timed Essays
Impromptu timed essays, a form of formal assessment in writing, are frequently under attack for being too formulaic, unresponsive to the nature of writing, and destructive to the curriculum (White, 1995; Zinn, 1998). Impromptu timed essays do not permit enough time to pre-write and revise, thus interfering with the natural writing process. Students learn quickly that the five-paragraph essay is an acceptable form for this timed essay; success in this limited form can falsely convince students that they write well, even though writing requires more complex thinking skills than might be demonstrated in a five-paragraph paper. Zinn (1998) states that this form of essay should only be considered a first draft and should not be evaluated as though it were a finished piece of writing.
Use of Multiple-Choice Tests
Multiple-choice tests are often used to evaluate writing, specifically editing skills. Opponents of the use of these tests for evaluating writing state that they send a signal to young writers that good writing equals grammatically correct writing (CCC Committee on Assessment, 1996). While multiple-choice tests may measure a student's ability to identify editing errors, opponents of such tests remark that editing skills can be "more effectively evaluated with a student-generated writing sample" (Zinn, 1998, p. 5). While these sorts of tests are easier and faster to grade, they should not be used as the principal means of evaluating writing.
Commenting on Papers
Common informal assessments in the evaluation of writing include commenting on papers. However, commenting can make the difference between aiding students in their writing and turning them away from improving it. While students may benefit from solid comments about content and mechanics, excessive commenting gives students little opportunity to improve their own writing or to focus on specific areas for improvement (Graves, 1983; Hilgers, 1986; Hillocks, 1986; Wolf, 1988). Oftentimes, comments are focused on mechanical, or surface-level, errors rather than on global issues such as content. A focus on mechanical errors can overshadow content ideas that students may address in their papers (Searle & Dillon, 1980; Dyson & Freedman, 2003). Comments that are too general may carry little meaning for students as they move to improve their own writing (Butler, 1980; Hahn, 1981; Sommers, 1982; Sperling & Freedman, 1987; Dyson & Freedman, 2003). Teachers must be aware of their own purpose for commenting on papers and should steer away from making comments that are merely there to justify the grades they gave students (Dyson & Freedman, 2003).
Grading
Zinn (1998) states that the worst part of evaluating writing is that a grade is generally assigned at the end of the process. Lindemann (1995) indicates that a grade is a final judgment and that students often become close-minded to revision once a grade is assigned. Grading journals and portfolios poses special problems in the evaluation of writing, as evaluation methods for these tasks are often subjective (Zinn, 1998).
Holistic Scoring
Brown, Palincsar and Purcell (1986) outline issues that are of concern for teachers who may choose to use holistic scoring in their evaluation processes:
• Writing under a testing situation has little function in improving student writers' abilities and is mainly used for evaluation purposes.
• Students do not choose their own topics, as every student in a holistic evaluation must write on the same prompt.
• The writing process is not included in a holistic scoring evaluation, which does not permit students to carry out the fundamentals of a good writing process.
Portfolios
Portfolios can be unwieldy, and their sheer bulk can be a deterrent to including them in the evaluation of writing. Oftentimes, portfolios are assembled under uncontrolled conditions, which may cause difficulty in achieving reliable and consistent scores (White, 1999, p. 150).
Rubrics
Opponents of rubrics assert that rubrics overwhelm the writing curriculum by imposing the same type of strictures on writing that multiple-choice tests and impromptu timed essays exhibit. Teachers often require students to write to the rubric, which deprofessionalizes teachers' ability to make decisions about what good writing really looks like. While rubrics are supposed to function as scoring guidelines, "they often serve as arbiters of quality and agents of control" (Mabry, 1999, p. 8). According to Mabry, the only way rubrics can be effective in evaluating writing is if "the rubrics are comprehensive enough and flexible enough to accommodate different genres, voices, and styles of writing" (p. 8).
Terms & Concepts
Assessment: Assessment is a term that is often confused with evaluation. However, Williams (2003) states that assessment involves four related procedures: deciding what to measure, selecting or constructing appropriate measurement instruments, administering the instruments, and collecting information.
Formal Assessments: Formal assessments are those assessments that are driven by data which support the conclusions made from a test. They are usually referred to as standardized measures.
Global Errors: Global errors include those content-specific points of a paper that require revision for the thoughts to make sense.
Grades: Grades are defined as "a chosen or agreed upon symbol used to communicate a quality or value to the learner, other teachers, and the community at large" (Burke, 1999, p. 169).
Informal Assessment: Informal assessments are those assessments that are not data driven but rather content and performance driven.
Pre-established Standards: Pre-established standards are standards that are used as a comparison with a student's writing. These pre-established standards can be: other students' work, individual teacher standards based on prior experience, or district-wide, state or national standards in writing (Williams, 2003).
Reflection-in-Action: Reflection is the careful consideration of an action or activity and the thought or opinion that results from that consideration. Schon (1983) states that reflecting is central to improved performance.
Surface Level Errors: Surface level errors are errors in mechanics, such as spelling, grammar and handwriting.
Bibliography
Atwell, N. (1998). In the middle. Portsmouth, NH: Heinemann.
Beaven, M. (1977). Individualized goal setting, self-evaluation and peer evaluation. In C. Cooper & L. Odell (Eds.), Evaluating writing: Describing, measuring, judging (pp. 135-156). Urbana, IL: NCTE.
Brown, A., Palincsar, A., & Purcell, L. (1986). Poor readers: Teach, don't label. In U. Neisser (Ed.), The academic performance of minority children: A new perspective (pp. 105-144). Hillsdale, NJ: Lawrence Erlbaum.
Burke, J. (1999). The English teacher's companion. Portsmouth, NH: Boynton/Cook-Heinemann.
Butler, J. (1980). Remedial writers: The teacher's job as corrector of papers. College Composition and Communication, 31, 270-277.
CCC Committee on Assessment. (1996). Writing assessment: Opposition statement. College Composition and Communication, 47, 549-565.
Connors, R., & Glenn, C. (1999). The new St. Martin's guide to teaching writing. Boston: St. Martin's.
Connors, R., & Lunsford, A. (1993). Teachers' rhetorical comments on student papers. College Composition and Communication, 44, 200-223.
Dyson, A., & Freedman, S. (2003). Writing. In J. Flood, D. Lapp, J. Squire, & J. Jensen (Eds.), Handbook of research on teaching the English language arts (pp. 967-992). Mahwah, NJ: Lawrence Erlbaum.
Elbow, P. (1986). Portfolio assessment as an alternative in proficiency testing. Notes from the National Testing Network in Writing, 6, 12.
Elbow, P., & Belanoff, P. (1986). Using portfolios to judge writing proficiency at SUNY Stony Brook. In P. Connolly & T. Vilardi (Eds.), New directions in college writing programs (pp. 95-105). New York: Modern Language Association.
Flateby, T., & Metzger, E. (2001, January/February). Instructional implications of the Cognitive Level and Quality of Writing Assessment (CLAQWA). Assessment Update, 13, 4-7. Retrieved December 27, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=10350244&site=ehost-live
Graves, D. (1983). Writing: Teachers and children at work. Portsmouth, NH: Heinemann.
Hahn, J. (1981). Students' reactions to teachers' written comments. National Writing Project Network Newsletter, 4, 7-10.
Hart, D. (1994). Authentic assessment: A handbook for educators. New York: Addison-Wesley.
Haswell, R. (1983). Minimal marking. College English, 45, 600-604.
Hilgers, T. (1986). How children change as critical evaluators of writing: Four three-year case studies. Research in the Teaching of English, 20, 36-55.
Hillocks, G. (1986). Research on written composition: New directions for teaching. Urbana, IL: ERIC Clearinghouse on Reading and Communication Skills.
Julian, L. (1999). Part one: Strategies for teaching writing. In L. Troyka (Ed.), Strategies and resources for teaching writing (pp. 1-99). Upper Saddle River, NJ: Prentice-Hall.
Mabry, L. (1999, May). Writing to the rubric. Phi Delta Kappan, 80, 673-680. Retrieved December 27, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=1832512&site=ehost-live
Murphy, S., & Underwood, T. (2000). Portfolio practices: Lessons from schools, districts and states. Norwood, MA: Christopher-Gordon.
Newkirk, T., & Atwell, N. (1988). Understanding writing (2nd ed.). Portsmouth, NH: Heinemann.
Schon, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic.
Searle, D., & Dillon, D. (1980). The message of marking: Teacher written responses to student writing at intermediate grade levels. Research in the Teaching of English, 14, 233-242.
Sommers, N. (1982). Responding to student writing. College Composition and Communication, 33, 148-156.
Sperling, M., & Freedman, S. (1987). A good girl writes like a good girl: Written response and clues to the teaching/learning process. Written Communication, 4, 343-369.
Townsend, J., & Fu, D. (1997). Writing assessment. Preventing School Failure, 41, 71-77. Retrieved December 27, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9704093037&site=ehost-live
Turbill, J. (1989). Evaluation in a whole language classroom: The what, the why, the how, the when. In E. Daly (Ed.), Monitoring children's language development: Holistic assessment in the classroom (pp. 17-21). Portsmouth, NH: Heinemann.
White, E. (1985). Teaching and assessing writing. San Francisco: Jossey-Bass.
White, E. (1994). Assigning, responding, evaluating: A writer's guide (3rd ed.). Boston: Bedford-St. Martin's.
White, E. (1995). An apologia for the timed impromptu essay test. College Composition and Communication, 46, 30-44.
White, E. (2004, Spring). The changing face of writing assessment. Composition Studies, 32, 109-116. Retrieved December 27, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15385508&site=ehost-live
Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.
Williams, J. (2003). Preparing to teach writing: Research, theory, and practice (3rd ed.). Mahwah, NJ: Lawrence Erlbaum.
Wolf, D. P. (1988). Opening up assessment. Educational Leadership, 45, 24-29. Retrieved December 27, 2007, from EBSCO online database, Education Research Complete: http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=8521035&site=ehost-live
Zinn, A. (1998, Winter). Ideas in practice: Assessing writing in the developmental classroom. Journal of Developmental Education, 22, 2-10. Retrieved December 27, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=08943907&site=ehost-live
Suggested Reading
Alexander, B., & Crowley, J. (1997, September). E-COMP: A few words about teaching writing with computers. T H E Journal, 25, 66-68. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9710102978&site=ehost-live
Baldwin, D. (2004, October). A guide to standardized writing assessment. Educational Leadership, 62, 72-75. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=14635600&site=ehost-live
Barlow, L., Liparulo, S., & Reynolds, D. (2007, April). Keeping assessment local: The case for accountability through formative assessment. Assessing Writing, 12, 44-59. Retrieved December 15, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=25768828&site=ehost-live
Bizzell, P. (1987). What can we know, what must we do, what may we hope: Writing Assessment. College English, 41, 575-584.
Boomer, G. (1985). The assessment of writing. In A. Evans (Ed.), Direction and misdirection in English evaluation (pp. 63-64). Ottawa, Ontario, Canada: Canadian Council of Teachers of English.
Brackett, M. A., Floman, J. L., Ashton-James, C., Cherkasskiy, L., & Salovey, P. (2013). The influence of teacher emotion on grading practices: A preliminary look at the evaluation of student writing. Teachers & Teaching, 19, 634-646. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=90821954&site=ehost-live
Brand, A. (1992). Portfolio and test essay: The best of both writing assessment worlds at SUNY Brockport. In ERIC digest [On-line]. Available at http://ericae.net/db/digs/ed347572.htm
Cooper, C., & Odell, L. (1977). Evaluating writing. Urbana, IL: NCTE.
Covey, S. (1989). The seven habits of highly effective people. New York: Simon and Schuster.
Crawford, L., Helwig, R., & Tindal, G. (2004, March/April). Writing performance assessments: How important is extended time? Journal of Learning Disabilities, 37, 132-142. Retrieved December 15, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=12329452&site=ehost-live
Dyer, J., & Thorne, D. (1994, March/April). Holistic scoring for measuring and promoting improvement in writing skills. Journal of Education for Business, 69, 226-231. Retrieved December 15, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9405097735&site=ehost-live
Estrem, H. (2004, Winter). The portfolio's shifting self: Possibilities for assessing student learning. Pedagogy, 4, 125-127. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=12261580&site=ehost-live
Ezarik, M. (2004, January). Beware the writing assessment: Q & A with George Hillocks, Jr. District Administration, 40, 66. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=11823677&site=ehost-live
Faigley, L., Cherry, R., Jolliffe, D., & Skinner, A. (1985). Assessing writers' knowledge and processes of composing. Norwood, NJ: Ablex.
Flateby, T., & Metzger, E. (1999). Writing assessment instrument in higher order thinking skills. Assessment Update, 11, 6-7.
Fuchs, L. (1994). Connecting performance assessment to instruction. Reston, VA: The Council for Exceptional Children.
Isaacson, S. (1999). Instructionally relevant writing assessment. Reading and Writing Quarterly, 14, 29-48. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=1463707&site=ehost-live
Jeffery, J. V. (2011). Subjectivity, intentionality, and manufactured moves: Teachers' perceptions of voice in the evaluation of secondary students' writing. Research in the Teaching of English, 46, 92-127. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=65154808&site=ehost-live
Lindemann, E. (1995). A rhetoric for writing teachers. NY: Oxford UP.
Marzano, R. (1994). Lessons from the field about outcome-based performance assessment. Educational Leadership, 51, 44-50.
Middletown, H. (2006, Fall). On a scale: A social history of writing assessment in America. Composition Studies, 34, 135-137. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=23532528&site=ehost-live
Murphy, S., & Ruth, L. (1999). The field testing of writing prompts reconsidered. In M. Williamson & B. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 266-302). Cresskill, NJ: Hampton Press.
Newkirk, T. (1989). The first five minutes: Setting the agenda in a writing conference. In C. Anson (Ed.), Writing and responding (pp. 317-331). Urbana, IL: NCTE.
Nickoson-Massay, L. (2006, Spring). Coming to terms: A theory of writing assessment. Composition Studies, 34, 139-142. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=21294503&site=ehost-live
Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy of Sciences.
Routman, R. (1994). Invitations: Changing as teachers and learners, K-12. Portsmouth, NH: Heinemann.
Strickland, K., & Strickland, J. (1998). Reflection on assessment: Its purposes, methods, and effects on learning. Portsmouth, NH: Boynton-Cook.
Tucker, M., & Codding, J. (1998). Standards for our schools: How to set them, measure them, and reach them. San Francisco: Jossey-Bass.
Underwood, T. (1999). The portfolio project: A study of assessment, instruction, and middle school reform. Urbana, IL: National Council of Teachers of English.
Wolf, S., & Gearhart, M. (1997, Autumn). New writing assessments: The challenge of changing teachers' beliefs about students as writers. Theory into Practice, 36, 220-231. Retrieved December 27, 2007, from EBSCO Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=6951715&site=ehost-live