Performance-Based Assessment
Performance-Based Assessment (PBA) is an evaluative approach that emphasizes students' ability to apply their knowledge and skills in practical, real-world contexts. Unlike traditional assessments that focus on rote memorization, PBAs require students to engage in tasks that demonstrate their understanding and abilities through various formats such as projects, performances, and portfolios. This method aims to provide a richer, more individualized assessment experience, allowing students to choose or design tasks that resonate with them personally.
Instructors often employ rubrics to evaluate performance, assessing multiple competencies rather than a single correct answer. This can help mitigate bias, as assessments can be tailored to accommodate diverse learning styles and backgrounds. PBAs are often associated with experiential learning and authentic assessment, as they aim to reflect students' real-life skills and knowledge application. Though they can be more time-consuming to develop and implement compared to standardized tests, PBAs are increasingly valued for their potential to foster critical thinking, creativity, and collaboration among students.
Keywords Analytic Rubric; Authentic Assessment; Experiential Learning; Goals and Objectives; High-Stakes Testing; Holistic Rubric; No Child Left Behind Act of 2001 (NCLB); Performance-Based Assessment; Portfolio; Rubrics; Scoring Rubric; Summative Assessment; Test Bias
Overview
Performance-based assessments require students to apply and demonstrate the depth of their knowledge and skills. This can be accomplished in many ways, including writing exercises, completing mathematical computations, and conducting experiments. Performance-based assessments can range from very basic tasks to comprehensive collections of work gathered over time, such as an entire school term. Regardless of the format, performance-based assessments tend to have several features in common, including direct observation of student behavior rather than inferring competence from a graded paper (Elliott, 1995).
Performance-based assessments can “provide students with rich, contextualized, and engaging tasks and allow students to choose or design tasks or questions that are meaningful to them” (Lam, 1995, ¶ 9). Such individualization can help produce bias-free scores in a way that standardized tests cannot because the tasks can be used with students of varying learning experiences and backgrounds. Performance-based assessments allow instructors to use the methods most appropriate for each student, which minimizes bias by accounting for extraneous factors in the assessment design that can affect student performance. For example, instead of using a paper-and-pencil word-problem test in mathematics, an instructor could present the questions orally to ESL students, rewording the sentences or using another language so the students can better comprehend the questions. Performance-based assessments also give students an opportunity to explain and defend their projects to the instructor and to describe how they arrived at their conclusions (Lam, 1995).
Basis in Authentic Assessment & Experiential Learning
Performance-based assessments are a type of authentic assessment and can be used in conjunction with more traditional types of assessment. For example, if an instructor would like to focus on recall of facts, then a multiple-choice or short-answer assessment may be an appropriate choice. However, when instructors are focused on more complex learning outcomes such as reasoning, communication, and teamwork, a performance-based assessment may be the more appropriate choice (Perlman, 2002, as cited in Moskal, 2003a). Performance-based assessments do not have only one right answer; instead, there are levels of proficiency that students may attain. Performance-based assessments can come in different formats, and the activities may be completed either in a group or individually. A major difference between performance-based assessments and other forms of assessment is that performance-based assessments require students to demonstrate their knowledge in a particular context (Brualdi, 1998; Wiggins, 1993, as cited in Moskal, 2003a).
Performance-based assessments gained popularity in the early 1990s during a movement toward curriculum reform that promoted more hands-on, experiential learning. They were conceived to help develop students' higher-order thinking and reasoning skills, not just to test their ability to memorize facts and details or perform calculations. Performance-based assessments often consist of a problem to solve or a concrete task to complete, and students may be judged on their ability to investigate, the methods they use, the logic they display, and the conclusions they develop, as well as whether or not they correctly solved the problem. Performance-based assessments may also require students to work in a group setting or participate in group discussions to learn real-world skills they will need later in life. Performance-based assessments may not have a single correct approach and tend to better assess students' ability to use the knowledge they have attained, whereas multiple-choice, true-or-false, and fill-in-the-blank questions can sometimes measure students' test-taking skills more than their knowledge of the subject matter (Seal, 1993).
Types of Performance-Based Assessments
Performance-based assessments can be broken down into five categories:
• Portfolios: collections of student work that represent a student's progress and activities; they may include drafts to show the evolution of a project.
• On-Demand Tasks: require students to answer an instructor's prompt or respond to a problem within a short period of time, leaving little room for interpretation.
• Projects: last longer than on-demand tasks, potentially an entire term, and may require working in a group setting.
• Exhibitions: presentations of various kinds of student work; similar to a portfolio, but shared with audiences beyond instructors and parents.
• Instructor Observations: should be unobtrusive and are conducted primarily to rate student performance, but also to identify students' strengths and weaknesses (Kane & Khattri, 1995).
Use of Rubrics
Unlike most other forms of assessment, performance-based assessments do not have only one right answer. Instead, there are levels of proficiency that students may attain, which means instructors need an instrument that allows them to rate each student's performance. This can most easily be accomplished by using a rubric. A rubric is a rating sheet that allows instructors to determine the level of competency each student has achieved for each concept being assessed (Brualdi, 1998). Most rubrics list levels of competency using numbers or impartial phrasing. Four levels tend to be preferable, such as:
• "Exceeds Expectations,"
• "Adequate,"
• "Needs Improvement,"
• "Inadequate"
whereas a three-level scale may be limited to:
• "Consistently Successful,"
• "Making Progress,"
• "Needs Improvement."
Such descriptive levels can be difficult to translate into a letter grade or score for summative assessment grading purposes and report cards (Andrade, 2000).
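To make the translation concrete, the following minimal sketch (in Python) shows one way a four-level scale could be converted into points and then into a letter grade. The level names come from the list above, but the point values and grade cutoffs are illustrative assumptions, not a published standard.

```python
# Illustrative sketch: converting four rubric levels into points and a
# letter grade. The point values and grade cutoffs below are assumptions
# for demonstration, not a published standard.

LEVEL_POINTS = {
    "Exceeds Expectations": 4,
    "Adequate": 3,
    "Needs Improvement": 2,
    "Inadequate": 1,
}

def letter_grade(ratings):
    """Average the point values of per-criterion ratings, then map the
    average onto an assumed letter-grade scale."""
    points = [LEVEL_POINTS[r] for r in ratings]
    average = sum(points) / len(points)
    if average >= 3.5:
        return "A"
    if average >= 2.5:
        return "B"
    if average >= 1.5:
        return "C"
    return "D"

# Three criteria rated on the four-level scale: average 3.33, grade B.
print(letter_grade(["Exceeds Expectations", "Adequate", "Adequate"]))
```

With only three levels, the scale offers fewer distinctions to spread across a conventional grading scheme, which is where the translation difficulty arises.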
Since performance-based assessments assess more than one skill, rubrics are generally used to score the assessment. Rubrics for portfolios need to be somewhat generic, whereas rubrics used for projects, on-demand tasks, and exhibitions should be tailored to the specific assignment. For example, to help students understand the connection between science and medicine, an instructor could have students choose a specific topic that interests them, explain the topic's relationship to the human body, and then outline what advances still need to be made regarding the chosen topic. Students would be assessed on the thoroughness and quality of their research, their writing ability, and the visual presentation they produce, all of which would be addressed by the scoring rubric. A performance-based assessment for mathematics could have students complete three categories of problems (puzzles, investigations, and applications), with students assessed on their ability to communicate and to apply the concepts to word problems (Kane & Khattri, 1995).
Goals & Objectives
Before a performance-based assessment or scoring rubric is developed, instructors must clearly identify the purpose of the activity, which will help guide the development of both the assessment and rubric.
• Goals are the broad statements of expected student outcomes, and
• Objectives are the observable behaviors broken down for each goal (Rogers & Sando, 1996, as cited in Moskal, 2003a).
To help determine specific goals and objectives, instructors should try to determine what knowledge and/or skills they want their students to learn; what content, skills, and knowledge the activity will assess; and what evidence is needed to appropriately evaluate the selected skills and knowledge. Moskal (2003a) suggests that to assist in writing goals and objectives, instructors should consider the following:
• The goals and objectives should be developed prior to instruction, provide focus for both instruction and assessment, and be clearly aligned with instruction.
• Both goals and objectives should reflect important learning outcomes that are worthwhile for students.
• The objectives for each goal should be obvious since the objectives are the framework upon which a goal is evaluated.
• All the important components of a goal and what students are expected to learn should be reflected through the objectives.
• Objectives should describe measurable student outcomes and specify the student behavior that will demonstrate attainment of the goal (Moskal, 2003a, pp. 3-4).
The most important aspect of writing effective goals and objectives is to make sure that student outcomes are clearly defined and can be measured. Instructors should share goals and objectives with their students so that they know what is expected of them, which keeps them focused on the task.
Designing the Assessment
Once goals and objectives have been determined, then the assessment can be designed. In order to develop a performance-based assessment, instructors should consider the following (Moskal, 2003a):
• Performance-based assessments should allow students the opportunity to demonstrate their skills and knowledge in real-life situations (Airasian, 2000, 2001; Wiggins, 1993, as cited in Moskal, 2003a). Assessments should mirror the kind of work students may encounter after graduation, such as developing project reports, giving presentations, and collecting and analyzing data, and should not merely be reflective of prior instruction (Wiggins, 1990, as cited in Moskal, 2003a).
• The completion of the assessment should be a useful learning experience for everyone involved. Students should have increased knowledge of both content and construct, and instructors should have a better understanding of what their students know and can do and what they have yet to master.
• The goals and objectives of the activity should be clearly aligned with measurable outcomes of the activity. This helps instructors determine whether their instruction is properly aligned with projected student outcomes and allows them to make adjustments as necessary.
• The assessment should not evaluate unintended competencies. Instructors should try to determine if there are elements of the task that require additional knowledge that will not be addressed and if those elements will adversely affect student performance. If so, then adjustments should be made.
• The assessment should be fair for all and free from test bias. The assessment should be constructed so that it does not give an advantage to any one group of students over the rest of the students.
• Instructors should make sure that both written and oral explanations of the tasks are clear and concise and students are familiar with the language being used and understand what is expected of them. For written instructions, students' reading level should be taken into consideration. Additionally, students should be permitted to ask questions if they need clarification.
Administering Performance-Based Assessments
In order to properly administer a performance-based assessment, the following should be taken into consideration (Moskal, 2003b):
• Before administering the performance-based assessment, instructors should determine exactly what tools may be necessary to complete the tasks and then make sure that all of those tools are readily available to students. This could include providing access to the school library or a laboratory, making sure a particular computer program is installed, and supplying scientific calculators.
• The scoring rubric should be discussed with students well before the actual assessment activity. This will enable students to make any adjustments in their learning and studying to meet the criteria and should provide everyone an opportunity to perform to the best of their capabilities.
A scoring rubric is useful because it can be created for a variety of subjects and situations and can be used to rate effort, knowledge, skill, and work habits for assigned tasks. A properly constructed scoring rubric should give independent raters similar scores when used to assess the same tasks, and it should perform consistently over time. Having a set of papers available that clearly shows the different levels of performance can provide a point of comparison for raters. A set of anonymous examples of the different performance levels from previous classes should also be provided to students to help them see what constitutes mediocre, average, and exemplary work. This clarifies instructor expectations for both students and their parents and enables them to see exactly what competencies the performance-based assessment seeks to assess. If an analytic rubric is used, the report should contain the score for each criterion. If a holistic rubric is used, an explanation of how the score was determined and how it connects to the scoring criteria needs to be included (Moskal, 2003b).
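The following minimal sketch (in Python) illustrates what an analytic score report and a basic rater-consistency check might look like in practice. The criterion names follow the science example above; the 1-4 ratings and the exact-agreement measure are hypothetical simplifications, not a prescribed procedure.

```python
# Sketch of an analytic-rubric score report and a simple rater-consistency
# check. Criterion names follow the science example above; the 1-4 ratings
# are hypothetical.

CRITERIA = ["research quality", "writing ability", "visual presentation"]

def report(scores):
    """Analytic report: one score per criterion, plus the total."""
    lines = [f"{c}: {scores[c]}/4" for c in CRITERIA]
    lines.append(f"total: {sum(scores.values())}/{4 * len(CRITERIA)}")
    return "\n".join(lines)

def exact_agreement(rater_a, rater_b):
    """Fraction of criteria on which two independent raters assigned the
    same level; a well-constructed rubric should keep this rate high."""
    matches = sum(rater_a[c] == rater_b[c] for c in CRITERIA)
    return matches / len(CRITERIA)

rater_a = {"research quality": 3, "writing ability": 4, "visual presentation": 3}
rater_b = {"research quality": 3, "writing ability": 3, "visual presentation": 3}
print(report(rater_a))
print(f"agreement: {exact_agreement(rater_a, rater_b):.0%}")  # 67%
```

A holistic rubric, by contrast, would replace the per-criterion scores with a single overall rating, which is why its report needs the accompanying explanation described above.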
Applications
Interpreting the Results: Reflection
The results of performance-based assessments should be used by instructors to improve both their instruction and the assessment process. Instructors should reflect on their students' responses to determine what students have and have not learned. For example, if students consistently did poorly on one component, there could be a gap in instruction, or the instructor's presentation methods may need to be revisited. This can help improve future classroom instruction. Reflection also allows instructors to improve both the performance-based assessment and the scoring rubric, if necessary, based on the information each provides (Moskal, 2003b). Instructors can use the data these assessments provide to help direct their instruction.
Validity
A valid performance-based assessment should have meaning for students and instructors and motivate quality performance. It should also require students to demonstrate complex cognition and meet current standards of subject-matter quality. Additionally, a valid performance-based assessment should minimize the effects of secondary skills that are not relevant to the primary function of the assessment, and there should be explicit standards for the rating continuum (Baker et al., 1993, as cited in Elliott, 1995). In trying to determine the validity of a performance-based assessment, a few considerations should be addressed:
• Instructors should look at how the assessment relates to similar measures that assess the same constructs (a simple way to check this is sketched after this list).
• Instructors should also determine whether the assessment can predict future performances.
• Instructors should also make sure the assessment adequately covers the content being taught.
Other aspects of validity that should be considered include whether or not the assessment instrument results in discriminatory practices against any group of individuals and whether it is used to evaluate people who should not be assessed by the instrument, such as parents or instructors (Elliott, 1995).
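The first two considerations can be checked empirically by correlating students' scores on the performance-based assessment with their scores on a comparable measure of the same construct. The minimal sketch below (in Python 3.10+, with fabricated scores used purely for illustration) shows the computation; a coefficient near +1 suggests the two instruments rank students similarly.

```python
# Sketch: estimating convergent validity by correlating scores from the
# performance-based assessment with scores from a comparable measure of
# the same construct. Both score lists are fabricated for illustration.
from statistics import correlation  # available in Python 3.10+

performance_scores = [3.5, 2.0, 4.0, 3.0, 2.5, 3.5]
comparable_measure = [82, 61, 95, 74, 66, 80]

r = correlation(performance_scores, comparable_measure)
print(f"Pearson r = {r:.2f}")  # a value near +1 suggests the two
# instruments rank students similarly; a low value warrants a closer look
```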
Implementing a Performance-Based Program
Schools implementing performance-based assessments should provide appropriate professional development opportunities for their instructors. Training should focus on assessment design, setting student performance standards, designing and using scoring rubrics, and aligning instructional methods and materials with performance-based assessments. A mismatch between the intended purpose and the format of the assessment instrument can undermine the effectiveness of performance-based assessments. A rigid performance-based assessment that gives instructors no leeway to use their best judgment is unlikely to bring about changes in instructional practices, and if the assessment is paired with a specific scoring rubric, there is little flexibility for instructional adjustment. The opposite problem arises with more generic state performance guidelines and rubrics: when instructors use their judgment to adjust the assessment, the results may no longer be valid for state accountability purposes (Kane & Khattri, 1995).
Viewpoints
Detractors of performance-based assessments contend that they are too subjective, time consuming, vague, and expensive (Andrew, 1997). Performance-based assessments can also disadvantage students with weaker social skills or introverted students who are less comfortable presenting, debating, and verbalizing their thoughts. Performance-based assessments also make it difficult to compare student results when students are permitted to select their own topics for writing assessments, different experiments, and so on (Miller & Legg, 1993, as cited in Lam, 1995). Additionally, “if students are delegated the responsibility of determining how they should be assessed, such as choosing an essay topic, picking out best work” for inclusion in their portfolios, those who lack awareness of their own strengths and weaknesses begin the project at a disadvantage (Lam, 1995, ¶ 11).
Performance-based assessments can have a positive impact on the classroom by helping students build on their skills and work on more advanced projects that require a broader knowledge base and skill set, provided the tasks are truly multidisciplinary in nature. However, this more holistic way of assessing students is far more time consuming to develop, administer, and score, and it also requires more classroom time for students to perform the tasks. In the wake of the No Child Left Behind Act, performance-based assessments lost popularity because of the time involved and because the country moved to a high-stakes testing environment that requires instructors to focus on meeting the act's requirements and on more standardized assessments used to make adequate yearly progress determinations.
Terms & Concepts
Analytic Rubric: Analytic rubrics break projects down into individual parts. Each part is then rated using a scale provided with the rubric.
Authentic Assessment: Authentic assessment requires students to use prior knowledge, recent learning, and relevant skills to complete realistic, complex projects.
Experiential Learning: Experiential learning combines direct, real-world experience that is meaningful to the student with guided reflection and analysis.
High-Stakes Testing: High-stakes testing is the use of test scores to make decisions that have important consequences for individuals, schools, school districts, and/or states and can include high school graduation, promotion to the next grade, resource allocation, and instructor retention.
Holistic Rubric: A holistic rubric requires the overall project or presentation be scored as a whole without rating each component separately.
No Child Left Behind Act of 2001 (NCLB): The No Child Left Behind Act of 2001 is the latest reauthorization and a major overhaul of the Elementary and Secondary Education Act of 1965, the major federal law regarding K-12 education.
Portfolio: A portfolio is a systematic collection of teacher observations and student work representing the student's progress and activities. Portfolios often include projects that have not yet been completed, in order to show the evolution of the assignment and how it looks at different stages.
Rubric: A rubric is a set of ordered categories to which a given piece of work can be compared. It is a guide that shows how what learners do will be assessed and graded.
Summative Assessment: Summative assessments are intended to summarize what students have learned and occur after instruction has been completed.
Test Bias: Test bias occurs when provable, systematic differences in test results are discernible based on group membership, such as gender, socioeconomic standing, race, or ethnic group.
Bibliography
Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57, 13. Retrieved June 26, 2007, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=3270122&site=ehost-live
Andrew, R. (1997). The statewide waste and discouragement of performance assessment. Contemporary Education, 69, 11. Retrieved August 21, 2007, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9712054006&site=ehost-live
Brualdi, A. (1998). Implementing performance assessment in the classroom. Washington, D.C.: ERIC Clearinghouse on Assessment and Evaluation. (ERIC Document Reproduction Service No. ED423312). Retrieved August 21, 2007, from Education Resources Information Center. http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/15/c9/a1.pdf
Elliott, S. (1995). Creating meaningful performance assessments. Washington, D.C.: Office of Education Research and Improvement. (ERIC Document Reproduction Service No. ED381985). Retrieved August 21, 2007, from EBSCO Online Education Research Database. http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/13/dc/37.pdf
How rubrics work. (2005). Teaching Professor, 19, 6. Retrieved June 26, 2007, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=16070168&site=ehost-live
Kane, M., & Khattri, N. (1995). Assessment reform. Phi Delta Kappan, 77, 30. Retrieved August 21, 2007, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9510086578&site=ehost-live
Krolak-Schwerdt, S., Böhmer, M., & Gräsel, C. (2013). The impact of accountability on teachers' assessments of student performance: A social cognitive analysis. Social Psychology of Education, 16, 215-239. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=87989422&site=ehost-live
Lam, T. (1995). Fairness in performance assessment. (Report EDO-CG-95-25). Washington, D.C.: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED391982). Retrieved August 21, 2007, from EBSCO Online Education Research Database. http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/14/61/30.pdf
Moskal, B. (2003a). Developing classroom performance assessments and scoring rubrics - part I. (Report EDO-TM-03-02). Washington, D.C.: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED481714). Retrieved August 21, 2007, from EBSCO Online Education Research Database. http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/1b/7d/eb.pdf
Moskal, B. (2003b). Developing classroom performance assessments and scoring rubrics - part II. (Report EDO-TM-03-03). Arlington, VA: National Science Foundation. (ERIC Document Reproduction Service No. ED481715). Retrieved August 21, 2007, from EBSCO Online Education Research Database. http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/1b/7d/ee.pdf
Pierce, D. (2012). Performance assessment making a comeback in schools. Eschool News, 15, 21-23. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=71425556&site=ehost-live
Saxton, E., Belanger, S., & Becker, W. (2012). The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments. Assessing Writing, 17, 251-270. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=82065149&site=ehost-live
Seal, K. (1993). Performance-based tests. Omni, 16, 66. Retrieved August 21, 2007, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9311047539&site=ehost-live
Suggested Reading
Center for Performance Assessment (2001). Performance Assessment Series: Classroom Tips and Tools for Busy Teachers. Englewood, CO: Advanced Learning Press.
Glatthorn, A., Bragaw, D., Kawkins, K., & Parker, J. (1998). Performance Assessment and Standards-Based Curricula: The Achievement Cycle. Larchmont, NY: Eye on Education.
Kane, M. & Mitchell, R. (1996). Implementing Performance Assessment: Promises, Problems, and Challenges. Florence, KY: Lawrence Erlbaum Associates, Inc.
Rogers, S. & Graham, S. (1998). The High Performance Toolbox: Succeeding with Performance Tasks, Projects and Assessments. Evergreen, CO: Peak Learning Systems.