College Entrance Exams
College Entrance Exams (CEEs), also known as college-admissions tests, are standardized assessments designed to evaluate high school graduates' potential for academic success in college. These exams, primarily the Scholastic Aptitude Test (SAT) and the American College Test (ACT), play a significant role in college admissions, especially at selective institutions. Despite their widespread use, the predictive validity of CEEs has been questioned, as research shows performance disparities across gender, ethnic, and socioeconomic groups, leading to criticism and calls for their reform or elimination.
Typically taken by high school juniors and seniors, these exams assess verbal and mathematical skills and can influence admission decisions significantly. Students often retake these tests to improve their scores, and while high scores can enhance college admission prospects, they may not reliably forecast a student's long-term college performance. Factors such as high school grades, personal attributes, and support systems can be equally or more important in determining a student's success in higher education.
Moreover, the focus on CEEs has raised concerns about narrowing educational content and creating undue pressure on students. As discussions continue regarding the fairness and effectiveness of these assessments, alternatives to using CEEs for admissions are being explored by various academic institutions.
College-entrance examinations (CEEs), also known as college-admissions tests, are standardized achievement and aptitude tests used to predict high-school graduates' potential for academic success in college and to either accept or deny their entry. Students' CEE scores are a main determinant of college admissions, particularly at more selective and elite colleges and universities, though the predictive validity of the tests as measurements of future college academic success is questionable. Research indicates that the tests demonstrate performance gaps between males and females as well as between ethnic and socioeconomic groups. Because of these flaws, the tests have received much criticism, with some critics calling for their abolishment.
Keywords Achievement Tests; Aptitude Tests; College Admissions Tests; College Board; College Entrance Examinations; Composite Score; Content Validity; Correlations; Criterion-Referenced Test; Norm-Referenced Test; Predictive Validity; Profile Reports; Standard Error of Measurement; Standardized Tests; Test Battery; Test Reliability
Overview
College-entrance examinations (CEEs) are taken by most public and private high-school students, primarily juniors and seniors, during their secondary education. They are among the many national standardized tests used by states and local school districts to measure aptitude or achievement. The two best-known CEEs—the Scholastic Aptitude Test (SAT) and the American College Test (ACT)—are offered by private corporations (Bishop, 2005; Cavanaugh, 2005; Weber, 1991).
Most colleges and universities require their applicants to take a CEE and meet or surpass a minimum score in order to be considered for admission. It is generally recommended that students first take the exams in the spring of their junior year, which leaves ample time to retake the exam and try to improve their scores. In fact, large numbers of students take a "preliminary" or "trial" CEE; half of SAT examinees, for example, take the test two or more times. Portions of the exam can be taken over a period of two weeks or longer. Students generally take exams in a few core-area fields along with other elective subject-area exams (Angoff, 1991; Bishop, 2005).
History
During the period from 1946 to 1961, testing programs administered by the College Entrance Examination Board (CEEB) expanded as part of the post-World War II measurement renaissance. The CEEB turned the SAT over to the Educational Testing Service (ETS) when ETS was created in 1947 (Enotes.com, 2007; Karmel & Karmel, 1978). Today, the Princeton, NJ-based ETS develops, administers, and scores the SAT college-admissions tests for the College Board (Bradley, 2005; Enotes.com, 2007; ETS, 2007; Webb, Metha, & Jordan, 1992).
The SAT
The SAT was developed in the 1920s as a successor to the National Intelligence Test (NIT) (Saretzky, 1982). Carl Campbell Brigham, the creator of both the NIT and the SAT, was skeptical that the tests measured intelligence. The SAT was originally developed, at least in part, to promote equity and to expand the pool of students eligible for elite Eastern colleges and universities (Wiggins, 1998). The SAT was probably sought and used by socially selective schools to evaluate borderline candidates who were unable to otherwise demonstrate their qualifications. Based on its origin and history, Saretzky (1982) concluded that it was doubtful that the SAT was developed as an instrument of prejudice.
In 1926, the objective SAT was used for the first time. It was initially dubbed an 'aptitude' test, not an 'achievement' test. With the advent of the SAT program, students no longer had to travel to each college or university they wanted to attend and sit for each school's individual admissions examination. It did not take long for the SAT to become a household name for aspiring college students (Hubin, 1997; Karmel & Karmel, 1978; Popham, 2006; Wiggins, 1998).
The SAT has now existed for more than three-quarters of a century. The College Entrance Examination Board (CEEB), or College Board, is the original developer and the continuing sponsor of the SAT—the oldest and best known of the college-admissions testing programs. The test measures verbal and quantitative aptitudes, including critical reading, writing, and mathematics skills. Most items on the SAT are multiple-choice, but the test also includes a timed essay. The SAT itself is timed at three hours and 45 minutes (ETS, 2007; Karmel & Karmel, 1978; Popham, 2006). In 2013, the College Board announced that the SAT would again be redesigned, this time to align with the Common Core Standards (Adams, 2013).
The SAT subject tests are one-hour, multiple-choice exams that measure students' knowledge of specific subjects and their ability to apply that knowledge. Many colleges now require or recommend one or more SAT subject-area tests for admission (ETS, 2007).
The ACT
ACT Incorporated of Iowa City, Iowa, directs the American College Testing (ACT) Program and develops and produces the ACT CEEs (Bradley, 2005). The ACT program was founded in 1959 with objectives similar to those of the College Board's SAT program (Bishop, 2005; Karmel & Karmel, 1978). Approximately 925,000 high-school graduates took the ACT in 1996, encompassing roughly 60 percent of the nation's entering college freshmen. The national average composite score was 20.9 in 1996, up from 20.8 in 1995 (ACT Inc., 1996).
The ACT assesses high-school students' general educational development and ability to complete college-level work (ACT, 2007). The ACT measures skills and knowledge more closely tied to the traditional academic disciplines than the SAT does. Its battery of multiple-choice tests covers skills in English, reading, mathematics, and science. ACT academic subject-area tests are about three hours long (ACT, 2007; Bishop, 2005). The ACT Writing Test is an optional exam that measures students' skill in planning and writing a short essay (ACT, 2007).
Declining Test Scores
Average SAT scores declined consistently from the mid-1960s through the 1990s. SAT scores in the late 1980s were substantially lower than they were in the early 1960s and below the levels of 1970 (Cavazos, 1989; Cetron & Gayle, 1991; Kifer, 2001; Martz, 1992; Stevens & Wood, 1987). Trends from CEEs during the late 1990s indicated that scores were lagging or stagnating on at least certain core exams or exam sections, and the most talented students appeared to be losing ground. Top students of the early 1990s, for example, were not scoring as high as top students of the 1960s. The College Board has asserted that the decline in U.S. students' CEE scores was due to lapses in high-school instruction (Cavanaugh, 2002; Martz, 1992).
Recent Changes
Beginning in the mid-1980s and extending through the mid-1990s, a research and development program was in place to evaluate and change the SAT program. As a result of these efforts, the College Board changed parts of the test so that the content and objectives of the SAT would more closely align with those of the ACT. The SAT was historically a general test of intellectual skill and was not coupled to particular courses of study as the ACT was. The SAT now also offers subject-area tests similar to those offered by the ACT. In 2003, ACT Inc. made the ACT's written essay optional. Timed-essay exams were incorporated into the SAT in the spring of 2005 and into the ACT in the fall of 2004 (Cavanaugh, 2005; Matzen & Hoyt, 2004; Minke, 1996; Wiggins, 1998).
Recent years have seen the usefulness of standardized CEEs brought into question. While some have suggested that admissions tests be given less weight in determining the qualifications of college applicants, others have proposed that CEEs be eliminated altogether as measures for admission to colleges and universities (Dennis, 2001; Olson, 2007).
Further Insights
College-Preparatory Curricula
There has been an increasing level of focus on college-entrance standards in recent years. Most students are not well prepared to take college-entrance exams, let alone to enter and compete effectively in higher education. Lack of academic preparation is the chief reason for college failure. It has been estimated that only one-third of college-bound students are academically prepared for college and that over three-quarters of students will struggle in algebra, biology, and writing (Carris, 1995; Cavanaugh, 2004; Krause, 2005; Schmoker, 2006). Minority students are generally not as well prepared for the academic rigors of college as white students are (Brown, 1997; Clayton, 2001).
College-bound students are not enrolling in the rigorous, college-preparatory classes necessary to prepare them for higher education. Nor are students taking sufficiently challenging secondary math courses, such as Algebra I, Algebra II, Geometry and Trigonometry, which are needed to succeed on the mathematics sections of CEEs and later on in college mathematics classes (Bouris, Creel, & Stortz, 1998; Cavanaugh, 2002; Devarics, 2005). Although the SAT and the ACT are national tests unrelated to a particular curriculum, they are related to college readiness and preparation in subject-area courses (Kifer, 2001).
Test Preparation
Research has also found that reviewing test materials and developing a familiarity with test formats can improve student performance on standardized tests like the SAT and ACT (Gage & Berliner, 1988). Significant improvements result from small amounts of coaching time; in some estimates the correlation between time spent being coached and score improvement is as high as .70.
Test-preparation programs which teach students test-taking strategies can help improve their scores on CEEs (Borja, 2003; Schwartz, 2004). A wide variety of products are available, including:
• Test-prep books, which are available from a number of publishers such as Kaplan and the Princeton Review
• Computer programs that use games and tutorials to familiarize students with test material and teach test strategies
• Multimedia materials such as CDs and DVDs that teach test-taking strategies (Agency Group 09, 2007; Grimes, 2004; Ritter & Salpeter, 1988)
• Online tutorials and test demonstrations (Borja, 2003)
High schools are also taking a direct hand in helping to prepare their students for the tests. Some schools now offer test-prep programs and courses, and may even have a test-prep teacher. Other schools may incorporate test-taking practices into students' everyday coursework or hold test-preparation clinics to coach students for the exams. High school counselors or principals can also provide information about CEEs and available school-based test-prep programs (Borja, 2003; Carris, 1995; Cole, 1987).
College Admissions
Students have their SAT or ACT results sent directly to the colleges or universities to which they are applying. Colleges and universities then use applicants' CEE scores, along with other application materials, to assess students' overall academic capability, their desirability for admission, and their suitability for scholarships. Most institutions require SAT or ACT scores of all applicants and set minimum composite scores for admission. These minimums vary depending on the selectivity of the school. Some colleges also set minimum scores for specific subject tests (Stephen, 2003).
The importance of CEE scores is correspondingly higher at more selective colleges and universities: one way elite schools preserve their status is by demanding high test scores. However, some top schools are becoming wary of applicants with perfect test scores. Extremely high scores can indicate that an applicant feels excessive pressure to succeed, a state that can lead to psychological problems and eventual burnout. To gain a more holistic picture of student aptitude, admissions officers also look at students' grade point averages, essays, and extracurricular activities (Fruen, 1978; Micceri, 2007; Stephen, 2003).
At their discretion, some colleges and universities may grant advanced-placement credit for superior scores on the SAT or ACT assessments. Such scores may exempt a student from certain introductory college courses, such as English composition. To apply scores toward a course requirement, first-time freshmen must provide qualifying scores upon admission.
Students with more than a set number (e.g., 30 or more) of semester hours of college work may not need to submit CEE scores. In addition, students older than 21 years of age may not need to submit SAT or ACT scores for admission to a college or university.
Test Measurement
Despite the fact that scores do not come from nationally representative samples, CEEs are commonly viewed as "thermometers measuring school health" and "barometers of educational quality." Scores on CEEs are widely perceived to be measures of the overall ability and aptitude of U.S. high-school seniors or high-school students in general.
However, CEEs are not comprehensive measures of learning during high school and should not be regarded as definitive determiners of an individual's intellectual ability. Nor are CEE scores good indicators of students' general academic performances, achievement outcomes of schools, or the quality of schooling (Bishop, 2005; Kifer, 2001; Marklein, 2002; Minke, 1996; Popham, 2006; Stevens & Wood, 1987; Weber, 1991; Wiggins, 1998).
As with all standardized assessments, CEEs must deal with the comparability of their measures over time. The SAT, for example, is calibrated by the College Board or its proxy, ETS, so that scores are, at least in principle, comparable from year to year. The calibrating process is intended to ensure, for instance, that a verbal SAT score of 550 means the same thing regardless of the year the test was taken (Kifer, 2001).
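The actual equating procedures used for the SAT are more elaborate, but a simple linear-equating calculation illustrates the basic idea of placing two test forms on a common scale. The sketch below is a minimal illustration only; all scores in it are hypothetical.

```python
# Minimal sketch of linear equating, assuming a new test form is rescaled so
# that its score distribution matches the mean and standard deviation of a
# reference form. Real SAT equating is more sophisticated; the scores here
# are invented for illustration.
import numpy as np

reference_form = np.array([430, 480, 520, 550, 590, 640, 700], dtype=float)
new_form = np.array([410, 470, 500, 540, 580, 620, 690], dtype=float)

def linear_equate(score, new, ref):
    """Map a score on the new form onto the reference form's scale."""
    return ref.std() / new.std() * (score - new.mean()) + ref.mean()

# A 540 on the new form corresponds roughly to this score on the reference scale.
print(round(linear_equate(540, new_form, reference_form)))
```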
CEE test designs are intended to measure students' academic capabilities and to predict success in college, though they are by no means definitive. CEEs can be valid predictors of success in college only if the scores of entering students correlate strongly with their later college GPAs, academic achievement, and general success in higher education (Shrock & Coscarelli, 2000; Weber, 1991). Because test scores and test-takers' subsequent grades in college correlate at a modest level (.50 to .60), the tests provide "some degree of prediction" of future academic achievement, but they are not infallible (McMillan, 2001; Shrock & Coscarelli, 2000).
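As a rough illustration of how such a predictive-validity coefficient is computed, the sketch below correlates a set of hypothetical entrance-exam scores with hypothetical first-year GPAs; none of the numbers are real student records.

```python
# Illustrative only: a predictive-validity coefficient computed as the Pearson
# correlation between entrance-exam scores and later first-year college GPAs.
# Both arrays are hypothetical data.
import numpy as np

exam_scores = np.array([1050, 1120, 1180, 1230, 1310, 1400, 1460, 1520])
first_year_gpa = np.array([2.4, 2.9, 2.7, 2.8, 3.1, 3.3, 3.0, 3.6])

r = np.corrcoef(exam_scores, first_year_gpa)[0, 1]
print(f"Predictive validity (Pearson r): {r:.2f}")  # values near .50-.60 would count as "modest"
```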
Though CEEs may measure verbal or mathematical abilities, they do not necessarily measure the knowledge and skills needed to succeed in college, such as study habits, organization, and time management. They also tend to be poor predictors of students' first-year achievement. Students' high-school grades are usually better predictors of college achievement, since grades are earned over time and can better reflect these secondary skills (Dennis, 2001; McMillan, 2001; Marklein, 2002; Olson, 2007).
Numerous factors besides academic achievement or aptitude as measured by CEEs affect students' performance in college. Family life, income, motivation, long-term goals, peers, study skills, interpersonal skills, test anxiety, and general health and wellness are only a few. Other attributes, such as character, leadership skills, and effective communication skills, tend to be better gauges of long-term success than CEE scores. Exam scores cannot measure a given student's grit, drive, and determination to succeed (Dennis, 2001; McMillan, 2001).
Viewpoints
Advantages
Students are motivated to score well on CEEs because the exams improve their chances of being admitted to the colleges of their choice. CEE scores can provide a useful common measure for all applicants and eliminate examiner and personal biases. The tests are designed to be administered to groups, making them convenient and efficient (Bouris et al., 1998; Fruen, 1978; Weber, 1991).
Disadvantages
The educational literature is replete with examples of the drawbacks of CEEs.
The emphasis on CEEs can narrow and restrict the content and pedagogy of the high-school academic curriculum. Though CEEs may have little relation to school districts' classroom objectives, districts must nonetheless cede curricular decisions to test makers if they want their students to go to college. The tests do not fully measure the knowledge and skills of the American Diploma Project's benchmarks, which are being used by many states to align high-school standards, curriculum, assessments, and accountability systems with the demands of college and work (Achieve Inc., 2007; Fruen, 1978; Olson, 2007). Nor do they effectively measure the high-school curricular goals delineated by the College Board (Gardner, 1991).
As a result, fewer students take non-required courses or explore subjects and activities that are not perceived as useful for getting into college. When the objective of learning is to score well on a test, fewer students connect materials to their personal lives or the public good (Olson, 2007; Stevens & Wood, 1987). Interestingly, College Board data indicate that students who study the arts, which are presumably non-core courses for most students, annually outperform their non-arts peers on the SAT and improve their achievement in subjects such as reading, writing and mathematics (Paige & Huckabee, 2005).
Test Measurement Validity
The SAT, ACT, and other standardized exams, some critics say, measure only small parts of a student's ability. By focusing only on mathematical and verbal abilities, the tests neglect other intelligences, such as interpersonal, intrapersonal, spatial, and bodily-kinesthetic intelligences, which can also be important factors in a student's academic and professional success (Cetron & Gayle, 1991). Students, parents, teachers, and the general public should not believe that what is tested on simplistic, indirect proxy CEEs such as the SAT and ACT is all that good colleges expect of their students. Great harm is done to students who discover too late that they are not prepared for their future aspirations because their high schools did not go beyond simplistic tests (Gardner, 1991; Wiggins, 1993).
Critics have also questioned how well CEEs actually measure what they purport to measure. Studies show that CEE exam scores cannot reliably distinguish between comparable candidates and that test scores contain inherent errors. Tests can also be poorly constructed and test items poorly written. In the early 1980s, for example, a mathematical item on the SAT that seemed superficially correct was faulty. The item was administered to more than 100,000 examinees over a period of several years before it was discovered that the item had no correct response among the given options (Fruen, 1978; Martz, 1992; Osterlind, 1998; Weber, 1991). The National Council of Teachers of English (NCTE) has also criticized the writing sections on CEEs, saying that they are biased against students from poorer school districts, narrow the scope of essay writing, and will not improve the teaching of writing in schools (Honawar, 2005).
Test Weighting
College and university admissions departments have also been criticized for giving too much weight to test scores. CEEs have undue influence over which colleges students will be able to attend, and students may miss out on opportunities to attend their college of choice because of low scores. Students should be maximally qualified for admission to the most worthy programs, not minimally qualified to be admitted to just any college. When colleges impose higher SAT and ACT requirements for admission, minority and disadvantaged students are adversely affected (Dennis, 2001; Gandara & Lopez, 1998; Marklein, 2002; Webb, Metha, & Jordan, 1992; Wiggins, 1993).
College admissions decisions should not be made on the basis of a single test score. CEE scores measure only one aspect of candidates' abilities, and they quite often do not match students' performance in the classroom; there is significant disparity between high-school grades and students' scores on CEEs. Grade-point average, which can also be used as an admissions criterion, reflects individual student performance over a period of time. More balanced admissions policies can tap into a broader talent pool of applicants and candidates who possess the attributes that are requisite to future success in college (Dennis, 2001; Fruen, 1978; McMillan, 2001; Stanglin & Bernstein, 1996).
Socioeconomic Factors
Studies have also shown that African-Americans and Hispanics tend to score lower than whites and Asian-Americans (Marklein, 2002). In one study, whites scored over sixty points higher on average than minorities on the SAT (Micceri, 2007). Furthermore, interviews conducted with Hispanic students found that SAT scores did not reliably predict their college GPA, time to complete their degrees, or likelihood of applying to graduate school (Gandara & Lopez, 1998).
Family income can also be a factor, since affluent families can more easily afford test-prep products and services, as well as pay for their children to take tests multiple times. Some critics of the tests suggest leveling the field by using state funds to pay for prep-courses for minority and low-income students (Clayton, 2001; Cole, 1987; Fruen, 1978; Marklein, 2002; Micceri, 2007).
Research has also shown that males tend to outperform females on the mathematics sections of CEEs and also show a slight advantage on verbal sections (Altermatt & Kim, 2004). Males, on average, score seventy-five points higher than females on the SAT (Micceri, 2007). However, when the variables of ethnic group, family income, high-school courses, and choice of college major were controlled, the differences in the SAT scores of males and females disappeared (Landers, 1989; Webb, Metha, & Jordan, 1992). These results are puzzling in light of the fact that females tend to outperform males on almost all other academic performance measures (e.g., grades and graduation rates) (Micceri, 2007).
The tests can also put undue pressure on students, causing them to spend inordinate amounts of time cramming and prepping for the exams. This effort is more often driven by a quest for high test scores than by an intrinsic interest in academic learning (Bouris et al., 1998; Marklein, 2002; Stevens & Wood, 1987). Some students crack under the pressure, experiencing test anxiety on test day and performing below their ability level. For these students, receiving a poor or even just average test score can be psychologically debilitating (Gandara & Lopez, 1998; Stephen, 2003).
Conclusion
Some educational researchers manage to see hope and optimism in the face of almost overwhelming evidence contrary to their opinions, hypotheses, theories, personal allegiances, and financial interests. In their paper "Tracking Exceptional Human Capital Over Two Decades," Lubinski, Benbow, Webb, and Bleske-Rechek (2006), for example, conclude that CEEs identify intellectually talented students with extraordinary potential for challenging careers in information-age occupations that require creativity and scientific and technological innovation.
Terms & Concepts
Achievement Tests: Tests which are intended to measure students' knowledge of specific facts and/or their understanding and mastery of basic principles (Borg & Gall, 1989).
Aptitude Tests: Tests which are intended to predict students' later performance in a specific type of behavior (Borg & Gall, 1989).
College Admissions Tests: Also known as college-entrance examinations; standardized achievement or aptitude tests such as the SAT and ACT, which are used to predict high-school students' potential for academic success in college and to either accept or deny entry.
College Board: Short for College Entrance Examination Board; the original developer and the continuing sponsor of the SAT, which is the oldest and best known of the college-admissions testing programs.
College-Entrance Examinations (CEEs): Also known as college-admissions tests, standardized achievement, or aptitude tests, these tests are used to predict high-school students' potential for academic success in college and to either accept or deny entry.
Composite Score: A score combining several other scores, sometimes called subscores, according to a specified formula.
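On the ACT, for instance, the composite score is the average of the four section scores (English, mathematics, reading, and science) rounded to the nearest whole number: hypothetical section scores of 24, 26, 22, and 24 average to (24 + 26 + 22 + 24) / 4 = 24, so the composite is 24.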
Content Validity: The degree to which the sample of test items included on a test represents the content that it is designed to measure (Borg & Gall, 1989).
Correlations: Numerical estimates of the magnitude of the relationship between two variables, generally expressed as a coefficient; for example, .20 indicates a low correlation, .50 a moderate correlation, and .90 a high correlation.
Criterion-Referenced Test: A type of standardized test that draws random or stratified samples of items from a precisely defined content area or domain for which content limits are clearly specified, and whose results are interpreted in terms of students reaching an established criterion (Borg & Gall, 1989).
Norm-Referenced Test: A type of standardized test that produces scores indicating how an individual's performance compares with that of other individuals (Borg & Gall, 1989).
Predictive Validity: The degree to which predictions made by tests are confirmed by the later behavior of the subjects (Borg & Gall, 1989); with CEEs, for example, the degree to which students' scores are confirmed by students' later academic achievement, performance and success in college.
Profile Reports: Students' score results on the CEEs (SAT or ACT), which are sent to college or university admissions offices at the student's request.
Standard Error of Measurement: In simplistic terms, a statistic which allows an estimate to be made of the amount of measurement error in an individual's test score and the range within which the individual's true score most likely falls; an individual's score obtained on a test is only an estimate which can vary considerably from the individual's 'true score' (Borg & Gall, 1989).
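A common way to estimate this statistic, assuming the test's reliability coefficient and the standard deviation of observed scores are known, is sketched below; the observed score of 520 and the other numeric values are purely illustrative.

```latex
% SEM from the observed-score standard deviation (\sigma_x) and the test's
% reliability coefficient (r_{xx}); the last expression builds an approximate
% 95% band around a hypothetical observed score of 520.
SEM = \sigma_x \sqrt{1 - r_{xx}}
\qquad \text{e.g., } SEM = 110 \sqrt{1 - 0.90} \approx 35
\qquad 520 \pm 1.96 \times SEM \approx 520 \pm 68
```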
Standardized Tests: Tests that produce similar results when different individuals administer and score them following the instructions given and for which there is normative data present to describe how subjects from different specified populations perform (Borg & Gall, 1989).
Test Battery: "A group of several tests that are comparable, the results of which are used individually, in various combinations, and/or totally" (Karmel & Karmel, 1978, p. 130).
Test Reliability: The capacity of a test or measure to yield similar scores on the same individual when tested at different times or under different conditions (Borg & Gall, 1989).
Bibliography
Achieve, Inc. (2007). Aligned expectations? A closer look at college admissions and placement tests. Washington, DC: Author.
Ackerman, P. L., Kanfer, R., & Calderwood, C. (2013). High school advanced placement and student performance in college: STEM majors, non-STEM majors, and gender differences. Teachers College Record, 115, 1-43. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=91548713&site=ehost-live
ACT Inc. (1996). ACT assessment 1996 results: Summary report. Iowa City, IA: Author.
ACT Inc. (2007). The ACT: America's most widely accepted college entrance exam. Retrieved August 13, 2007, from http://www.act.org/aap/
Adams, C. (2013). College Board begins redesign of SAT exam. Education Week, 32, 4. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=85953750&site=ehost-live
Agency Group 09. (2007). 'eKnowledge' prepares service members, family members for academic rigors. FDCH Regulatory Intelligence Database, Department of Defense.
Altermatt, E. R., & Kim, M. E. (2004). Getting girls de-stereotyped for SAT exams. Education Digest: Essential Readings Condensed for Quick Review, 70 , 43-47.
Angoff, W. H. (1991). The determination of empirical standard errors of equating the scores on SAT-Verbal and SAT-Mathematical. Washington, DC: Education Resources Information Center (ERIC Document Reproduction Service No. ED384658).
Bishop, J. (2005). High school exit examinations: When do learning effects generalize? Yearbook of the National Society for the Study of Education, 104 , 260-288. Retrieved August 10, 2007 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=17238812&site=ehost-live
Borg, W. R., & Gall, M. D. (1989). Educational research: An introduction. New York, NY: Longman.
Bouris, R., Creel, H., & Stortz, B. (1998). Improving student motivation in secondary mathematics by the use of cooperative learning. Unpublished master's thesis, Saint Xavier University, Chicago, IL.
Bradley, A. (2005). ACT college-testing firm offers fee waivers to displaced students. Education Week, 25 , 4. Retrieved August 10, 2007, from EBSCO Online Database, Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=18591293&site=ehost-live
Brown, O. G. (1997). Helping African-American students prepare for college. Bloomington, IN: Phi Delta Kappa.
Carris, J. (1995). Coaching kids for the SAT. Executive Educator, 17 , 28-31.
Cavanaugh, S. (2002). Officials tie entrance-score dips to curriculum. Education Week, 22 , 5.
Cavanaugh, S. (2004). Barriers to college: Lack of preparation vs. financial need. Education Week, 23 , 1.
Cavanaugh, S. (2005). ACT admissions test, like rival, adds essay, but makes it optional. Education Week, 24 , 14.
Cavazos, L. F. (1989). State education performance chart, 1989: Remarks of Lauro F. Cavazos, U. S. Secretary of Education. Washington, DC: Education Resources Information Center (ERIC Document Reproduction Service No. ED308624).
Cetron, M., & Gayle, M. (1991). Educational renaissance: Our schools at the turn of the century. New York, NY: St. Martin's Press.
Clayton, M. (2001). In student test scores, a wider gap. Christian Science Monitor, 93 , 1.
Cole, B. P. (1987). College admissions and coaching. Negro Educational Review, 38(2-3), 125-135.
Dennis, R. (2001). New Urban League report reveals too much weight is placed on college entrance exams. New York Amsterdam News, 92 , 34-35.
Devarics, C. (2005). Report: High school rigor essential for students of color. Black Issues in Higher Education, 21 , 6-7.
Enotes.com. (2007). Educational Testing Service: Introduction. Retrieved August 13, 2007, from http://business.enotes.com/company.histories/educational-testing-service
ETS. (2007). Tests: SAT. Retrieved August 13, 2007, from http://www.ets.org/portal/site/ets/menuitem.c988ba0e5dd572bada20bc47c3921509/?vgnextoid=178daf5e44df4010VgnVCM10000022f95190RCRD&vgnextchannel=e809197a484f4010VgnVCM10000022f95190RCRD
Fruen, M. (1978). The use of tests in admissions to higher education. NCME Measurement in Education, 9 , 1-9.
Gage, N. L., & Berliner, D. C. (1998). Educational psychology. Boston, MA: Houghton Mifflin Company.
Gandara, P., & Lopez, E. (1998). Latino students and college entrance exams: How much do they really matter? Hispanic Journal of Behavioral Sciences, 20, 17-38.
Gardner, H. (1991). The unschooled mind: How children think and how schools should teach. New York, NY: Basic Books.
Grimes, W. (2004). Ignorance is no obstacle. New York Times, 154 (53026), 28-29.
Honawar, V. (2005). NCTE is critical of new college-admissions essay tests. Education Week, 24 , 5.
Hubin, D. (1997). The SAT: Past as prologue. College Board Review, 4-8.
Karmel, L. J., & Karmel, M. O. (1978). Measurement and evaluation in schools. New York, NY: Macmillan Publishing Co., Inc.
Kifer, E. (2001). Large-scale assessment: Dimensions, dilemmas and policy. Thousand Oaks, CA: Corwin Press, Inc.
Klasik, D. (2013). The ACT of enrollment: The college enrollment effects of state-required college entrance exam testing. Educational Researcher, 42, 151-160. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=87039273&site=ehost-live
Krause, T. (2005). Whatever happened to the average student? School Administrator, 62 , 50-51.
Landers, S. (1989). New York: Scholarship awards are ruled discriminatory. The American Psychological Association Monitor, 20 , 14.
Lubinski, D., Benbow, C. P., Webb, R. M., & Bleske-Rechek, A. (2006). Tracking exceptional human capital over two decades. Psychological Science, 17, 194-200. Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=19826529&site=ehost-live
Maltese, A. V., & Hochbein, C. D. (2012). The consequences of 'school improvement': Examining the association between two standardized assessments measuring school improvement and student science achievement. Journal of Research in Science Teaching, 49, 804-830. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=77729966&site=ehost-live
McMillan, J. H. (2001). Essential assessment concepts for teachers and administrators. Thousand Oaks, CA: Corwin Press, Inc.
Marklein, M. B. (2002, June 26). SAT exam up for big revision. USA Today.
Martz, L. (1992). Making schools better: How parents and teachers across the country are taking action—and you can, too. New York, NY: Times Books.
Matzen, R. N., Jr., & Hoyt, J. E. (2004). Basic writing placement with holistically scored essays: Research evidence. Journal of Developmental Education, 28 , 2-34.
Micceri, T. (2007). How we justify and maintain the white, male academic status quo through the use of biased college admissions requirements. Washington, DC: Education Resources Information Center (ERIC Document Reproduction Service No. ED497371).
Minke, A. (1996). A review of recent changes in the Scholastic Aptitude Test I: Reasoning Test. Washington, DC: Education Resources Information Center (ERIC Document Reproduction Service No. ED397092).
Olson, L. (2007). Caution in use of college-entry tests urged. Education Week, 26 , Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=24785445&site=ehost-live
Osterlind, S. J. (1998). Constructing test items: Multiple-choice, constructed-response, performance and other formats. Boston, MA: Kluwer Academic Publishers.
Paige, R., & Huckabee, M. (2005). Putting arts education front and center. Washington, DC: Education Commission of the States.
Popham, W. J. (2006). Branded by a test. Educational Leadership, 63 , 86-87. Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=20472874&site=ehost-live
Ritter, C., & Salpeter, J. (1988). Studying for the big test: Software that helps students prepare for the college entrance examinations. Classroom Computer Learning, 8 , 48-53.
Saretzky, G. D. (1982). Carl Campbell Brigham, the Native intelligence hypothesis and the Scholastic Aptitude Test. Princeton, NJ: Educational Testing Service.
Schmoker, M. (2006). Results now: How we can achieve unprecedented improvements in teaching and learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Schwartz, A. E. (2004). Scoring higher on math tests. Education Digest: Essential Readings Condensed for Quick Review, 69 , 39-43.
Shrock, S. A., & Coscarelli, W. C. (2000). Criterion-referenced test development: Technical and legal guidelines for corporate training and certification. Washington, DC: International Society for Performance Improvement.
Stanglin, D., & Bernstein, A. (1996). Making the grade. U. S. News & World Report, 121 , 18.
Stephen, A. (2003). U. S. parents are so ambitious for their children that only Harvard, Yale or Princeton will do: Even the youngest kids are stressed out; by the time they get to college, they're suicidal. New Statesman, 132 (4624), 8.
Stevens, E., Jr., & Wood, G. H. (1987). Justice, ideology and education: An introduction to the social foundations of education. New York, NY: McGraw-Hill Publishing Company.
Webb, L. D., Metha, A., & Jordan, K. F. (1992). Foundations of American education. New York, NY: Merrill/Macmillan Publishing Company.
Weber, A. L. (1991). Introduction to psychology. New York, NY: Harper Perennial.
Wiggins, G. P. (1993). Assessing student performance: Exploring the purpose and limits of testing. San Francisco, CA: Jossey-Bass.
Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco, CA: Jossey-Bass.
Zehr, M. A. (2001). Study: Test-preparation courses raise scores only slightly. Education Week, 20 , 6.
Suggested Reading
Borja, R. R. (2003). Prepping for the big test. Education Week, 22 , 23-25. Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9887319&site=ehost-live
Dennis, R. (2001). New Urban League report reveals too much weight is placed on college entrance exams. New York Amsterdam News, 92 , 34-35.
Krause, T. (2005). Whatever happened to the average student? School Administrator, 62 , 50-51.
Lubinski, D., Benbow, C. P., Webb, R. M., & Bleske-Rechek, A. (2006). Tracking exceptional human capital over two decades. Psychological Science, 17 , 194-200. Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=19826529&site=ehost-live
Micceri, T. (2007). How we justify and maintain the white, male academic status quo through the use of biased college admissions requirements. Washington, DC: Education Resources Information Center online submission (ERIC Document Reproduction Service No. ED497371).
Olson, L. (2007). Caution in use of college-entry tests urged. Education Week, 26 , Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=24785445&site=ehost-live
Popham, W. J. (2006). Branded by a test. Educational Leadership, 63 , 86-87. Retrieved August 10, 2007 from EBSCO Online Database Academic Search Premier, http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=20472874&site=ehost-live