Institutional Effectiveness in Higher Education

Institutional effectiveness is an information-based decision-making model wherein the data gathered through organizational learning activities is used for quality improvement. Specifically, it refers to the ongoing process through which an organization measures its performance against its stated mission and goals for the purposes of evaluation and improvement. The term was first used to describe activities related to accreditation in the 1980s and is now a crucial component of the accreditation process, as well as the fundamental factor in accountability and performance funding in higher education.

Keywords Accountability; Accreditation; Community College Survey of Student Engagement (CCSE); Institutional Effectiveness; Organizational Learning; Performance Funding; Quality Assurance; Quality Enhancement; Strategic Planning; Student Learning Outcomes

Overview

Institutional effectiveness refers to the ongoing process through which an organization measures its performance against its stated mission and goals for the purposes of evaluation and improvement. Since the Southern Association of Colleges and Schools (SACS) adopted the term in the 1980s, institutional effectiveness has moved to the forefront of the dialogue among government agencies, accrediting organizations, and higher education administrators. In this age of ever-increasing pressure for accountability, students, parents, government officials, accrediting agencies, industry leaders, taxpayers, and the mass media are demanding responsiveness from institutions of higher education (Welsh & Metcalf, 2003). These stakeholders are pressuring higher education decision-makers for improved documentation about performance and a system wherein public policy tools are coupled with reliable data from institutions to improve the alignment of the performance of higher education with the expectations of the public it serves (Welsh & Metcalf, 2003).

Institutional effectiveness most often includes the measurement of performance in areas such as student learning outcomes, academic program review, strategic planning, performance scorecards, and benchmarking and quality measurement. These areas are studied in myriad ways using numerous divergent instruments to collect pertinent data. Although there are variations in the terminology used, each of the six accrediting agencies in the United States requires that colleges and universities have a process through which institutional effectiveness can be evaluated, measured, and reported (Welsh & Metcalf, 2003).

Sullivan and Wilds (2001) describe institutional effectiveness as the process of studying performance and engaging in related activities within the context of a number of concepts and criteria. In their model, all constituents of the higher education community must participate in the process, with each group bearing specific responsibilities for program review and data collection. The accomplishment of the institutional mission; the reflection of its vision, philosophy, goals, and objectives; and an interpretation of the environment are at the crux of an evaluation of institutional effectiveness. In order to measure performance against these self-defined standards, a historical review of institutional accomplishments, weaknesses, and aspirations followed by the preparation, collection, and interpretation of data by institutional staff and faculty must be undertaken. Faculty must be charged with the development and evaluation of curriculum and with the evaluation of student performance in relation to that curriculum. Administrators are called upon to interpret and utilize relevant data and information in ways that promote increased effectiveness. Finally, the president of the institution bears the responsibility for defining and communicating institutional priorities and for working with the board to secure the resources required to meet those priorities (Sullivan & Wilds, 2001).

While the plan for the study of institutional effectiveness must be outlined and communicated by senior administrators, the process is rarely straightforward. Smith and Parker (2005) comment that while the relationship between organizational learning (the data-collection component of institutional effectiveness) and the focus upon learning and research that defines higher education may seem apparent, the process tends to be disruptive to campus patterns. "From bringing together campus constituents across institutional boundaries and accessing campus information data systems to obtain usable information, the process of using an organizational learning approach for evaluation has challenged many campuses" (Smith & Parker, 2005, p. 122).

Accreditation, Accountability & Performance Funding

The process of institutional review began in the nineteenth century with the inception of accreditation. Originally intended as a means through which some external control could be exerted over educational standards, by the 1930s and 1940s, there was a trend toward an added emphasis on improvement (Selden, 1960, as cited in Dodd, 2004). Since the introduction of the concept of institutional effectiveness in the 1980s, institutions have improved their work in writing annual goals and objectives, evaluating their accomplishment of those goals, and describing responsive improvements based on those data (Sullivan & Wilds, 2001, p. 2).

The Accrediting Agencies

There are six regional agencies in the United States that accredit higher education institutions in their respective areas.

• New England,

• Middle States,

• Southern,

• Western,

• North Central, and

• Northwest.

Middle States was the first agency to require institutions to engage in the self-study and peer-review process, while Southern was the first to incorporate the concept of institutional effectiveness into its requirements. The other five accrediting agencies quickly followed suit. An institution must be accredited by one of these agencies in order to receive federal financial aid, making the accreditation process a central focus for an institution when it is time for its review (Selden, 1960, as cited in Dodd, 2004). Additionally, there are a number of discipline-specific accrediting agencies that review curriculum and programs and whose standards must be met within the framework of the larger, regional accrediting bodies. While their accreditation may not have a direct impact upon the federal funding on which institutions rely, they can impact students' eligibility for professional exams and licensure, among other things. Hence, at any given time, an institution may be engaging in the self-study and accreditation process for a variety of agencies.

Since these accrediting agencies serve as the primary intercessors between postsecondary institutions and policymakers, they standardize accountability through their requirements for accreditation. In order to meet accreditation criteria, institutions must collect, format, and report data about their programs and services and then use those data for improvement (Welsh & Metcalf, 2003). With accredited institutions, Dodd (2004) explains, "constituencies such as students, the public, and government representatives have at least some assurance of quality and value. [The institution] is accomplishing the goals it has set within the context of its mission" (p. 14). Furthermore, argue Head and Johnson (2011), accreditation "can protect an institution from unwarranted criticism and . . . provide the stimulus for the improvement of courses and programs." Accreditation also "promotes internal unity and cohesiveness" (p. 37). Evidence of these accomplishments may impact enrollment and funding.

Internal Quality Assurance

Focus on Outcomes

Accrediting agencies have revised their standards to reflect a focus on the achievement of outcomes rather than an adherence to standards. The primary suggestion for reform is related to the development and refinement of internal quality assurance measures that ensure institutional effectiveness (Dill, Massy, Williams & Cook, 1996, as cited in Dodd, 2004). Earlier approaches failed to compare outcomes against established benchmarks and did not examine how data were used in decision-making and strategic planning (Ewell, 1998, as cited in Dodd, 2004).

The self-assessment required for accreditation, coupled with the periodic peer review, results in a wealth of information that can be used for both accountability and for institutional and program improvement. Though the process does provide for some quality assurance, its focus is on the inputs and the processes and not the student (Miller & Malandra, 2006). Since each institution must meet the same requirements, there is some standardization among the data that are collected, but its dissemination is not widespread unless individual institutions choose to publish it. "Higher education institutions and systems are focused inward and have a tendency to be unclear in communicating goals and outcomes to the public" (Miller & Malandra, 2006, p. 4).

Competing for Students

While the accreditation process served as the impetus for institutional effectiveness and provides some insight into performance, it is no longer the sole reason that colleges and universities engage in the continuous process of self-study and improvement. Miller (2006), writing to inform the commission that published A Test of Leadership: Charting the Future of US Higher Education (2006), explains that one of the core strengths of American postsecondary education lies in the number and variety of choices offered to students, which creates competition among institutions for students. Along with the ongoing competition for limited funds, the competition for students has forced increased accountability upon institutions so that student consumers can make the most informed choices. Miller (2006) asserts that having data available is not enough and that "it is essential to create a transparent system, which allows comparisons of rankings of institutions" (p. 4). Absent rankings and ratings, there is no impetus for change. He continues, "Today the U.S. News & World Report ranking serves by default as an accountability system for colleges and universities. Consequences that can modify behavior are an essential element of a productive accountability structure" (Miller, 2006, p. 4).

Performance Funding

The final factor contributing to the growing interest in institutional effectiveness and institutional accountability is the trend toward performance funding. While legislators have traditionally funded higher education institutions based upon their enrollment, many states are adopting a performance funding model, which holds institutions accountable not just for making postsecondary education accessible to a larger number of students (which enrollment funding accomplished) but for the quality of the education that is offered as well (Hoyt, 2001; McKeown-Moak, 2013). Performance funding offers incentives to institutions for providing documented quality services and for engaging in activities designed to improve programming. Changing the focus of the budget process from one that calls upon legislators to meet the need for more resources to one that calls upon the institution to justify its existing budget expenditures has created a level of accountability that is helping to shape the landscape of institutional effectiveness (Hoyt, 2001).

By 1997, twenty percent of state higher education systems had implemented a performance funding model, and another 52 percent were expected to adopt one within five years (Burke, 1998, as cited in Hoyt, 2001). The Tennessee Higher Education Commission (2007) reports that by 1999, twenty-eight states had implemented performance funding or were close to doing so, and many others were considering doing the same. Among the indicators for performance funding are

• retention (rather than enrollment) rates,

• completion rates,

• effectiveness of remedial programs,

• the number of tenured or tenure-track faculty teaching undergraduates,

• scores on national exams,

• the number of research dollars awarded,

• student achievement on licensure and certification exams,

• student evaluations of faculty,

• placement of graduates, and

• rates of transfer to four-year colleges and universities.

A variety of measures apply specifically to individual types of institutions because there is diversity among the mission statements of community colleges, four-year colleges and universities, and large research institutions. In light of this growing trend, Hoyt (2001) cautions that "it is important that policymakers understand the impact and validity of the outcomes measures they are selecting to fund higher education" (p. 1). Poor measures and invalid indicators could impact the success of any performance funding program (Hoyt, 2001).

Accreditation, accountability, and performance funding have all contributed to the rising importance of institutional effectiveness in the context of higher education. Smith and Parker (2005) view this as a call for institutions to increase the depth and breadth of internal research to address the level of accomplishment of their institutional goals. An information-based decision-making model wherein the data gathered through organizational learning activities is used for quality improvement is the core of institutional effectiveness.

Further Insights

According to Sullivan and Wilds (2001), "no matter the wording, the most important purpose of an institution of higher education is to educate students" (p. 1). It follows, then, that the most important aspect of institutional effectiveness is student outcomes. Student achievement relative to the curriculum is of paramount importance to academic effectiveness. While achievement is important in and of itself, effectiveness dictates a broader scope in that it "presumes improvement in instruction, methodology, or technology based on the interpretation of data" (Sullivan & Wilds, 2001, p. 1). Accrediting agencies look for institutions to assess student outcomes and to make improvements to the curriculum based upon that data. Institutions must be able to document program improvements that have their roots in assessment data (Sullivan & Wilds, 2001).

Difficulties in Assessing Student Learning

Because there is a lack of reliable data available to external constituents, there is a broad assumption that colleges and universities are offering quality services. This dearth of information is compounded when viewed in light of the few reliable means that exist to compare what students learn and experience among institutions. Miller and Malandra (2006) assert that "there is no solid, comparative evidence of how much students learn in college, or whether they learn more at one school than another" (p. 4). According to Miller and Malandra (2006), two-thirds of all colleges nationwide do not participate in any type of assessment to determine whether they are meeting the curricular goals of their educational programs, and there are no commonly used tests or assessments to gauge undergraduate learning. The assessments in the areas of writing, literacy, math, and technology that have been administered and the data that were subsequently disseminated have indicated that the skills of many undergraduates barely improve in college and the skills of others decrease, resulting in employers offering expensive communication and critical-thinking training for new employees (Miller & Malandra, 2006). Hence, there is a growing need for reliable instruments with which institutions can collect relevant student data.

Student outcomes assessment, or the measure of student learning, requires that any instrument used have the dual ability to measure what the institution seeks to assess and to be validated against reliable standards. This has presented challenges to institutional researchers, as there is no reliable, rigorous, and comparative means through which to measure the student learning outcomes of undergraduates. Colleges and universities are now using a variety of self-designed or purchased exams to evaluate this aspect of institutional effectiveness, and decisions about funds allocation and program improvement are being made based on data gathered through their use (Miller & Malandra, 2006).

New Assessments Available

A number of standardized assessments have been made available, and as their use becomes more widespread, a great deal of uniform data about student outcomes across institutions may be collected. One instrument, the National Survey of Student Engagement (NSSE), and its community college counterpart, the Community College Survey of Student Engagement (CCSE), annually survey students at hundreds of institutions about their college experience relative to their participation in activities and programs that enhance learning and development. Though these surveys do not assess how and what students learn, they do evaluate an important part of the college and university experience (NSSE, 2012).

According to Miller and Malandra (2006), "use of the NSSE is helping to encourage a focus on the quality of the undergraduate experience, and the emergence of a national culture of evidence and assessment" (p. 5). According to the U.S. Department of Education (2006), "these instruments provide a comprehensive picture of the undergraduate student experience at four-year and two-year institutions," and data can be used "to improve [student] experience and create benchmarks against which similar institutions can compare themselves" (p. 38). Finally, data collected by the NSSE and CCSE are publicly reported and can be extrapolated for the purposes of institutional performance review and for establishing accountability standards and strategic planning (NSSE, 2012).

The Collegiate Learning Assessment (CLA) is the result of a multiyear trial by the RAND Corporation and measures key cognitive outcomes in the areas of critical thinking, analytical reasoning, and written communication. It is among the most comprehensive national efforts to standardize student outcomes, and since 2002, more than seven hundred colleges and universities around the world have administered the instrument (Council for Aid to Education, 2013). The CLA measures student achievement over time, as it is administered to first-year students and senior-level students. Since the results measure institutional performance rather than individual student achievement, "results are aggregated and allow for inter-institutional comparisons that show how each institution contributes to learning" (U.S. Department of Education, 2006, p. 33). The effectiveness of the CLA as a measure of actual learning was examined by researchers Roksa and Arum (2011).

The Baldrige Criteria

A comprehensive framework for institutional effectiveness is the Baldrige criteria for education. With its emphasis on student learning outcomes, the framework facilitates organizational improvement within the context of the institution's mission and goals. Baldrige is an integrated quality management system that employs seven categories connecting all of the institution's goals and outcomes. According to Dodd (2004), the focus of the program is fourfold and involves improvement trends, benchmarking, stakeholders, and learning outcomes.

Categories one through three make up the leadership triad, defined as

• leadership (category one);

• strategic planning (category two); and

• student, stakeholder, and market focus (category three).

The leadership triad links to the results triad, defined as

• faculty and staff focus (category five),

• process management (category six), and

• organizational performance results (category seven).

Underlying these two integrated triads is a foundation of measurement, analysis, and knowledge management (category four) (Dodd, 2004, p. 22).

Baldrige is unique in that it is connected to a quality management program, and it seeks to "facilitate sharing best practices, and to serve as a tool for learning about and improving performance" (Dodd, 2004, p. 23). This, in turn, helps to provide a framework with which institutions can compare outcomes.

Viewpoints

There are almost as many definitions of institutional effectiveness as there are institutions struggling with the concept and process of such accountability in higher education. Demonstrating effectiveness at the institutional level is a challenge. Sullivan and Wilds (2001) assert that

Institutional effectiveness is the result of institutional leaders making responsible data-based decisions tempered by current fiscal and political environments. [It] is exemplified by line officers . . . interpreting their responsibilities as they relate to the institutional mission. [They] must use information about the work of their units to project future plans and employ historical information to project budget needs (p. 1).

Lack of Support for Effectiveness Activities

Welsh and Metcalf (2003) note that though the work of institutional effectiveness is a priority at colleges and universities, campus support for these activities is not gaining momentum. They find that "gaining the interest, commitment and support of institutional constituents is arguably the primary challenge colleges and universities face in designing and implementing institutional effectiveness activities" (p. 34). This support is fundamental to the institutionalization of the activities and to their being incorporated into the culture of the institution.

Miller and Malandra (2006) cite several reasons for this lack of support for activities related to institutional effectiveness. "There is a resistance to accountability and assessment, a fear of exposure and misinterpretation. Academics are afraid they will be blamed for variables (poverty, low SAT [Scholastic Aptitude Test] scores, poor high school performance) over which they have little control" (p. 4). There is also a concern that whatever data are collected will not be used purposefully and, thus, that collecting them would be a waste of valuable resources. The overriding reason that they suggest, however, is that "faculty and administrators are reluctant to look at results or make major changes because there is a lack of compelling pressure to improve undergraduate education and because they are isolated from reliable evidence of their students' progress beyond individual classes" (Miller & Malandra, 2006, p. 5).

Lack of Transparency

Another challenge to institutional effectiveness is the complex and decentralized nature of higher education institutions. Though research lies at the core of education, most institutions have not used information-based decision-making models, and there is a significant lack of relevant data available. Smith and Parker (2005) explain, "While a research culture encourages transparency of data and information in the academic setting, such information can be quite difficult at the institutional level. Information often has institutional and political significance that needs to be taken into consideration" (p. 122). Fear of controversy or of harm to the institution's reputation or ranking in U.S. News & World Report makes administrators reluctant to disseminate the information that is gathered. There is a strong impulse to shed the best light on the institution, and there is pressure from both admissions and development offices to do so. They argue that "nonetheless, the use of basic, disaggregated institutional data is fundamental to monitoring and discussing progress" (Smith & Parker, 2005, p. 122).

Conclusion

Despite institutional reluctance to embrace institutional effectiveness, indications are that it will remain at the forefront of the national dialogue on higher education. The Department of Education, in its landmark study A Test of Leadership (2006), found that increased transparency and accountability among postsecondary institutions were necessary for the United States to remain competitive in a global context. The report called for increased assessment and accountability to all stakeholders and transparency of data that can be aligned so that sensible comparisons can be made among institutions.

Colleges and universities must become more transparent about cost, price and student success outcomes. This information should be made available to students, and reported publicly in aggregate form to provide consumers and policymakers an accessible, understandable way to measure the relative effectiveness of colleges and universities (p. 20).

Among the report’s many recommendations for the improvement of higher education was that postsecondary institutions adopt a "culture of continuous innovation and quality improvement by developing new pedagogies, curricula and technologies to improve learning" (U.S. Department of Education, 2006, p. 41). The report outlined a number of goals and recommendations that, if adopted, would substantially change the access to, delivery of, and evaluation of postsecondary programs.

Terms & Concepts

Accountability: Accountability in higher education refers to an institution's responsibility to provide its students with high-quality programs and services within the context of its stated mission, and its willingness to report related outcomes to all stakeholders.

Accreditation: Accreditation is the nongovernmental process through which peer review and self-study ensure that federally funded educational institutions and programs in the United States are operating at a basic level of quality.

Community College Survey of Student Engagement (CCSE): The Community College Survey of Student Engagement is a survey instrument administered annually by community colleges to assess students' college experience relative to their participation in activities and programs that enhance learning and development.

Institutional Effectiveness: Institutional effectiveness is an information-based decision-making model wherein the data gathered through organizational learning activities is used for quality improvement. Specifically, it refers to the ongoing process through which an organization measures its performance against its stated mission and goals for the purposes of evaluation and improvement.

National Survey of Student Engagement (NSSE): The National Survey of Student Engagement is a survey instrument administered annually by hundreds of colleges to assess students' college experience relative to their participation in activities and programs that enhance learning and development.

Organizational Learning: Within the context of organizational theory, organizational learning describes the adaptive process through which an institution gathers information and plans and instigates changes based upon that information; in its simplest form, it is the process through which an organization learns and changes from experience.

Performance Funding: Performance funding is "an incentive-based funding initiative for public higher education that financially rewards exemplary institutional performance on selected measures of effectiveness" (University of Tennessee, 2000-2005).

Strategic Planning: Strategic planning describes the process through which an organization first defines its mission and direction over a period of time and then makes plans and allocates resources accordingly.

Student Learning Outcomes: Student learning outcomes describe how well an institution develops students' knowledge, talents, and abilities within the context of its goals and mission.

Bibliography

Council for Aid to Education. (2013). Performance assessment: CLA+ overview. Retrieved December 22, 2013, from http://cae.org/performance-assessment/category/cla-overview.

Dodd, A. (2004). Accreditation as a catalyst for institutional effectiveness. New Directions for Institutional Research, 123, 13-25. Retrieved November 23, 2007, from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15359318&site=ehost-live

Head, R. B., & Johnson, M. S. (2011). Accreditation and its influence on institutional effectiveness. New Directions for Community Colleges, 2011, 37-52. Retrieved December 22, 2013, from EBSCO online database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=59468674&site=ehost-live

Hoyt, J. (2001). Performance in higher education: The effects of student motivation on the use of outcomes tests to measure institutional effectiveness. Research in Higher Education, 42, 71-85. Retrieved November 23, 2007, from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=7422603&site=ehost-live

McKeown-Moak, M. P. (2013). The "new" performance funding in higher education. Educational Considerations, 40, 3-12. Retrieved December 22, 2013, from EBSCO online database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=87590712&site=ehost-live

Miller, C. (2006). Issue paper: Accountability/consumer information. A national dialogue: The secretary of education's commission on the future of higher education. (DOE Publication) Washington, DC: Government Printing Office. Retrieved November 21, 2007 from http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/miller.pdf.

Miller, C. & Malandra, G. (2006). Issue paper: Accountability/assessment. A national dialogue: The secretary of education's commission on the future of higher education. (DOE Publication) Washington, DC: Government Printing Office. Retrieved November 21, 2007 from http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/miller-malandra.pdf.

National Survey of Student Engagement. (2012). A fresh look at student engagement: 2012 annual results. Retrieved December 22, 2013, from http://nsse.iub.edu/index.cfm.

Roksa, J., & Arum, R. (2011). The state of undergraduate learning. Change, 43, 35-38. Retrieved December 22, 2013, from EBSCO online database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=59330060&site=ehost-live

Smith, D., & Parker, S. (2005, Fall). Organizational learning: A tool for diversity and institutional effectiveness. New Directions for Higher Education, 131, 113-125. Retrieved November 23, 2007, from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=18225978&site=ehost-live

Sullivan, M., & Wilds, P. (2001). Institutional effectiveness: More than measuring objectives, more than student assessment. Assessment Update, 13, 4. Retrieved November 23, 2007, from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=10350145&site=ehost-live

United States Department of Education. (2006). A test of leadership: Charting the future of US higher education (DOE Publication). Washington, DC: Government Printing Office. Retrieved November 21, 2007, from http://www.ed.gov/about/bdscomm/list/hiedfuture/reports.html.

University of Tennessee. Office of Institutional Research and Assessment. (2000-2005). Executive summary. Performance Funding Standards. Retrieved November 23, 2007.

Welsh, J. & Metcalf, J. (2003). Cultivating faculty support for institutional effectiveness activities: Benchmarking best practices. Assessment and Evaluation in Higher Education, 28 , 33. Retrieved November 23, 2007 from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=9259419&site=ehost-live

Suggested Reading

Assessing student learning and institutional effectiveness: Understanding Middle States expectations. (2005). Philadelphia: Middle States Commission on Higher Education. Retrieved November 23 from http://www.msche.org/publications.asp.

Babaoye, M. (2006). Student learning outcomes assessment and a method for demonstrating institutional effectiveness. Assessment Update. 18 , 14-15. Retrieved November 23, 2007 from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=21897500&site=ehost-live

Bok, D. C. (1986). Higher learning. Cambridge, Mass.: Harvard University Press.

Carducci, R. (2004). Community college institutional effectiveness: Recent literature. Journal of Applied Research in the Community College, 12 , 65-68. Retrieved November 23, 2007 from EBSCO online database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=15869400&site=ehost-live

Kuh, G., Kinzie, J., Schuh, J., & Whitt, E. (2005). Assessing conditions to enhance educational effectiveness: The inventory for student engagement and success. New Jersey: Jossey-Bass.

Ekman, R. (2007). By the numbers. University Business, 10 , 35. Retrieved November 23, 2007 from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=24035624&site=ehost-live

Skolits, G. & Graybeal, S. (2007). Community college institutional effectiveness. Community College Review, 34 , 302-323. Retrieved November 23, 2007 from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=24680541&site=ehost-live

Welsh, J., & Metcalf, J. (2003). Administrative support for institutional effectiveness activities: Responses to the “new accountability.” Journal of Higher Education & Policy Management, 25, 183-193. Retrieved November 23, 2007, from EBSCO online database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=11622170&site=ehost-live

Essay by Karin Carter-Smith, M.Ed.

Karin Carter-Smith is a graduate of Bryn Mawr College in Bryn Mawr, Pennsylvania, where she majored in English literature and minored in history of religion. She earned a master’s of education degree in psychology of reading from Temple University in Philadelphia. Ms. Carter-Smith served as Director of the Office of Learning Resources at Swarthmore College, an independent four-year college in suburban Philadelphia. In her role as Director of the Office of Learning Resources, Ms. Carter-Smith was responsible for academic support, advising, disability accommodations, and the supervision of the award-winning Student Academic Mentors program.