Statistical Analysis
Statistical analysis is a critical component of quantitative research, involving a range of techniques to collect, organize, and interpret discrete data. It is primarily focused on identifying relationships, correlations, and patterns within data sets while adhering to the scientific method, which emphasizes objectivity, reproducibility, and validity in research. This analysis is prevalent in various fields, particularly education research, where it informs policy decisions, funding allocations, and assessments of student retention rates.
Quantitative research employs both descriptive statistics, which characterize data through measures like mean and standard deviation, and inferential statistics, which compare data sets to determine if differences are statistically significant. Sampling techniques such as simple random, stratified, cluster, and quota sampling are vital for ensuring representative data. While statistical analysis has gained prominence in policy studies, it has also faced criticism, particularly regarding its applicability and reliability in capturing the complexities of educational experiences. Overall, understanding statistical analysis provides valuable insights into data-driven decision-making processes across diverse contexts.
Abstract
Statistical analysis encompasses the whole range of techniques used in quantitative studies, as all such studies are concerned with examining discrete data, describing this data using quantifiable measures, and comparing this data to theoretical models or to other experimental results. Statistical analysis is used to adequately sample populations, to determine relationships, correlations, and causality between different attributes or events, and to measure differences between sets of empirical data. Statistical analyses are grounded in the scientific method, and as such rely on experimental designs that are free of bias, reproducible, reliable, and valid. Statistical analysis is prevalent in the field of education research today, specifically in policy research and in studies of school management, funding, staffing, and student retention rates. It is less common in studies of curriculum development and analysis, though in the early to mid twentieth century it dominated this field as well. Presently, qualitative research, in the form of interpretive and critical methods, is the most commonly used approach in curriculum research.
Overview
Statistical analysis is used in quantitative research to collect, organize, and describe empirical data. All quantitative studies rely on statistical analyses because quantitative research is a method of approaching questions that is based on concrete, observable, "objective," and measurable data. Quantitative measures aim to explain causal relationships--though most often, in social science research, these methods can only determine correlations between the different factors studied, not make definite predictions about the universality of their application (Creswell, 2003).
A quantitative study in the social sciences follows the prescriptions of the scientific method: it must be reproducible or reliable, free of bias, accurate, and valid. Reproducibility is foundational to quantitative experimentation; if a study cannot be reproduced by others, or by the same researcher at a later time, by definition it does not follow the constructs of the scientific method. Because quantitative analysis applies mathematical equations to collected data to make the analysis objective, the social sciences researcher must be particularly attentive to the collection of unbiased data (Sax, 1985). If the data is biased, statistical analyses might lead to false or inaccurate theories or predictions. In the life or physical sciences, collecting unbiased information is less challenging than in educational research, as measurements of widths of cells and of electron transport, for example, have been standardized. In educational research, however, particular attention must be paid to the way in which an investigator's biases may lead to the collection of a specific kind of information, or to a particular sampling method that might not fairly represent the population in question (Creswell, 2003). Because all researchers hold values, qualitative education researchers have criticized quantitative studies since the late twentieth century for being only marginally useful in determining the "objective truth" of life in classrooms or of methods of curriculum construction (Pinar et al, 2004).
In addition to concerns of reliability and bias, quantitative research must also be accurate and valid. Accuracy refers to the extent to which the experimental results accord with theoretical models, or the extent to which empirical results measure the phenomenon in question. Validity in the social sciences can be assessed in three ways: content, criterion, or construct (Twycross & Shields, 2004). Content validity refers to the suitability of a study's data collection and analysis methods for the questions being investigated. Criterion validity refers to whether a study uses previously validated methods or, if its approach is novel, whether it has predictive value. Construct validity is similar to accuracy, in that it indicates the strength of the relation between a study's findings and theoretical constructs or other studies' findings (Twycross & Shields, 2004).
Quantitative research in curriculum construction and evaluation grew out of the common school movement in the nineteenth century and, though popular in the early to mid-twentieth century, drew increasing criticism into the late twentieth and early twenty-first centuries (Pinar et al, 2004). However, at about the time it was losing popularity with educational researchers, quantitative research gained new importance in the newly developed field of education policy studies. The movement toward accountability, toward assessing "learning gaps" between various groups, toward increasing the efficiency and effectiveness of policy implementations--along with the increasing role of the federal government and of national organizations in education--was founded upon quantitative analysis (Heck, 2004). The quality of quantitative research in education has been harshly criticized, however, and it remains a matter of concern for policy analysts and quantitative researchers. Some common critiques are that educational researchers are not adequately trained to approach social problems quantitatively, and that researchers often dismiss as irrelevant data that does not seem to match the expected results, offering explanations, after the fact, of why this data was not included in the final analysis (Sax, 1985; Heck, 2004). Even quantitative researchers have posed these critiques. They continue to advise policy makers, students of education, and professionals in the field of the importance of critically appraising a quantitative educational study before relying on its analyses.
History of Statistical Analysis in Education Research. Quantitative research in education grew out of the common school movement in the nineteenth century. The common school, a term coined by educational advocate and reformer Horace Mann (1796-1859), was a government-funded free school in which any child from any socioeconomic background or status could enroll. The movement formed the foundation for modern day public education (Travers, 1983). The establishment of common schools ushered in an era of educational management. The educators of the time, whose names remain well known today, were generally involved not directly in teaching, but rather in making educational policies, effectively governing boards of education, advising and consulting with principals and teachers, and founding educational journals and conferences. These management decisions relied on adequate, accurate quantitative descriptions of school enrollment and retention rates, of effective organizational structures, and of a variety of other matters thought to correlate to a school's efficiency and productivity (Pinar et al, 2004).
The quantitative movement soon permeated the curriculum field, and educational theorists became increasingly interested in using scientific methods to measure and improve curriculum construction. Edward Thorndike (1874-1949) was an American psychologist who is credited as the leader of this movement toward quantifying curriculum (Travers, 1983). He emphasized the importance of establishing "facts"; once these basic facts were discovered, Thorndike wrote, one would have a grasp of a field of knowledge. His theory was criticized by many scientists as a misunderstanding of the scientific method as applied to social science, and is cited as a primary reason for the misunderstanding of quantitative research in the field of education studies. Noam Chomsky, a prominent linguist and cognitive theorist, wrote that the scientific method is based not only on the collection of data, but also on the theories subsequently developed from this data and their ability to encompass a comprehensive range of phenomena (cited in Travers, 1983).
Thorndike's work, despite the critiques voiced by natural and physical scientists, inspired a movement in education aimed at classifying, structuring, and analyzing all aspects of schooling, not just school management. IQ scores were increasingly used to evaluate students, and soon inspired the movement toward subject tests (Pinar et al, 2004). The Scholastic Aptitude Test (SAT) grew out of Carl Brigham's research and development of army aptitude evaluations. Ralph Tyler (1902-1994) developed a method of curriculum construction based on quantifiable objectives, goals, procedures, and outcome assessments (1949)--a paradigm frequently used in schools today. Educational psychology and school counseling, too, were founded on a basis of quantitative studies and data (Travers, 1983). Quantitative, statistical analysis thus firmly established assessment tools, school organizational structures, curriculum construction guidelines, and other objective methods and approaches to pedagogy.
When the initial wave of enthusiasm for scientific formulations of curriculum subsided around the middle of the twentieth century, qualitative research methods gained appeal through their emphasis on the human, subjective factors previously ignored in the positivist paradigm of scientific formulations. At about the same time, however, quantitative, statistical analysis formed the methodological foundation for the new fields of educational politics in the 1960s and 1970s, and of education policy in the 1980s through to the present day (Heck, 2004). Education policy shares the intent of the initial push toward standardization that grew out of the common school movement: it is concerned with the efficient and effective management of classrooms, schools, teachers, and funding. One policy initiative is the Head Start Act of 1981 (initially created in 1965), an initiative that provides education and health services for low-income children, as well as parental support and guidance. Another initiative is the No Child Left Behind Act of 2001, legislation that provides federal funding to schools according to their effectiveness as measured by standardized test scores. A third initiative is Race to the Top, begun in 2009 to reward state school systems for complying with and improving upon certain educational policies; the Common Core State Standards initiative is a major component of Race to the Top.
Applications
The Theory & Practice of Statistical Analysis.
Data Visualization & Modeling
Statistical analysis is applied to experimental data after it has been collected. Before applying statistical analysis to empirical data, a quantitative study must define the variables it will quantify and measure. Attributes, or variables, may be independent or dependent. Independent variables are measurable attributes manipulated by the researcher, while dependent variables are those measured as part of the study. The dependent variables are dependent on--or a function of--the independent variables (Bertsekas & Tsitsiklis, 2002). For example, in a study describing the distribution of IQ scores for a population as a function of age, the IQ scores depend on the ages of the participants studied (the "data" for this experiment). The IQ score is a dependent variable (always plotted on the y-axis in a two-dimensional case), while the age is an independent variable (always plotted on the x-axis in a two-dimensional case). Studies can collect and analyze multidimensional data sets that include more than one dependent or independent variable.
Most often, once data is collected, a visual plot is made to analyze the spread of data. Visualizations guide researchers toward determining the best statistical tools for further describing the data--something that cannot be done generally from examining a table of measured quantities. Several possibilities for these visualizations are given below in Figure 1.
Visualizations are then used to determine a mathematical model that might be used to describe the data. In Figure 1(a), a linear model might be used; in Figure 1(b), a polynomial or exponential model is appropriate; in Figure 1(c), a distribution model is called for, while in Figure 1(d), cluster analysis would be helpful in grouping the data. These mathematical models are visualized in Figure 2.
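To illustrate the modeling step, the short sketch below fits a linear model of the kind suggested by Figure 1(a) to a small, hypothetical data set; the variable names and values are assumptions for illustration only, and any standard numerical library could be used in place of NumPy.

```python
# Minimal sketch, assuming hypothetical (age, score) data: fit an ordinary
# least-squares line, the kind of linear model suggested by Figure 1(a).
import numpy as np

ages = np.array([6, 7, 8, 9, 10, 11, 12])        # independent variable (x-axis)
scores = np.array([52, 55, 61, 63, 70, 74, 79])  # dependent variable (y-axis)

slope, intercept = np.polyfit(ages, scores, deg=1)  # first-degree polynomial = a line
predicted = slope * ages + intercept                # model values at each observed age

print(f"model: score = {slope:.2f} * age + {intercept:.2f}")
```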
Sampling Techniques. These examples demonstrate a critical aspect of quantitative studies: empirical data is always discrete, while mathematical models are continuous. In order to arrive at an accurate continuous model from discrete experimental data, valid and appropriate sampling methods must be used. Sampling is the technique through which units from a population are chosen for an empirical study (Coladarci et al, 2008). There are many sampling methods; the method used is determined by the nature of the question asked by the study, and by the characteristics of the population. As an example of how inadequate sampling might affect experimental results, consider Figure 3.
Some commonly used sampling methods include simple random, stratified, cluster, and quota sampling. Simple random sampling is a technique in which each member of the studied population has an equal probability of being chosen, and in which the researcher chooses among this population at random. Peltzer and Promtussananon, for example, used simple random sampling to choose students from a rural village in South Africa who were later interviewed about their knowledge of health matters (2003). They found that older children had a more objective understanding of diseases such as chicken pox or AIDS, while also having a better understanding of preventative measures. If plotted, this data might follow the pattern in Figure 1(a) or (b), where the dependent variable is the level of health understanding, and the independent variable is age. This study determined that more comprehensive health education is needed for preschool and lower elementary grade children in rural South African villages. Random sampling is best used when each individual has an equal chance of being chosen; this might not be the case, for example, if a survey of the general American population is done over the phone, as many low-income families do not have residential phone services and are, therefore, not listed in phonebooks.
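A minimal sketch of simple random sampling follows; the roster and sample size are hypothetical.

```python
# Minimal sketch: simple random sampling from a hypothetical roster of 500 students.
# Every member of the population has an equal probability of being chosen.
import random

population = [f"student_{i}" for i in range(1, 501)]  # hypothetical roster
sample = random.sample(population, k=50)              # 50 members drawn without replacement
print(sample[:5])
```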
Stratified sampling is a method in which a population is divided into sub-populations, and a representative sample is taken from each of the sub-populations. O'Hara Tompkins, Zizzi, Zedosky, Wright, and Vitullo, who studied physical education opportunities across schools in West Virginia, used stratified sampling to divide schools into sub-groups based on their size and age range (2004). They found that only 2% of junior high schools offered adequate physical education opportunities, while 31% of high schools did so. These results suggest that more physical education programs should be instituted in public schools in West Virginia, especially at the lower school levels. Stratified sampling is best used when the population studied can be easily classified into groups based on a variable to be studied. For example, O'Hara Tompkins et al. were interested in the amount of physical education offered by grade level; if they had simply chosen at random from all the schools in West Virginia, their sample may have contained many high schools and few elementary or middle schools. In order to assure equal representation for all grades, then, they used stratified sampling.
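The same idea can be sketched for stratified sampling: the population is first divided into strata (here, hypothetical school levels), and a random sample is drawn from each stratum so that every level is represented.

```python
# Minimal sketch: stratified sampling over hypothetical school levels, so that each
# stratum is represented regardless of how many schools it contains.
import random

strata = {
    "elementary": [f"elem_{i}" for i in range(300)],   # hypothetical school lists
    "junior_high": [f"jr_{i}" for i in range(120)],
    "high_school": [f"hs_{i}" for i in range(80)],
}
sample = {level: random.sample(schools, k=10) for level, schools in strata.items()}
```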
Cluster sampling is a technique in which the population is divided into small groups, or clusters, and researchers choose several of these clusters at random. Liu and Johnson used this technique to study teacher hiring processes across the United States (2006). They divided new and second-year teachers into groups by location, and chose among these at random. They found that, across the board, the hiring process was decentralized and locally administered, yet new teachers still did not have much personal contact with principals and school communities, or adequate information about their new positions. Cluster sampling is best used when the population studied is large, and when useful details can be gathered from studying a group of related individuals. For example, if Liu and Johnson had randomly selected teachers from all over the United States, they might not have been able to determine hiring practices at the school district level. However, by clustering teachers into school districts and then randomly choosing which districts to study, they were able to derive general patterns of hiring practices at the school district level.
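A comparable sketch for cluster sampling, assuming hypothetical districts and teachers: whole clusters are chosen at random, and every unit within a chosen cluster is studied.

```python
# Minimal sketch: cluster sampling. Five hypothetical districts are chosen at random,
# and all teachers within the chosen districts form the sample.
import random

districts = {f"district_{d}": [f"teacher_{d}_{t}" for t in range(25)] for d in range(40)}
chosen = random.sample(list(districts), k=5)
sample = [teacher for d in chosen for teacher in districts[d]]
```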
Quota sampling is a method in which members of a population are chosen to meet a predetermined quota. For example, a study might call for 100 men and 100 women to be interviewed. This method is used sparingly in education research because it suffers from several methodological flaws, such as a lack of randomness (Curtice & Sparrow, 1997).
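For completeness, quota sampling can be sketched as follows; the stream of volunteers and the quotas are hypothetical, and the key point is that inclusion is decided by arrival order rather than by chance.

```python
# Minimal sketch: quota sampling. Hypothetical volunteers are accepted until each
# predetermined quota is filled; unlike random sampling, chance plays no role in inclusion.
import random

def volunteers():
    # Hypothetical, unending stream of self-selected respondents.
    while True:
        yield {"id": random.randint(1, 100_000), "group": random.choice(["men", "women"])}

quotas = {"men": 100, "women": 100}
sample = {"men": [], "women": []}
for person in volunteers():
    group = person["group"]
    if len(sample[group]) < quotas[group]:
        sample[group].append(person)
    if all(len(sample[g]) >= q for g, q in quotas.items()):
        break
```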
Further Insights
Descriptive & Inferential Statistics. There are two general branches of statistical analysis: descriptive and inferential statistics. Descriptive statistics are used to characterize empirical data, while inferential statistics are used to compare two or more empirically collected data sets or to compare experimental data with theoretical constructs (Coladarci et al, 2008).
Descriptive Statistics. The goal of descriptive statistics is to specify essential features of the attributes of a population. It can be used both as a stand-alone research method and as part of a larger study, such as one which tracks features of a population before and then after an intervention. For example, Peltzer and Promtussananon used statistics to describe how familiar students were with health practices (2003); O'Hara Tompkins et al. used statistics to describe the amount of physical education in West Virginia schools and to establish a trend between physical education programs and grade levels; Liu and Johnson used statistical analysis to characterize new teachers' hiring experiences and school districts' hiring practices.
Methods in descriptive statistics include visualizing data as in Figure 1, finding mathematical models to describe data trends as in Figure 2, and using other measurements such as mean, mode, median, range, and standard deviation to characterize data. Standard deviation describes the average "spread" of the data around its mean, and is used in probabilistic models such as the one in Figure 1(c).
Two sets of data, for example, might have the same mean, so another measure is needed to describe the way the data is distributed (see Figure 4).
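This point can be illustrated with a short sketch: the two hypothetical score sets below share the same mean but differ sharply in spread, which is what the standard deviation captures.

```python
# Minimal sketch: two hypothetical score sets with the same mean (70) but different spread.
import statistics

scores_a = [68, 69, 70, 71, 72]
scores_b = [50, 60, 70, 80, 90]

for name, scores in (("A", scores_a), ("B", scores_b)):
    print(name, statistics.mean(scores), round(statistics.stdev(scores), 1))
# Both means are 70, but set B's standard deviation is roughly ten times larger than set A's.
```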
There are many different kinds of probabilistic models or distributions. Some of the more common ones are given in Figure 5.
The most common distribution is the "normal," "bell curve," or Gaussian distribution. The Gaussian distribution has been found to model many aspects of natural and social phenomena, such as IQ scores, the weight of sugar beets, or the height of cornstalks (Coladarci et al, 2008).
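As a small illustration, the sketch below draws simulated scores from a Gaussian distribution, assuming the conventional IQ scaling of mean 100 and standard deviation 15.

```python
# Minimal sketch: simulated IQ-like scores drawn from a normal (Gaussian) distribution.
# The mean of 100 and standard deviation of 15 follow the conventional IQ scaling.
import numpy as np

rng = np.random.default_rng(seed=0)
iq_scores = rng.normal(loc=100, scale=15, size=1_000)
print(round(iq_scores.mean(), 1), round(iq_scores.std(), 1))  # both close to 100 and 15
```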
Loeb, Dynarski, McFarland, Morris, Reardon, and Reber (2017) have argued that descriptive analysis is undervalued in educational research compared to randomized trials. While randomized trials can be useful for determining the effectiveness of various approaches in optimal conditions, understanding the current conditions in a population can be an important tool in determining what interventions that particular population might need.
Inferential Statistics. Inferential statistics are used when the difference between two data sets or between empirical data and theory must be measured. For example, Ornstein measured students' attitudes toward science by analyzing the responses of students engaged in challenging, open-ended laboratory investigations, and those of students engaged in simple, pre-formulated laboratory exercises (2006). Comparing the two sets of data suggested that students involved in more challenging tasks had a more positive attitude toward science.
T-Test & ANOVA. There are several methods for determining the differences between two sets of data or between empirical data and theory. Differences cannot be computed exactly, because only approximate models can be derived from discrete data. Therefore, for two data sets to be considered different, it is not enough to measure the difference between them; one must also determine whether this difference is statistically significant--whether or not it is likely to have occurred by chance. The t-test is a commonly used tool for determining whether the difference between the means of two samples is statistically significant. For example, Hook, Bishop, and Hook used the t-test in their study comparing two math curricula implemented in the same school district (2007).
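A minimal sketch of an independent-samples t-test follows; the attitude scores are hypothetical and SciPy is assumed to be available, though any statistics package offers an equivalent test.

```python
# Minimal sketch: independent-samples t-test on hypothetical attitude scores for two groups.
from scipy import stats

group_a = [3.1, 3.4, 2.9, 3.8, 3.5, 3.0, 3.6]  # e.g., open-ended laboratory group
group_b = [2.4, 2.8, 2.6, 3.0, 2.5, 2.7, 2.9]  # e.g., pre-formulated laboratory group

t_stat, p_value = stats.ttest_ind(group_a, group_b)
# A p-value below a chosen threshold (commonly 0.05) suggests that the difference
# between the two group means is statistically significant.
print(round(t_stat, 2), round(p_value, 4))
```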
Analysis of variance (ANOVA) techniques test the significance of differences between groups through more sophisticated methods than the t-test. ANOVA takes into account the spread of variables, and can be used in cases in which more than two groups or variables are examined. There are several classes of ANOVA analyses. Greenberg et al. used ANOVA to differentiate between nutritional practices of American children, English-speaking Hispanic children, and Spanish-speaking Hispanic children in Texas (2007). There are many other ways of measuring differences using statistical analyses; however, the t-test and ANOVA are the most common in educational research.
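A corresponding one-way ANOVA sketch, again with hypothetical data and assuming SciPy, shows how more than two groups can be compared at once.

```python
# Minimal sketch: one-way ANOVA comparing three hypothetical groups simultaneously.
from scipy import stats

group_1 = [3.1, 3.4, 2.9, 3.8, 3.5]
group_2 = [2.4, 2.8, 2.6, 3.0, 2.5]
group_3 = [3.9, 4.1, 3.7, 4.0, 3.8]

f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)
# A small p-value indicates that at least one group mean differs from the others.
print(round(f_stat, 2), round(p_value, 4))
```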
Viewpoints
Statistical analysis encompasses the whole range of techniques used in quantitative studies, as all such studies are concerned with the examination of discrete data, describing this data using quantifiable measures, and comparing this data to theoretical models or other experimental results. Statistical analysis is prevalent in the field of education research today, specifically in policy research and studies of school management, funding, staffing, and student retention rates. It is less common in studies of curriculum development and analysis (Pinar et al, 2004).
Curriculum researchers, since the mid- to late twentieth century, have been using qualitative techniques to design materials and plans for the classroom. Qualitative techniques account for individual differences, are sensitive to the biases of the researchers, and offer a more nuanced description of the dynamics of teaching than quantifiable measures are able to do (Creswell, 2003). Qualitative studies fall into the broad categories of "critical" and "interpretive" research paradigms. Critical methods such as postmodern, critical theory, and feminist approaches tend to be used in the examination of power structures and struggles, of the effects of institutions on biases and learning gaps, and of techniques for overcoming these biases and gaps. Interpretive methods such as phenomenology emphasize the individual and the relation between individuals as critical to understanding classroom dynamics and curriculum.
Terms & Concepts
ANOVA: Analysis of variance is a technique used to measure the differences between two or more sets of data and to determine if this difference is statistically significant.
Bias: A personal preference that must be acknowledged and accounted for in quantitative social research.
Cluster Sampling: A sampling method used for large populations that first defines small clusters of related individuals, then chooses at random among these clusters.
Dependent Variables: Variables measured by a researcher that depend on and change according to modifications in the independent variable.
Descriptive Statistics: Statistical analyses used to describe features of data, such as spread, mean, mode, median, and distribution type.
Independent Variables: Variables in an experiment that are manipulated by a researcher to effect changes in dependent variables.
Inferential Statistics: Statistical analyses used to study the differences between two or more sets of empirical data or between experimental data and theoretical constructs.
Qualitative Research: Research methods that emphasize qualitative data as opposed to measurable quantitative data; these might include critical or interpretive methods.
Quantitative Research: Research that follows the scientific method, is based on the collection of measurable, quantifiable data, and is expected to be valid, accurate, reliable, and reproducible.
Quota Sampling: A sampling method in which a predetermined quota is used to select participants for the study.
Simple Random Sampling: A sampling method in which each individual within a population has an equal chance of being chosen; the sample is chosen randomly from the population.
Stratified Sampling: A sampling technique in which the population is first divided into sub-groups, or strata, and a sample is then drawn from each of these strata.
T-Test: A statistical test used to determine if the mean of a given data set is significantly different from the mean of another data set or from an expected theoretical value.
Bibliography
Adegbesan, S. O. (2013). Effect of principals' leadership style on teachers' attitude to work in Ogun state secondary schools, Nigeria. Turkish Online Journal of Distance Education (TOJDE), 14, 14-28. Retrieved December 11, 2013 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=89237693&site=ehost-live
Allen-Meares, P., & Lane, B. (1990). Social work practice: Integrating qualitative and quantitative data collection techniques. Social Work, 35, 452-458. Retrieved October 3, 2007 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=9010221401&site=ehost-live
Bertsekas, D., & Tsitsiklis, J. (2002). Introduction to probability. Belmont, MA: Athena Scientific.
Coladarci, T., Cobb, C., Minium, E., & Clarke, R. (2008). Fundamentals of statistical reasoning in education. Hoboken, NJ: John Wiley & Sons.
Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.
Curtice, J., & Sparrow, N. (1997). How accurate are traditional quota opinion polls? Journal of the Market Research Society, 39, 433-448.
Doganay, A., & Demir, O. (2011). Comparison of the level of using metacognitive strategies during study between high achieving and low achieving prospective teachers. Educational Sciences: Theory & Practice, 11, 2036-2043. Retrieved December 11, 2013 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=70399601&site=ehost-live
Greenberg, J., Evans, A., Harris, K., Loyo, J., Ray, T., Spaulding, C., & Gottlieb, N. (2007). Preschooler feeding practices and beliefs. Family and Community Health, 30, 257-270. Retrieved October 3, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=25562912&site=ehost-live
Head Start Act. (1981). 42 U.S.C. §9801 et seq. Retrieved October 2, 2007, from http://www.acf.hhs.gov/programs/ohs/legislation/HS%5Fact.html.
Heck, R. (2004). Studying educational and social policy. Mahwah, NJ: Lawrence Erlbaum.
Hook, W., Bishop, W., & Hook, J. (2007). A quality math curriculum in support of effective teaching for elementary schools. Educational Studies in Mathematics, 65, 125-148.
Hsu, T. (2005). Research methods and data analysis procedures used by educational researchers. International Journal of Research & Method in Education, 28, 109-133. Retrieved October 3, 2007 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=18377138&site=ehost-live
Lazarsfeld, P., & Oberschall, A. (1965). Max Weber and empirical social research. American Sociological Review, 30, 185-199. Retrieved October 3, 2007 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=12883941&site=ehost-live
Liu, E., & Johnson, S. (2006). New teachers' experiences of hiring: Late, rushed, and information-poor. Education Administration Quarterly, 42, 324-360. Retrieved October 16, 2007 from EBSCO Online Database Academic Search Complete. http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=22517796&site=ehost-live
Loeb, S., Dynarski, S., McFarland, D., Morris, P., Reardon, S., & Reber, S. (2017). Descriptive analysis in education: A guide for researchers. Washington, DC: US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. Retrieved from https://files.eric.ed.gov/fulltext/ED573325.pdf.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425. (2001). Retrieved October 2, 2007, from http://www.ed.gov/policy/elsec/leg/esea02/index.html.
O'Hara Tompkins, N., Zizzi, S., Zedosky, L., Wright, J., & Vitullo, E. (2004). School-based opportunities for physical activity in West Virginia public schools. Preventive Medicine, 39, 834-840.
Ornstein, A. (2006). The frequency of hands-on experimentation and student attitudes toward science: A statistically significant relation. Journal of Science Education and Technology, 15, 285-297.
Peltzer, K., & Promtussananon, S. (2003). Black South African children's understanding of health and illness: Colds, chicken pox, broken arms and AIDS. Child: Care, Health, and Development, 29, 385-393. Retrieved October 3, 2007 from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=10531423&site=ehost-live
Pinar, W., Reynolds, W., Slattery, P., & Taubman, P. (2004). Understanding curriculum. New York: Peter Lang Publishing, Inc.
Ramos, J. A. (2011). A comparison of perceived stress levels and coping styles of non-traditional graduate students in distance learning versus on-campus programs. Contemporary Educational Technology, 2, 282-293. Retrieved December 11, 2013 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=69823670&site=ehost-live
Sax, G. (1985). Quantitative methods: A critique. Paper presented at the Annual Meeting of the American Education Research Association, Chicago, IL. (ERIC Document Reproduction Service No. ED261094).
Scafidi, B., Sjoquist, D., & Stinebrickner, T. (2007). Race, poverty, and teacher mobility. Economics of Education Review, 26, 145-159.
Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, 5-8.
Shafaei, A., Salimi, A., & Talebi, Z. (2013). The impact of gender and strategic pre-task planning time on EFL learners' oral performance in terms of accuracy. Journal of Language Teaching & Research, 4, 746-753. Retrieved December 11, 2013 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=90355542&site=ehost-live
Travers, R. (1983). How research has changed American schools: A history from 1840 to the present. Kalamazoo, MI: Mythos Press.
Twycross, A., & Shields, L. (2004). Validity and reliability--What's it all about? Paediatric Nursing, 16, 28. Retrieved October 3, 2007 from EBSCO Online Academic Search Complete. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15014418&site=ehost-live
Tyler, R. (1949). Basic principles of curriculum and instruction. Chicago: University of Chicago Press.
Walberg, H. (1971). Generalized regression models in educational research. American Educational Research Journal, 8, 71-91.
Xin, M. (2005). Growth in mathematics achievement: Analysis with classification and regression trees. Journal of Educational Research, 99, 78-87.
Suggested Reading
Allen-Meares, P., & Lane, B. (1990). Social work practice: Integrating qualitative and quantitative data collection techniques. Social Work, 35, 452-458. Retrieved October 3, 2007 from EBSCO Online Database Academic Search Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=9010221401&site=ehost-live
Coladarci, T., Cobb, C., Minium, E., & Clarke, R. (2008). Fundamentals of statistical reasoning in education. Hoboken, NJ: John Wiley & Sons.
Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.
Head Start Act. (1981). 42 U.S.C. §9801 et seq. Retrieved October 2, 2007, from http://www.acf.hhs.gov/programs/ohs/legislation/HS%5Fact.html.
Heck, R. (2004). Studying educational and social policy. Mahwah, NJ: Lawrence Erlbaum.
Hrynkevych, O. S. (2017). Statistical analysis of higher education quality with use of control charts. Advanced Science Letters, 23(10), 10070–10072.
Hsu, T. (2005). Research methods and data analysis procedures used by educational researchers. International Journal of Research & Method in Education, 28, 109-133. Retrieved October 3, 2007 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=18377138&site=ehost-live
Jian, L., & Lomax, R. G. (2011). Analysis of variance: What is your statistical software actually doing? Journal of Experimental Education, 79, 279-294. Retrieved December 11, 2013 from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=60899868&site=ehost-live
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425. (2001). Retrieved October 2, 2007, from http://www.ed.gov/policy/elsec/leg/esea02/index.html.
Pinar, W., Reynolds, W., Slattery, P., & Taubman, P. (2004). Understanding curriculum. New York: Peter Lang Publishing, Inc.
Sax, G. (1985). Quantitative methods: A critique. Paper presented at the Annual Meeting of the American Education Research Association, Chicago, IL. (ERIC Document Reproduction Service No. ED261094).
Travers, R. (1983). How research has changed American schools: A history from 1840 to the present. Kalamazoo, MI: Mythos Press.