Empirical Research

Calfee and Chambliss (2003) define empirical research as "the systematic approach for answering certain types of questions" (p. 152). Empirical research design encompasses a full range of systematic approaches to gathering evidence, arising from questions that may be both theoretical and practical. Researchers who use empirical research methods have a pragmatic need to investigate a question. Methods are not as clearly defined as in quantitative or qualitative research, as the researcher determines the method based on the questions that need investigating. However, a process of activities is inherent within empirical research. Empirical researchers identify and conceptualize the problem they wish to explore; compose a research question; survey the project to determine how effective their results might be; construct their research plans; select the subjects to be used to answer the research question; collect and analyze data; and interpret and present their findings (Calfee & Chambliss, 2003; Bausell, 1986).

Keywords Confounding; Empirical Data; Empirical Research; Factorial Design; Literature Review; Qualitative Research; Quantitative Research; Random Sample; Research Method; Scientifically Based Research; Validity

Overview

Calfee and Chambliss (2003) define empirical research as "the systematic approach for answering certain types of questions" (p. 152). Empirical research is:

… a collection of evidence under carefully defined replicable conditions, whereby social science researchers seek to discover the influence of factors that affect human thought and action, and to understand when and why these influences occur (p. 152).

This type of research creates and validates theories about how people think and act, as researchers look for answers to practical questions.

The ideas behind empirical research are based upon epistemology. Popper (1963) states that science is involved in explaining why things in nature are the way they are. New knowledge is generated from this pursuit of truth. In order to find answers to questions, scientists pursue empirical research and embark on a process of investigation that leads to an accurate picture of the facts. Dyer (1995) states that from its development in the seventeenth century to today, "the doctrine of empiricism provided a solution to this problem by specifying clearly how a researcher should set about the process of acquiring knowledge" (p. 9). Researchers who delve into empirical research are "able to develop powerful explanations for a wide range of natural phenomena" (p. 10).

Principles of Scientific Knowledge

There are basic principles of empiricism that guide the nature of the pursuit of scientific knowledge. Dyer (1995) states that these principles include that:

• The understanding of natural phenomena can only be constructed from information obtained directly through the senses; that is, research proceeds by careful observation of the object of inquiry.

• The truth of an idea can be assessed when the situation it describes can be observed by any competent person, to the exclusion of all bias.

• Observable processes can be objectively verified (p. 10).

Empirical research design encompasses a full range of systematic approaches to gathering evidence, arising from questions that may be both theoretical and practical (Calfee & Chambliss, 2003). Researchers who use empirical research methods have a pragmatic need to investigate a question. Methods are not as clearly defined as in quantitative or qualitative research, as the researcher determines the method based on the questions that need investigating. However, a process of activities is inherent within empirical research. Empirical researchers identify and conceptualize the problem they wish to explore; compose a research question; survey the project to determine how effective their results might be; construct their research plans; select the subjects to be used to answer the research question; collect and analyze data; and interpret and present their findings (Calfee & Chambliss, 2003; Bausell, 1986).

Major factors in constructing a research project include determining a research question and providing a context for the study. Calfee and Chambliss (2003) state that identifying a problem to research, one that can be conceptualized, is the most important aspect of empirical research. Sugarman (2004) states that "good empirical research is contingent in part upon knowing what questions are worth asking and how to investigate and measure them" (p. 228). Empirical researchers must formulate a question that can be answered through objective evidence. Value judgments are not a part of empirical research. Calfee and Chambliss (2003) provide a question that researchers should ask themselves:

Assuming that I collect evidence of one sort or another, and obtain a particular set of results, to what degree can I make a convincing argument when I interpret the findings in relation to the original questions? (p. 154).

Defining the Question

Questions can be developed through many channels. The researcher, in reviewing previous literature in the area of inquiry, can identify perceived gaps or shortcomings in other studies and can base the question on furthering knowledge. Testing existing theory or formulating a new theory is based on what Bausell (1986) calls "a desire or a need either to know if an existing program or practice works or to solve a pressing clinical problem" (p. 12).

While researchers may define their question, they must be prepared to shift or change elements of it if they find that it is faulty and cannot be effectively answered. Research questions can be converted into a working hypothesis, one that defines the study's purpose and "forces the researcher to come to grips with exactly what is being tested" (Bausell, 1986, p. 14). Once a working question has been defined, researchers need to use their professional knowledge to determine what prior knowledge they have about the topic that might help them explore the question (Calfee & Chambliss, 2003).

After determining prior knowledge of the subject, researchers do extensive research to determine what studies have been written previously about the topic. They write a literature review that explores the topic and related topics (Krathwohl, 1997; Slavin, 1986; Calfee & Chambliss, 2003). Bausell (1986) has developed commonsense guidelines that can make the researching and writing of a literature review more manageable. He suggests that researchers:

• Search for published literature reviews offered by many journals;

• Employ all the relevant abstracting and citation services available;

• Use computerized retrieval systems;

• Concentrate on retrieving information from major journals;

• Ask for help from reference librarians and those who have published articles;

• Read the key studies in the area;

• Use the reference lists of published studies as sources of additional references; and,

• Record and categorize researched materials systematically (Bausell, 1986, pp. 10-11).

Collecting Data

Part of the research process is the collection of empirical data. Calfee and Chambliss (2003) state that empirical data are inherently qualitative in nature. As with all qualitative studies, empirical researchers triangulate their data, considering different ways to collect data throughout the study. Quantitative research is also empirical in nature, as this form of research relies on evidence. Those using empirical research can rely on both quantitative and qualitative data to inform their study. They may "quantify their observations by using statistical methods to summarize information and conducting inferential analysis" or qualify information to allow the researcher "to delve into underlying processes and explore complex hypotheses" (p. 155).

Calfee and Chambliss (2003) state that "the practical significance of a study depends on the quality of the research rather than the character of the setting" (p. 155). Research can be conducted in "real" classrooms or in orchestrated lab environments. While an orchestrated design, or researcher-imposed design, may "eliminate extraneous fluctuations in conditions," classroom-based research is "presumed to be directly applicable" (p. 155). Brown (1992) and Collins (1994) suggest that researchers perform a design experiment. In a design experiment, variations of an experiment, both quantitative and qualitative, are explored in different classrooms. Empirical researchers develop a design, or "the steps in identifying the contextual factors that influence performance, planning the conditions of data collection so that these factors are adequately represented and ensuring that the plans allow defensible generalizability of the findings" (Calfee & Chambliss, 2003, p. 155). A solid research design "increases the chances that the results are trustworthy" (p. 157).

Factorial Design

Another element in a solid research design is the factorial design. A factor is "a variable that the researcher defines and controls in order to evaluate its influence on performance." The empirical researcher must identify factors that "may substantially influence performance or inform [one's] understanding of the phenomenon" (Calfee & Chambliss, 2003, p. 158). Novice researchers run into trouble with their studies when they misidentify factors or limit their study to too few factors. As Calfee and Chambliss (2003) point out, "Pinning down important sources of variability ensures that systematic effects stand out clearly" (p. 158).

After evidence is collected, the empirical researcher makes sense of it. Evidence can confirm researchers' assumptions or call those assumptions into question. The findings must be tested for validity, to ensure that the results hold up under scrutiny. As Calfee and Chambliss (2003) ask, "Would the same conclusion hold if the student were given a different test at a different time or by a different tester?" (p. 156). The trustworthiness of various interpretations is called construct validity, as a researcher determines "if the findings mean what [the researcher] thinks it means" (p. 157). The researcher's job is to plan a design and arrange conditions in such a way as to ensure a solid, valid, and reliable study.

Analyzing Data

After collecting data, researchers must analyze it. Analysis of the data that result from the research design is summarization, or the "pulling together [of] trends in the evidence" (Calfee & Chambliss, 2003, p. 164). Behrens and Smith (1996) outline several systematic techniques for analyzing the data:

• Qualitative analysis - The researcher looks for trends in the statistical data.

• Central tendencies - The researcher looks for typical elements in data sets.

• Variability - The researcher looks for deviations from typicalities.

• Correlations - The researcher looks for parallel trends.

• Graphic representations - The researcher uses computerized statistical analysis programs to analyze data (p. 167).

Interpreting the Findings

After researchers analyze the data, they must interpret and generalize the findings. They ask themselves questions about the clarity of their findings and the extent of the treatment effect. Empirical researchers "project basic findings with confidence to other contexts, without modification of the original design" (p. 167). The final written report has basic components that include:

• an introduction to the problem under investigation;

• a description of the procedures used to address the problem;

• a description and presentation of the results; and,

• a discussion of the study findings (Bausell, 1986, p. 302).

Dyer (1995) states that "the importance of empiricism in science is two-fold" (p. 11). Through empirical research, knowledge about truth can be obtained through active inquiry. As Dyer (1995) points out, "If you want to know the reason why some phenomenon occurs then you have to go out and actually collect information about it" (p. 11). This information must then be verified objectively, as "each independent verification of a given result increases confidence that this is a genuine effect which requires an explanation to be found" (p. 12).

Applications

Control: Evidence needs to be based on a random sample in order for findings to be generalized to other situations (Calfee & Chambliss, 2003). A handy random sample occurs when a researcher has access to teachers and students in a particular school, but its typicality may be generalized to other schools in the area. In a purposive random sample, the setting is selected "because it meets conditions important to [the researcher's] hypothesis" (p. 157).

Defending interpretations: Empirical researchers need to develop the ability to reflect upon the research, to develop a critical eye so that they can see holes in the research, as well as defend their research against other positions (Calfee & Chambliss, 2003).

Determining construct validity: Construct validity is "the trustworthiness of various interpretations of the evidence" (Calfee & Chambliss, 2003, p. 156). Invalid results occur when researchers fail to adequately think through all problems that might arise during the study. Calfee and Chambliss outline specific questions that can guide researchers in constructing their study:

• Is the plan of the study adequate?

• To what extent does the context allow generalizations to other situations?

• How does the finding mesh with other studies?

• What are the cost-benefit implications of various decisions springing from the study? (p. 157)

Factors: There are three types of factors. They include:

• Treatment factors: Environmental factors that are directly controlled by the researcher.

• Person factors: Intrinsic characteristics of individuals, such as age, sex, ability, prior knowledge, and experience.

• Outcome factors: The researcher directs the choice of measures in an investigation, such as standardized tests. A researcher can construct his or her own measure, although it must be submitted to tests of reliability and validity (Calfee & Chambliss, 2003, p. 160).

Maintenance of uniformity: Keeping conditions constant during data collection is critical in any empirical research study. Uniformity can be difficult to establish for several reasons. Events or free-floating issues may influence activities; for example, subjects may not be as engaged in the process as they could be, which affects conditions.

Research methods: There are four basic methods, or strategies for collecting data.

• In naturalistic observations, observable behaviors occur in the natural settings. There is no interference by the researcher.

• Archival records involve answering questions by using existing records.

• Survey research is the obtaining of data from either oral or written interviews with people.

• Experiments are carefully controlled situations in which a researcher manipulates one or more independent variables to observe their effect on the dependent variable (Kiess, 1996, pp. 17-18).

Uncontrolled variables: There can be uncontrolled fluctuations that can obscure the results of a research study. Large differences in results can obscure the effect of the treatment (Calfee & Chambliss, 2003).

Issues

Failure of Treatments

There may be studies that do not yield the results that an empirical researcher predicts. For treatments that result in little or no effect, the null hypothesis is retained. In other words, the treatment showed no difference. Several factors may create such results. Calfee and Chambliss (2003) point out that "student performance may vary so widely that random fluctuations swamp the effect" (p. 156). A null result may force the empirical researcher to review and revise his or her question, or lead the researcher to interpret outcomes in another way.
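The way random fluctuations can swamp a small treatment effect can be illustrated with a brief simulation (all numbers are hypothetical, and the seed is fixed only so the sketch is reproducible):

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

# Simulate a small true treatment effect (+2 points) buried in
# wide student-to-student variability (standard deviation of 15)
control = [random.gauss(70, 15) for _ in range(20)]
treatment = [random.gauss(72, 15) for _ in range(20)]

observed_diff = statistics.mean(treatment) - statistics.mean(control)
spread = statistics.stdev(control + treatment)

# When variability is this large relative to the effect, the observed
# mean difference may not stand out from the noise, and the null
# hypothesis cannot be rejected
print(f"observed difference={observed_diff:.2f}, spread={spread:.2f}")
```

With the spread many times larger than the true two-point effect, samples of this size will often show an observed difference that is indistinguishable from chance fluctuation.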

The No Child Left Behind Act (NCLB)

"Scientifically-based research" is a phrase associated with the No Child Left Behind Act of 2001. The legislation stipulates that federally funded programs and practices must be grounded in scientifically based research. Beghetto (2003) states that "school leaders who depend on federal funding are now required to be aware of the nature of the research that guides their programs and practices" (p. 1).

Scientific research is seen as a means for improving education and developing a knowledge base for what constitutes skillful teaching. Whitehurst (2002) claims that

… there is every reason to believe that, if we invest in the education sciences and develop mechanisms to encourage evidence-based practice, we will see progress and transformation of the same order and magnitude as we have seen in medicine and agriculture.

Terms & Concepts

Confounding: Confounding occurs when the effect of a primary factor cannot be separated from that of another factor. A study that merely compares an innovative approach to a traditional one can be considered too narrow; using qualitative description methods can counteract such narrowness (Cronbach, 1963; Calfee & Chambliss, 2003).

Empirical Data: Empirical data are scores or measurements based on systematic observation or sensory experience (Kiess, 1996, p. 16).

Factorial Design: A research plan includes a factorial design, or the inclusion of all factors to be assessed in a research study to show variations (Calfee & Chambliss, 2003).

Random Sample: A random sample occurs when the researcher gives every member of the population to which he or she has access an equal chance of being in the research sample. The resulting sample would be representative (Bausell, 1986).
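A simple random sample of this kind can be sketched in Python with the standard `random` module (the population of thirty student IDs is hypothetical):

```python
import random

# Hypothetical accessible population: thirty student IDs
population = list(range(1, 31))

# Draw a simple random sample without replacement, giving every
# member an equal chance of appearing in the research sample
sample = random.sample(population, k=10)

print(sorted(sample))  # ten distinct IDs from the population
```

Because `random.sample` draws without replacement and weights every member equally, each run satisfies the equal-chance condition in the definition above.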

Research Method: A research method basically consists of collecting data to test a research hypothesis (Kiess, 1996, p. 17).

Scientifically Based Research: The No Child Left Behind Act of 2001 defines scientifically-based research as "research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs."

Validity: Validity is the essential criterion for judging evidence developed in empirical studies. The researcher must be able to defend his or her interpretations against other interpretations (Calfee & Chambliss, 2003). Messick (1995) states that validity is "the strength of the argument that a particular test outcome means what the tester says that it means" (p. 742).

Bibliography

Bausell, R. (1986). A Practical Guide to Conducting Empirical Research. New York, NY: Harper and Row.

Behrens, J., & Smith, M. (1996). Data and data analysis. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 945-989). New York, NY: Macmillan.

Beghetto, R. (2003-4). Scientifically based research. ERIC #ED474304. ERIC Clearinghouse on Education Management. Eugene, OR. Retrieved on December 17, 2007, from http://www.ericdigests.org/2003-4/empirical-research.html

Brown, A. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2, 141-178.

Calfee, R., & Chambliss, M. (2003). The design of empirical research. In J. Flood, D. Lapp, J. Squire, & J. Jensen (Eds.), Handbook of research on teaching the English language arts (pp. 152-170). Mahwah, NJ: Lawrence Erlbaum.

Collins, A. (1994). Toward a design science of education. In E. Scanlon & T. O'Shea (Eds.), New directions in educational technology. New York, NY: Springer-Verlag.

Cronbach, L. (1963). Evaluation for course improvement. Teachers College Record, 64, 97-121.

Dyer, C. (1995). Beginning research in psychology. Cambridge, MA: Blackwell.

Kelly, S., & Majerus, R. (2011). School-to-school variation in disciplined inquiry. Urban Education, 46, 1553-1583. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=66817261&site=ehost-live

Kiess, H. (1996). Statistical concepts for the behavioral sciences (2nd ed.). Boston, MA: Allyn & Bacon.

Krathwohl, D. (1997). Methods of educational and social science research: An integrated approach. Menlo Park, CA: Addison Wesley.

Maleyko, G., & Gawlik, M.A. (2011). No child left behind: What we know and what we need to know. Education, 131, 600-624. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=59789100&site=ehost-live

No Child Left Behind Act of 2001. Pub. L. No. 107-110, 115 Stat. 1425.

Popper, K. (1963). Conjectures and refutations. London: Routledge and Kegan Paul.

Schenker-Wicki, A., & Inauen, M. (2011). The economics of teaching: What lies behind student-faculty ratios?. Higher Education Management & Policy, 23, 31-50. Retrieved December 15, 2013, from EBSCO Online Database Education Research Complete. http://search.ebscohost.com/login.aspx?direct=true&db=ehh&AN=74123837&site=ehost-live

Slavin, R. (1986). Best evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher, 15, 5-11.

Sugarman, J. (2004, Summer). The future of empirical research in bioethics. Journal of Law, Medicine and Ethics, 32, 226-231. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=16288889&site=ehost-live

Whitehurst, G. (2002). "Statement of Grover J. Whitehurst, Assistant Secretary for Research and Improvement, before the Senate Committee on Health, Education, Labor and Pensions." Washington, D.C.: U.S. Department of Education. Retrieved December 17, 2007, From http://www.ed.gov/offices/IES/

Suggested Reading

Bartlett, E. (1983, June). Reacting to the findings of empirical research. American Journal of Public Health, 73, 704-706. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=23136130&site=ehost-live

Chambliss, M., & Calfee, R. (1988). Textbooks for learning: Nurturing children's minds. Malden, MA: Blackwell.

Daly, M. & Wilson, D. (2007, Oct.). Relative comparisons and economics: Empirical evidence. FRBSF Economic Letter, 30, 1-3. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=27082218&site=ehost-live

Lund, T. (2005, September). A metamodel of central influences in empirical research. Scandinavian Journal of Educational Research, 49, 385-398. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=17835319&site=ehost-live

Messick, S. (1995). Validity of psychological assessment: Validation of inference from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.

Miall, D. (2006). Empirical approaches to studying literary readers. Book History, 9, 291-311. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=22901958&site=ehost-live

Newman, M., & Elbourne, D. (2004). Improving the usability of educational research: Guidelines for the REPOrting of primary empirical research studies in education (The REPOSE guidelines). Evaluation and Research in Education, 18, 201-212. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=19527299&site=ehost-live

Nierlich, E. (2005). An "empirical science" of literature. Journal for General Philosophy of Science, 36, 351-376. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=22897260&site=ehost-live

Phillips, D. (2005, Nov.). The contested nature of empirical educational research (and why philosophy of education offers little help). Journal of Philosophy of Education, 39, 577-597. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=19248774&site=ehost-live

Sieber, J. (2004, Oct.). Empirical research on research ethics. Ethics & Behavior, 14, 397-412. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=15805513&site=ehost-live

Simon, H. (1981). The sciences of the artificial (2nd ed.). Cambridge, MA: MIT Press.

Varnhagen, C., & Digdon, N. (2002, May). Helping students read reports of empirical research. Teaching of Psychology, 29, 160-165. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=6472500&site=ehost-live

Varga, A. (2006, Oct.). The spatial dimension of innovation and growth: Empirical research methodology and policy analysis. European Planning Studies, 9, 1171-1186. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=22897531&site=ehost-live

Verburgh, A., Elen, J., & Lindblom-Ylanne, S. (2007, Sept.). Investigating the myth of relationship between teaching and research in higher education: A review of empirical research. Studies in Philosophy and Education, 26, 449-465. Retrieved December 15, 2007, from EBSCO online database, Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=26920298&site=ehost-live

Essay by Tricia Smith, Ed.D.

Dr. Tricia Smith is an Assistant Professor of English at Fitchburg State College in Fitchburg, Massachusetts, and teaches theory and pedagogy courses in English Education. She has written several articles on on-line instruction, advising, and collaborative learning. Her other areas of interest include linguistics and young adult literature.