Evidence-based Information and Special Education
Evidence-based information in special education refers to the use of research-supported practices to guide interventions and support for students with diverse learning challenges. The shift towards evidence-based approaches emphasizes the importance of measurable outcomes, ensuring that interventions not only exist but are demonstrably effective. Special education encompasses a variety of services aimed at addressing the unique needs of students facing physical, intellectual, or emotional difficulties, often requiring collaboration among educators, parents, and administrators.
Professionals in this field utilize specific research designs, such as experimental group comparisons and single subject experimental designs, to assess the efficacy of interventions. By collecting and analyzing data, they can determine whether the right services are being applied effectively. The Council for Exceptional Children (CEC) provides standards to evaluate whether a practice qualifies as evidence-based, requiring multiple supportive studies and thorough descriptions of the research context and methodology.
Understanding evidence-based practices is crucial for making informed decisions in special education, as these interventions can significantly impact a student's ability to thrive in the classroom. This process fosters a collaborative environment by providing objective data that can mitigate potential conflicts among stakeholders, ultimately aiming to enhance educational outcomes for students with special needs.
Abstract
One of the leading buzzwords across many different professional fields is "evidence-based," and special education is no exception. Whereas in the past the focus was on providing support services to students with special needs, the emphasis has now shifted to providing effective services, meaning that it is important to have measurable and relevant data about the outcomes of those services. The goal is to be able to demonstrate that an intervention is both helpful and effective; otherwise, there is little point in providing it.
Overview
Special education is an umbrella term that encompasses many different services, also referred to as interventions. What these all have in common is that they are efforts to provide support and assistance to students who have one or more challenges in learning. These challenges can be physical, intellectual, emotional, or some combination of these (Collins et al., 2017). For example, a single student might have both dyslexia (a learning disorder that impairs the ability to read and process written words) and attention deficit disorder, which makes it difficult to concentrate for extended periods of time. The role of a special education professional is to help diagnose the challenges a student faces and then work with parents, teachers, schools, and administrators to develop special education services that will support the student. This can be complex work, because there are often differences of opinion among the various stakeholders about what services are needed and can practically be provided; parents may want a high level of support, while administrators may have concerns about the cost of providing such support and how much it would add to teachers' workloads. These conflicts can be easier to resolve when there is objective data showing the efficacy of a proposed intervention, because then it is harder to argue against providing it. Conversely, if there is a dearth of evidence supporting an intervention, it is easier to show that the expense and effort of providing it are not justified by a benefit that is purely speculative (Trader et al., 2017).
There are many different interpretations that can be given to the phrase "evidence-based." As far as special education is concerned, two of these are most important. Put simply, the use of evidence-based information in special education raises the following questions: Are the correct services or interventions being applied, and, if so, are they being applied in an effective manner? An analogy might be found in the way a doctor prescribes medicine to cure an illness. The concern would be twofold: whether the right medicine is being prescribed, and whether it is being administered in the appropriate way. Special education professionals therefore collect and analyze data to determine which services are usually effective and how they should be provided in order to produce the maximum benefit (West et al., 2016).
Further Insights
One might reasonably ask how it is determined whether a practice is evidence-based. This determination is usually made by reference to a set of professional standards, which are essentially codifications of the expectations of a group of experts in the field. One frequently cited set of standards is that developed by the Council for Exceptional Children (CEC), a United States-based organization with a mission of supporting children with special needs. Under the CEC standards, two types of research design can be included under the evidence-based practice umbrella: experimental group comparisons and single subject experimental designs. These two designs are considered acceptable sources of evidence-based information because, under each design, it is possible to infer causality if the research has been designed appropriately and then conducted according to established norms.
Inferring causality means that if a research study or experiment shows a correlation between two variables, such as participation in an intervention and an increase in grade point average, and the correlation is strong enough, it can be interpreted as a causal relationship (Kozleski, 2017). A causal relationship exists when a change in one variable consistently produces a corresponding change in another variable; for example, if a student stopped receiving an intervention (a change in one variable) and the student's grade point average subsequently declined (a change in another variable), the first change may have caused the second.
Causal relationships cannot be assumed every time there is a correlation; in the previous example, it might be that the decline in the student's grade point average was actually due to family disruption caused by divorce, and the fact that this occurred around the same time as the termination of a special education intervention was simply coincidence. Before causality can be inferred, the influence of other factors, such as family conflicts, must first be ruled out. When a research design is sound enough that causality may be inferred after such factors are accounted for, its findings are likely to be reliable enough to satisfy evidence-based standards (Spooner, McKissick & Knight, 2017).
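The confounding problem can be made concrete with a minimal simulation; the scenario, numbers, and variable names below are invented for illustration and are not drawn from any study cited here. A hypothetical confounder (family disruption) makes a student both more likely to leave an intervention and more likely to see grades fall, producing a correlation with no direct causal link between the two:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical confounder: 1 = family disruption, 0 = stable home.
disruption = rng.binomial(1, 0.2, n)

# Disruption raises the chance of leaving the intervention and also
# lowers GPA; leaving the intervention itself has no effect here.
left_intervention = rng.binomial(1, 0.1 + 0.6 * disruption)
gpa_change = -0.8 * disruption + rng.normal(0, 0.3, n)

# The raw correlation makes leaving the intervention look harmful...
print(np.corrcoef(left_intervention, gpa_change)[0, 1])

# ...but within each level of the confounder it largely vanishes.
for d in (0, 1):
    group = disruption == d
    print(d, np.corrcoef(left_intervention[group], gpa_change[group])[0, 1])
```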
Experimental group comparison designs involve the use of two different groups of participants. Those participating in the study are randomly assigned to one of two groups: the control group or the experimental group. The control group does not receive the intervention being studied (e.g., tutoring, mentor support, special flashcards for studying), because its purpose is to demonstrate what would happen in the general population if no intervention were provided. The experimental group does receive the intervention.
Often some type of assessment is conducted on each group at the beginning of the experiment, and then the same assessment is given to each group at the end, that is, after the experimental group has received the intervention. The before and after scores for the two groups are then compared. Presumably, the scores for the control group will be fairly close to one another, with the after scores perhaps slightly higher simply because the group had taken the test once before.
For the experimental group, it may happen that the scores after the intervention are much higher than they were before the intervention; this would be an indication that the intervention might have had a beneficial effect, causing the students in the experimental group to perform better than they would have otherwise, as indicated by the scores of the control group. However, this cannot be assumed; the researchers would have to perform statistical analysis on the data to determine whether the gain in the experimental group's scores was significantly larger than the gain in the control group's scores (Therrien et al., 2016).
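To make the analysis step concrete, the following is a minimal sketch of such a group comparison, using invented pre- and post-test scores rather than data from any study cited here; the group sizes, score distributions, and 0.05 significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pre- and post-test scores for 30 students per group.
control_pre = rng.normal(70, 10, 30)
control_post = control_pre + rng.normal(2, 5, 30)    # small practice-effect gain
treatment_pre = rng.normal(70, 10, 30)
treatment_post = treatment_pre + rng.normal(8, 5, 30)  # larger gain after intervention

# Compare gain scores (post minus pre) between the two groups.
control_gain = control_post - control_pre
treatment_gain = treatment_post - treatment_pre
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The experimental group's gain is statistically significant.")
else:
    print("No statistically significant difference between the groups.")
```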
Only if the difference is determined to be statistically significant will the researchers conclude that there is a correlation between the intervention and the improvement in scores. This is the type of rigorous testing and analysis that research must be put through before it will satisfy standards for evidence-based information. Single subject experimental designs differ in mechanics, but the principles are the same. Instead of multiple individuals being tested and having their results compared, single individuals are studied repeatedly over time, and their scores at different intervals are compared to determine if statistically significant differences exist.
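As an illustration of how a single subject's phases might be compared, the sketch below computes the percentage of nonoverlapping data (PND), one common summary metric for a simple baseline/intervention (A-B) design; the scores are invented, not taken from any study cited here.

```python
# Hypothetical repeated measurements of one student: a baseline (A)
# phase with no intervention, then an intervention (B) phase.
baseline = [42, 45, 40, 44, 43]
intervention = [50, 55, 53, 58, 60, 57]

# PND: the share of intervention-phase points that exceed the best
# baseline point (for a behavior the intervention should increase).
pnd = sum(score > max(baseline) for score in intervention) / len(intervention)
print(f"PND = {pnd:.0%}")  # 100% here, suggesting a strong effect
```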
Issues
Apart from the research design used, there are a number of other quality-related factors used to evaluate whether research meets the standards of evidence-based practices. While a study may not score high on all of these factors, it must perform reasonably well on most of them in order to be considered an appropriate basis for special education interventions (Bradley Williams et al., 2017). First, there are several design-related factors to consider; these are distinguishable because they are in place before the study is conducted. Information about the context of the research must be included in sufficient detail to allow readers to compare that context with their own situation.
For example, if a study was conducted in a religious private school with a ratio of three students per teacher, then the findings might be less applicable to a school where classes have a minimum of thirty students. In addition to the context, there must also be information describing the participants in the research study. As with the context, this will help readers decide if the group being studied in the research is sufficiently similar to their own setting; a study of literacy support services provided to high school seniors with attention deficit hyperactivity disorder would be difficult to apply to a class of kindergarteners with delayed reading ability.
The study must also describe in detail the person or persons delivering the intervention to the group being studied. This individual is sometimes referred to as the agent. It can make a difference to the applicability of a study whether the agent is a regular classroom teacher, a special education teacher, a guidance counselor, or some other person, so the more detail about the agent that a study provides, the greater the chances that the study can meet evidence-based criteria. The final design-related aspect that must be described is the procedure of the study itself: the researchers must explain how the study was carried out. This assures readers that they understand the study well enough to replicate it if necessary (Travers, 2017).
In addition to the foregoing factors, there must be sufficient explanation of fidelity, internal validity, outcome measures, and data analysis. These factors are distinguishable because they occur during and after the study. Fidelity means that the way the study was conducted conforms to the way it was meant to be conducted when it was designed; the design and the implementation must be congruent with one another, and this must be explained in the study. Internal validity refers to the degree to which a study's design rules out alternative explanations for its results; in the CEC standards, for instance, a study must show the effect of the intervention occurring at a minimum of three points in time, as a single instance is not enough to satisfy the standards (Powell, 2015). For outcome measures, there must be an adequate amount of information provided about the results of the study, even results that do not support the focus of the research or that appear tangential to it. Finally, the methods of data analysis must be described in detail. These descriptions should show that the appropriate types of analysis were selected and that they were performed correctly (Guckert, Mastropieri & Scruggs, 2016).
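One way to picture how these quality factors combine is as a checklist that a study must mostly satisfy; the indicator names and the "most of them" threshold below are illustrative assumptions, not the CEC's actual scoring rules.

```python
# Hypothetical quality indicators, loosely modeled on the factors
# described above (context, participants, agent, procedures, fidelity,
# internal validity, outcome measures, data analysis).
INDICATORS = [
    "context described", "participants described", "agent described",
    "procedures described", "fidelity reported", "internal validity shown",
    "outcome measures reported", "data analysis described",
]

def study_qualifies(met: set[str], threshold: float = 0.75) -> bool:
    """Treat a study as usable if it satisfies most of the indicators."""
    return sum(name in met for name in INDICATORS) / len(INDICATORS) >= threshold

# A study meeting six of the eight indicators (75%) just qualifies.
print(study_qualifies({
    "context described", "participants described", "agent described",
    "procedures described", "fidelity reported", "internal validity shown",
}))  # True
```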
It is important to understand how the various pieces of evaluation fit together to form the larger process of evaluating evidence-based information in special education. For an intervention to be considered evidence-based, there must be a sufficient number of existing studies that support the efficacy of the intervention; some standards require at least three studies, while others require five or more. In addition, each of the supporting studies must itself meet the criteria for evidence-based information in order to count toward the required number of supporting studies. Only after an intervention has enough supporting studies, each of sufficient quality, can it properly be considered as meeting the defining criteria of being evidence-based.
When evidence-based information is used to assess special education interventions, the outcome is not a binary choice between information that is adequate or inadequate; there are several degrees of gradation. Under the CEC system, an evaluation may conclude that an intervention or practice is evidence-based; is potentially evidence-based; has insufficient evidence to make a determination; has mixed results (both beneficial and harmful); or has negative results.
The CEC system goes a step further by employing a detailed rubric that is used to grade practices consistently. The rubric lists the requirements or characteristics of each evaluation outcome. This helps evaluators make decisions that are reliable and reproducible, rather than arbitrary or confusing. The provision of special education services is an endeavor that is fraught with opportunities for conflict; it helps, therefore, to have clearly defined criteria supporting the research that has been used to develop evidence-based interventions. Such criteria help the different parties agree on the appropriateness of a given practice.
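A simplified sketch of this grading logic might look like the following; the study-count thresholds and category rules are illustrative assumptions and are much coarser than the actual CEC rubric. It assumes each supporting study has already passed the quality screen described earlier and has been labeled positive or negative.

```python
def classify_practice(positive: int, negative: int, required: int = 3) -> str:
    """Map counts of qualifying studies to a CEC-style category."""
    if positive and negative:
        return "mixed results"
    if negative:
        return "negative results"
    if positive >= required:
        return "evidence-based"
    if positive > 0:
        return "potentially evidence-based"
    return "insufficient evidence"

print(classify_practice(positive=4, negative=0))  # evidence-based
print(classify_practice(positive=1, negative=0))  # potentially evidence-based
print(classify_practice(positive=2, negative=1))  # mixed results
```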
Special education services can mean the difference between a child thriving in the classroom and being forced to leave the mainstream environment to receive individual instruction. Students and parents often object to this option because of the social isolation it entails. Schools tend to resist pulling students out of class for special instruction because of the time and money it requires.
Terms & Concepts
Causality: A relationship between two events such that the occurrence of one event is dependent upon the prior occurrence of the other event. Research often seeks to identify causal relationships between special education interventions and positive effects on the students who receive the interventions.
Experimental Group Comparison Design: A research design in which participants are randomly assigned to a control group or an experimental group. The experimental group receives an intervention or treatment, and then the groups are compared to see if there are significant differences in outcomes between the groups.
Internal Validity: The degree to which a study's design and conduct rule out alternative explanations for its results, allowing causal conclusions to be drawn.
Methodologically Sound: A measure of the quality of a research design that evaluates whether the appropriate methodology was selected for the study, as well as whether that methodology was implemented in a logical fashion.
Response to Intervention: A multi-tiered approach to identifying students with learning disabilities and determining what kind of support to provide them.
Single Subject Experimental Design: A research design in which the subject being studied serves as his or her own control, rather than a separate group or person functioning as the control.
Bibliography
Bradley Williams, R., Bryant-Mallory, D., Coleman, K., Gotel, D., & Hall, C. (2017). An evidence-based approach to reducing disproportionality in special education and discipline referrals. Children & Schools, 39(4), 248–251. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=125413213&site=ehost-live
Collins, L. W., Sweigart, C. A., Landrum, T. J., & Cook, B. G. (2017). Navigating common challenges and pitfalls in the first years of special education: Solutions for success. TEACHING Exceptional Children, 49(4), 213–222. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=122564660&site=ehost-live
Guckert, M., Mastropieri, M. A., & Scruggs, T. E. (2016). Personalizing research: Special educators' awareness of evidence-based practice. Exceptionality, 24(2), 63–78. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=113739016&site=ehost-live
Kozleski, E. B. (2017). The uses of qualitative research: Powerful methods to inform evidence-based practice in education. Research and Practice for Persons with Severe Disabilities, 42(1), 19–32.
Powell, S. R. (2015). Connecting evidence-based practice with implementation opportunities in special education mathematics preparation. Intervention in School and Clinic, 51(2), 90–96. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=110478753&site=ehost-live
Spooner, F., McKissick, B. R., & Knight, V. F. (2017). Establishing the state of affairs for evidence-based practices in students with severe disabilities. Research and Practice for Persons with Severe Disabilities, 42(1), 8–18. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=121560006&site=ehost-live
Therrien, W. J., Mathews, H. M., Hirsch, S. E., & Solis, M. (2016). Progeny review: An alternative approach for examining the replication of intervention studies in special education. Remedial and Special Education, 37(4), 235–243. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=116659277&site=ehost-live
Trader, B., Stonemeier, J., Berg, T., Knowles, C., Massar, M., Monzalve, M., & Horner, R. (2017). Promoting inclusion through evidence-based alternatives to restraint and seclusion. Research and Practice for Persons with Severe Disabilities, 42(2), 75–88. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=122998044&site=ehost-live
Travers, J. C. (2017). Evaluating claims to avoid pseudoscientific and unproven practices in special education. Intervention in School and Clinic, 52(4), 195–203. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=121144317&site=ehost-live
West, E. A., Travers, J. C., Kemper, T. D., Liberty, L. M., Cote, D. L., McCollow, M. M., & Stansberry Brusnahan, L. L. (2016). Racial and ethnic diversity of participants in research supporting evidence-based practices for learners with Autism Spectrum Disorder. Journal of Special Education, 50(3), 151–163. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=118699110&site=ehost-live
Suggested Reading
Hudson, R. F., Davis, C. A., Blum, G., Greenway, R., Hackett, J., Kidwell, J., ... Peck, C. A. (2016). A socio-cultural analysis of practitioner perspectives on implementation of evidence-based practice in special education. The Journal of Special Education, 50(1), 27–36. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=114092194&site=ehost-live
Ledford, J. R., Barton, E. E., Hardy, J. K., Elam, K., Seabolt, J., Shanks, M., ... Kaiser, A. (2016). What equivocal data from single case comparison studies reveal about evidence-based practices in early childhood special education. Journal of Early Intervention, 38(2), 79–91. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=116181443&site=ehost-live
Russo-Campisi, J. (2017). Evidence-based practices in special education: Current assumptions and future considerations. Child & Youth Care Forum, 46(2), 193–205. Retrieved January 1, 2018 from EBSCO Online Database Education Source. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=121412481&site=ehost-live