For this Discussion, select one of the articles from the reading list and consider several classifications of group research designs. Post your response to the following: Describe which groups are compared in the research. Then, classify the research design as follows:

  1. By explaining whether the study is pre-experimental (cross-sectional, one-shot case study, and longitudinal), experimental (control group with pretest and posttest, posttest only, or four-group design), or quasi-experimental (comparing one group to itself at different times or comparing two different groups)
  2. By indicating what the researchers report about limitations of the study
  3. By explaining concerns you have regarding internal validity and the ability of the study to draw conclusions about causality
  4. By explaining any concerns you have about the generalizability of the study (external validity) and what aspect of the research design might limit generalizability

Please use the resources to support your answer.


250-300 words



Resources:

Bauman, S. (2006). Using comparison groups in school counseling research: A primer. Professional School Counseling, 9(5), 357–366.

Kohl, P. L., Kagotho, J., & Dixon, D. (2011). Parenting practices among depressed mothers in the child welfare system. Social Work Research, 35(4), 215–225.

Sheri Bauman, Ph.D., is an assistant professor in the Department of Educational Psychology at the University of Arizona, Tucson. E-mail: [email protected]

This article describes comparison group research designs and discusses how such designs can be used in school counseling research to demonstrate the effectiveness of school counselors and school counseling interventions. The article includes a review of internal and external validity constructs as they relate to this approach to research. Examples of relevant research using this design are presented.

The lack of a sound research base in the field of school counseling has been lamented for many years (Allen, 1992; Bauman, 2004; Cramer, Herr, Morris, & Frantz, 1970; Lee & Workman, 1992; Loesch, 1988; Whiston & Sexton, 1998; Wilson, 1985). The recent emphasis on research in No Child Left Behind legislation (2002) and the ASCA National Model® (American School Counselor Association, 2005) has moved the need for rigorous empirical research to the forefront. The ASCA National Model stresses that school counseling programs include learning objectives that are based on measurable student outcomes and that are data-driven and accountable for student outcomes. The focus on data and measurement makes clear that school counselors can no longer avoid conducting research and using empirical research to make decisions.

The nature and goals of such research are the subject of a recent debate. Brown and Trusty (2005) have contended that research should focus on demonstrating that well-designed and appropriate interventions used by school counselors are effective, and they further argued that research investigating whether comprehensive school counseling programs increase student academic achievement is not productive given the presence of numerous confounding influences.
Sink (2005) disagreed, noting that school counselors are expected to contribute to the total educational effort to raise academic achievement. He advised that research to examine how school counselors influence achievement can be conducted using carefully selected methodologies, and while not definitively establishing causality, such research can provide strong evidence of the impact of comprehensive school counseling programs on student achievement.

In their review of school counseling outcome research from 1988 to 1995, Whiston and Sexton (1998) found that of the 50 published studies they located, most provided only descriptive data, used convenience samples, lacked control or comparison groups, used outcome measures of questionable reliability and validity, and did not monitor adherence to intervention protocol. Such studies do little to add to the knowledge base of the profession, and they do not meet established standards for scientific rigor.

In an era of limited resources for education and "accountability" becoming a watchword, counselors must demonstrate how they contribute to the academic success of students. Heartfelt letters of appreciation and positive comments by constituents, while sincere, will not convince stakeholders and holders of purse strings of the value of the profession. School counselors, occupied by providing services in schools, often neglect to demonstrate their importance until their positions are considered for reduction. This reactive approach is less likely to sway opinion than ongoing proactive efforts to use research effectively. Collecting, analyzing, and disseminating data that provide evidence of counselors' effectiveness are consistent with the professional goals and models that define the profession.
Under No Child Left Behind, school counselors (along with other education professionals) are called upon to demonstrate their effectiveness using quantitative data such as evidence of academic achievement, attendance and graduation rates, and measures of school safety (McGannon, Carey, & Dimmitt, 2005). No Child Left Behind and the ASCA National Model both emphasize the importance of scientific, rigorous, well-designed research as an essential component of modern school counseling programs. These guidelines indicate that conducting research is no longer a peripheral activity that a few counselors might attempt but is a central part of the role of all school counselors. The ASCA National Model says the following about data:

  Data analysis: Counselors analyze student achievement and counseling-program-related data to evaluate the counseling program, conduct research on activity outcomes and discover gaps that exist between different groups of students that need to be addressed. Data analysis also aids in the continued development and updating of the school counseling program and resources. School counselors share data and their interpretation with staff and administration to ensure each student has the opportunity to receive an optimal education. (ASCA, 2005, p. 44)

Although most school counselors have had a graduate course in research methods (and perhaps statistics), these introductory courses typically are designed to prepare students to be critical readers of research. Even among those few students who conduct research in their graduate training programs, it is the rare school counselor who continues to do research once he or she is a practicing school counselor. Responsibility for the absence of research from school counselors' job description lies not only with the counselors, but also with the administrators and district officials who do not require or value research.
No Child Left Behind has raised the awareness of educators in all fields that accountability is expected; data are the foundation for educational decisions, including decisions about counselors. In this climate, there is greater support (some might say pressure) for research.

There are a number of different kinds of research, and a description of all relevant types is beyond the scope of this article. The purpose of this article is to provide a rationale for using control and comparison group designs in school counseling research. I begin by defining some basic terminology and reviewing the concept of validity, which is fundamental to all research. I then briefly discuss single-group pre-post research designs, which often are used in schools because they are relatively easy to conduct. The main focus of the article is comparison group designs, and these will be described in more detail. Finally, I provide a discussion of relevant research using comparison group designs as examples of this research strategy.

DEFINITIONS

Several technical terms are used in this discussion of research, and it is important that the reader be clear about their meaning. Researchers study variables that can assume different values. The independent variable is the intervention variable, or the variable manipulated by the researcher. The dependent variable is the outcome variable, the effect. In a study of the effect of participation in extracurricular activities on graduation rates, the independent variable is the participation (which could be defined as the number of activities, or the number of hours per week of involvement, or a yes/no category) and the graduation rate is the dependent variable. Researchers also may refer to moderator variables, which are variables that influence the relationship between the independent and dependent variables.
Parental education might moderate the relationship between extracurricular participation and graduation rates, and it then would be a moderator variable.

Statistical significance means that the obtained results are unlikely to have occurred by chance. If results are statistically significant at p < .05, the results would be obtained by chance in less than 5 out of every 100 cases. The counselor/researcher should keep in mind that with large samples, results might be statistically significant but not practically significant. Let us imagine that a new program for elementary math skills were implemented in several schools in a large district. At the end of a school year, the difference between the achievement scores of those who used the program and those who continued the usual math program was statistically significant. One might conclude that the new approach is better. But what if the difference in scores were only .10 (grade equivalent)? Depending on the cost of the program, one might conclude that although the difference is statistically significant (p < .05), in practice the difference or gain is not substantial enough to justify a large expenditure on the new program. There are ways to describe the practical significance of the findings through the use of effect sizes; these are discussed in Sink and Stroh's article in this issue ("Practical Significance: The Use of Effect Sizes in School Counseling Research").

VALIDITY

Regardless of the design or method of research, school counselors must be concerned with the validity of the research they conduct or read. In general, validity refers to the degree of confidence we can have in the findings of a research study. If a study does not demonstrate adequate validity, the results are of questionable application and should not be the basis for decisions.
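The contrast between statistical and practical significance described above can be illustrated with a short simulation. This sketch is not part of the article; the scores are invented, and a simple Welch t statistic and Cohen's d stand in for what a statistics package would compute.

```python
# Illustration (invented data): with very large samples, a tiny true
# difference can reach statistical significance while the effect size
# remains small in practical terms.
import math
import random
import statistics

random.seed(42)

# Two groups of 5,000 whose true means differ by only 0.10 (the grade
# equivalent from the example), with standard deviation 1.0.
new_program = [random.gauss(5.10, 1.0) for _ in range(5000)]
usual_program = [random.gauss(5.00, 1.0) for _ in range(5000)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def cohens_d(a, b):
    """Effect size: standardized mean difference."""
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

t = welch_t(new_program, usual_program)
d = cohens_d(new_program, usual_program)
# With n = 5,000 per group, |t| comfortably exceeds the p < .05 cutoff
# even though d is only about 0.1 -- a small effect by Cohen's benchmarks.
print(f"t = {t:.2f}, d = {d:.2f}")
```

The same 0.10 difference in a sample of 30 per group would almost certainly not be statistically significant, which is why effect sizes, not p values alone, speak to practical importance.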
Internal validity refers to whether the observed change in the dependent (outcome) variable is due to the independent variable and only the independent variable. For example, if we are interested in whether student attendance (our dependent variable) improved for high school freshmen when a new orientation program was conducted by the school counselors (our independent variable; the orientation program), we want to be sure that no other variables could explain the obtained results. If, in addition to the new orientation program for freshmen, the school employed additional truant officers, we could not be sure that the change in attendance was due only to the new orientation program and not to the truant officers' activities. The internal validity of the study would be compromised.

External validity refers to the degree that the results of one study can generalize to (apply to) other people in other places or times. School counselors reading the results of research in a journal want to know whether they can reasonably expect that the reported results would apply in their own setting with their own students. Researchers hope that their results will be useful to others in other locations and times. If the new orientation program improved attendance for students in one school or district, the issue of external validity asks the question of whether other schools are likely to achieve the same results with the same program. The threats to external validity are related to the population from which the sample was selected (was it representative, did it include members of all groups of interest?)
and the context in which the study was conducted (was it in a laboratory or a school, did participants know they were involved in an experiment, did the researcher convey the hoped-for outcomes?). These two types of external validity often are referred to as population validity and ecological validity. A study conducted at a private school with European American upper-class students is of questionable validity for an inner-city school with a large percentage of minority students.

Another factor in external validity is the nature of the research itself. If the students in the experimental group were aware they were receiving a special program different from that of the control group, their efforts may have been changed by that knowledge. In addition, the researcher must incorporate a way to ensure that interventions delivered in a naturalistic school setting are faithful to the protocol of the experiment. If the intervention is a series of lessons, for example, the researcher must be sure that the lessons are delivered as described in the manual. If each teacher or counselor makes changes in the program, external validity is compromised by the absence of treatment (intervention) fidelity. Researchers can increase the external validity of their work by attending carefully to sample selection and to the conduct of the experiment.

Campbell and Stanley (1963) described important threats to internal validity. These are conditions that provide possible alternative explanations for obtained results, or ways that events or conditions other than the independent variable may explain observed changes in the dependent variable. The following is a brief review of those threats.

History

In this context, history refers to any event not planned or part of the research that occurs during the research.
In the example above regarding the new orientation program, let's imagine that the principal decides to visit each freshman English class during the first week of school. Although not part of the research, this event (history) might be an alternative explanation for the difference in attendance rates. History is the greatest threat to internal validity when it affects only one group of research participants. If your research design used a comparison group (last year's freshmen) who had not experienced the historical event, your internal validity would be reduced. However, if you were studying whether the attendance of males vs. females increased when the new orientation program was implemented, and both males and females experienced the visits by the principal, internal validity would not be affected. When research is being conducted in schools, there are often events that occur outside the counselors' control, and the counselor must be alert to these competing explainers of results.

Maturation

Human beings change and develop over time. This means that some changes will occur independently of any intervention. For example, a middle school counselor might provide a series of guidance lessons on conflict resolution to seventh graders. If the counselor were to measure student attitude toward fighting, or the number of fights before and after the lessons, results might show a decrease in conflict after the lessons. However, maturation might be an alternative explanation for the results; students may be exhibiting less physical conflict because they are developing cognitively and socially, not because of the lessons. In the discussion of comparison group designs later in this article, I suggest designs that minimize the influence of this threat to internal validity.

Testing

Researchers may want to give participants a pretest to determine the base rate of whatever behavior or attitude is of interest.
If a counselor were going to do a series of guidance activities to reduce racial/ethnic stereotyping in a school, he or she may want to get a measure of the degree of stereotyping that students do at the start of the project. However, the pretest may sensitize participants to the issue of stereotyping, and that may influence their scores on the posttest. This is called the testing effect.

Instrumentation

A counselor who is leading an anti-bullying program at her school wants to measure the effect of the program on student bullying behavior. She knows that much bullying occurs on the playground, and she uses a behavioral observation method to determine the frequency of playground bullying before the program begins, and after the program has been in place for a semester. The behavioral observation method requires several observers, and it may be that some observers are more alert than others. Or, the observers may become more adept with practice. If the observers are not the same at both measurement points, instrumentation is a threat to internal validity. The changes may not be a result of the children's behavior, but of the observers' skill.

Regression to the Mean

When a counselor is interested in extreme groups (students high or low in a particular characteristic), a pretest-posttest design is vulnerable to this threat. We know that on subsequent testing, both high and low scores tend to become closer to the mean (average score). So observed changes may be due to this tendency rather than any real change in the characteristic being measured.

Selection

In some research, counselors are studying more than one group of students (e.g., classes, genders).
If you are trying a new program with one class and using another class as a comparison group, the groups might be different on some other factors (e.g., reading level, intelligence) that can affect the results.

Mortality

This threat to internal validity refers to loss of participants during the course of the study. In a comparison group design, this becomes a problem when mortality is greater in one group than in another. For example, in a study where the comparison group is another school, there might be asbestos discovered in one of the schools and many students transfer out of that school. That group would have greater mortality than the other group.

Selection Interaction

It is possible that one of the other threats to internal validity combines with selection. This means that one of the comparison groups is affected by those threats (e.g., history, maturation) differently than other groups in the study. For example, an elementary school counselor implements a new program to teach empathy skills to fifth-grade classes. Lessons are given throughout the school year, and a nearby school serves as a control group. On an outcome measure, the counselor finds that at the end of the year, girls show more improvement in empathy than boys do. It might be that those findings are because girls of this age tend to develop these skills naturally at this age, while boys develop them later. The findings may reflect a selection (grade level) by maturation (girls faster than boys) effect.

School counselors doing research must be alert to potential threats to internal validity. While it is impossible to avoid all such threats, especially when doing the research with students in schools (rather than in a laboratory), if results are to be meaningful, the researcher must acknowledge them. In some cases, there are statistical methods to control for the influence of these threats.
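The regression-to-the-mean threat described above lends itself to a small simulation. This sketch is illustrative only and not part of the article; all numbers are invented. The point is that a group selected for extreme pretest scores "improves" on retesting even with no intervention at all.

```python
# Illustration (invented data): students selected for extreme pretest
# scores drift toward the average on a second testing, because part of
# an extreme score is measurement luck rather than real standing.
import random

random.seed(1)

def observed_score(true_skill):
    """A test score = stable true skill + test-day measurement noise."""
    return true_skill + random.gauss(0, 10)

# 1,000 students with normally distributed true skill (mean 100).
true_skills = [random.gauss(100, 10) for _ in range(1000)]
pretest = [observed_score(s) for s in true_skills]

# Select the 50 lowest pretest scorers -- the group a counselor might
# target for help -- and retest them with NO intervention at all.
ranked = sorted(range(1000), key=lambda i: pretest[i])
lowest = ranked[:50]
posttest = [observed_score(true_skills[i]) for i in lowest]

pre_mean = sum(pretest[i] for i in lowest) / 50
post_mean = sum(posttest) / 50
# The posttest mean moves back toward 100 purely because the extreme
# pretest scores were partly bad luck, not real deficits.
print(f"pretest mean of lowest 50: {pre_mean:.1f}")
print(f"posttest mean (no intervention): {post_mean:.1f}")
```

A single-group pretest-posttest design would misread this artifact as program effectiveness, which is one reason the comparison group designs discussed next are more rigorous.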
One of the advantages to publishing the results of research is that school counselors do not have to reinvent the wheel. That is, we read about research in the hope that findings will generalize to other students, settings, and times. Generally, results should be replicated in other contexts so that it is not just a single study but a body of research that establishes the generalizability of findings. Let us assume that the original study of the orientation program was conducted in a large, urban high school. Will the same program have the same outcome in a small, rural high school and with a different racial/ethnic composition?

SINGLE-GROUP PRETEST-POSTTEST DESIGN

School counselors have the advantage of conducting research in settings that reflect real students. Some programs of interest to counselors cannot be effectively studied in a laboratory; if they could be, we would question the external validity of the findings. Conducting research in a school also has disadvantages, not the least of which is the inability to control many factors in a research process. For example, researchers may not be able to randomly assign students to classrooms, and they may have to contend with numerous historical events that occur during a research study. Nevertheless, the findings are clearly relevant and applicable to the school of interest.

One research strategy that is relatively uncomplicated to do is a single-group pretest-posttest design. In this design, the school counselor implements a program (a series of guidance lessons or counseling groups to address a particular topic). Prior to starting the program, the students take a pretest so their baseline levels can be determined. The program is delivered, and then the students take a posttest. The improvement in scores from pretest to posttest is used to measure the impact of the program. At first glance, that seems to be a logical approach.
One advantage of the pretest-posttest design is that one does not have to include a control group, and the pretest information allows the counselor to determine differential effects (e.g., the lessons increased tolerance in boys more than in girls).

However, this design is particularly vulnerable to the threats to internal validity described above. How can the counselor demonstrate that it was the program that caused the change in scores, and not maturation, history, testing, or regression to the mean? For example, let us imagine that the lessons were developed to increase tolerance toward physically handicapped students. During the time the weekly lessons were being presented, there was a television special on the topic that many students watched. Or perhaps there were classroom disruptions during the time the lessons were presented. Were the observed changes the result of the TV program, the disruptions, or the lessons?

COMPARISON OR CONTROL GROUP DESIGN

A more rigorous design that avoids many of the threats to internal validity inherent in pretest-posttest designs is the control group or comparison group design. A control group is a group of participants who get no intervention; if a group gets a different intervention, then we call it a comparison group. History and maturation will affect both the experimental and the comparison groups, so any differences in the outcome variable cannot be biased by those threats. Testing and regression to the mean also are going to influence both groups, so observed differences can be attributed to the intervention rather than these alternative explanations. Of the 50 school counseling outcome studies published between 1988 and 1995, only 26% used this design (Whiston & Sexton, 1998). The authors of the review concluded that more research of this kind is needed, and they recommended the wait-list control group strategy used often in other counseling research.
In the school setting, this means that classes, schools, or students who do not receive the intervention (program, activity) during the research period will receive it at a later time (the following semester, year, etc.).

There are several ways in which comparison groups can be created. The first is random assignment. That means that all eligible participants are randomly assigned to one of the experimental conditions (intervention, comparison group, control group). When this is not possible, the researcher can use preexisting groups (e.g., already formed or intact classes) that are matched on key variables, such as reading level or socioeconomic status. An investigation of the impact of a new "transition to kindergarten program" might use current students in kindergarten as the experimental group and students from a previous year (now first graders) at the same school as the comparison group. The assumption in this case is that previous students resemble current students on the relevant characteristics. A final method would be to use pretest scores to ensure that the groups are matched on key variables prior to the introduction of the intervention. After creating matched groups, the researcher then can randomly assign the groups to the intervention conditions. If the intervention might have a differential effect based on levels of test anxiety, the researcher can administer a pretest of test anxiety; create groups of high-, average-, and low-anxiety students; and create two groups with equal representation from the different levels of anxiety. Once the groups are created, a random procedure can be used to assign one group to receive the intervention (e.g., instruction in progressive relaxation) and the other group to serve as the control group.

Random Assignment

The most rigorous comparison group design utilizes random assignment to condition (experimental group or control/comparison group).
Random assignment means that every participant has an equal chance of being in the experimental condition. Most statistical software packages include features that allow the researcher to randomize assignment in a scientific manner. There are also sites on the Internet that a counselor might be able to locate and use if such software is not readily available. In most educational settings, it is usually not possible to randomly assign students to one or another class or program. However, random assignment can be accomplished by using classes or schools as sampling units. For example, in a study evaluating the effects of a new drug prevention curriculum for middle school students, if there is more than one school interested in participating, the schools can be randomly divided into two groups (using a number of randomization procedures), with one group designated as the experimental group (the schools receiving the curriculum) and the other as the comparison or control group (which will not receive the curriculum at this time). If only one school is going to participate, the same procedure can be applied to classrooms.

In some cases, it may not be possible to randomly assign classrooms to the intervention or non-intervention groups. If there are two schools or two classrooms that are potential participants, and only one is interested in testing the curriculum, the other can become the comparison group. The problem with this method is that the two groups (schools, classes) may be different prior to the curriculum implementation in ways that affect the outcome (e.g., intelligence, reading level).
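The procedure described above, randomly dividing intact classrooms between an experimental condition and a wait-list control, can be sketched in a few lines. This example is illustrative and not from the article; the classroom names are invented.

```python
# Illustration (invented classroom names): random assignment using
# intact classrooms as the sampling unit.
import random

random.seed(7)

classrooms = ["Room 101", "Room 102", "Room 103",
              "Room 104", "Room 105", "Room 106"]

# Shuffle a copy, then split it in half: the first half receives the
# curriculum now; the second half is a wait-list control that will
# receive it in a later semester.
shuffled = classrooms[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
experimental = sorted(shuffled[:half])
wait_list = sorted(shuffled[half:])

print("experimental:", experimental)
print("wait-list control:", wait_list)
```

Because every classroom had an equal chance of landing in either condition, preexisting classroom differences are distributed by chance rather than by anyone's choice, which is the whole point of random assignment.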
If, however, the researcher is able to administer the pretests and posttests to both groups (or obtain data on both groups), these differences can be identified and controlled for statistically. That means that the analyses can determine whether the obtained differences would exist over and above the influence of these potentially influential variables. If reading level were a possible confounding variable, the researcher can use a statistical analysis called analysis of covariance, in which reading level is designated the covariate. The results of this analysis will reveal whether observed differences in the dependent variable (the outcome) are significant when differences in reading level have been taken into account. The researcher can include more than one covariate if several attributes are potential competing explainers of results.

Measurement Concerns

How are variables measured? This is a basic question that researchers must address when designing the study. For the results to be valid, the measures must be reliable and valid. Reliability refers to the consistency of the scores, and validity relates to whether the measure measures what it purports to measure and for whom it does so. Researchers need to use considerable care in the selection of the instruments to be used. While researcher-designed questionnaires may be used, it is essential to establish the reliability and validity of such measures. Published measures generally will report such data so that the researcher can make informed decisions. If other methods of assessment are used (such as observation), those also must be evaluated prior to use, and they must meet the same standards of psychometric adequacy as paper-and-pencil measures. Some studies in the school counseling field have used self-reported student grades as outcome variables. A more precise measure would be to use actual recorded grades from student records.
To take this a step further, grades may be influenced by the grading practices and standards of different teachers; achievement test scores might be a more valid measure to use as a dependent variable.

Data Analysis

The word statistics invokes fear and anxiety in many for whom research is not a frequent activity. Counselors need to know that in the age of computers, the task of analyzing data is far less daunting. Even without the specialized statistical programs used by most researchers, school counselors can utilize statistical features of Microsoft Excel and EZAnalyze (Poynton, 2005), an add-in for Excel. Using these tools, the school counselor can easily obtain descriptive data about the sample (including means and standard deviations) and can disaggregate data by group (e.g., by gender or ethnicity). In addition, the school counselor can perform several analyses, including correlations (to assess the strength of relationships between two variables such as math and reading test scores), t tests (to test the significance of differences between two groups or pretests and posttests for the same group), and analyses of variance (to test differences among more than two groups). The counselor also can obtain tables and graphs directly from the program, allowing for visual presentation of results.

For more complex analyses, many school districts have research departments that can help. And many counselor educators at universities are eager to assist, and can do so even when located at a distance, using e-mail to receive data. Consulting with university researchers is a good idea throughout a research project when school counselors are novice researchers, but it can be especially important during the data analysis and interpretation step. A very useful review of the various analysis options for comparison group designs can be found in Gliner, Morgan, and Harmon (2003).
Reporting Results

Whether the purpose of the research is for program improvement or to comply with mandatory reporting regulations, it is important to present the results clearly and accurately. Much of the audience will be unfamiliar with the terms used, so definitions are essential. It makes sense to begin by stating the research question at the outset, and then describing how you went about answering that question. For example, if the question was, "What is the effect of the new study skills group on student achievement?" you would begin by describing the
