Methods in Behavioral Research, Ch. 4
Which of the variables listed below does a researcher primarily use when attempting to establish validity, specifically construct validity?
· independent variable
· confounding variable
· dependent variable
· participant variable
· extraneous variable
· third variable
Instructions: Our engagement in this discussion will likely touch on learning objectives/competencies 2.2 and 2.4, depending on which variables are selected and on the replies. (a) Briefly define the two concepts (the variable type(s) you select from the list and construct validity) using our text by Cozby and Bates (2015) or another credible source. (b) Then support your view of the two concepts, as they relate to my question, with an example from research related to your topic of interest within psychology. The participation rubric is in the Instructor Policy document. Word count: approximately 1,000 words, excluding citations.
Methods in Behavioral Research, Ch. 4
Chapter 4: Fundamental Research Issues

LEARNING OBJECTIVES
· Define variable and describe the operational definition of a variable.
· Describe the different relationships between variables: positive, negative, curvilinear, and no relationship.
· Compare and contrast nonexperimental and experimental research methods.
· Distinguish between an independent variable and a dependent variable.
· Discuss the limitations of laboratory experiments and the advantage of using multiple methods of research.
· Distinguish between construct validity, internal validity, and external validity.

In this chapter, we explore some of the basic issues and concepts that are necessary for understanding the scientific study of behavior. We will focus on the nature of variables and the relationships between variables. We also examine general methods for studying these relationships. Most important, we introduce the concept of validity in research.

VALIDITY: AN INTRODUCTION
You are likely aware of the concept of validity. You use the term when asking whether the information that you found on a website is valid. A juror must decide whether the testimony given in a trial is valid. Someone on a diet may wonder if the weight shown on the bathroom scale is valid. After a first date, your friend may try to decide whether her positive impressions of the date are valid. Validity refers to truth or accuracy. Is the information on the website true? Does the testimony reflect what actually happened? Is the scale really showing my actual weight? Should my friend believe that her positive impression is accurate? In all these cases, someone is confronted with information and must make a decision about the extent to which the information is valid.

Scientists are also concerned about the validity of their research findings. In this chapter, we introduce three key types of validity:
· Construct validity concerns whether our methods of studying variables are accurate.
· Internal validity refers to the accuracy of conclusions about cause and effect.
· External validity concerns whether we can generalize the findings of a study to other populations and settings.
These issues will be described in greater depth in this and subsequent chapters. Before exploring issues of validity, we need to have a fundamental understanding of variables and the operational definition of variables.

VARIABLES
A variable is any event, situation, behavior, or individual characteristic that varies. Any variable must have two or more levels or values. Consider the following examples of variables that you might encounter in research and your own life.

As you read a book, you encounter the variable of word length, with values defined by the number of letters of each word. You can take this one step further and think of the average word length used in paragraphs in the book. One book you read may use a longer average word length than another. When you think about yourself and your friends, you might categorize the people on a variable such as extraversion. Some people can be considered relatively low on the extraversion variable (or introverted); others are high on extraversion. You might volunteer at an assisted living facility and notice that the residents differ in their subjective well-being: Some of the people seem much more satisfied with their lives than others. When you are driving and the car in front of you brakes to slow down or stop, the period of time before you apply the brakes in your own car is called response time.
You might wonder if response time varies depending on the driver's age or whether the driver is talking to someone using a cell phone. In your biology class, you are studying for a final exam that is very important and you notice that you are experiencing the variable of test anxiety. Because the test is important, everyone in your study group says that they are very anxious about it. You might remember that you never felt anxious when studying for quizzes earlier in the course.

As you can see, we all encounter variables continuously in our lives even though we do not formally use the term. Researchers, however, systematically study variables. Examples of variables a psychologist might study include cognitive task performance, depression, intelligence, reaction time, rate of forgetting, aggression, speaker credibility, attitude change, anger, stress, age, and self-esteem.

For some variables, the values will have true numeric, or quantitative, properties. Values for the number of free throws made, the number of words correctly recalled, and the number of symptoms of major depression would all range from 0 to an actual value. The values of other variables are not numeric, but instead simply identify different categories. An example is gender; the values for gender are male and female. These are different, but they do not differ in amount or quantity.

OPERATIONAL DEFINITIONS OF VARIABLES
A variable such as aggression, cognitive task performance, pain, self-esteem, or even word length must be defined in terms of the specific method used to measure or manipulate it. The operational definition of a variable is the set of procedures used to measure or manipulate it. A variable must have an operational definition to be studied empirically.

The variable bowling skill could be operationalized as a person's average bowling score over the past 20 games, or it could be operationalized as the number of pins knocked down in a single roll. Such a variable is concrete and easily operationalized in terms of score or number of pins. But things become more complicated when studying behavior. For example, a variable such as pain is very general and more abstract. Pain is a subjective state that cannot be directly observed, but that does not mean that we cannot create measures to infer how much pain someone is experiencing. A common pain measurement instrument in both clinical and research settings is the McGill Pain Questionnaire, which has both a long form and a short form (Melzack, 2005). The short form includes a 0 to 5 scale with the descriptors no pain, mild, discomforting, distressing, horrible, and excruciating. There is also a line with end points of no pain and worst possible pain; the person responds by making a mark at the appropriate place on the line. In addition, the questionnaire offers sensory descriptors such as throbbing, shooting, and stabbing; each of these descriptors has a rating of none, mild, moderate, or severe. This is a relatively complex set of questions and is targeted for use with adults. When working with children over 3, a better measurement instrument would be the Wong-Baker FACES Pain Rating Scale. Using the FACES scale, a researcher could ask a child, "How much pain do you feel? Point to how much it hurts."

These examples illustrate that the same variable of pain can be studied using different operational definitions.
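To make the idea of an operational definition concrete, here is a minimal Python sketch (my own illustration, not from Cozby and Bates; the scores and function names are hypothetical) showing how the same underlying variable, bowling skill, could be operationalized in two different ways from the same player's records.

```python
# Hypothetical records for one bowler; all numbers are invented.
recent_game_scores = [148, 156, 171, 139, 162]   # total score per recent game (the text suggests 20 games; 5 shown for brevity)
single_roll_pins = [7, 9, 10, 6, 8]              # pins knocked down on individual rolls

def skill_as_average_score(game_scores):
    """Operational definition 1: average bowling score over recent games."""
    return sum(game_scores) / len(game_scores)

def skill_as_pins_per_roll(pin_counts):
    """Operational definition 2: average number of pins knocked down in a single roll."""
    return sum(pin_counts) / len(pin_counts)

print(skill_as_average_score(recent_game_scores))  # 155.2
print(skill_as_pins_per_roll(single_roll_pins))    # 8.0
```

Both functions measure "bowling skill," but they are different operational definitions and could lead to somewhat different conclusions about who the more skilled bowler is.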
There are two important benefits in operationally defining a variable. First, the task of developing an operational definition of a variable forces scientists to discuss abstract concepts in concrete terms. The process can result in the realization that the variable is too vague to study. This realization does not necessarily indicate that the concept is meaningless, but rather that systematic research is not possible until the concept can be operationally defined. Second, operational definitions help researchers communicate their ideas with others. If someone wishes to tell me about aggression, I need to know exactly what is meant by this term because there are many ways of operationally defining it. For example, aggression could be defined as (1) the number and duration of shocks delivered to another person, (2) the number of times a child punches an inflated toy clown, (3) the number of times a child fights with other children during recess, (4) homicide statistics gathered from police records, (5) a score on a personality measure of aggressiveness, or even (6) the number of times a batter is hit with a pitch during baseball games. Communication with another person will be easier if we agree on exactly what we mean when we use the term aggression in the context of our research.

Of course, a very important question arises once a variable is operationally defined: How good is the operational definition? How well does it match up with reality? How well does my average bowling score really represent my skill?

Construct Validity
Construct validity refers to the adequacy of the operational definition of variables: Does the operational definition of a variable actually reflect the true theoretical meaning of the variable? If you wish to scientifically study the variable of extraversion, you need some way to measure extraversion. Psychologists have developed measures that ask people whether they like to socialize with strangers or whether they prefer to avoid such situations. Do the answers on such a measure provide a good or true indication of the underlying variable of extraversion? If you are studying anger, will telling male college students that females had rated them unattractive create feelings of anger? Researchers are able to address these questions when they design their studies and examine the results.

RELATIONSHIPS BETWEEN VARIABLES
Many research studies investigate the relationship between two variables: Do the levels of the two variables vary systematically together? For example, does playing violent video games result in greater aggressiveness? Is physical attractiveness related to a speaker's credibility? As age increases, does the amount of cooperative play increase as well?

Recall that some variables have true numeric values, whereas the levels of other variables are simply different categories (e.g., female versus male; being a student-athlete versus not being a student-athlete). This distinction will be expanded upon in Chapter 5, Measurement Concepts. For the purposes of describing relationships among variables, we will begin by discussing relationships in which both variables have true numeric properties. When both variables have values along a numeric scale, many different shapes can describe their relationship. We begin by focusing on the four most common relationships found in research: the positive linear relationship, the negative linear relationship, the curvilinear relationship, and, of course, the situation in which there is no relationship between the variables.
These relationships are best illustrated by line graphs that show the way changes in one variable are accompanied by changes in a second variable. The four graphs in Figure 4.1 show these four types of relationships.

Positive Linear Relationship
In a positive linear relationship, increases in the values of one variable are accompanied by increases in the values of the second variable. In Chapter 1, Scientific Understanding of Behavior, we described a positive relationship between communicator credibility and persuasion; higher levels of credibility are associated with greater attitude change. Consider another communicator variable, rate of speech. Are fast talkers more persuasive? In a study conducted by Smith and Shaffer (1991), students listened to a speech delivered at a slow (144 words per minute), intermediate (162 wpm), or fast (214 wpm) speech rate. The speaker advocated a position favoring legislation to raise the legal drinking age; the students initially disagreed with this position. Graph A in Figure 4.1 shows the positive linear relationship between speech rate and attitude change that was found in this study. That is, as rate of speech increased, so did the amount of attitude change. In a graph like this, we see a horizontal and a vertical axis, termed the x axis and y axis, respectively. Values of the first variable are placed on the horizontal axis, labeled from low to high. Values of the second variable are placed on the vertical axis. Graph A shows that higher speech rates are associated with greater amounts of attitude change.

FIGURE 4.1 Four types of relationships between variables

Negative Linear Relationship
Variables can also be negatively related. In a negative linear relationship, increases in the values of one variable are accompanied by decreases in the values of the other variable. Latané, Williams, and Harkins (1979) were intrigued with reports that increasing the number of people working on a task may actually reduce group effort and productivity. The researchers designed an experiment to study this phenomenon, which they termed social loafing (which you may have observed in group projects!). The researchers asked participants to clap and shout to make as much noise as possible. They did this alone or in groups of two, four, or six people. Graph B in Figure 4.1 illustrates the negative relationship between the number of people in the group and the amount of noise made by each person. As the size of the group increased, the amount of noise made by each individual decreased. The two variables are systematically related, just as in a positive relationship; only the direction of the relationship is reversed.

Curvilinear Relationship
In a curvilinear relationship, increases in the values of one variable are accompanied by systematic increases and decreases in the values of the other variable. In other words, the direction of the relationship changes at least once. This type of relationship is sometimes referred to as a nonmonotonic function. Graph C in Figure 4.1 shows a curvilinear relationship; this particular relationship is called an inverted-U. A number of inverted-U relationships are described by Grant and Schwartz (2011). Graph C depicts the relationship between the number of extraverts in a group and performance. Having more extraverts on a team is associated with higher performance, but only up to a point. With too many extraverts, the relationship becomes negative as there is less and less task focus and a resulting decline in team performance.
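The four shapes in Figure 4.1 can also be illustrated with simulated data. The following sketch is my own illustration, not reproduced from the text; all numbers are invented and only the shapes matter. It generates a positive linear, a negative linear, a curvilinear (inverted-U), and a no-relationship pattern, and computes a linear correlation coefficient for each (a statistic the chapter introduces formally a bit later).

```python
# Simulated examples of the four relationship shapes shown in Figure 4.1.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)                   # values of the first variable
noise = rng.normal(0, 1, size=x.size)         # individual deviations from the general pattern

patterns = {
    "positive linear":  2.0 * x + noise,       # increases in x go with increases in y
    "negative linear": -2.0 * x + noise,       # increases in x go with decreases in y
    "curvilinear":     -(x - 5) ** 2 + noise,  # inverted-U: rises, then falls
    "no relationship":  noise,                 # a flat line plus random scatter
}

for name, y in patterns.items():
    r = np.corrcoef(x, y)[0, 1]               # linear (Pearson) correlation
    print(f"{name:16s} r = {r:+.2f}")
```

The linear correlation comes out close to +1 or -1 for the two linear patterns but near 0 for both the curvilinear and the no-relationship patterns, which is one reason graphs are useful: a curvilinear relationship can be missed if only a linear index is examined.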
Of course, it is also possible to have a U-shaped relationship. Research on the relationship between age and happiness indicates that adults in their 40s are less happy than younger and older adults (Blanchflower & Oswald, 2008). A U-shaped curve results when this relationship is graphed.

No Relationship
When there is no relationship between the two variables, the graph is simply a flat line. Graph D in Figure 4.1 illustrates the relationship between crowding and task performance found in a series of studies by Freedman, Klevansky, and Ehrlich (1971). Unrelated variables vary independently of one another. Increases in crowding were not associated with any particular changes in performance; thus, a flat line describes the lack of relationship between the two variables. You rarely hear about variables that are not related. In large U.S. surveys, the size of the community in which a person lives is not related to the number of reported poor mental health days or the amount of Internet use. Usually, such findings are just not as interesting as results that do show a relationship, so there is little reason to focus on them (although research that does not support a widely assumed relationship may receive attention, e.g., when a common medical procedure is found to be ineffective). We will examine no-relationship findings in greater detail in Chapter 13.

Remember that these are general patterns. Even if, in general, a positive linear relationship exists, it does not necessarily mean that everyone who scores high on one variable will also score high on the second variable. Individual deviations from the general pattern are likely. In addition to knowing the general type of relationship between two variables, it is also necessary to know the strength of the relationship. That is, we need to know the size of the correlation between the variables. Sometimes two variables are strongly related to each other and show little deviation from the general pattern. Other times the two variables are not highly correlated because many individuals deviate from the general pattern. A numerical index of the strength of the relationship between variables is called a correlation coefficient. Correlation coefficients are very important because we need to know how strongly variables are related to one another. Correlation coefficients are discussed in detail in Chapter 12. Table 4.1 provides an opportunity to review types of relationships: for each example, identify the shape of the relationship as positive, negative, or curvilinear.

TABLE 4.1 Identify the type of relationship

Relationships and Reduction of Uncertainty
When we detect a relationship between variables, we reduce uncertainty by increasing our understanding of the variables we are examining. The term uncertainty implies that there is randomness in events; scientists refer to this as random variability in events that occur. Research can reduce random variability by identifying systematic relationships between variables.

Identifying relationships between variables seems complex but is much easier to see in a simple example. For this example, the variables will have no quantitative properties; we will not describe increases in the values of variables but only differences in values, in this case whether a person is an active user of Facebook. Suppose you ask 200 students at your school to tell you whether they are active Facebook users. Now suppose that 100 students said yes and the remaining 100 said no. What do you do with this information?
You know only that there is variability in people's use of Facebook: some people are regular users and some are not. This variability is called random variability. If you walked up to anyone at your school and tried to guess whether the person was a Facebook user, you would have to make a random guess; you would be right about half the time and wrong half the time (because 50% of the people are regular users and 50% are not, any guess you make will be right about half the time). However, if we could explain the variability, it would no longer be random.

How can the random variability be reduced? The answer is to see if we can identify variables that are related to Facebook use. Suppose you also asked people to indicate their gender, that is, whether they are male or female. Now let's look at what happens when you examine whether gender is related to Facebook use. Table 4.2 shows one possible outcome. Note that there are 100 males and 100 females in the study. The important thing, though, is that 40 of the males say they regularly use Facebook compared to 60 of the females. Have we reduced the random variability? We clearly have. Before you had this information, there was no way of predicting whether a given person would be a regular Facebook user. Now that you have the research finding, you would predict that any given female is likely to be a regular Facebook user and any given male is not. You will now be right about 60% of the time; this is a big increase from the 50% when everything was random. (A short code sketch below works through this arithmetic.)

TABLE 4.2 Gender and Facebook use

Is there still random variability? The answer is clearly yes. You will be wrong about 40% of the time, and you do not know when you will be wrong. For unknown reasons, some males will say they are regular users of Facebook and some females will not. Can you reduce this remaining uncertainty? The quest to do so motivates additional research. With further studies, you may be able to identify other variables that are also related to Facebook use; for example, variables such as income and age may also be related to Facebook use. This discussion underscores once again that relationships between variables are rarely perfect: There are males and females who do not fit the general pattern. The relationship between the variables is stronger when there is less random variability; for example, if 90% of females and 10% of males were Facebook users, the relationship would be much stronger (with less uncertainty or randomness).

NONEXPERIMENTAL VERSUS EXPERIMENTAL METHODS
How can we determine whether variables are related? There are two general approaches to the study of relationships among variables: the nonexperimental method and the experimental method. With the nonexperimental method, relationships are studied by making observations or measures of the variables of interest. This may be done by asking people to describe their behavior, directly observing behavior, recording physiological responses, or even examining various public records such as census data. In all these cases, variables are observed as they occur naturally. A relationship between variables is established when the two variables vary together. For example, the relationship between class attendance and course grades can be investigated by obtaining measures of these variables in college classes. A review of many studies that did this concluded that attendance is indeed related to grades (Credé, Roch, & Kieszczynka, 2010).
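As flagged above, here is a minimal sketch working through the uncertainty-reduction arithmetic from the Facebook example. The counts come from the discussion of Table 4.2; the code itself is my own illustration and is not part of the text.

```python
# Counts from the Table 4.2 discussion: 200 students, 100 male and 100 female;
# 40 males and 60 females report being regular Facebook users.
counts = {
    ("male", "user"): 40, ("male", "nonuser"): 60,
    ("female", "user"): 60, ("female", "nonuser"): 40,
}
total = sum(counts.values())                                   # 200

# Without knowing gender: 100 users vs. 100 nonusers, so any guess is right about half the time.
users = counts[("male", "user")] + counts[("female", "user")]
baseline_accuracy = max(users, total - users) / total          # 0.50

# Knowing gender: predict the majority category within each gender
# ("nonuser" for males, "user" for females).
correct = counts[("male", "nonuser")] + counts[("female", "user")]
informed_accuracy = correct / total                            # 0.60

print(baseline_accuracy, informed_accuracy)                    # 0.5 0.6
```

The 10-percentage-point gain is the reduction in uncertainty; the roughly 40% of cases still guessed incorrectly is the random variability that remains.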
The second approach to the study of relationships, the experimental method, involves direct manipulation and control of variables. The researcher manipulates the first variable of interest and then observes the response. For example, Ramirez and Beilock (2011) were interested in the anxiety produced by important high-stakes examinations. Because such anxiety may impair performance, it is important to find ways to reduce the anxiety. In their research, Ramirez and Beilock tested the hypothesis that writing about testing worries would improve performance on the exam. In their study, they used the experimental method. All students took a math test and were then given an opportunity to take the test again. To make this a high-stakes test, students were led to believe that a monetary payout to themselves and their partner depended on their performance. The writing variable was then manipulated. Some students spent 10 minutes before taking the test writing about what they were thinking and feeling about the test. The other students constituted a control group; these students simply sat quietly for 10 minutes prior to taking the test. The new, high-stakes test was then administered. The researchers found that students in the writing condition improved their scores; the control group's scores actually decreased. With the experimental method, the two variables do not merely vary together; one variable is introduced first to determine whether it affects the second variable.

Nonexperimental Method
Suppose a researcher is interested in the relationship between exercise and anxiety. How could this topic be studied? Using the nonexperimental method, the researcher would devise operational definitions to measure both the amount of exercise that people engage in and their level of anxiety. There could be a variety of ways of operationally defining either of these variables; for example, the researcher might simply ask people to provide self-reports of their exercise patterns and current anxiety level. Now suppose that the researcher collects data on exercise and anxiety from a number of people and finds that exercise is negatively related to anxiety; that is, the people who exercise more also have lower levels of anxiety. The two variables covary, or correlate, with each other: Observed differences in exercise are associated with differences in anxiety. Because the nonexperimental method allows us to observe covariation between variables, another term that is frequently used to describe this procedure is the correlational method. With this method, we examine whether the variables correlate or vary together.

The nonexperimental method seems to be a reasonable approach to studying relationships between variables such as exercise and anxiety. A relationship is established by finding that the two variables vary together: the variables covary or correlate with each other. However, this method is not ideal when we ask questions about cause and effect. We know the two variables are related, but what can we say about the causal impact of one variable on the other? There are two problems with making causal statements when the nonexperimental method is used: (1) it can be difficult to determine the direction of cause and effect, and (2) researchers face the third-variable problem; that is, extraneous variables may be causing an observed relationship. These problems are illustrated in Figure 4.2. The arrows linking variables depict direction of causation.
Direction of cause and effect
The first problem involves the direction of cause and effect. With the nonexperimental method, it is difficult to determine which variable causes the other. In other words, it cannot really be said that exercise causes a reduction in anxiety. Although there are plausible reasons for this particular pattern of cause and effect, there are also reasons why the opposite pattern might occur (both causal directions are shown in Figure 4.2). Perhaps high anxiety causes people to reduce exercise. The issue here is one of temporal precedence; it is very important in making causal inferences (see Chapter 1). Knowledge of the correct direction of cause and effect in turn has implications for applications of research findings: If exercise reduces anxiety, then undertaking an exercise program would be a reasonable way to lower one's anxiety. However, if anxiety causes people to stop exercising, simply forcing someone to exercise is not likely to reduce the person's anxiety level.

FIGURE 4.2 Causal possibilities in a nonexperimental study

The problem of direction of cause and effect is not the most serious drawback to the nonexperimental method, however. Scientists have pointed out, for example, that astronomers can make accurate predictions even though they cannot manipulate variables in an experiment. In addition, the direction of cause and effect is often not crucial because, for some pairs of variables, the causal pattern may operate in both directions. For instance, there seem to be two causal patterns in the relationship between the variables of similarity and liking: (1) similarity causes people to like each other, and (2) liking causes people to become more similar. In general, the third-variable problem is a much more serious fault of the nonexperimental method.

The third-variable problem
When the nonexperimental method is used, there is the danger that no direct causal relationship exists between the two variables. Exercise may not influence anxiety, and anxiety may have no causal effect on exercise; such a pattern is known as a spurious relationship. Instead, there may be a relationship between the two variables because some other variable causes both exercise and anxiety. This is known as the third-variable problem. A third variable is any variable that is extraneous to the two variables being studied. Any number of third variables may be responsible for an observed relationship between two variables. In the exercise and anxiety example, one such third variable could be income level. Perhaps high income allows people more free time to exercise (and the ability to afford a health club membership!) and also lowers anxiety. Income acting as a third variable is illustrated in Figure 4.2. If income is the determining variable, there is no direct cause-and-effect relationship between exercise and anxiety; the relationship was caused by the third variable, income level. The third variable is an alternative explanation for the observed relationship between the variables. Recall from Chapter 1 that the ability to rule out alternative explanations for the observed relationship between two variables is another important factor when we try to infer that one variable causes another. The fact that third variables could be operating is a serious problem, because third variables introduce alternative explanations that reduce the overall validity of a study.
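The third-variable problem can be made concrete with a small simulation. The sketch below is my own illustration (the effect sizes are invented, and this is not an analysis from the text): income influences both exercise and anxiety, so exercise and anxiety correlate even though neither has any direct effect on the other, and the association largely disappears when income is held roughly constant.

```python
# Simulating the third-variable problem: income drives both exercise and anxiety.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

income = rng.normal(0, 1, n)                    # the third variable
exercise = 0.7 * income + rng.normal(0, 1, n)   # higher income -> more exercise
anxiety = -0.7 * income + rng.normal(0, 1, n)   # higher income -> less anxiety
# Note: exercise has no direct effect on anxiety anywhere in this simulation.

r_overall = np.corrcoef(exercise, anxiety)[0, 1]
print(f"exercise vs. anxiety, ignoring income:       r = {r_overall:+.2f}")  # clearly negative

# Hold income roughly constant by looking only within a narrow income band.
band = np.abs(income) < 0.1
r_within = np.corrcoef(exercise[band], anxiety[band])[0, 1]
print(f"exercise vs. anxiety, within an income band: r = {r_within:+.2f}")   # near zero
```

The spurious association between exercise and anxiety here is produced entirely by the confound, the pattern the text describes as income acting as a third variable in Figure 4.2.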
The fact that income could be related to exercise means that income level is an alternative explanation for an observed relationship between exercise and anxiety. The alternative explanation is that high income reduces anxiety level, so exercise has nothing to do with it. When we actually know that an uncontrolled third variable is operating, we can call the third variable a confounding variable. If two variables are confounded, they are intertwined, so you cannot determine which of the variables is operating in a given situation. If income is confounded with exercise, income level will be an alternative explanation whenever you study exercise. Fortunately, there is a solution to this problem: the experimental method provides us with a way of controlling for the effects of third variables.

As you can see, direction of cause and effect and potential third variables represent serious limitations of the nonexperimental method. Often, they are not considered in media reports of research results. For instance, a newspaper may report the results of a nonexperimental study that found a positive relationship between the amount of coffee consumed and the likelihood of a heart attack. Obviously, there is not necessarily a cause-and-effect relationship between the two variables. Numerous third variables (e.g., occupation, personality, or genetic predisposition) could cause both a person's coffee-drinking behavior and the likelihood of a heart attack. In sum, the results of such studies are ambiguous and should be viewed with skepticism.

Experimental Method
The experimental method reduces ambiguity in the interpretation of results. With the experimental method, one variable is manipulated and the other is then measured. The manipulated variable is called the independent variable, and the variable that is measured is termed the dependent variable. If a researcher used the experimental method to study whether exercise reduces anxiety, exercise would be manipulated, perhaps by having one group of people exercise each day for a week and another group refrain from exercise for a week. Anxiety would then be measured. Suppose that people in the exercise group have less anxiety than the people in the no-exercise group. The researcher could now say something about the direction of cause and effect: In the experiment, exercise came first in the sequence of events. Thus, anxiety level could not influence the amount of exercise that the people engaged in.

Another characteristic of the experimental method is that it attempts to eliminate the influence of all potential confounding third variables on the dependent variable. This is generally referred to as control of extraneous variables. Such control is usually achieved by making sure that every feature of the environment except the manipulated variable is held constant. Any variable that cannot be held constant is controlled by making sure that the effects of the variable are random. Through randomization, the influence of any extraneous variables is equal in the experimental conditions. Both procedures are used to ensure that any differences between the groups are due to the manipulated variable.

Experimental control
In an expe

