EXPERIMENTAL RESEARCH
By Ifedayo Akinwalere and Stella Oyeniran
November 2017

Introduction
Some people say that they research different websites to find the best place to buy the goods or services they want. Television news channels also conduct research in the form of viewer polls on topics of public interest, such as forthcoming elections or government-funded projects. Undergraduate students search the Internet for the information they need to complete assigned projects or term papers. Graduate students working on research projects may see research as collecting or analyzing data related to their project. Businesses and consultants research potential solutions to organizational problems or ways to identify customer purchase patterns. However, none of these can be considered “scientific research” unless it contributes to knowledge and follows the scientific method.
Science refers to a systematic and organized body of knowledge in any area of inquiry that is acquired using “the scientific method” (the scientific method is described further below). Science can be grouped into two broad categories: natural science and social science. Social science is the science of people or collections of people, such as groups, firms, societies, or economies, and their individual or collective behaviours (Galantucci, 2005).
The natural sciences differ from the social sciences in several respects. The natural sciences are precise, accurate, deterministic, and independent of the person making the scientific observations. For instance, a scientific experiment in physics, such as measuring the speed of sound through a certain medium or the refractive index of water, should always yield the same results, irrespective of the time or place of the experiment or the person conducting it. If two students conducting the same physics experiment obtain two different values of these physical properties, it generally means that one or both of them is in error. The same cannot be said for the social sciences, which tend to be less accurate, deterministic, and unambiguous. For instance, if a researcher measures a person’s happiness using a hypothetical instrument, he may find that the same person is more or less happy (or sad) on different days, and sometimes at different times on the same day. One’s happiness may vary depending on the news that person received that day or on the events that transpired earlier that day. Furthermore, there is no single instrument or metric that can accurately measure a person’s happiness. Hence, one instrument may rate a person as “more happy” while a second instrument rates the same person as “less happy” at the same instant in time (Underwood, 1966).
Depending on a researcher’s training and interest, scientific inquiry may be inductive, where the goal is to infer theoretical concepts and patterns from observed data, or deductive, where the goal is to test concepts and patterns known from theory using new empirical data. Hence, inductive research is also called theory-building research, and deductive research theory-testing research. Note that the goal of theory-testing is not just to test a theory, but possibly to refine, improve, and extend it. It is equally important to understand that theory-building (inductive research) and theory-testing (deductive research) are both critical for the advancement of knowledge (Gliner & Morgan, 2000).
Experimental Research
Experimental research is often considered the “gold standard” in research designs and is one of the most rigorous of them all. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. Experiments are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the “treatment group”) but not to another (the “control group”), and observing how the mean effects differ between subjects in the two groups. For instance, to test the efficacy of a new drug in treating a certain ailment in a laboratory experiment, we can obtain a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control), administer the drug to subjects in the treatment group, and give only a placebo (e.g., a sugar pill with no medicinal value) to the control group. In a true experimental design, subjects must be randomly assigned to each group. If random assignment is not followed, the design becomes quasi-experimental (Galantucci, 2005).
Experimental research is best suited for explanatory research (rather than for descriptive or exploratory research), where the goal of the study is to examine cause-effect relationships. Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organization where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analyzed using quantitative statistical techniques.
The primary strength of the experimental design is its strong internal validity, due to its ability to isolate, control, and intensively examine a small number of variables. Its primary weakness is limited external generalizability, since real life is often more complex (i.e., involves more extraneous variables) than contrived lab settings. The researcher’s task is to identify specific causal factors and delineate the range of their relevant attributes; the unique strength of experimental research remains its internal validity (causality), owing to its ability to link cause and effect through treatment manipulation (Gliner & Morgan, 2000).
Basic Concepts in Experimental Research
According to Sobowale (2009), some basic concepts in experimental research are:
·         Treatment and control groups: In experimental research, some subjects are administered an experimental stimulus called a treatment (the treatment group) while other subjects are not (the control group). The treatment may be considered successful if subjects in the treatment group rate more favorably on outcome variables than control group subjects. Multiple levels of the experimental stimulus may be administered, in which case there may be more than one treatment group.
·         Treatment manipulation: Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the “cause” in cause-effect relationships, and the validity of experimental research naturally depends on how well the treatment was manipulated. Treatment manipulation should be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.
·         Random selection and assignment: Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, by contrast, is the process of randomly assigning subjects to experimental or control groups; it is standard practice in true experimental research because it ensures that the treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling and is therefore most closely tied to the external validity (generalizability) of findings, whereas random assignment is related to design and is therefore most closely tied to internal validity. Well-designed experimental research can have both random selection and random assignment, while quasi-experimental research lacks random assignment (a code sketch of random assignment follows this list).
·         Threats to internal validity: Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats.
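To make the distinction concrete, here is a minimal sketch in Python of how random assignment (as opposed to random selection) can be carried out, using an invented pool of forty recruited subjects and only the standard library’s random module:

```python
import random

# Hypothetical pool of recruited subjects (in practice often a
# convenience sample, not a random selection from a population).
subjects = [f"subject_{i}" for i in range(1, 41)]

# Random assignment: shuffle the pool, then split it in half so that
# the two groups are equivalent in expectation before treatment.
random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(subjects)

midpoint = len(subjects) // 2
treatment_group = subjects[:midpoint]
control_group = subjects[midpoint:]

print(len(treatment_group), len(control_group))  # 20 20
```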
Experimental Design in Communication
An experiment as a methodology cuts across the physical sciences, the biological sciences, the social sciences, and communication. It is a research method in which an independent variable is manipulated and its effects on the dependent variable are observed. When an experiment is conducted scientifically, the researcher is able to attribute any change in the dependent variable directly to the independent variable and not to extraneous variables or factors unrelated to the study. In fact, the controlled experiment has been described as the most powerful method available for finding out what causes what (Westley, 1981, cited in Creswell, 2002).
According to Berger (2000), cited in Wilson, Esiri and Onwubere (2008), an experiment is a procedure or kind of test that:
1. Demonstrates that something is true,
2. Examines the validity of a hypothesis or theory, or
3. Attempts to discover new information.
Continuing, Wilson, Esiri and Onwubere (2008), still citing Berger (2000), explain the three procedures thus: In the first case, the researcher tries to show that what is held to be true about something is actually true; for instance, that it is possible to transmit sound (speech, music, etc.) from one point to another via radio receivers. This might involve replicating an experiment to see whether the findings of the first experiment hold. In the second instance, the researcher tests a hypothesis or theory to determine whether it is valid; for instance, the researcher might postulate that there is a relationship between heavy television viewing and violent behaviour in young people. In the third case, the researcher wants to discover something not already known; for instance, whether one-sided messages are more effective with people of lower education than with those of higher education.
Attributes of Well-Executed Experiments
Thyer (1994) stated that some of the attributes of well-executed experiments include:
·         Clarification of the Theory Being Tested and Explanation of How the Posited Relations among Independent, Dependent, Moderator, Mediator, and Control Variables Relate to That Theory
Researchers should check that posited independent–dependent variable links (including moderators, mediators, and controls, if applicable) are rationalized within a compelling theory. A central problem for the communication field is that there are often many theoretical approaches that can prove useful. For example, there are many theories that have been used to explain why aggressive behavior shown in messages like television programs and movies would cause aggressive behavior in young viewers (e.g., theories like catharsis, social learning, priming, cultivation, etc.). In reports of experiments, it is necessary to explicate what theory is being tested and then ensure that all hypotheses tested in the study link to that theory. Sometimes competing theories may predict different outcomes that can be juxtaposed in hypothesis form.
·         Clarification of How the Experimental Design Will Demonstrate Causal Relations between Independent and Dependent Variables
Researchers should look for clear statements about how experimental conditions will be used to show that independent variables actually affect dependent variables. For example, to establish a causal relationship between exposure to mediated aggressive behavior (e.g., actions depicted in video clips) and aggressive behavior by children exposed to the video clips, the researcher must demonstrate that when children are exposed to the mediated aggressive behavior, they exhibit aggressive behavior similar to that depicted, and that when they are exposed to mediated content that does not contain aggressive behavior, they do not exhibit the aggressive behavior. Video content without aggressive behavior provides the control stimulus against which the experimental condition is compared. If children have been randomly assigned to the two conditions, then it can be said that the aggressive content causes aggressive behavior. However, without other variables that serve as mediators or moderators, it will not be possible to say why the aggressive behaviors in the video clips caused the children’s aggressive behaviors (e.g., that they were aroused by the behaviors shown in the video clips).
·         Clarity in Conceptualizing Media Stimuli
Researchers should look for theoretical and operational clarity about how media stimuli are defined. This matters especially when the goal of an experiment is to link psychological responses to physical attributes of media stimuli, where manipulations of those physical attributes are hypothesized to cause changes in the dependent variable (e.g., the number of violent acts in a video is a physical message variable).
There is controversy about how best to characterize the structure of media stimuli. For example, many scholars treat media stimuli in “industry units” (e.g., commercials, news stories). Others suggest it is more useful to describe them in terms of variables more closely related to psychological processing (e.g., visual complexity, brightness, contrast, movement of objects on the screen). The bottom line is that it is important to select the physical stimulus features considered important in the tested theory, treat them as the independent variables, and then measure psychological responses, which may include mediators or moderators as well as dependent variables.
·         Clear Identification of Hypotheses and Research Questions
Researchers should look for clear statements of hypotheses and research questions. Predictions about how independent variables are expected to be related to dependent variables are generally provided in the form of either a directional hypothesis or a research question. The distinction between the two depends on how specific one’s theory is and/or how much prior evidence is available. If a number of relevant prior studies have found or suggested a specific direction of independent–dependent variable relationships, or if the tested theory leads to specifically deduced predictions, hypotheses are used to state the posited relationship between independent and dependent variables. If neither theory nor prior research leads to specific predictions about the relationships between independent and dependent variables, research questions should be used. Hypotheses derived from theory will posit not just “differences” between conditions but directionality of the differences (e.g., there will be more aggressive behavior when children watch longer programs depicting violent events than when they watch shorter programs depicting violent events).
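As a minimal formal sketch, the directional hypothesis in the example above can be written against its null, where the symbols μ_long and μ_short stand for the (hypothetical) mean levels of aggressive behavior after watching longer and shorter programs depicting violent events:

```latex
% Null: longer violent programs produce no more aggression than shorter ones.
% Directional alternative: they produce more.
H_0 : \mu_{\text{long}} \le \mu_{\text{short}}
\qquad \text{vs.} \qquad
H_1 : \mu_{\text{long}} > \mu_{\text{short}}
```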
·         Clear Specification of the Sample and Acknowledgment of Its Limitations
There has been considerable discussion of whether it is necessary to randomly sample experimental participants from a targeted population. When researchers randomly sample from a population, they can infer from findings in the sample to that population. For example, if a researcher obtains a random sample of people in a county with phone lines in their homes, then the proportion of those sampled who also have cell phones can be used as a parameter estimate of how many people in the county (the population) have cell phones as well as land lines. And the percentage of people who respond positively (dependent variable) to an offer of broadband at a particular price (independent variable) can be used to estimate the relationship between that offer and a positive response in the county population. Random sampling thus enables statistical generalization from sample features to population features.
Samples for experiments, however, are rarely random samples. In fact, one does not have to read many reports of experiments in mass communication to recognize that nearly none employs samples randomly selected from populations. Instead, experimental researchers typically acquire convenience samples (such as second graders from several school districts in town, college students enrolled in large communication classes, or adults who agree to participate in an experiment for a chance to win a digital music player). These individuals are then randomly assigned to the conditions in the experiment. Because there is no random sampling of participants, one cannot infer that values found in the experiment are representative of values that would be found in the population as a whole. Instead, logical inferences are made about the multivariate relationships among the variables in the experiment. All experimental reports should include sample characteristics and selection methods so the reader can evaluate any claims to generalizability.
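To make the statistical generalization licensed by random sampling concrete, here is a minimal Python sketch using invented numbers for the cell phone example above; proportion_confint from the statsmodels library computes the confidence interval.

```python
from statsmodels.stats.proportion import proportion_confint

# Invented numbers: of 400 randomly sampled households with land
# lines, 288 also report having a cell phone.
n_sampled = 400
n_with_cell = 288

sample_proportion = n_with_cell / n_sampled  # 0.72
low, high = proportion_confint(n_with_cell, n_sampled, alpha=0.05)

# Because the sample was drawn at random, this interval is a defensible
# estimate of the county-wide (population) proportion; a convenience
# sample would not support the same inference.
print(f"{sample_proportion:.2f} (95% CI {low:.2f}-{high:.2f})")
```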

·         Correct Specification of Effect Size, Power, Number of Participants, and Alpha Levels
Researchers should look for appropriate specification of effect size, power, number of participants, and alpha. There are complex interrelationships among the power of a study, the size of the effects sought or found, the number of participants tested in an experiment, and the statistical criterion chosen for rejecting the null hypothesis. The power of a study is the probability that the study will detect an effect of a given size. A power analysis should be conducted prior to executing a study to determine the number of participants the experimenter should include. To compute an a priori power analysis, one must specify the research design (e.g., the number of between-subjects factors, the number of repeated measures, the correlation among repeated measures), the type of statistical analysis to be conducted (e.g., ANOVA, regression), the Type I error rate (α; convention sets it at .05), and the size of the effect sought. The a priori effect size is generally estimated from prior literature or “rules of thumb.” Power tables can be used to compute an a priori power analysis, as can computer software.
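As an illustration of software-based power analysis, the following is a minimal sketch for a two-group (treatment vs. control) design, using the TTestIndPower class from the statsmodels library; the assumed effect size of 0.5 (a “medium” effect by Cohen’s rule of thumb) is purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed design parameters for a simple two-group experiment:
effect_size = 0.5  # Cohen's d sought (a "medium" effect by rule of thumb)
alpha = 0.05       # Type I error rate (the conventional criterion)
power = 0.80       # desired probability of detecting the effect

# Solve for the number of participants needed per group.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)
print(round(n_per_group))  # roughly 64 participants per group
```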
Effect size helps the reader determine whether the observed relationship among variables is of practical value. Statistical software packages include various effect size statistics, some of which may not be the most appropriate for a particular analysis. For example, SPSS provides partial η² in ANOVA output when classical η² may be more appropriate.
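To illustrate the point about checking which effect size statistic software reports, the sketch below computes classical η² by hand from an ANOVA table fitted with the statsmodels library on invented recall data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented recall scores for three presentation conditions.
data = pd.DataFrame({
    "condition": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "recall": [4, 5, 6, 5, 4, 7, 8, 6, 7, 8, 9, 10, 9, 8, 10],
})

# One-way ANOVA fitted as a linear model.
model = ols("recall ~ C(condition)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)

ss_effect = anova_table.loc["C(condition)", "sum_sq"]
ss_error = anova_table.loc["Residual", "sum_sq"]

# Classical eta squared: effect SS over total SS. In this one-way design
# it coincides with partial eta squared, SS_effect / (SS_effect + SS_error),
# but in multi-factor designs the two diverge, so check the software default.
eta_squared = ss_effect / (ss_effect + ss_error)
print(round(eta_squared, 3))
```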
·         Consideration and Empirical Assessment of Alternative Explanations of Experimental Findings
A significant challenge to the validity of experimental findings is the possibility that the independent variables did not really cause the observed changes in the dependent variable, but rather the effect was the result of some unrecognized source of influence.
Processing and Analyzing Experimental Data
According to Poindexter & McCombs (2000), the following steps make up the process of analyzing data from experimental research:
1. The researcher processes the results from the questionnaire that measured the dependent variable in the same manner that a survey questionnaire would be processed.
2. A codebook, which specifies variables and columns as well as codes for the open-ended questions, is developed.
3. After coding on a coding spreadsheet, the researcher analyzes the data using a statistical program such as SPSS.
4. The relevant analysis for the experiment is a comparison of the responses of the subjects who were in the experimental group with those of the subjects in the control group. Means are calculated and compared for the experimental group, which saw the variable under test, and the control group, which did not.
·         The t-test
According to Poindexter & McCombs (2000), “a t-test, a significance test that is often used in two-group comparisons, is used to determine whether or not the mean in the experimental group is significantly different from the mean in the control group. If you use the t-test formula, you can calculate a t-value from the scores of the experimental group and control group.” SPSS can also be used to calculate the t-value and determine its significance. When the t-value calculated from the experimental data is compared to the t-value at the appropriate degrees of freedom and significance level in a t-distribution table, you can determine whether the independent variable had any effect on the dependent variable. The t-test is usually employed for small samples (Poindexter & McCombs, 2000).
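As a minimal sketch of this two-group comparison, the following uses ttest_ind from the scipy library on invented posttest scores; SPSS would yield the same t-value and significance for these data.

```python
from scipy import stats

# Invented posttest scores on the dependent variable.
experimental = [8, 7, 9, 6, 8, 7, 9, 8]  # group exposed to the test variable
control = [5, 6, 4, 6, 5, 7, 5, 6]       # group that was not

# Independent-samples t-test: is the difference between means significant?
t_value, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_value:.2f}, p = {p_value:.4f}")

# If p < .05 (the conventional significance level), the null hypothesis of
# equal means is rejected: the independent variable appears to have an effect.
```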
·         Analysis of Variance (ANOVA)
When more than two means are compared, ANOVA can be used, and SPSS can perform the multivariable analysis. ANOVA analyzes between-groups and within-groups variance to determine whether different presentations contributed to significantly different levels of recall among the experimental subjects. According to Wimmer & Dominick (2011), ANOVA is essentially an extension of the t-test; its advantage is that it can simultaneously investigate several independent variables, called factors. Continuing, Wimmer & Dominick (2011:315) note that “a one-way ANOVA investigates one independent variable; a two-way ANOVA investigates two independent variables, and so on.” Kerlinger (1973), as cited in Poindexter and McCombs (2000:232), however, emphasized that ANOVA is not a statistic but an approach, a way of thinking. ANOVA calculates a ratio of the variance between groups to the variance within groups. The resulting ratio is called the F-ratio. The F-ratio calculated from the data is compared to a critical F-ratio, found in an F-table in a comprehensive statistics text, to determine whether the results are statistically significant (Kerlinger, 1973).
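A one-way ANOVA of the kind described above can be sketched with the f_oneway function from the scipy library; the three groups of recall scores below are invented for illustration.

```python
from scipy import stats

# Invented recall scores for three presentation formats.
text_only = [4, 5, 6, 5, 4]
text_and_image = [7, 8, 6, 7, 8]
video = [9, 10, 9, 8, 10]

# One-way ANOVA: the F-ratio compares variance between groups to
# variance within groups across all three conditions at once.
f_ratio, p_value = stats.f_oneway(text_only, text_and_image, video)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")

# A significant F-ratio only says the means differ somewhere; post hoc
# comparisons are needed to locate which presentations differ.
```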

Written Report for an Experiment
The following constitute the components of the method section of a written report of an experiment:
1. Description of experimental design, including subjects, setting, and independent and dependent variable measures
2. Description of procedures
3. Description of data processing and analysis
Examples of Communication Studies that Involve Experimental Research
·         Do violent media make people aggressive, or do aggressive people prefer violent media? This research was carried out by Bandura, Ross, and Ross in 1961.
·         Does intergroup contact reduce or exacerbate intergroup conflict? This research was carried out by Sherif in 1958.
·         Does the gender/race/age/criminal record/other characteristic of a job applicant affect the likelihood of being hired? This research was carried out by Pager in 2003.
·         Does lack of control over our environment turn us into conspiracy theorists? This research was carried out by Whitson and Galinsky in 2008.
·         Does the status of an author’s institution affect their chances of having an article accepted for publication? This research was carried out by Peters and Ceci in 1982.
Conclusion
Experiments are excellent for answering questions about causality, exploring alternative explanations, and examining rare or hard-to-observe events. There are many different types of and approaches to experiments, but all must be tailored to the research question(s). Experimental research can facilitate systematic replication as well as theory development, and it can be used to complement other research methods.

References
Creswell, J. W. (2002). Educational research. Upper Saddle River, NJ: Pearson Education.
Galantucci, B. (2005). An experimental study of the emergence of human communication systems. Cognitive Science, 29(5), 737.
Gliner, J. A., & Morgan, G. A. (2000). Research methods in applied settings: An integrated approach to design and analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
Poindexter, P. M., & McCombs, M. E. (2000). Research in mass communication: A practical guide. Boston: Bedford.
Sobowale, I. A. (2009). Scientific journalism (2nd ed.). Lagos: Idosa Konsult.
Thyer, B. A. (1994). Successful publishing in scholarly journals. Thousand Oaks, CA: Sage.
Underwood, B. J. (1966). Experimental psychology. New York: Appleton-Century-Crofts.
Wilson, D., Esiri, M., & Onwubere, C. H. (2008). Communication research. Unpublished lecture developed for the National Open University of Nigeria (NOUN).
Wimmer, R. D., & Dominick, J. R. (2011). Mass media research: An introduction. Belmont, CA: Thomson/Wadsworth.
