The Significance of Research Methods
Science aims to provide answers to questions by observing phenomena and gathering data. By doing so, it enhances our understanding of ourselves and the world we live in, enabling us to make predictions about future events and behaviors. However, this process requires a systematic and universal approach to collecting and interpreting data, as without it, chaos would ensue.
On a practical level, research methodology allows us to comprehend and assess the validity of the information we encounter on a daily basis. For instance, consider the following studies:
- A study suggests that the lifespan of left-handed individuals is significantly shorter than that of right-handed individuals.
- A study indicates a correlation between smoking and poor academic performance.
To evaluate the accuracy of these findings, several aspects of the studies need to be considered. Most people, however, overlook the details that are key to understanding a study and focus solely on its outcome, even when the conclusion is entirely flawed.
Research methods also hold practical value in various professional settings:
- Mental Health Profession: Research is essential for developing new therapies and determining which treatments are suitable and effective for different types of issues and individuals.
- Business World: Research methods are instrumental in devising marketing strategies, making informed hiring decisions, and improving employee productivity, among other applications.
Various Categories of Research Methods
Basic Research
- Basic research aims to address fundamental questions about the nature of human behavior. It is driven by the pursuit of knowledge for its own sake rather than immediate practical application.
For example, consider the titles of these publications:
- A comparison of the effects of information overload and relatedness on short and long-term memory retrieval.
- Emotionality and stress ulcers in rats: Electrophysiological activity in the central nucleus of the amygdala.
Some individuals mistakenly perceive basic research as insignificant. However, it serves as the groundwork upon which applications and solutions can be developed. While basic research may not seem immediately applicable in the real world, it guides us towards practical applications in various fields, including but not limited to:
- B.F. Skinner's work on training animals to respond to reinforcement, which has implications in areas such as industrial-organizational psychology, therapy, and education.
- The study of therapeutic techniques to determine their effectiveness for specific situations, individuals, and problems, benefiting clinical psychologists and other therapists.
Applied Research
Applied research focuses on finding solutions to practical problems and implementing those solutions to help others.
Examples of publication titles in this category include:
- Effects of exercise, relaxation, and management skills training on physiological stress indicators.
- Promoting automobile safety belt use among young children.
There is currently a growing emphasis on applied research, driven in part by the desire in the United States for immediate, practical solutions. However, it is crucial to keep sight of the continuing need for basic research.
Program Evaluation
- Program evaluation examines existing programs in areas such as government, education, and criminal justice to determine their effectiveness. It seeks to answer the question: "Does the program work?"
- For instance, consider the evaluation of capital punishment. Analyzing its effectiveness presents numerous challenges, including defining the purpose and "effectiveness" of capital punishment. If the purpose is to prevent convicted criminals from committing the same or any other crime, then capital punishment can be considered 100% effective. However, if the goal is to deter potential offenders, the assessment takes on a different perspective.
As individuals, we constantly observe the world around us and draw conclusions. How do we typically go about this process?
- Relying on Authority Figures: One common approach is to seek information from authority figures, such as teachers, who provide us with facts and knowledge. However, it is important to question whether this is always a reliable method.
- For instance, consider a scenario where your teacher asserts that there is a substantial body of evidence suggesting that larger brains correlate with greater intelligence.
- Trusting Intuition: Another method involves relying on our intuition, as discussed in a previous chapter. We might form opinions based on intuitive beliefs, such as whether women are more romantic than men or whether cramming for an exam is the best study strategy. However, it is crucial to ask ourselves whether we have supporting data or evidence for these opinions.
Fortunately, there exists a more robust path to uncovering the truth: the Scientific Method.
The Scientific Method
How do we discover scientific truths? While the scientific method is not flawless, it remains the best available method today.
To employ the scientific method, all subjects of study must meet the following criteria:
- Testability: Can the topic be tested? (e.g., Can you test the existence of God?)
- Falsifiability: Can the claim, in principle, be shown to be false? Since science can rarely prove a claim true once and for all, a claim must at least be open to being disproven by evidence; a statement that could never be disproven lies outside the reach of the scientific method. (e.g., Can you prove that God does not exist?)
A. Objectives of the Scientific Method
Description, Prediction, Method and Design Selection, Control, Data Collection, Data Analysis and Interpretation, and Reporting/Communication of Findings
- Description: This stage involves identifying the observable characteristics of an event, object, or individual. Description allows for systematic and consistent examination.
At this stage, we refine our topic of study from a general concept or idea into a specific, testable construct.
a) Operational Definitions: Defining behaviors or qualities in terms of how they will be measured. It involves specifying the actions or operations to be undertaken to measure or control a variable.
For instance, how can we define "life change"? One option is to use the score on the Social Readjustment Rating Scale.
- Prediction: This step involves formulating testable predictions, or hypotheses, about behavior, specifically regarding our variables. A hypothesis is a tentative statement about the relationship between two or more variables. For example, one hypothesis might suggest that increased alcohol consumption leads to a decrease in driving ability.
Hypotheses are typically based on theories, which summarize and explain research findings.
- Methodology and Design Selection: Choosing the most appropriate research strategy to empirically address our hypotheses.
- Control: Employing methods to eliminate unwanted factors that may influence the phenomenon under study (this will be discussed in more detail later).
- Data Collection: Execution and implementation of the research design, as outlined in the previous steps.
- Data Analysis and Interpretation: Utilizing statistical procedures to determine the statistical significance of the data (which is not the same as its "actual" importance or meaningfulness). This means assessing whether the differences between groups/conditions are large enough to be unlikely to have arisen by chance (a brief sketch follows this list).
Moreover, it involves interpreting the underlying causes of behavior, cognition, and physiological processes.
- Reporting/Communicating the Findings: Psychology, as a science, is founded on the principle of sharing. Discovering answers to questions holds little value (except to the scientist) unless that information can be shared with others. This is accomplished through scientific journal publications, books, presentations, lectures, etc.
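To make the data-analysis step concrete, here is a minimal sketch in Python (assuming the SciPy library is available) of how a researcher might test whether two groups differ by more than chance. The group names and scores are hypothetical, invented purely for illustration.

```python
from scipy import stats

# Hypothetical memory-test scores for two groups in an experiment
# (e.g., an experimental group and a control group).
experimental_scores = [14, 17, 15, 18, 16, 19, 17, 15]
control_scores = [12, 13, 15, 11, 14, 12, 13, 14]

# An independent-samples t-test asks: how likely is a difference this
# large if the two groups really came from the same population?
t_statistic, p_value = stats.ttest_ind(experimental_scores, control_scores)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")

# By convention, p < .05 is treated as "statistically significant,"
# meaning the difference is unlikely to be due to chance alone.
if p_value < 0.05:
    print("The group difference is statistically significant.")
else:
    print("The group difference could plausibly be due to chance.")
```

Note that a "significant" result in this statistical sense says nothing about whether the difference is large enough to matter in practice, which is exactly the distinction drawn in the data-analysis step above.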
B. Approaches to Conducting Scientific Research
- Naturalistic Observation: Allowing behavior to unfold without interference or intervention from the researcher. This is something we all do, such as people-watching.
- Weaknesses: Often challenging to observe without being intrusive.
- Strengths: Enables the study of behavior in authentic settings, rather than in a laboratory.
- Case Study: Conducting an in-depth investigation into an individual's life to reconstruct significant aspects. The aim is to understand the events that led to their current situation.
Typically involves interviews, observation, examination of records, and psychological testing.
- Weaknesses: Subjective in nature; it resembles piecing together a puzzle, often with gaps that must be filled from the individual's memory, medical records, etc.
- Strengths: Valuable for assessing psychological disorders, as it allows for the examination of personal history and development.
- Survey: Utilizing written questionnaires, verbal interviews, or a combination of both to gather information about specific aspects of behavior.
Example: [Provide an example]
- Weaknesses: Relies on self-report data, which may raise questions about honesty.
- Strengths: Enables the collection of information from a large number of people relatively quickly and inexpensively.
V. KEY TERMINOLOGY (thorough understanding of these terms is essential for success in Psychology. You can also refer to the provided glossary for these and other important terms):
- Variable - Any measurable condition, event, characteristic, or behavior that can be controlled or observed in a study.
- Independent Variable (IV) - The variable manipulated by the researcher to assess its impact on the dependent variable.
- Dependent Variable (DV) - The measured behavior or response outcome that researchers hope to see affected by the independent variable.
- Control - Any method used to manage extraneous variables that may influence a study.
- Extraneous Variable - Any variable, apart from the independent variable, that might affect the dependent variable in a specific manner.
Example - How quickly rats can learn a maze (two groups). What aspects need to be controlled?
Groups (of subjects/participants) in an Experiment - Experimental vs Control
- Experimental Group - The group exposed to the independent variable in an experiment.
- Control Group - The group not exposed to the independent variable. However, this does not mean the control group is exposed to nothing. For instance, in a drug study, it is advisable to have an experimental group (receives the drug), a placebo control group (receives a pill identical in appearance to the experimental drug but without the active ingredient), and a no-placebo control group (receives no drug at all).
- Both groups must be treated IDENTICALLY except for the independent variable.
- Confound - Occurs when a variable other than the independent variable (an extraneous variable) systematically affects the dependent variable. In such cases, it becomes unclear what is actually causing the effect on the dependent variable.
Example - Vitamin X vs Vitamin Y. Group 1 runs in the morning, Group 2 in the afternoon. Can you identify the problem with this? (I hope so)
Many factors can lead to confounds (here are just two examples):
- Experimenter Bias - If the researcher or any member of the research team behaves differently towards participants in one group, it may influence their behavior and consequently impact the findings. Usually, this bias is unintentional, but simply knowing the group to which a participant belongs can be sufficient to alter the way researchers interact with them.
- Participant Bias (Demand Characteristics) - Participants may modify their behavior to align with what they believe the researcher is expecting. Consequently, their actions may not reflect their natural behavior.
Types of Experimental Designs: True experiment, quasi-experiment, & correlation.
a) True Experiment: Aims to establish cause and effect.
To qualify as a true experiment, two components are necessary: manipulation of the independent variable and random assignment (RA) of participants to groups.
- Manipulation of the IV - Refers to researchers having control over the variable itself and making adjustments to it. For instance, while examining the effects of Advil on headaches, researchers can manipulate factors such as dosage, pill strength, and timing. However, gender cannot be manipulated to determine the effect of Advil on headaches in males vs. females. Is gender a true independent variable?
- Random Assignment - Involves randomly assigning participants to different groups/conditions to ensure an equal chance of being assigned to any condition.
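As a simple illustration of random assignment, the sketch below (Python; the participant labels are made up) shuffles a participant list and splits it into an experimental and a control group, giving everyone an equal chance of ending up in either condition.

```python
import random

# Hypothetical pool of participants who signed up for the study.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Shuffle the list so group membership is determined purely by chance,
# not by sign-up order, experimenter preference, etc.
random.shuffle(participants)

# Split the shuffled list in half: first half -> experimental group,
# second half -> control group.
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Experimental group:", experimental_group)
print("Control group:     ", control_group)
```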
b) Quasi-Experimental Designs: Similar to true experiments, but lack random assignment of participants to groups. One group receives the independent variable, while another does not, but the assignment is not random.
Various types of quasi-experimental designs exist (too many to discuss in detail here). What is crucial to understand is that random assignment is absent in all of them.
c) Correlation: Aims to determine the extent of the relationship between variables. However, it cannot establish cause and effect.
Correlation Coefficient (r) is used to indicate the strength of a relationship.
The coefficient ranges from -1.0 to +1.0:
-1.0 = perfect negative/inverse correlation
+1.0 = perfect positive correlation
0.0 = no relationship
Positive correlation - As one variable increases or decreases, the other variable follows suit. For example, studying and test scores.
Negative correlation - As one variable increases or decreases, the other variable moves in the opposite direction. For example, as food intake decreases, hunger increases.
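The sketch below (Python with NumPy; the numbers are invented purely for illustration) computes Pearson's r for a small, hypothetical set of study hours and test scores, showing how the coefficient summarizes both the direction and the strength of a relationship.

```python
import numpy as np

# Hypothetical data: hours spent studying and the resulting test score
# for eight students.
hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
test_scores = np.array([55, 60, 58, 70, 72, 78, 85, 88])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is Pearson's r for the two variables.
r = np.corrcoef(hours_studied, test_scores)[0, 1]
print(f"r = {r:.2f}")  # close to +1.0: a strong positive correlation

# A negative relationship (e.g., food intake vs. hunger) would yield
# an r closer to -1.0; unrelated variables would yield an r near 0.0.
```

Even a strong r such as the one above cannot, by itself, tell us whether studying causes higher scores; correlation describes the relationship but not its cause.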
The Between vs Within Subjects Design
- Between-Subjects Design: In this type of design, each participant is assigned to only one group. The results from each group are then compared to identify differences and assess the effect of the independent variable. For example, a study examining the effect of Bayer aspirin vs. Tylenol on headaches may have two groups (one receiving Bayer and the other receiving Tylenol). Participants receive either Bayer OR Tylenol, but not both.
- Within-Subjects Design: In this design, participants receive all treatments/conditions. For instance, in the aforementioned Bayer vs. Tylenol study, each participant would receive Bayer, the effectiveness measured, and then Tylenol, with effectiveness measured again. Do you notice the differences?
Validity vs. Reliability
Validity - Refers to whether a test measures what it intends to measure. If it does, then it is considered valid.
- Example - Does a stress inventory/test genuinely measure the amount of stress in a person's life and not something else?
Reliability - Indicates the consistency of a test. If similar results are obtained repeatedly, the test is considered reliable.
- Example - An IQ test is unlikely to change significantly upon repeated administration. If it consistently produces the same or very similar results, it is deemed reliable.
However, a test can be reliable without being valid, so caution must be exercised.
Example - The heavier a person's head, the smarter they are. If I weigh your head at the same time each day, once a day, for a week, the weight will likely be nearly the same each day. This demonstrates test reliability. However, do you think this test is valid in measuring your level of "smartness"? Most likely not, therefore, it lacks validity.
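To show numerically how a measure can be reliable yet not valid, the sketch below (Python with NumPy; every number is fabricated for illustration) simulates the head-weight example: repeated weighings agree closely with one another (high test-retest reliability), but head weight is essentially unrelated to an intelligence score (no validity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" head weights (kg) for fifty people.
true_head_weight = rng.uniform(4.0, 5.5, size=50)

# Two weighings on different days: each differs from the true value
# only by tiny measurement noise, so the measure is highly reliable.
day1 = true_head_weight + rng.normal(0, 0.01, size=50)
day2 = true_head_weight + rng.normal(0, 0.01, size=50)
reliability = np.corrcoef(day1, day2)[0, 1]

# Hypothetical IQ scores that have nothing to do with head weight,
# so the head-weight "intelligence test" is not valid.
iq_scores = rng.normal(100, 15, size=50)
validity = np.corrcoef(day1, iq_scores)[0, 1]

print(f"Test-retest reliability (day 1 vs. day 2): r = {reliability:.2f}")  # expect close to +1.0
print(f"'Validity' (head weight vs. IQ):           r = {validity:.2f}")     # expect close to 0.0
```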