Problems with Experiments
- Created by: JuliaMabiza
- Created on: 15-04-19 22:39
- Problems With Experiments
- Extraneous Variables
- Participant Variables
- i.e. the characteristics of a person, e.g. age, intelligence, gender. These only act as EVs if an independent groups design is used.
- How can these be controlled? Use the same pps in both conditions (repeated measures), match pps on relevant characteristics, e.g. age or gender (matched pairs design), or use random allocation.
- Situational Variables
- i.e. features of the research situation that may influence participants' behaviour, e.g. time of day, temperature, noise.
- How can these be controlled? Use standardised instructions/procedures; use counterbalancing to control for order effects.
- Investigator Effects
- Anything the investigator does that has an effect on a participant's performance in a study other than what was intended. Effects can be direct (a consequence of the investigator interacting with the pp) or indirect (a consequence of the investigator designing the study).
- a) Expectancy effects: what the researcher expects to find in the investigation can be communicated to the pps through subtle cues, e.g. tone of voice, smiling, nodding, frowning. Researchers may unconsciously encourage participants, e.g. by spending more time with one group or being more positive towards them.
- Indirect Investigator effects
- Investigator experimental design effect, e.g. deliberately giving one group more time to increase the chances of a noticeable effect.
- Investigator loose procedure effect: not clearly specifying the instructions or procedure, increasing the chances of the investigator influencing the research.
- Investigator fudging effect: the investigator deliberately fakes data.
- How can these be controlled? Standardised procedures/instructions; double blind, i.e. both pps and researcher are unaware of the true aims.
- Demand Characteristics
- a cue that makes pps unconsciously aware of the aims of the study or helps them work out what the researcher expects to find.
- How can these be controlled? Single blind, i.e. pps are not told the true aims of the study.
- These do not vary systematically with the IV, and therefore do not act as an alternative IV, but they may have an effect on the DV. They are nuisance variables which make it more difficult to detect a significant effect.
- Participant Effects
- The Hawthorne Effect
- How can this be controlled? Experimental realism, i.e. making the study sufficiently engaging that participants pay attention to the task rather than to the fact that they are being observed.
- Social Desirability Bias
- a tendency for respondents to answer questions in a way that will present them in a better light
- How can this be controlled?-confidentiality
- Refers to the situation in which the independent variable appears to have an effect on behaviour because pps know they are being studied.
- Pilot Study
- A small-scale trial run of a study to test aspects of the design, with a view to making improvements. It allows the researcher to check standardisation, and any problems can be adjusted before the main study.
- Sources of bias reduced: identification of participants, situational variables, participant reactivity.