Variables, control, demand characteristics, reliability and validity (RM VI)

  • Created by: asusre
  • Created on: 19-04-21 19:21
What is an extraneous variable?
Extraneous variables are any variable other than the independent variable that may affect the dependent variable.
1 of 56
What are confounding variables?
Confounding variables are unforeseen, uncontrolled extraneous variables that prevent us from establishing a cause-and-effect relationship because they affect one condition more than another.
2 of 56
What are situational variables?
Situational variables are aspects of the situation which impact the performance of the participants e.g., noise, temperature, time of day, weather, instructions.
3 of 56
How do you control situational variables?
Situational variables are controlled using standardisation.
4 of 56
What is standardisation?
Standardisation involves using exactly the same formalised procedures and instructions for all participants. This includes standardised instructions, which are read out to each participant.
5 of 56
What are order effects?
Order effects are differences in participants’ performance that result from the order in which they take part in the conditions e.g., improving with practice or declining as they get tired/bored.
6 of 56
How do you control order effects?
Order effects are controlled using counterbalancing.
7 of 56
What is counterbalancing?
Counterbalancing means that all participants take part in all the conditions of the experiment but in different orders, e.g. half the participants experience the conditions in one order, and the other half in the other order.
8 of 56
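For illustration only, a minimal Python sketch of counterbalancing a repeated measures design with two hypothetical conditions (A and B) and a made-up set of participants:

```python
# Alternate the two orders as participants are assigned, so half complete
# the conditions as A then B and half as B then A.
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs

orders = {}
for index, participant in enumerate(participants):
    orders[participant] = ["A", "B"] if index % 2 == 0 else ["B", "A"]

for participant, order in orders.items():
    print(participant, "->", " then ".join(order))
```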
What are participant variables?
Participant variables are individual differences between people which can impact the DV, e.g., gender, age, motivation, personality, intelligence, concentration.
9 of 56
How do you control participant variables?
Participant variables are controlled using random allocation.
10 of 56
What is random allocation?
Participants are randomly allocated to the different experimental conditions, e.g. using a lottery technique, in an attempt to distribute participant characteristics evenly across the conditions of the experiment.
11 of 56
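A minimal sketch of random allocation to two conditions, assuming a hypothetical pool of participant IDs:

```python
import random

# Hypothetical pool of participants who signed up for the study.
participants = [f"P{i:02d}" for i in range(1, 17)]

# The "lottery": shuffle the pool so chance, not the researcher, decides
# which condition each person ends up in.
random.shuffle(participants)
experimental_group = participants[:len(participants) // 2]
control_group = participants[len(participants) // 2:]

print("Experimental condition:", experimental_group)
print("Control condition:", control_group)
```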
What are investigator effects?
Investigator effects refer to conscious or unconscious behaviour of the researcher which may impact the DV. This includes the design of the study, and the selection of and interaction with participants.
12 of 56
How do you control investigator effects?
Investigator effects are controlled using a double-blind procedure and randomisation.
13 of 56
What is double-blind procedure?
A double-blind procedure means that neither the participants nor the researcher is aware of the aims of the study (often a third party conducts the investigation).
14 of 56
What is randomisation?
Randomisation is the use of chance methods to control for investigator effects when designing materials and deciding the order of experimental conditions.
15 of 56
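As a rough illustration, randomisation might look like the following Python sketch, using a made-up word list and condition labels:

```python
import random

# Hypothetical word list for a memory task: shuffling the presentation order
# removes any unconscious ordering bias introduced by the investigator.
word_list = ["apple", "river", "candle", "mirror", "garden", "pencil"]
random.shuffle(word_list)
print("Presentation order:", word_list)

# The order of the experimental conditions can also be left to chance.
conditions = ["loud noise", "quiet"]
random.shuffle(conditions)
print("Condition order:", conditions)
```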
What are demand characteristics?
Demand characteristics are clues about the aim of the study that may lead to participants changing their behaviour. Participants may over-perform to please the experimenter or deliberately under-perform to sabotage the study.
16 of 56
How do you control demand characteristics?
Demand characteristics can be controlled using an independent groups design.
17 of 56
What is validity?
Validity is the extent to which something measures what it is supposed to measure.
18 of 56
What is internal validity?
Internal validity is the extent to which the study has measured what it aimed to measure, so that the results are due to the manipulation of the IV and not some other factor.
19 of 56
What is external validity?
External validity is the extent to which the results of a study can be generalised to other settings.
20 of 56
What is ecological validity?
Ecological validity is the extent to which findings from a research study can be generalised to other settings and situations/to everyday life.
21 of 56
What is mundane realism?
Mundane realism is the extent to which the experimental task mirrors the equivalent situation in real life.
22 of 56
What is population validity?
Population validity is the extent to which results from research can be generalised to the target population.
23 of 56
What is temporal validity?
Temporal validity is the extent to which findings from a research study can be generalised to other historical times.
24 of 56
What are the different ways to test validity?
Concurrent validity, predictive validity, face validity and triangulation are ways of assessing validity.
25 of 56
What is triangulation?
Triangulation means comparing the results of a variety of research studies using different methodologies to check if they are similar and thus valid.
26 of 56
What is concurrent validity?
Concurrent validity is the extent to which a new psychological measure relates to an existing measure.
27 of 56
How do you find concurrent validity?
This involves comparing participants’ scores on a measure of unknown validity (the new test) with their scores on a measure of established validity (an old test). A significant positive correlation indicates the new measure is valid.
28 of 56
What is predictive validity?
Predictive validity measures the extent to which a test can predict performance on future tests.
29 of 56
How do you find predictive validity?
Finding predictive validity involves gathering two sets of scores at two different points in time and comparing them. A significant positive correlation indicates the new measure is valid.
30 of 56
What is face validity?
Face validity is a simple way of assessing whether or not something measures what it is supposed to measure on the face of it, e.g. does an IQ test look like it tests intelligence?
31 of 56
How do you find face validity?
Face validity involves an “eyeball test”, where independent experts assess whether the measuring instrument appears to be appropriate and may make suggestions for improvement to the researcher.
32 of 56
How do you improve the validity of an experiment?
Validity can be improved using control groups. Standardised procedures and single/double-blind procedures also minimise the impact of participant reactivity and investigator effects.
33 of 56
How do you improve the validity of a questionnaire?
A lie scale in a questionnaire assesses the consistency of a respondent’s responses and controls for the effects of social desirability bias. Validity can be enhanced further by assuring respondents that all data submitted will remain anonymous.
34 of 56
How do you improve the validity of an observation?
Observations produce data high in ecological validity because there is minimal intervention by the researcher, especially in covert observations. Validity may be reduced by behavioural categories that are too broad, overlapping or ambiguous, so these should be refined.
35 of 56
What is the validity of qualitative research?
Qualitative research has higher ecological validity than quantitative research because the depth of detail better reflects the participants’ reality.
However, the researcher may still have to demonstrate interpretive validity.
36 of 56
What is interpretive validity?
Interpretive validity is the extent to which the researcher’s interpretation of events matches that of their participants.
37 of 56
How can you demonstrate interpretive validity?
Interpretive validity can be demonstrated through such things as the coherence of the researcher’s narrative and the inclusion of direct quotes from participants within the report. Validity is increased using triangulation – the use of a number of different sources as evidence.
38 of 56
What is reliability?
Reliability is the extent to which something produces consistent results when replicated.
39 of 56
How do you assess internal reliability of a psychological test or questionnaire?
The internal reliability of a test or questionnaire can be measured using the split-half method.
40 of 56
What is the split-half method?
The split-half method measures the extent to which all parts of the test contribute equally to what is being measured.
41 of 56
How do you carry out the split-half method?
This is done by comparing the results of one half of a test with the results from the other half. A test can be split in half in several ways, e.g. first half and second half, or by odd and even question numbers. If the two halves of the test provide similar results, it has high internal reliability.
42 of 56
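As a rough illustration only (made-up item scores; Python 3.10+ for statistics.correlation), an odd/even split might look like this:

```python
from statistics import correlation

# Made-up item scores: each inner list is one participant's score on each
# of ten questionnaire items.
scores = [
    [4, 5, 3, 4, 5, 4, 3, 4, 5, 4],
    [2, 1, 2, 3, 1, 2, 2, 1, 3, 2],
    [3, 4, 4, 3, 4, 3, 4, 4, 3, 3],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    [1, 2, 1, 2, 2, 1, 1, 2, 2, 1],
]

# Split each participant's test into odd- and even-numbered items and total each half.
odd_half_totals = [sum(items[0::2]) for items in scores]
even_half_totals = [sum(items[1::2]) for items in scores]

# A strong positive correlation between the halves suggests the items measure
# the same construct consistently, i.e. high internal reliability.
print(round(correlation(odd_half_totals, even_half_totals), 2))
```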
What is one strength of the split-half method?
One strength of the split-half method is that it is a quick and easy way to establish reliability.
43 of 56
What is one limitation of the split-half method?
The split-half method is only effective with large sets of questions which all measure the same construct.
44 of 56
How do you measure the external reliability of a psychological test, questionnaire or interview?
The test-retest method measures the external reliability of a test, questionnaire or interview.
45 of 56
What is the test-retest method?
The test-retest method measures the reliability of a test over time.
46 of 56
How do you carry out the test-retest method?
The test-retest method involves giving participants the same test on two separate occasions. If they give the same or similar results, then it is reliable.
47 of 56
When would you retest the participants?
There must be sufficient time between test and retest. If the gap is too short, participants may simply remember and repeat their previous answers; if it is too long, their attitudes or abilities may have genuinely changed.
48 of 56
How would you measure the external reliability of an observation or interview?
Inter-rater reliability measures the external reliability of an observation or interview where there is a risk of subjectivity and bias.
49 of 56
What is inter-rater reliability?
Inter-rater reliability is the degree to which different raters give consistent estimates of the same behaviour. It tests that the observers are applying behavioural categories in the same way.
50 of 56
How would you find inter-rater reliability?
Finding inter-rater reliability involves two researchers observing the same behaviour independently (to avoid bias) and comparing their data. They can observe on-site or watch a recording.
51 of 56
How is reliability measured in all these methods?
Reliability is measured using correlational analysis. The two sets of scores are plotted on a scattergram and correlated; a significant positive correlation indicates that the measure is reliable.
52 of 56
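A minimal sketch of this correlational check, assuming SciPy is available and using made-up test and retest scores; pearsonr returns both the correlation coefficient and a p-value for significance:

```python
from scipy.stats import pearsonr  # assumes SciPy is installed

# Made-up scores from the same participants on two occasions; the same logic
# applies to split halves, a new test vs. an established test, or two observers.
test_scores = [12, 18, 9, 15, 20, 11, 14, 17]
retest_scores = [13, 17, 10, 14, 19, 12, 15, 16]

r, p_value = pearsonr(test_scores, retest_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A strong, significant positive correlation suggests the measure is reliable.
```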
How can the reliability of questionnaires be improved?
Reliability of questionnaires is tested using the test-retest method. Unreliable questionnaires may require ambiguous questions to be rewritten, or open questions to be replaced with closed questions.
53 of 56
How can the reliability of interviews be improved?
The best way of ensuring the reliability of interviews is to use the same interviewer each time. If this is not possible, the interviewers must all be properly trained, for example in not asking ambiguous or leading questions. This is easier in structured interviews.
54 of 56
How can the reliability of observations be improved?
Reliability of observations can be improved by operationalising behavioural categories so they are measurable and self-evident. Categories should not overlap and all possible behaviour should be covered on the checklist.
55 of 56
How can the reliability of experiments be improved?
Reliability of experiments can be improved by using standardised procedures.
56 of 56
