Reliability

  • Created by: Phoenix_
  • Created on: 28-11-17 21:50

Definition

  • Reliability:
    • A measure of consistency 
    • Refers to how much we can depend on a given measurement
    • (linked to replicability)
  • External Reliability:
    • The ability to produce the same results every time the test is carried out
  • Internal Reliability:
    • The consistency within a test 
    • e.g. attitude/psychometric tests such as personality tests, or behavioural categories in an observation (2 observers may class the same behaviour differently)
1 of 7

Experiment - Improving Reliability

  • Standardisation:
    • same procedures are repeated for different participants 
    • important that procedures are the same each time, otherwise we can't compare results
    • operationalisation is essential
  • Repeats:
    • Better to take more than 1 measurement from each participant
    • e.g. Reaction time study - catch ruler 3 times instead of 1
  • Pilot Studies:
    • are done to discover any problems with research design
    • e.g. participants misunderstanding instructions, or whether timings are adequate
2 of 7
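The repeats point above can be illustrated with the ruler-catch example: drop distance converts to reaction time via d = ½gt², and averaging several catches gives a more reliable measurement than one. A minimal sketch with made-up drop distances:

```python
import math

G = 9.81  # acceleration due to gravity, m/s^2

def reaction_time(drop_cm):
    """Reaction time from ruler drop distance: d = 0.5*g*t^2, so t = sqrt(2d/g)."""
    return math.sqrt(2 * (drop_cm / 100) / G)

# Three repeats per participant (hypothetical distances), averaged
drops_cm = [18.0, 22.5, 20.1]
times = [reaction_time(d) for d in drops_cm]
mean_time = sum(times) / len(times)
print(f"mean reaction time = {mean_time:.3f} s")
```

Averaging over repeats smooths out one-off lapses of attention, which is why the notes recommend catching the ruler 3 times instead of once.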

Observation - Improving Reliability

  • Behavioural Categories:
    • Observational categories need to be fully operationalised
  • Pilot Studies:
    • Are done to discover any problems with research design 
    • e.g. poorly defined behavioural categories or inadequate training
  • Standardisation:
    • When more than 1 investigator is used - method of collecting data should be standardised
    • May require training -> clear criteria must be established so observers look for & record the same info (filming behaviour & practising categorisation)
3 of 7

Self-report - Improving Reliability

  • Reduce Ambiguity:
    • Low reliability may stem from ambiguous questions
    • Solution -> Replace some open questions with closed questions to reduce ambiguity
  • Pilot Studies:
    • To discover any problems such as: leading questions in questionnaires/ interviews
  • Standardisation:
    • All participants should be subject to the same environment, information and questions
    • Use the same researcher in interviews (same approach to asking questions, body language, tone, etc.)
    • [structured interviews are preferred as they are controlled by fixed questions]
4 of 7

Experiment - Assessing Reliability

  • Test-Retest Reliability:
    • Results from the same test/procedure repeated by the same participants after a short interval (e.g. a week - so they don't remember their answers) are compared by the researcher
    • The researcher assesses the degree of reliability using statistical tests (comparison of the critical value and correlation coefficient)
    • If there is a strong positive correlation between the 2 sets of data then it is reliable
  • Reliability of Observational Techniques:
    • Observations are a form of measurement 
    • Researcher will keep a record of events using behavioural categories
    • Score for inter-rater reliability (if the observers' scores are not similar, improvement is needed)
5 of 7
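The test-retest comparison above boils down to correlating two sets of scores. A minimal sketch, with made-up reaction-time scores from the same participants one week apart:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores from the same 6 participants, test vs. one week later
test_1 = [210, 340, 265, 300, 255, 280]
retest = [205, 350, 270, 295, 260, 290]

r = pearson_r(test_1, retest)
print(f"test-retest r = {r:.2f}")
```

A coefficient close to +1 (here roughly 0.99) would be compared against the critical value for the sample size; exceeding it indicates the measure is reliable.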

Observation - Assessing Reliability

  • Inter-Observer Reliability (a.k.a. Inter-Rater Reliability):
    • Definition: The extent to which observers agree on the observations they record
    • 2 or more psychologists devise a set of behavioural categories to code behaviour
    • Each researcher carries out the observation independently (tallying each behavioural category when observed)
    • At the end, totals are compared/correlated with the other observers'
    • Strength of correlation determined by appropriate statistical test 
    • Strong positive correlation shows it is reliable
6 of 7
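The inter-observer procedure above can be sketched with made-up tallies: each observer independently counts how often each behavioural category occurs, then the two tallies are correlated.

```python
from statistics import mean

# Hypothetical tallies: how many times each of two independent observers
# recorded each behavioural category (e.g. aggression, play, grooming, feeding)
observer_a = [12, 30, 7, 18]
observer_b = [14, 28, 6, 19]

mx, my = mean(observer_a), mean(observer_b)
cov = sum((a - mx) * (b - my) for a, b in zip(observer_a, observer_b))
var_a = sum((a - mx) ** 2 for a in observer_a)
var_b = sum((b - my) ** 2 for b in observer_b)
r = cov / (var_a * var_b) ** 0.5  # Pearson correlation between the two tallies

print(f"inter-observer r = {r:.2f}")
```

A strong positive correlation between the observers' tallies suggests the behavioural categories are well operationalised; a weak one signals that the categories or observer training need improvement.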

Self-Report - Assessing Reliability

  • Test-Retest Reliability:
    • Results from a repeated questionnaire/interview completed by the same participants after a short interval (e.g. a week - so they don't remember their answers) are compared
    • The researcher assesses the degree of reliability using statistical tests (comparison of the critical value and correlation coefficient)
    • If there is a strong positive correlation between the 2 sets of data then it is reliable
  • Split-Half Method:
    • Each participant's scores on 1 half of a test should be correlated with their scores on the other half of the test
    • (usually a psychometric test e.g. IQ or Personality test)
    • Strength of correlation determined by appropriate statistical test
    • The correlation coefficient is compared with the critical value; a strong positive correlation between the sets of scores means it is reliable
7 of 7
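The split-half method above can be sketched as follows, with made-up item scores: each participant's test is split into odd- and even-numbered items, half-scores are summed, and the two halves are correlated across participants.

```python
import math

# Hypothetical scores of 4 participants on a 10-item psychometric test
participants = [
    [3, 4, 2, 5, 4, 3, 5, 4, 2, 3],
    [1, 2, 2, 1, 3, 2, 1, 2, 2, 1],
    [5, 5, 4, 5, 4, 5, 5, 4, 5, 5],
    [2, 3, 3, 2, 2, 3, 2, 3, 3, 2],
]

half_a = [sum(items[0::2]) for items in participants]  # odd-numbered items
half_b = [sum(items[1::2]) for items in participants]  # even-numbered items

n = len(half_a)
ma, mb = sum(half_a) / n, sum(half_b) / n
cov = sum((a - ma) * (b - mb) for a, b in zip(half_a, half_b))
r = cov / math.sqrt(sum((a - ma) ** 2 for a in half_a)
                    * sum((b - mb) ** 2 for b in half_b))
print(f"split-half r = {r:.2f}")
```

If the test has internal reliability, a participant who scores high on one half should score high on the other, so a strong positive correlation between the halves is expected.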
