Reliability and validity

Research methods

  • Created by: Natasha
  • Created on: 12-01-10 15:49

Reliability

reliability - the extent to which a method of assessment gives consistent results when replicated (e.g. questionnaires)

internal reliability - refers to how consistently a method measures within itself (e.g. are the intervals on the ruler the same?) - checked using the split-half method

external reliability - refers to how consistent a method is over a number of applications (e.g. do the same results occur when the test is repeated on the same people?)


Ways of ensuring reliability

1. test-retest: pps complete the same test twice, with a gap in between. if they score similar results on both occasions, the method is reliable

2. split-half: this involves splitting the test into two halves. a correlation between the two halves is carried out to check that both halves of the test are of equal difficulty; if they are, the test is reliable (see the sketch after this list)

3. observer (inter-rater): used when carrying out an observation. to ensure observer reliability, you must have at least two observers watching and recording what they see in the same way, using the same forms. if all observers record the same things, their observations are reliable
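
a minimal sketch (not part of the original notes) of how the test-retest and split-half checks above come down to a correlation; all scores are made-up and the pearson_r helper is just for illustration:

```python
# Illustrative only: test-retest and split-half reliability as correlations.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# test-retest: the same pps sit the same test twice, with a gap in between
first_sitting  = [12, 18, 9, 15, 20, 11]
second_sitting = [13, 17, 10, 14, 19, 12]
print("test-retest r =", round(pearson_r(first_sitting, second_sitting), 2))

# split-half: correlate each pp's score on one half of the test (e.g. odd items)
# with their score on the other half (even items)
odd_half  = [6, 9, 5, 8, 10, 6]
even_half = [6, 9, 4, 7, 10, 5]
print("split-half r =", round(pearson_r(odd_half, even_half), 2))
```

a correlation close to +1 on either check suggests the test is reliable.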


Validity

validity - the extent to which a test measures what it claims to measure and nothing else, e.g. is an IQ test measuring intelligence or social background?

internal validity - refers to whether a study's results are due to the variables suggested by the researchers (did the IV really cause the change in the DV?) - checked using face validity

external validity - refers to whether the results can be applied to different environments or different pps (e.g. ecological or population validity)


Ways of ensuring validity

1. content validity: this involves an independent expert examining the content of the research method to see if it looks like it is measuring what it is supposed to measure. if they agree that it is, then the test/method has good validity.

2. concurrent validity: this involves comparing a new test with an already established test designed to measure the same thing. if scores are similar, then the new test is valid (see the sketch after this list).

3. predictive validity: this involves checking validity by seeing if future behaviour is consistent with what we could predict based on our test, e.g. if someone is diagnosed as schizophrenic we would expect them to show further symptoms.
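
a minimal sketch (not part of the original notes) of a concurrent validity check: the same pps take a hypothetical new questionnaire and an already established one, and the two sets of scores are correlated. all scores are made-up and pearson_r is the same illustrative helper as above:

```python
# Illustrative only: concurrent validity as a correlation between a new test
# and an established test taken by the same pps.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

new_test_scores         = [34, 21, 45, 28, 39, 25]
established_test_scores = [36, 20, 47, 30, 37, 26]
print("concurrent validity r =",
      round(pearson_r(new_test_scores, established_test_scores), 2))
```

scores that rise and fall together (r close to +1) suggest the new test measures the same thing as the established one, so it is valid.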

