Research Methods revision


Peer Review

Peer review: The assessment of scientific work by others who are experts in the same field. The intention is to ensure that any research conducted/published is of high quality.

Serves 3 main purposes:

1) Allocation of research funding

  • Government/charitable bodies who fund research need to decide which research is likely to be worthwhile

2) Publication of research in scientific journals/books

  • Peer review aims to prevent incorrect or faulty data from entering the public domain

3) Assessing the research rating of university departments

  • All university science departments are expected to conduct research, which is assessed in terms of quality 
  • Future funding for a department depends on receiving good ratings from peer review

AO2: Peer review

Unachievable ideal

  • It isn't always possible to find an appropriate expert with the same specialist interest to review research


Anonymity

  • Anonymity allows a reviewer to feel they can be honest, but may also enable dishonesty (e.g. if a reviewer wishes to settle an old score). Some journals now use open reviewing instead.

Publication bias

  • Peer review tends to favour the publication of positive results, possibly because editors want research with important implications to increase their journal's standing

Preserve the status quo

  • Peer review results in a preference for research that doesn't challenge existing theory

AO2: Peer review (continued)

Can't deal with already published research

  • Once research has been published, it continues to be used even if later found to be fraudulent

An alternative

  • Online blogs/journals invite comments from any reader as a means of peer reviewing

Lab experiments

  • The IV is manipulated by the experimenter to observe its effect on the DV
  • These are highly controlled

Strengths:

  • Can draw causal conclusions
  • Extraneous variables are minimised
  • Can be easily replicated

Weaknesses:

  • Contrived; tends to lack mundane realism
  • Investigator bias
  • Participant effects (e.g. demand characteristics)

Field experiments

  • Conducted in more natural surroundings
  • The IV is directly manipulated by the experimenter
  • These are less controlled

Strengths:

  • Can draw causal conclusions
  • Higher ecological validity
  • Reduced experimenter effects

Weaknesses:

  • Less control
  • May have demand characteristics
  • Difficult to control extraneous variables

Natural experiments

  • The IV is not directly manipulated, and participants are not randomly allocated
  • Makes use of existing IVs

Strengths:

  • Allows research where the IV can't be manipulated for ethical/practical reasons
  • Enables psychologists to study 'real problems'

Weaknesses:

  • Can't demonstrate causal relationships
  • Many extraneous variables
  • Investigator/participant effects
  • Participants not being randomly allocated reduces validity

Experimental designs: Repeated Measures

The same participants are used in both conditions

Strength:

  • Fewer participants are needed

Weaknesses:

  • Order effects may occur
  • Tiredness could affect results

Experimental designs: Independent groups

Participants are randomly allocated to different groups, which represent the different conditions

Strength:

  • No order effects will occur

Weaknesses:

  • More participants are needed
  • Individual differences between participants

Experimental designs: Matched Pairs

Pairs of participants are closely matched and then randomly allocated to one of the experimental conditions

Strengths:

  • Individual differences are taken into account
  • No order effects will occur

Weakness:

  • More participants are needed

Observational techniques: Naturalistic

- Everything is left as normal
- All variables are free to vary

Strengths:

1) Can study behaviour where it isn't possible to manipulate variables
2) High ecological validity

Weaknesses:

1) Poor control of extraneous variables
2) Observer bias
3) Low inter-observer reliability


Observational techniques: Controlled

- Some variables are controlled by the researcher

Strength:

1) Can manipulate variables to observe their effects

Weaknesses:

1) Less natural, reduced ecological validity
2) Investigator/participant effects
3) Observer bias
4) Low inter-observer reliability


Observational techniques: Content Analysis

Content analysis:
- Indirect observation of behaviour
- Based on written/verbal material such as interviews or TV

Strength:

- High ecological validity

Weakness:

- Observer bias


Self-report: Questionnaires

A set of written questions

Strengths:

1) Easily repeated, and lots of people can be questioned, giving a bigger sample
2) Respondents may be more willing to reveal personal information
3) Doesn't require a specialist to administer

Weaknesses:

1) Social desirability bias
2) Biased samples
3) Investigator effects/leading questions
4) Demand characteristics


Self-report: Interviews


Strengths:

1) More detailed information
2) Can access unexpected information
3) Can gather quantitative or qualitative data depending on the questions

Weaknesses:

1) Time-consuming
2) Social desirability bias
3) Interviewer bias
4) Leading questions
5) Requires well-trained personnel


Correlational techniques

Co-variables are examined for a positive, negative or zero correlation

Strengths:

1) Can be used when it isn't possible to manipulate variables
2) Can rule out causal relationships (a zero correlation suggests no causal link)

Weaknesses:

1) People often misinterpret correlations (correlation does not show causation)
2) There may be other, unknown variables


Case studies

A detailed study of a single individual, institution or event
Involves many different techniques

Strengths:

1) Rich, in-depth data is collected
2) Can be used to investigate unusual instances of behaviour
3) Complex interactions can be studied

Weaknesses:

1) Can't generalise to the wider population
2) May involve unreliable, retrospective recall
3) The researcher may lack objectivity


Reliability: Internal & external

Internal reliability:
A measure of the extent to which something is consistent within itself
- e.g. all questions on an IQ test should be measuring the same thing

External reliability:
A measure of consistency over several different occasions


Reliability: experimental research

Refers to the ability to repeat a study and obtain the same results (replication)

- Replications are conducted to test the reliability/validity of the original result
- If the same result is obtained the second time, the result is more likely to be legitimate/valid
- It is essential that all conditions are the same when conducting a replication
- Otherwise, if the results are now different, this may be due to the changed conditions rather than a lack of validity


Reliability: Observational Techniques

1) Observers should be consistent
- Ideally 2 or more observers should produce the same results/records

2) Assessing reliability
- Inter-rater/inter-observer reliability: the extent to which 2 or more observers agree
- Calculated by dividing the total number of agreements by the total number of observations
- It should be at least +.80

3) Reliability can be improved by training observers in the use of techniques such as coding systems
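The agreement calculation can be sketched in a few lines (hypothetical observation records, not from the source):

```python
# Two observers each record a behaviour category for the same
# 10 observation intervals (hypothetical data)
observer_a = ["play", "rest", "play", "feed", "play",
              "rest", "play", "play", "feed", "rest"]
observer_b = ["play", "rest", "play", "feed", "rest",
              "rest", "play", "play", "feed", "rest"]

# Inter-observer reliability = total agreements / total observations
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)

print(reliability)  # 0.9, above the conventional +.80 threshold
```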


Reliability: Self-report techniques

Internal reliability

Can be assessed using the split-half method:
- Compare a person's performance on 2 halves of a questionnaire or test
- There should be a close correlation between the scores from the two halves

External reliability

Can be assessed using the test-retest method:
- Person is given a questionnaire/interview/test on one occasion
- This is repeated again after a reasonable interval
- If the measure is reliable, the outcome should be the same


Validity: Internal & External

Internal validity

Concerns what goes on inside a study
Whether the researcher did test what they intended to test

External validity

Concerns things outside a study
Extent to which the results of a study can be generalised to other situations/people


Validity: Experimental research

Internal Validity

- Affected by extraneous variables that act as an alternative IV
- If changes in the DV are due to EVs rather than the IV, then conclusions about the effect of the IV on the DV are incorrect

External Validity

- Can be affected by the contrived/artificial nature of lab experiments
- Should consider issues such as:
1) Hawthorne effect
2) Participant effects
3) Whether the task was low in mundane realism
- If these are present, the results can't be generalised


Validity: Observational Techniques

Internal validity

1) Observations will not be valid if the coding system/behaviour checklist is flawed
- E.g. some observations may belong in more than 1 category
- Some behaviours might not be codeable

2) Observer bias
- What someone observes is influenced by their expectations
- Reduces objectivity of observations

3) Observational studies
- Likely to have high ecological validity
- Involve more natural behaviours


Validity: Self-report Techniques

Face validity

- Does the test look like it is measuring what the researcher intended to measure?
- e.g. are the questions obviously related to the topic?

Concurrent validity

- Can be established by comparing performance on a new questionnaire/test with a previously established test on the same topic

External validity

- Likely to be affected by biased sampling strategies



Sampling

Sampling techniques aim to select a representative sample from a target population, in order to generalise from the sample to the target population

A sample that is not representative is described as biased.

A biased sample means that any generalisations lack external validity.


Sampling: Volunteer


- The sample is obtained through advertising
- Individuals actively choose to contact the researcher to take part
- Could involve an incentive (e.g. money)

Strengths:

- No researcher bias (the researcher does not select who volunteers)
- Can be more representative

Weakness:

- Volunteer bias: the same type of person (e.g. outgoing) might be more likely to volunteer


Sampling: Opportunity


The sample consists of whoever is available to the researcher at a given place and time.

Strengths:

- Requires little effort; a large sample can be obtained quickly
- The most practical/easiest method

Weakness:

- Can't generalise findings; the sample is unrepresentative of the target population


Sampling: Random


Participants are selected from the target population using a random technique (e.g. a random number generator, names drawn from a hat)

Strength:

- No researcher bias (all members of the target population have an equal chance of being chosen)

Weaknesses:

- May still end up with a biased sample by chance
- May be biased if selected people refuse to take part
- Difficult when the target population is large
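A minimal sketch of random selection (hypothetical population of 10 participants, standard-library Python):

```python
import random

# Hypothetical target population of 10 participants
population = ["P1", "P2", "P3", "P4", "P5",
              "P6", "P7", "P8", "P9", "P10"]

# random.sample gives every member an equal chance of selection,
# removing researcher bias from who is picked
sample = random.sample(population, 4)

print(sample)  # 4 distinct participants; varies each run
```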


Sampling: Stratified


- Involves classifying the population into categories and then choosing a sample which consists of participants from each category
- The sample must contain the same proportions from each category as exist in the population
- E.g. if the target population is 75% women and 25% men, a sample of 20 would have 15 women and 5 men


Strength:

- More representative than other methods; there is proportional representation of subgroups

Weakness:

- More time-consuming
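The worked example above (75% women, 25% men, sample of 20) can be sketched as:

```python
# Proportions in the target population (from the example above)
population_pct = {"women": 75, "men": 25}
sample_size = 20

# Each category gets the same proportion of the sample as it
# has in the population
allocation = {group: sample_size * pct // 100
              for group, pct in population_pct.items()}

print(allocation)  # {'women': 15, 'men': 5}
```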


Sampling: Snowball


Start with one or two people, who then direct you to other similar people

Strength:

- Useful when conducting research with participants who are not easy to identify (e.g. drug users)

Weakness:

- Prone to bias: only a limited section of the population is contacted


