Research methods for test

  • Created by: Ikra Amin
  • Created on: 23-10-14 13:11

The major features of science

Key features of scientific method:

  • Empiricism - information is gained through direct observation or experiment, rather than by reasoned argument or unfounded beliefs.
  • Objectivity - scientists try to be objective in their observations and measurements, so that their own expectations and preconceptions do not affect findings.
  • Replicability - one way to discover if findings can be verified is to repeat an investigation. If the outcome is the same, this confirms the original results. In order for replication to be possible, procedures have to be carefully controlled/standardised and recorded.
  • Control - scientists try to establish causal relationships (show that one thing causes something else to happen) to help them to predict and control aspects of the world. The experimental method is the only way to do this - vary one factor (IV) and measure the effect of the change in this factor on another variable (DV). In order for this to be a fair test, all other variables must be controlled (kept constant).
  • Theory construction - one aim is to record facts, but an additional aim is to use these facts to construct theories to help us understand and predict things around us.
  • Hypothesis testing - clearly operationalised variables are developed to create unambiguously phrased, testable predictions, which can then be empirically tested.
1 of 58

Scientific or nah


·         empirically based, ie it concentrates on direct observation or experiment

·         collecting scientific information is systematic and controlled

·         the reporting of scientific information aims to be unbiased and objective

·         ideas and hypotheses can be tested using scientific methods

Non science

·         frequently intuitive, ie it is based on reasoned argument or beliefs

·         collecting non scientific information is often done in a random and uncontrolled manner

·         the reporting of non scientific information can be biased and subjective

·         non scientific ideas and hypotheses cannot, by their very nature, be tested scientifically

2 of 58


Scientific or nah

Bordens and Abbott (2008) suggest there are three types of explanation which are separate from scientific explanations:


BELIEF-BASED ARGUMENT. this refers to the tendency:

  •  to only accept ideas consistent with a current framework of beliefs
  • to place the burden of proof with sceptics
  • to ignore contradictory evidence

PSEUDOSCIENCE. this involves evidence which:

  • is not falsifiable, ie cannot be disproved
  • is not based on replicable research
  • lacks underlying theory
  • lacks the ability to change

Scientific research attempts to falsify theories; if they cannot be falsified they can't be investigated scientifically.

4 of 58

Key words for science

Empiricism - A view that suggests that experience is central to the development and formation of knowledge, and thus central to the scientific method. Experience or evidence arises out of experiments, not intuition and revelation.

Objectivity - A term that is used to refer to views being based on observable phenomena and not on personal opinion, prejudice or emotion.

Replicability - The ability for procedures and/or findings to be reproduced or repeated.

Rational - Based on reason.

Common sense - Explanations based on experience, which seem to make sense in the light of that experience.

Pseudoscience - False science.

Belief-based argument - Explanations which tend to be based on a framework of belief.

Subjectivity - Views based on personal opinion or limited experience.

5 of 58

The scientific process

1 - Develop theory from previous research/observations - e.g. the way we process information will influence how well we remember that information. 

2 - Form hypothesis - E.g. visual images create better recall than words.

3 - Test hypothesis - E.g. test people's recall of 15 images or words.

4 - Analysis - E.g. which method produced the best recall? Was the difference in recall large?

Analysis of the results may support the hypothesis and thus support the theory. If the results don't support the hypothesis we may have to improve the experiment or change the theory/hypothesis.

When carrying out experiments it is also important that we try to REPLICATE them so as to ensure that our finding was not just a freak result.

Clear operationalisation of all variables is necessary to ensure objectivity, control and replicability.

6 of 58

Can Psychology claim to be a science

Kuhn's views - Thomas Kuhn (1962) claimed that psychology could not be a science because, unlike other sciences, it has no single PARADIGM (shared set of assumptions). Psychology has a number of paradigms or approaches - cognitive, physiological, behaviourist etc. Therefore, Kuhn argued that psychology was a 'pre-science'.

Lack of objectivity and control - psychologists differ in the extent to which they consider human behaviour can be measured objectively. Problems, such as experimenter bias, demand characteristics, social desirability effect etc, may all compromise the validity of findings. However, even the 'hard' sciences are also subject to such problems. Heisenberg (1927) claimed that it's not even possible to measure a subatomic particle without altering its 'behaviour' in doing the measurement.

Are the goals of science appropriate for psychology? Some psychologists don't see the study of behaviour as a scientific pursuit. E.g. Laing (1960), in discussing the causes of SZ, claimed that it was inappropriate to view a person experiencing distress as a complex physical-chemical system that had gone wrong. Laing claimed that treatment could only succeed if each patient was treated as an individual case (ie adopting an idiographic approach), whereas science adopts a nomothetic approach, looking for trends and patterns between people. Mental illness is perhaps the area of psychology which least fits the scientific approach.

7 of 58

Can Psychology claim to be a science

What about qualitative research? Some psychologists advocate more subjective qualitative methods of carrying out research. However, these methods are still scientific in that they aim to be valid. For example, data can be collected from interviews, discourse analysis, observations, etc and triangulated - the findings from these different methods are compared with each other as a means of verifying them and making them objective.

Back to the major features of science 

If asked in an exam question about carrying out scientific research, use the six major features of science as a way of structuring the answer, considering how each can be made relevant to psychological research.

8 of 58

issues raised by use of scientific method

  • Reductionism refers to something complex being simplified to something very straight forward and simple.
  • Determinism refers to the view that people behave as they do because of their biology/genes or upbringing, which makes behaviour predictable.
  • The scientific approach is both reductionist and determinist.  It is reductionist because complex phenomena are reduced to simple variables in order to study the causal relationships between them.  It is also reductionist in the development of theories.  Occam’s razor states that ‘Of two competing theories of explanations, all other things being equal, the simpler one is to be preferred’.
  • Science is also determinist in its search for causal relationships, ie seeking to discover if X determines Y.  If we don’t take a determinist view of behaviour, this rules out scientific research as a means of understanding behaviour.
  • Reductionism and determinism are mixed blessings
9 of 58

The scientific method

FOR psychology being a science

  • Biological approach - objective data such as MRI scans is used. The data obtained is quantitative so is objective, not subjective. It is also empirical, as all of it is based on research observations.
  • Psychological studies and approaches have many theories on which they base their hypotheses; there's also empirical, replicable evidence supporting explanations such as the cognitive explanation for schizophrenia.
  • Lab experiments - controlled and standardised so can be replicated. The best theory construction also comes from lab experiments. Quantitative data, so is objective, and hypothesis testing is possible.
10 of 58

The scientific method

AGAINST psychology being a science

  • Psychodynamic approach - devised by Freud, who focused only on childhood, defence mechanisms and the different parts of the personality, and how these could cause SZ. The approach is open to interpretation and has no empirical evidence to support it, nor is it replicable, thus making it subjective. It's based on Freud's beliefs only. Self-reports were also used to ask about childhood.
  • Case studies - are not replicable as each one is different. E.g. KF.
  • Self report in case studies doesn't allow you to generalise.
  • Naturalistic and participant observations have no control, also observations have no IV.
  • Many theories, studies and approaches yield qualitative data which is subjective and possibly biased, so may be based on personal opinion or limited evidence from the person who is devising a theory/conducting the experiment. 
11 of 58

Validating new knowledge and the role of peer review

Research has to be published in a journal or be presented at a conference so that others can study the rationale behind the hypothesis, identify possible flaws in the method, or question the results and conclusion.

Bartholomew (1982) states that 'publication is an indispensable part of science'. This is because it ensures the research has been closely examined and validated by others, as well as disseminating the findings.

Describe what peer review is/role of peer review:

The assessment of scientific work by others who are experts in the same field (peers) is known as peer review. This is done before publication, and involves considering the research in terms of its validity, significance and originality. Only when a research paper has successfully gone through a process of peer review will it be published. 

12 of 58

Validating new knowledge and the role of peer review

The aim of peer review is to ensure that any research conducted and published is of high quality. Specifically it concentrates on:

  • Checking the validity of the research
  • Making a judgement about the credibility of the research (is the work their own? Have the results been made up?)
  • Assessing the quality and appropriateness of the design and methodology
  • Judging the importance or significance of the research in a wider context (is the research worth knowing/beneficial?)
  • Judging the originality of the work (checking for plagiarism)
  • Checking for reference to relevant, established research

The peer review process leads to a recommendation as to whether the research paper should be published in its original form, rejected or revised in some way.

The Parliamentary Office of Science and Technology (2002) also suggests that peer review is necessary to inform funding, to facilitate the sharing of knowledge and to assess the work of universities. 

13 of 58

Validating new knowledge and the role of peer review

Peer review and the internet

The internet allows material to be published quickly and without barriers. To a large extent, information published on the internet is policed by the 'wisdom of crowds' approach - readers decide whether it's valid or not, and post comments and/or edit entries accordingly. Hence, on the internet, 'peer' is starting to mean everyone.

Problems of validation

The UK Parliamentary Office of Science and Technology (2002) has identified ways in which research may be fraudulent:

  • Fabrication, ie the data has been made up
  • Falsification, ie the data exists but has been altered
  • Plagiarism, ie the work has been copied from others

Such fraud is not always detected by the system of peer review.

14 of 58

Problems of validation

  • Consistency with previous knowledge – there is a tendency to assume that since most findings build on previous knowledge or theory, new knowledge will 'fit' what we already know. This can lead to fraudulent findings being identified but, on the other hand, it may simply lead to a preservation of the status quo, ie demonstrating a preference for research that goes with existing theory rather than dissenting or unconventional work. Hence, according to Kuhn's view of science, peer review may slow down change in scientific theories.

  • Values in science – although psychologists try to be objective, many philosophers of science suggest it is impossible to separate research from cultural, political or personal values. If authors and reviewers share these values then findings may be published as objective science, when in fact they are subjectively interpreted.

15 of 58

Problems of validation

  • Bias in peer review – the reviewer's theoretical view may differ from that in the manuscript, eg someone reviewing work on intelligence who strongly supports the view that intelligence is innate might reject findings which suggest that upbringing is fundamentally important in determining how intelligent someone is. There is also evidence of 'institution bias', the tendency to favour research from prestigious institutions, and gender bias, the tendency to favour male researchers.

  • File drawer phenomenon – this term refers to the tendency to favour positive results (ie ones which support the hypothesis). Negative findings end up in the researcher's file drawer. This can lead to distortion in our understanding of a topic area.


16 of 58

Designing psychological investigations

Self-report method – Any method which involves asking a participant about their feelings, attitudes and so on. E.g. self-reports are questionnaires, interviews and psychometric tests, but note that self-reports are often used as a way of gaining participants' responses in an experiment.

  • Case Study – A detailed study of an individual or small group of people.
  • Observational study – observational studies are those where the researcher observes a situation and records what happens but does not manipulate the situation.

Correlation – This method investigates how strongly two or more variables are related to each other.

Experiment – A research method used by psychologists which involves the manipulation of variables in order to discover cause and effect.

  • Field Experiment – An experiment which is carried out in ‘the field’. That is, in a real world situation.

  • Laboratory experiment – An experiment which is conducted under highly controlled conditions

17 of 58

Experimental methods

Carrying out an experiment is the most objective way of obtaining data.

  • Experiments involve the manipulation or change of an independent variable, and the control of all other variables so that the effect of changing an IV can be assessed in terms of a change in the dependent variable.  In other words it allows us to test cause/effect relationships.
  • Experiments are the best method for hypothesis testing, ie researchers form a theory about behaviour from either observation or previous research. 
  • A prediction is then made on the basis of this theory (the hypothesis), and this is tested to support or challenge the theory. 
  • Degree of support or lack of support can be measured statistically, giving some idea of how likely it is that the results achieved were the result of chance or random variability.
18 of 58

Experimental methods

Types of experiments:

  • LAB: Experiment carried out under controlled conditions. The experimenter manipulates the IV to measure its effect on the DV.
  • FIELD: Experiment carried out in the natural social setting of the participant. The researcher manipulates the IV to measure its effect on the DV. Carried out in a natural setting (e.g. a cinema) and participants are not aware they're taking part.
  • NATURAL: Study of naturally occurring events. The researcher has no control over the variables and uses natural differences in the IV as the experimental conditions. AKA a quasi-experiment when the researcher has control of the research setting.
19 of 58

Experimental methods

Characteristics of an experimental study

An experiment involves the manipulation of variables. A variable is anything that can change/vary.

The Independent Variable (IV) – this is manipulated (changed) in the experiment.

The Dependent Variable (DV) – this is measured in the experiment.

The researcher believes that the independent variable influences the dependent variable.

Other variables, known as Extraneous Variables, may influence the Dependent Variable also. If this occurs the Extraneous Variable will mess up your results and this is then known as a Confounding (confusing) Variable.  Because of all this psychologists try to control all extraneous variables so any change in the DV is due to the IV. 

The main advantage of the experimental method is that it allows us to show cause and effect. This means that we can state that the IV has a direct effect upon the DV.  Other methods cannot show cause and effect, this is because they do not have as much control of extraneous variables. 

20 of 58

Experimental methods

Advantages and disadvantages of different types of experiments

Lab experiments


Advantages:

  • Show cause and effect: Controlling variables, incl. EVs, is easier in a lab. If all are controlled, it allows the experimenter to see that the IV causes the change in the DV.
  • Replication: If results are repeatable they're reliable, and the controlled setting allows this control.
  • Yield quantitative data: Objective data.

Disadvantages:

  • Low ecological validity: Artificial environment, so participants may change their behaviour, which would affect results; also cannot generalise to real life due to the artificial setting.
  • Demand characteristics: Participants try to make sense of the research they're in, so they may behave in a certain way which will again affect results.
  • Ethical issues: Informed consent.
21 of 58

Experimental methods

Field experiments:


Advantages:

  • Improved external validity: More valid results when applied to different places and settings because it's taken place under natural conditions.
  • Reduced demand characteristics: Participants won't change their behaviour as the setting is natural to them.

Disadvantages:

  • Less control: Carried out in a natural setting, so it is hard to control any EVs. They may confound results, so the DV is caused by an EV rather than the IV. This also makes it harder to replicate.
  • Costly: A natural setting needs to be used.
  • Harder to replicate: Lack of control, and so results cannot be generalised to other situations.
22 of 58

Experimental methods

Natural experiment:


Advantages:

  • Reduction of demand characteristics: The setting is natural to the participants, so they will not respond to demand characteristics.
  • Lack of direct intervention by the experimenter: Reduction of experimenter effects.
  • Less chance of experimenter bias.
  • More ecologically valid as the IV is naturally occurring.

Disadvantages:

  • IV not controlled: Something else may affect the DV (such as EVs/CVs).
  • No control over allocation to groups: Sample may be biased and restricted, thus cannot generalise.
  • Harder to replicate: Conditions won't ever be the same for participants.
  • Harder to control EVs: these may become CVs and so contaminate the results.
23 of 58

Experimental design

Participants can be allocated to conditions in one of three different ways:

 Repeated measures design - Same participant taking part in each condition of the IV.

 Independent groups design - Use different participants for each condition of the investigation.

 Matched participants design - Participants matched on a factor important to the experiment.

24 of 58

Experimental design

Repeated measures design


Advantages:

  • Participant variables won't confound results
  • Fewer participants needed

Disadvantages:

  • Participants may be 'lost' between conditions
  • Results may be influenced by order effects - ie practice may improve/impair performance in condition 2
  • Participants are likely to guess the aim of the study and respond to demand characteristics
25 of 58

Experimental design

Independent groups design


Advantages:

  • Quicker to carry out and commitment from participants is reduced
  • Less likely to respond to demand characteristics
  • The same task can be used in each condition

Disadvantages:

  • Twice as many participants are needed
  • Participant variables may confound results
  • May be difficult to keep variables constant across conditions
26 of 58

Experimental design

Matched pairs design


Advantages:

  • Reduces the effect of some key participant variables
  • No order effects to confound results
  • Participants are less likely to guess the aim of the study, so won't respond to demand characteristics

Disadvantages:

  • It may be practically difficult to establish matches, and this matching takes time
  • Requires twice as many participants
27 of 58

Correlational analysis

Correlation is not necessarily a separate design; it is simply a specific way of manipulating data, which could be obtained from any quantitative method (eg experimentation, observation using ranked categories, questionnaires). 

Correlational analysis is carried out to test for an association between two variables. 

If there is a positive correlation this means that as one variable increases so does the other, eg the taller the person, the bigger their feet. 

If there is a negative correlation this means that as one variable increases the other decreases, eg the more expensive the car, the fewer that are built.

As well as using scattergrams we can express the degree of correlation between two variables by using a statistic called a correlation coefficient. This may fall anywhere on a scale between -1 and +1.

If two variables have a perfect positive correlation, then the correlation coefficient will be +1. If two variables have a perfect negative correlation, then the correlation coefficient will be -1. A correlation coefficient of 0 indicates no relationship at all.
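The coefficient described above can be sketched as a short calculation. This is a minimal Pearson correlation coefficient in Python; the function name and the height/foot-length figures are invented purely for illustration and are not from these notes:

```python
# Minimal sketch of a Pearson correlation coefficient.
# The height/foot-length figures below are invented for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Return the correlation coefficient, a value between -1 and +1."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

heights = [150, 160, 170, 180, 190]  # cm
feet = [22, 24, 25, 27, 29]          # cm
r = pearson_r(heights, feet)         # close to +1: strong positive correlation
```

A value near +1 would plot as an upward-sloping pattern on a scattergram, near -1 as a downward-sloping one, and near 0 as no pattern at all.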

28 of 58

Correlational analysis


Advantages:

  • Correlational analysis can demonstrate an association/relationship, or lack of one, between variables.
  • It can also demonstrate the type of relationship, ie positive or negative.
  • It is a technique of statistical analysis which can be applied to data obtained by other means.

Disadvantages:

  • No matter how strong the correlation, it does not indicate causation.
  • Such analysis can only measure straight-line (linear) relationships.
  • The technique is subject to any problems associated with the method by which the data is obtained.

29 of 58

Sampling strategies

The aim of psychological research is to be able to make valid generalisations about behaviour.  However, it is only ever possible to research a relatively small number of participants (a sample) from any chosen population. 

The population is the group of people from whom the sample is drawn, and it imposes limits on generalisability.

It's not usually possible to test everyone in the target population so psychologists use sampling techniques to choose people who are representative (typical) of the population as a whole.

If your sample is representative then you can generalise the results of your study to the wider population.

It is also important to gather a large enough sample in relation to the target population.  

Small samples are easily biased by atypical individuals. 

30 of 58

Opportunity sampling

The sampling technique most often used by psychology students. It consists of taking the sample from people who are available at the time the study is carried out and who fit the criteria you are looking for. E.g. choosing the first 20 students in your college canteen to fill in your questionnaire.

Advantages:

  • Quick and easy to find participants

Disadvantages:

  • Biased due to the small selection and unrepresentative, so can't generalise

31 of 58

Random sampling

A sample in which every member of the population has an equal chance of being chosen. This involves identifying everyone in the target population and then selecting the number of participants you need in a way that gives everyone in the population an equal chance of being chosen. For example, you could put all the names of the students at your college in a hat and pick out however many you need.

Advantages:

  • Everyone has an equal chance of being chosen
  • Unbiased

Disadvantages:

  • Very difficult to conduct if the size of the population is large
  • Time consuming and costly
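The names-in-a-hat procedure can be sketched in a few lines of Python, assuming a hypothetical college of 500 students (the names and numbers are invented for illustration):

```python
# Sketch of random sampling: every member of the target population
# has an equal chance of being drawn. The population is hypothetical.
import random

target_population = [f"student_{i}" for i in range(1, 501)]  # a college of 500
sample = random.sample(target_population, 20)  # draw 20 names "from the hat"

assert len(sample) == 20
assert len(set(sample)) == 20  # drawn without replacement: no one picked twice
```

Drawing the sample is easy in code; as the card notes, the hard, time-consuming part in practice is identifying every member of a real target population in the first place.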
32 of 58

Stratified sampling

Involves classifying the population into categories and then choosing a sample which consists of participants from each category in the same proportions as they are in the population. E.g. if you wanted to carry out stratified sampling on a group of students from a sixth form college, you might decide that important variables are sex, first and second years, age, etc. You could then identify how many participants there are in each of these categories and choose the same proportion of participants in these categories for your study.

Advantages:

  • Sample should be highly representative of the target population and therefore we can generalise from the results obtained.

Disadvantages:

  • Gathering such a sample would be extremely time consuming and difficult to do
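The proportional step can be sketched as follows, assuming hypothetical strata of 300 first-years and 200 second-years (the figures are invented for illustration):

```python
# Sketch of proportional stratified sampling: each stratum contributes
# to the sample in the same proportion as it appears in the population.
# The strata sizes are hypothetical.
population = {"first_years": 300, "second_years": 200}  # a 60% / 40% split
total = sum(population.values())
sample_size = 50

quota = {stratum: round(sample_size * n / total)
         for stratum, n in population.items()}
# quota works out as 30 first-years and 20 second-years - the same
# 60/40 proportions as the population
```

Within each stratum you would then pick the quota at random, which is part of why gathering such a sample takes so much time.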
33 of 58

Self selected sampling

Self-selected sampling (or volunteer sampling) consists of participants becoming part of a study because they volunteer when asked or in response to an advert. This sampling technique is used in a number of the core studies, for example Milgram (1963).

Advantages:

  • Quick and easy to get participants

Disadvantages:

  • Sample is biased and unrepresentative of the population, so can't generalise

34 of 58

Snowball sampling

In some studies it is difficult to access participants.  For example, if you want to investigate how partially sighted people cope in education a useful technique is to start with one or two partially sighted students and ask them to put you in touch with other students who are partially sighted.


Advantages:

  • Possible to include members of groups where no lists or identifiable clusters exist (e.g. drug abusers, criminals)

Disadvantages:

  • No way of knowing whether the sample is representative of the population

35 of 58

Issues of sampling

 The key issue when choosing a sample is whether it has population validity (this is part of external validity). Population validity is increased when the sample is representative of the target population.  The more representative the sample, the more the results can be generalised to other members of the population. 

Random sampling is said to be the most representative type of sample, and which therefore has the greatest population validity.  However, as identified above, there are other (practical) problems associated with such sampling. 

 A lot of research has been criticised because it uses opportunity or volunteer samples.  For example, Banyard and Hunt (2000) reviewed all of the studies in two UK journals over a 2-year period.  They found that in 71% of the studies the sample was university students, a convenient opportunity sample for researchers, who may not represent the broader population.

Research has also shown that people who volunteer are not representative of the whole population.  For example, Lonnqvist et al (2007) found that people who volunteer are more stable and outgoing than those who do not.

 Finally, people cannot be forced to take part in research, which inevitably leads to some degree of bias in selecting participants.

36 of 58

Validity - how truthful something is

The issue of validity raises two questions:  are the conclusions drawn from data justified, and can we trust the data to represent what we intended it to?  You can think of validity as the truthfulness of the measure; a valid measure is one that measures what it claims to measure.  There are many types of validity but it is crucial to consider two types when designing a study.  These are internal and external validity.

Internal validity (also referred to as experimental validity) - is it measuring what it's supposed to measure

 This deals with the issue of whether research measures what it set out to. 

For example, even when an apparent difference is found in research findings, researchers must ask whether it really was the independent variable that produced the change in the dependent variable.  In other words, is it a genuine ‘effect’?

37 of 58


Coolican (1994) identified threats to internal validity, ie other factors that could have caused the effect on the DV:

  • Confounding variables: situational and participant variables could be responsible for the changes in the DV rather than the IV.
  • Unreliable measures: measures that are inconsistent; rating scales lack reliability and validity as there is no 'true measure'.
  • Lack of standardisation: a lack of standardisation means participants do not experience the same research process and so findings are not comparable.
  • Lack of randomisation: bias in allocation due to lack of randomisation can systematically distort the results and so reduce internal validity.
  • Demand characteristics: these can lead to unusual/unnatural participant reactions and behaviour, thus reducing internal validity.
  • Participant reactivity: evaluation apprehension and social desirability can also lead to behaviour that is not the participants' natural behaviour.
  • Experimenter bias: the conscious or unconscious impact of the experimenter on the way data is collected.

Good research design increases internal validity: accounting for the above in the research design will increase internal validity.

38 of 58

Checking internal validity

If internal validity is high then replication should be possible; if it is low then replication will be difficult.  Thus validity and reliability are interlinked:  if the research has truth (validity) it should be consistent (reliable) and so replication is possible.  Reliability is also an indicator of validity.

Face validity is the simplest form of validity and merely involves a decision as to whether a measure looks as if  it’s measuring what it set out to measure.  Criterion validity is more objective and looks at whether a test of a particular construct relates to other measures of it.  There are two types of criterion validity:  concurrent and predictive. 

A test shows concurrent validity if it shows similar findings to another existing measure.  For example, if a test of neuroticism matched the judgement of experienced psychiatrists, it would have concurrent validity. 

Predictive validity is measured by how well a test predicts future performance.  For example, if an IQ test given at the age of 14 predicts grades achieved at A level, it would have predictive validity.

39 of 58

External validity

Coolican (1994) identified four main aspects to external validity:

·         Populations:  findings have population validity if they generalise to other populations, particularly to the target population.  Population validity is questionable if a restricted sample is used.

·         Locations:  findings have ecological validity if they generalise to other settings, particularly real-life situations.  A lack of mundane realism is a key weakness of artificial research. Though researchers are often more concerned with experimental realism than mundane realism.

·         Measures or constructs:  findings have construct validity if the measures generalise to other measures of the same variable, eg does a measure of recall of word lists generalise to everyday memory?

·         Times:  findings have temporal validity if they generalise to other time periods, eg do findings from the past generalise to the current context or do current findings generalise to the past or future?  This is difficult to achieve as, to some extent, all research is dependent on era and context.

40 of 58

Checking external validity

A meta-analysis involves the comparison of findings from many studies that have investigated the same hypothesis. 

Findings that are consistent (reliable) across populations, locations and periods in time indicate validity (eg Van IJzendoorn and Kroonenberg’s (1988) meta-analysis of the cross-cultural Strange Situation studies). 

Thus, if a study has validity then it is likely to replicate, and reliability in the meta-analysis is used as an indicator of validity. 

Predictive validity is another means of checking external validity.  It involves using the data from a study to predict behaviour at some point in the future. 

If the prediction is correct, then this suggests that the original data did generalise to a future context and so has external validity.

41 of 58

Internal vs external validity

There is a trade-off between internal and external validity and this must be considered when designing experiments. 

The greater the control of EVs, the higher the internal validity, but the lower the external validity. 

Conversely, if external validity is high, control of confounding variables may be difficult. 

Deciding on whether high internal or high external validity is more important depends on the purpose of the study. 

If the study is designed to test the detail of a theory, then high internal validity is important; if the study is designed with the intention of applying results to the real world, then it becomes more important to have high external validity.

42 of 58

Ethical considerations in design & conduct of research


CONSENT - need informed consent. For under-16s, someone in loco parentis gives it on their behalf

PROTECTION FROM DISTRESS - must leave in same or better state, not worse

DECEPTION - shouldn't be lied to unless there's a really good reason

DEBRIEFING - told about everything involved at the end and any q's answered

RIGHT TO WITHDRAW - p's can leave ANY TIME they want

CONFIDENTIALITY - p's have the right to expect their info will be treated confidentially

OBSERVATIONAL RESEARCH - public places only, can't invade privacy

GIVING ADVICE - only give it if psychologist is specialised in that sector, if not they refer 

COLLEAGUES - if colleagues believe ethics are being broken they can take action

43 of 58

Ethical issues

Ethical issues can arise in the implementation of research when there is a conflict between how the research should be carried out (eg no deception) and the methodological consequences of observing this (eg reduced validity of findings). The major issues can be remembered as DIP.

  • Deception

    Researchers may often choose to deceive their Ps about the true aim of their research so that findings are not influenced by the effects of demand characteristics.

  • Informed consent

    Inevitably, deceiving Ps means that the guideline regarding informed consent is breached.  Often Ps consent to take part in research, but their consent is not informed consent.  Zimbardo’s research is a good example of research where Ps may not truly have realised exactly what they were consenting to.

  • Protection of participants

    The key test of whether or not a participant has been harmed is to ask whether the risk of harm was greater than in everyday life.

44 of 58


Deception is very common in psychological research.  Menges (1973), in a review of psychological research studies completed in America, found that of the 1,000 studies reviewed, 80% involved not giving the participants full information about the study.

Methods for dealing with ethical issues:

·         Debriefing:  this is where on completion of the research the true aim of the research is revealed to the participant.  The aim of the debriefing is to restore the participant to the state s/he was in prior to the research.   A participant should leave a research study in the same state as when they entered.

·         Retrospective informed consent:  once the true nature of the research has been revealed, the participant should be given the right to withdraw their data.

·         Role play:  this approach eliminates many of the ethical problems of deception studies, but there is a danger that the behaviour displayed by role-playing participants is not the same as the behaviour would be if they had been deceived.

45 of 58

Informed consent

Informed consent:  a further issue raised by research that involves children under the age of 16.  The age of the participant may mean that the child does not fully understand what they are participating in, thus impacting on their ability to give informed consent.

·         Prior general consent:  this solution involves obtaining the prior consent of participants to be involved in research that involves deception.  If the participant agrees that they would not object to being deceived in future research studies, then in later studies where they participate it is assumed that they have agreed to being deceived.

·         Presumptive consent:  this involves taking a random sample of the population and introducing them to the research, including any deception involved.  If they agree that they would have still given their consent to the research then we can generalise from this and assume that the remainder of the general population would also have agreed.

·         Right to withhold data and retrospective consent:  when participants are debriefed they should be offered the chance to withdraw their data.

Children as participants:  this is resolved by gaining the consent of the parent or those in loco parentis, eg the head teacher of the school that the child attends.

46 of 58

Protection of p's

·         The researcher should remind participants of their right to withdraw if at any point during the research the level of stress is higher than anticipated.

·         The researcher is responsible for terminating any research that results in psychological or physical harm that is higher than expected.  For example, Zimbardo terminated his research after 6 days although it was intended to run for 2 weeks.

·         Debriefing is an important part of protection of participants.  According to Aronson (1988) participants should leave the research situation in “a frame of mind that is at least as sound as it was when they entered”. 

Confidentiality:  This should be assured to prevent any potential future embarrassment.

47 of 58

Ethical considerations in design & conduct of research

Carrying out a cost-benefit analysis when considering ethical issues

The cost-benefit analysis is a safeguard that should precede all research.  It involves weighing up whether the ends justify the means.  A cost-benefit analysis raises a double obligation dilemma, because researchers have an obligation both to their participants and to society.  Costs to the individual must be balanced against the benefits to society in terms of understanding and potential applications.

Evaluation of cost-benefit analysis

Cost-benefit analysis has a number of weaknesses.  It is difficult to predict outcomes, because these are not always clear at the outset.  Furthermore, such costs and benefits are not objective and so can be difficult to measure and weigh up.  The analysis process may also be open to researcher bias and value judgements.  It seems that the outcome of such analysis will differ across different researchers and across time, meaning the analysis will be era dependent and context bound.

48 of 58

Measures of central tendency

Descriptive statistics give us a way to summarise and describe our data but do not allow us to make a conclusion related to our hypothesis.  Descriptive statistics include graphs, tables, measures of central tendency and measures of dispersion.  In order to decide which summaries are most suited to our data we need to understand the four commonly used levels of data:

Nominal:  the data consist of numbers of participants falling into various categories (eg number of students who consider themselves to be overweight, underweight or just right).

Ordinal:  the data can be placed in order of size, ie can be ranked from lowest to highest.  Ordinal data are often measured on scales of unequal intervals, eg scores allocated on Strictly Come Dancing allow participants to be placed in a rank order, but it cannot be assumed that the interval between each test score is equal.

Interval:  data are measured on a scale of equal intervals, eg distance, time, temperature.

Ratio:  data have the same characteristics as interval data, except that they have a meaningful zero point, ie an absolute zero.  For example, time measurements provide ratio data because the notion of zero time is meaningful.  Interval and ratio data are sometimes combined and referred to as interval/ratio data.

49 of 58

Measures of central tendency

Measures of central tendency are averages, and so involve the calculation of a single number that is representative of the other numbers in a set of scores.  The appropriate measure to use depends on the level of data.

The mean is calculated by adding all the scores together in each condition and then dividing by the number of scores.  This is a useful statistic as it takes all of the scores into account, but it can be misleading if there are extreme values.  For example, if the scores on a memory test were 2, 4, 5, 7, 42, the mean would be 12, which is not typical or representative of the data.  The mean is the most appropriate measure for interval data.

The median is calculated by placing all the values of one condition in order and finding the mid-point of the ordered list.  This is a more useful measure than the mean when there are extreme values; it is also the most appropriate measure for ordinal data.

The mode is the most common value in a set of values.  This is probably the least valuable measure of central tendency as sometimes there may be more than one mode or no mode at all.  It is the only measure of central tendency that can be used with nominal data.
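The three averages can be sketched quickly with Python's standard `statistics` module (an illustration added here, using the memory-test scores from the example above):

```python
# Illustration (not part of the original notes): computing the three
# measures of central tendency for the memory-test scores above.
import statistics

scores = [2, 4, 5, 7, 42]

mean = statistics.mean(scores)        # (2 + 4 + 5 + 7 + 42) / 5 = 12
median = statistics.median(scores)    # mid-point of the ordered list = 5
modes = statistics.multimode(scores)  # no score repeats, so every score ties as a mode

print(mean, median, modes)
```

Note how the outlier 42 pulls the mean well above the median, and how the mode breaks down when no score repeats — exactly the strengths and weaknesses described in the text.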

50 of 58

Measures of central tendency

Mean: appropriate to use with sets of data with no outliers in one direction; not appropriate when there are outliers

Advantages: uses all data points so provides a good estimate for the central score of a data set

Disadvantages: may not best represent the general trend in a set of scores

Median: appropriate to use when there are outliers; not appropriate to use when there are no outliers or only a small number of scores

Advantages: unaffected by outliers in one direction (skewed data)

Disadvantages: doesn't take every score into account so less sensitive than the mean and less representative when there's a small data set

Mode: appropriate to use with data where there's a high number of repeated scores; not appropriate to use with data where there are several scores that occur as frequently as each other (especially when data sets are small)

Advantages: shows what most p's scored; easy to calculate; can use on nominal data

Disadvantages: can be unrepresentative if the most frequently occurring data point is very high or very low

51 of 58

Conventions of reporting psychological investigations

Once psychological research has been carried out it has to be shared with other psychologists in order to be useful.

  • Title: to tell the reader what the report is about
  • Abstract: to provide the reader with a brief summary of the study and whole report (intro, method, discussion & results)
  • Introduction: to introduce the background theory, previous research etc.
  • Method (design, participants, apparatus/materials, procedure): to describe how the study was done
  • Results: to summarise the findings
  • Discussion: to interpret the findings and consider their implications and limitations
  • References: to inform the reader about the sources of information
  • Appendix: can be used for detailed information not in the report
52 of 58

Conventions of reporting psychological investigations

Abstract: brief summary of your study and has to cover:

  • aim of study
  • method
  • sample (opportunity or self selected) 
  • results
  • conclusion drawn from your results
  • the significance level achieved 

Having abstracts at the beginning of research reports helps researchers who have to read many reports each week; often a researcher will just read the abstract, and then look at the rest of the report if the abstract leads them to believe the report is very relevant to their current/next piece of research.

53 of 58

Conventions of reporting psychological investigations

INTRO: Sets the context for an investigation. this involves:

  • an identification of the area of psychology being investigated
  • reference to the relevant empirical evidence, ie studies which have already been carried out in this area
  • rationale for your study (theory), ie how and why it has been developed and is to be carried out.

Hypotheses: the null and alternative hypotheses must be fully operationalised, ie both IV and DV for an experiment, or 2 variables for a correlation, must be clearly identified within the hypotheses.

54 of 58


The Method section should be divided into the following sub-headings:

  • Method: lab/natural/field experiment. identify IV&DV or 2 variables in a correlation
  • Design: identify whether repeated measures, independent groups or matched pairs design is used
  • Participants: state the target population for your research, which sampling technique was used, and then give a precise description of the sample, ie number of p's, age range, gender breakdown, etc. It may also be appropriate to say how p's were allocated to conditions. This helps readers make a decision about population validity.

Procedure: should incl. enough detail for replication to be possible

  • materials - how were they prepared? what did they look like?
  • measure of behaviour - how? why?
  • p's approached - how/where?
  • controls used- consider choice of p's (eg deliberate restriction of age range), behaviour of experimenters, random allocation of p's to conditions, control over time, control over environment, etc
  • standardised instructions ( note ethical guidelines followed)
55 of 58


The Results section should be presented in a number of different ways:

  • A verbal summary: a summary of the results should be given in words, so they're explained fully
  • Descriptive stats: appropriate visual displays, which present info clearly, should be used. This section often includes a summary table of measures of central tendency and dispersion; bar charts
  • Inferential stats: allow researchers to infer the role of chance or random variability in any apparent difference they find between conditions. A difference needs to be big enough and consistent enough for the researcher to be sure that it represents more than just fluctuations that could be expected by chance. The tests available allow the researcher to check their own results to determine whether the scores from one condition consistently differ from those produced in the other condition.
  • This section should state the reasons for the choice of statistical test, before giving the results of the calculations. 
  • The level of significance applied should also be stated, and comparison of the observed and critical values will lead to a statement regarding whether the alternative hypothesis is supported or not. If there is a likelihood that the kind of difference found would appear more than 5 times in 100 trials, even without manipulation of the IV, then the experimental (or alternative) hypothesis is rejected & the null is accepted (p>0.05). If results are likely to appear by chance fewer than 5 times in 100, the null hypothesis is rejected & the experimental hypothesis is accepted at p<0.05
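The chance-versus-real-difference logic described above can be sketched with a simple permutation test in Python (the scores below are invented for illustration; this is one possible test, not one named in the notes):

```python
# Illustrative sketch (not from the original notes): does the difference
# between two conditions exceed what chance shuffling alone would produce?
import random

condition_a = [12, 14, 11, 15, 13, 16, 14, 15]  # eg words recalled, group A
condition_b = [9, 10, 8, 11, 10, 9, 12, 10]     # group B

observed = sum(condition_a) / len(condition_a) - sum(condition_b) / len(condition_b)

pooled = condition_a + condition_b
n_a = len(condition_a)
random.seed(0)  # fixed seed so the sketch is repeatable

# Shuffle the pooled scores many times; count how often a purely random
# split produces a difference at least as large as the one observed.
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if abs(diff) >= abs(observed):
        extreme += 1

p = extreme / trials
print(round(p, 4), "significant at 0.05" if p < 0.05 else "not significant")
```

Here `p` estimates how often chance alone matches the observed difference; following the rule in the text, a value below 0.05 leads to rejecting the null hypothesis.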
56 of 58


The Discussion involves weighing up your research, evaluatively. There are 2 aspects to this evaluation, but both concern the meaning of your results.

1st - what has been uncovered about human behaviour? what are the implications, ie how do the findings link to the background info (what do findings show/tell us?) and should any further research be conducted to illuminate behaviour further? do results support a particular theory, or lead to any practical applications?

2nd - are there any flaws in the methodology or study that mean the results should be treated with caution? Here there's often consideration of strengths and weaknesses of the method, design, sample & measure. Consideration of the ethics of the research and of the validity and reliability of findings should also be included.

57 of 58

What's the purpose of a report?

A report is used for psychologists to share their findings and knowledge about a piece of research that they have carried out. It should address the following to ensure that society can benefit:

what was done

why it was done

what was found

what it means

58 of 58

