Experimental Designs

Samples

Questionnaires

Correlation Research

Experiments (Lab/Field/Natural)

Reliability

Validity

Hypotheses

Pilot Studies

Experimental designs

Variables

Observations


  • Created by: Ellie
  • Created on: 15-12-14 16:34

Samples

  • have to be representative of the target population
  • It can't be biased in any way (gender, age, etc.), as this would mean the results can't be generalised.

5 different types of samples:

  1. Random - everyone in the target population has an equal chance of being selected.
  2. Systematic - taking every nth name from a sampling frame (e.g. a list).
  3. Opportunity - studying whoever is available at the time.
  4. Volunteer - the participants volunteer after seeing an advert or post about the experiment.
  5. Stratified - all the subgroups are included in proportionate numbers, but members are still randomly picked.
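As a rough sketch of the first two sampling methods, with an invented sampling frame (the names below are made up for illustration):

```python
import random

# Invented sampling frame: a list of names from the target population
frame = [f"Person{i}" for i in range(1, 21)]

def random_sample(frame, k):
    """Random sampling: every member has an equal chance of selection."""
    return random.sample(frame, k)

def systematic_sample(frame, n):
    """Systematic sampling: take every nth name from the frame."""
    return frame[n - 1::n]

print(systematic_sample(frame, 5))  # every 5th name in the frame
```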

Self Report Techniques

Self-report method:

  • Involves asking participants about their feelings, beliefs, attitudes, etc.
  • 3 types: Questionnaires, Interviews and Case Studies

Question Types:

  • Open - when more than one answer is possible
  • Closed - when there is a set of answers to choose from, e.g. Yes/No
  • Fillers - questions that distract the participants from the aims of the study in order to reduce demand characteristics.

Questionnaires

  • Involves a set of questions designed to get information about a topic or topics.

Designing a questionnaire:

When writing a questionnaire there are three things to consider

  • Clarity; participants need to be able to understand.
  • Bias; ensure there are no leading questions. Also be careful of social desirability bias.
  • Analysis; questions need to be written so that answers can be analysed.

Interviews

  • Structured - where the questions are decided in advance of the interview
  • Unstructured - The interview is based around the answers given by the interviewee.
  • Semi-structured (CLINICAL INTERVIEW) - combines both types of interview by starting with predetermined questions and then asking questions based on the interviewee's answers.

Case Studies

  • A detailed study of a single individual, e.g. Genie
  • Hard to generalize.

Correlation Research/Analysis

Correlation - Where two of the variables appear to be connected.

THIS DOESN'T NECESSARILY MEAN THAT ONE VARIABLE CAUSES A CHANGE IN THE OTHER. (E.G. AS AGE INCREASES SO DOES STRESS, BUT AGEING DOESN'T NECESSARILY CAUSE STRESS.)

Three types:

  • Positive - the two variables increase together (e.g. height and shoe size)
  • Negative - as one variable increases the other decreases (e.g. happiness and sadness)
  • No correlation - there is no relationship between the variables (e.g. hunger and eye colour)

A correlation can be illustrated with a scattergram. A line of best fit is then plotted and the correlation coefficient can be calculated. A correlation coefficient is a number with a maximum value of +1 (perfect positive correlation) and a minimum value of -1 (perfect negative correlation). This number tells us how closely the co-variables are linked; for example, the correlation coefficient of height and shoe size might be +0.86, which is a strong positive correlation. The fewer participants you have, the higher the correlation coefficient has to be before the correlation can be judged significant. For example, with 4 participants the coefficient would need to be higher to reach significance than with 28 participants.
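As a sketch, the correlation coefficient described above (Pearson's r) can be computed from paired scores; the height and shoe-size numbers below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: ranges from -1 to +1."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

heights = [150, 160, 165, 170, 180]  # invented data (cm)
shoes = [4, 5, 6, 7, 9]              # invented data (UK sizes)

r = pearson_r(heights, shoes)  # close to +1: a strong positive correlation
```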



Lab, Field and Natural Experiments.

There are 3 types of experiments:

  • Laboratory Experiments
  • Field Experiments
  • Natural Experiments

Laboratory Experiments

  • conducted in a special environment, e.g. a science lab.
  • The variables are tightly controlled
  • Participants know they are being studied but may not know what for (deception)

Field Experiments

  • conducted in a more natural setting than a lab
  • participants are sometimes unaware they are being observed.
  • The independent variable is still manipulated by the researcher.

Natural Experiments

  • conducted in a completely natural setting
  • the independent variable can't be manipulated

Reliability

Reliability refers to how consistent or dependable a test is. A reliable test can be carried out in the same circumstances on the same participants and the same results will be obtained.

There are three types of reliability:

  • Internal Reliability - different parts of the test should give consistent results. All parts of a test should measure the same thing
  • External Reliability - The test should be repeatable and still give the same results.
  • Inter-interviewer reliability - The test should have consistent results despite who delivers the test.

Assessing reliability:

  • Internal Reliability: split-half method - the test items are split into two halves and the two halves' scores are compared using a correlation coefficient; a strong correlation indicates internal reliability.
  • External Reliability: test & retest method - the participants take a test, retake it some time later, and the two sets of results are compared using a correlation coefficient. They should correlate strongly.
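The split-half method above can be sketched as follows, correlating scores on the odd-numbered items against the even-numbered items (the item scores are invented for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented scores for 4 participants on a 6-item test
scores = [
    [3, 4, 3, 5, 4, 4],
    [1, 2, 1, 2, 2, 1],
    [5, 5, 4, 5, 5, 4],
    [2, 3, 2, 3, 2, 3],
]

# Split each participant's answers into odd- and even-numbered items
odd_totals = [sum(row[0::2]) for row in scores]
even_totals = [sum(row[1::2]) for row in scores]

# A strong correlation between the halves suggests internal reliability
r = pearson_r(odd_totals, even_totals)
```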

Improving reliability:

  • Take out the inconsistent items to improve internal reliability. We can only find these by trial and error.

Validity

Validity refers to how well a test measures what it claims to measure. For example, an IQ test with only maths questions would not be a valid way of measuring general intelligence.

There are three types of validity:

  • Internal - the extent to which the results of the test are caused by the variable being measured rather than extraneous variables. (Face validity - is it measuring what it's meant to? / Concurrent validity - comparing the test to previous tests and seeing if the results are similar.)
  • External - How well the test can be generalised.
  • Ecological - how well the results and the test reflect real life.

Assessing validity

  • Look at the experiment and see if it measures what it claims to.
  • Compare the results of the test with other experiments which have been shown to be valid. If there is a similar correlation, it is likely that the test is valid.

Improving Validity

  • Make sure that the sample is representative so a generalisation can be made.
  • Use similar ideas or information as previous valid experiments to gain valid results which correlate to previous experiments.

Experiments & Variables

An experiment is a way of conducting research in which:

  • one variable is made to change (by the experimenter). This is called the independent variable, or IV.
  • the effects of the IV on another variable are observed or measured. This is the dependent variable, or DV.

JUST REMEMBER THAT:

the INDEPENDENT variable is CHANGED

the DEPENDENT variable is MEASURED

"An INDEPENDENT man walks into an experiment; he gets changed and leaves as a DEPENDENT man who is measured."


Hypotheses

A hypothesis states what you believe to be true. It is a precise and testable statement (NOT A QUESTION) of the relationship between two variables.

In order to conduct an experiment you have to have 2 or more conditions, or levels of the IV, in order to create a comparison, e.g. students learn more during short lessons compared to longer lessons.

The hypothesis also has to be operationalised. This means that we have to be clear about what we mean regarding our IV: for example, what counts as a short lesson - 30 minutes, an hour? What is a long lesson - 3 hours, 24 hours? We can operationalise the DV by deciding how to assess what is learnt, maybe using recall or a memory test. This would make our final hypothesis:

Students are able to recall more correct information from a short lesson (30 minutes) compared to a long lesson (2 hours).

There are two types of hypothesis

  • Directional - states the direction of your results.
  • Non-directional - predicts there will be a difference between the two conditions, but not its direction.

What hypothesis when?

  • When there is past evidence or research we use a directional hypothesis; when there is none we use a non-directional hypothesis.

Pilot Studies & Confederates

Pilot Studies

A pilot study is a small-scale trial run of a research design, conducted before the actual experiment. It helps to eliminate errors and therefore also shows you what you need to work on and what is working well.

Pilot studies prevent lots of money being invested into experiments which then don't work.

Confederates

People who play a role in an experiment or investigation. For example, the researcher may want to observe how we react to different dress codes; the researcher would then hire a confederate to wear different clothes.


Experimental Design

Experimental design: a set of procedures used to control the influence of factors such as participant variables in an experiment.

Three types:

  • Repeated Measures Design - each participant takes part in every condition of the test.
  • Independent Groups Design - participants are allocated to a group representing each condition. Allocation is done randomly.
  • Matched Pairs Design - pairs of participants are matched on key variables such as age and IQ; one member is put in the control group, the other in the experimental group.

Counterbalancing

An experimental technique to overcome order effects in repeated measures designs. It ensures that each condition is tested first or second in equal amounts. This can be done with ABBA - participants take part in each condition twice, e.g. A in the morning & B in the afternoon, then B in the morning and A in the afternoon; trials 1 + 4 are combined, as are trials 2 + 3. Counterbalancing can also use the 'AB or BA' method: participants are divided into two groups, the first group does condition A then B, and the second group does B then A.
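The two counterbalancing schemes described above can be sketched like this (the participant IDs are invented):

```python
def abba_order():
    """ABBA: each participant does A, B, B, A, so trials 1 + 4 (A)
    and 2 + 3 (B) can be combined to cancel out order effects."""
    return ["A", "B", "B", "A"]

def ab_ba_orders(participants):
    """AB/BA: the first half of the participants does A then B,
    the second half does B then A."""
    half = len(participants) // 2
    first = [(p, ["A", "B"]) for p in participants[:half]]
    second = [(p, ["B", "A"]) for p in participants[half:]]
    return first, second

group1, group2 = ab_ba_orders(["P1", "P2", "P3", "P4"])
```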


Variables

  • Independent - changed
  • Dependent - measured
  • Control - kept the same

There are also extraneous variables. Extraneous variables are variables that affect the results of the test because of where the test occurred or what the test involved. If EVs aren't controlled they can cause a change in the DV and therefore reduce the validity of the test. There are different types of EVs:

  • Participant Variables - age, intelligence, motivation, experience, gender, likes and dislikes (where applicable; for example, in a test of how reluctant we are to eat chillies, how much we like spice or how fearless we are may affect our readiness to eat a chilli).
  • Situational Variables - order effects, time of day, noise, temperature.

Participant variables are only a problem in an independent groups design, as there the participant variables aren't controlled. Situational variables are features of the situation that may influence a participant's behaviour.


Effects

Participant effects

  • caused by participants acting as they think they should, because they believe they know the nature of the experiment.
  • e.g. the Hawthorne effect: "the tendency for participants to alter their behaviour merely as a result of knowing that they are being observed."
  • e.g. social desirability bias: a tendency for respondents to answer a question how they feel they should, so that they are presented in a better light.

Investigator effects

  • clues given to participants by the investigator which affect how the participant replies or behaves.
  • e.g. leading questions to get the answer the investigator wants.
  • e.g. demand characteristics: a cue that makes participants aware of what the researcher expects to find and therefore changes the participants' behaviour.
  • Also, the way the investigator acts can influence the participant:
  • the more enthused the investigator, the more enthused the participant.

Overcoming Variables & Effects

Blind tests

  • Single blind designs - used by researchers to prevent the participants from knowing what the true aims of the study are.
  • Double blind designs - neither the participants nor the investigator are aware of important details, and therefore neither is searching for clues about how to behave.

Other

  • Make the tasks really engaging in order to prevent the participants from thinking about how to act.

Observations

In an observational study the participants are observed engaging in whatever behaviour is being studied. There are three types:

  • Systematic - the researcher uses systems to observe behaviour, e.g. behavioural categories.
  • Natural - behaviour is studied in a completely natural setting, where everything is left alone (e.g. observing a worm in your back garden).
  • Controlled - some variables are controlled by the researcher, which reduces the naturalness of the behaviour being studied (e.g. the Strange Situation).

Observations can also be made within experiments, which makes observation a research technique rather than a method in itself. Observations are hard to make because they can yield so much information, so observational techniques are used. In a structured observation the researcher uses systems to help organise their findings:

  • Behavioural categories - how to record the behaviour you are interested in.
  • Sampling procedures - who you observe and when.

There are two types of sampling procedures:

  • Event - counting the number of times a certain behaviour appears within a target population (e.g. the number of times a person falls over in a club).
  • Time - recording behaviours within a certain time frame, e.g. recording participants' behaviour every 30 seconds and ticking a checklist.
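As a sketch of the two sampling procedures, assuming a made-up record of one behaviour code per 30-second interval:

```python
# Invented observation record: one behaviour code per 30-second interval
observed = ["talk", "walk", "fall", "talk", "fall", "sit", "talk"]

# Event sampling: count every occurrence of the target behaviour
falls = observed.count("fall")

# Time sampling: record only the behaviour at every other interval (every 60s)
time_sampled = observed[::2]
```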

Unstructured observations, however, don't have this structure, so researchers just note down whatever they deem relevant. One problem with this may be that there is too much to record, so important information might be missed.

