Research methods


The experimental method

Aims - a statement of what the researcher(s) intend to find out in a research study.

Debriefing - a post-research interview designed to inform participants of the true nature of the study and to restore them to the state they were in at the start of the study.

Ethical issues - concern questions of right and wrong.

Experiment - a research method where causal conclusions can be drawn because an IV has been deliberately manipulated to observe the causal effect on the DV.

Extraneous variables - variables that do not vary systematically with the IV and therefore do not act as an alternative IV, but may have an effect on the DV. They make it more difficult to detect a significant effect.

Hypothesis - a precise and testable statement about the assumed relationship between variables.

Independent variable (IV) - some event that is directly manipulated by an experimenter in order to test its effect on another variable, the dependent variable (DV).


The experimental method

Informed consent - participants must be given comprehensive information concerning the nature and purpose of the research and their role in it, in order that they can make an informed decision about whether to participate.

Operationalise - ensuring that variables are in a form that can be easily tested. 

Standardised procedures - a set of procedures that are the same for all participants in order to be able to repeat the study. 


Control of variables

Confounding variables - a variable under study that is not the IV, but which varies systematically with the IV. Changes in the DV may be due to the confounding variable rather than the IV, and therefore the outcome is meaningless.

External validity - the degree to which a research finding can be generalised: to other settings (ecological); to other groups of people (population); over time (historical).

Extraneous variables - do not vary systematically with the IV and therefore do not act as an alternative IV, but may have an effect on the DV. They make it more difficult to detect a significant effect.

Internal validity - the degree to which an observed effect was due to the experimental manipulation rather than other factors.

Mundane realism -  refers to how a study mirrors the real world. The environment is realistic to the degree to which experiences encountered in the research environment will occur in the real world.

Validity - refers to whether an observed effect is a genuine one.


Hypotheses & aims

Hypothesis - a precise and testable statement about the assumed relationship between variables.

Directional hypothesis - states the direction of the predicted difference between two conditions or two groups of participants.

Example: people who do homework without the TV on produce better results than those who do homework with the TV on.

Non-directional hypothesis - predicts simply that there is a difference between two conditions or two groups of participants, without stating the direction of the difference.

Example: people who do homework with the TV on produce different results from those who do homework with no TV on.

Aim - the purpose of the study.


Pilot study

Pilot study - a small-scale trial run of a study to test any aspects of the design, with a view to making improvements.

Example: participants may not understand the instructions or they may guess what an experiment is about. They may also get bored because there are too many tasks or too many questions. 

If a researcher tries out the design using a few typical participants, they can see what needs to be adjusted without having invested a large amount of time and money in a full-scale study.

Results of a pilot study are irrelevant - the researcher is not interested in what results are produced; they simply want to see to what extent the procedures need fine-tuning.


Confederates

Confederate - an individual in a study who is not a real participant and has been instructed how to behave by the investigator.

Sometimes a researcher has to use another person to play a role in an experiment or other investigation. 

Example: you might want to find out if people respond differently to orders from someone wearing a suit compared with someone dressed in casual clothes.


Experimental design

Independent groups design - participants are placed in separate groups. Each group does one level of the IV.

Limitations:

  • Researcher cannot control the effects of participant variables. Instead, randomly allocate participants to conditions.

Repeated measures design - all participants receive all levels of the IV.

Limitations:

  • The order of the conditions may affect performance. Instead, they could counterbalance the order.

Counterbalancing - an experimental technique used to overcome order effects when using a repeated measures design. It ensures that each condition is tested first or second in equal amounts.


Experimental design

Matched pairs design - pairs of participants are matched in terms of key variables, such as age and IQ. One member of each pair is allocated to one of the conditions under test and the second person is allocated to the other condition.

Limitations:

  • It is very time-consuming and difficult to match participants on key variables. Instead, the researcher could restrict the number of variables to match on to make it easier.

Experiments

Laboratory experiment:

  • An experiment carried out in a controlled setting.
  • High internal validity - good control over variables.
  • Low ecological validity - participants are always aware they are being studied.

Evaluation:

+ Reliable - it is possible to control the environment closely, making replication easier and increasing reliability.

+ Consent - as participants are in an artificially controlled setting, they are more likely to know they are being studied, and therefore be able to consent.

- Demand characteristics - as participants know they are being observed, demand characteristics are more likely to influence their behaviour, so internal validity may be low.


Experiments

Field experiments:

  • A controlled experiment conducted outside a laboratory.
  • IV is manipulated by the researcher so causal relationships can be demonstrated.
  • Low internal validity and high external validity.
  • Participants are unaware that they are being studied.

Evaluation:

+ Ecological validity - as participants are in their natural environment, the behaviour seen is more likely to be realistic, increasing ecological validity.

+ Demand characteristics - participants don't know they are being observed, so they would be less prone to demand characteristics, improving experimental validity.

- Extraneous variables - the environment is less controlled so there is more chance of extraneous variables influencing the results.


Experiments

Natural experiments:

  • Experimenter has not manipulated the IV directly.
  • Researcher records the effect of the IV on a DV.

Evaluation:

+ Experimenter bias - the change or difference being investigated in the IV is not controlled by the experimenter, meaning they are less likely to influence the data due to experimenter bias, increasing validity.

- Reliable - as the experimenter cannot directly control the IV, they do not know how reliable the change is and therefore cannot infer cause and effect.

- Confounding variables - the lack of control in changing the IV means that there is more chance of confounding variables influencing results.


Experiments

Quasi experiments:

  • 'Almost' experiments.
  • IV is not something that varies at all - it is a condition that exists.
  • Researcher records the effect of this 'quasi-IV' on a DV.
  • Causal relationships can be drawn.

Evaluation:

+ Realistic changes - the IV is a naturally occurring difference between people, meaning changes in the DV may have more realism than if the IV was artificially created.

- Mundane realism - the tasks used to gather data for the DV may still be unrealistic, meaning that the data itself has little mundane realism.

- Difficult to set up - can only be used where a naturally occurring difference between people can easily be identified, so they are difficult to set up.


Observational techniques

Naturalistic observation - an observation carried out in an everyday setting, in which the investigator does not interfere in any way but merely observes the behaviour(s) in question.

Controlled observation - a form of investigation in which behaviour is observed but under conditions where certain variables have been organised by the researcher.

Evaluation:

+ Reliability - as the amount of control increases, so does the reliability of the data measurement because the research is set up to watch for specific behaviour.

- Ecological validity - the participants are not in their natural environment and therefore, their behaviour might be affected.


Observational techniques

Participant observation - observations made by someone who is also participating in the activity being observed, which may affect their objectivity.

Non-participant observation - the observer is separate from the people being observed.

Evaluation:

- Subjective bias - where different researchers might see the same behaviour, but interpret it differently, affecting the reliability of the data.

- Data - accurate recording of data is difficult especially in a participant observation, when you are among the people being watched and cannot easily take notes.


Observational techniques

Covert observation - observing people without their knowledge.

Overt observation - observational studies where participants are aware that their behaviour is being studied.

Evaluation:

+ Unaware - in covert observations, behaviour is likely to be more natural as participants are unaware that they are being watched.

- Informed consent - covert observations cannot gain informed consent, therefore could be regarded as unethical.

- Validity - in overt observations, consent can be obtained, but this could affect the validity of the data as people then know they are being watched and may change their behaviour.


Observational design

Unstructured observations:

  • Researcher records all relevant behaviour, but has no system.
  • There may be too much to record.
  • May only record eye-catching behaviour.

Structured observations:

  • Researcher uses various systems to organise observations, such as behavioural categories and sampling procedures.

Behavioural categories - dividing a target behaviour into a subset of specific and operationalised behaviours.

  • Be objective.
  • Cover all possible component behaviours.
  • Be mutually exclusive.

Observational design

Sampling procedures:

Sampling - the method used to select participants, such as random, opportunity and volunteer sampling, or to select behaviours in an observation.

Event sampling - an observational technique in which a count is kept of the number of times a certain behaviour occurs.

Time sampling - an observational technique in which the observer records behaviours in a given time frame.


Self-report techniques

Interviews:

  • A research method or technique that involves a face-to-face, 'real-time' interaction with another individual and results in the collection of data.
  • Structured - any interview in which the questions are decided in advance.
  • Unstructured - the interview starts out with some general aims and possibly some questions, and lets the interviewee's answers guide subsequent questions.

Evaluation:

+ In-depth - it is possible to get in-depth data from interviews that allow a meaningful exploration of individual views.

- Time-consuming - it can be very time-consuming to gather the data as each participant will usually be interviewed on their own. This means that the sample size is likely to be small and therefore, less generalisable to the population.


Self-report techniques

Questionnaires:

  • Data is collected through the use of written questions.
  • Open questions allow the participant to answer in any way they choose, so they do not limit the possible responses. Qualitative data is collected.
  • Closed questions limit the possible responses by providing tick boxes or offering a scale to indicate agreement. Quantitative data is collected.

Evaluation:

+ Large sample - large amounts of data can be gathered quickly by issuing a postal or online survey.

+ Analysis - data from closed questions can be analysed quickly and comparisons made between variables.

- Social desirability bias - can affect the validity of the data as participants answer the questions in such a way as to make them look good.


Self-report design

Questionnaire construction:

Writing good questions:

  • Clarity.
  • Bias.
  • Analysis.

Writing good questionnaires:

  • Filler questions.
  • Sequence for the questions.
  • Sampling technique.
  • Pilot study.

Self-report design

Design of interviews:

Recording the interview:

  • Researcher may take notes.

The effect of the interviewer (interviewer needs to be aware of):

  • Non-verbal communication - various behaviours.
  • Listening skills - need to know when and how to speak.

Questioning skills in an unstructured interview:

  • Know what follow-up questions to use.
  • Aware of what has already been asked.
  • Avoid probing too much.

Correlations

Correlation - determining the extent of an association between two variables; co-variables may not be linked at all (no correlation), they may both increase together (positive correlation), or as one variable increases, the other decreases (negative correlation).

Correlation coefficient - a number between -1 and +1 that tells us how closely the co-variables in a correlational analysis are associated.

Curvilinear correlation - a non-linear relationship between variables.

Co-variable - the two measured variables in a correlational analysis.

Continuous variable - a variable that can take on any value within a certain range.

Intervening variable - a variable that comes between two other variables, which is used to explain the association between those two variables.

Linear correlation - a systematic relationship between co-variables that is defined by a straight line.


Correlations

Scattergram - a graphical representation of the association between two sets of scores.

Significance - a statistical term indicating that the research findings are sufficiently strong for us to accept the research hypothesis under test.

Evaluation:

+ Secondary data - correlations can be conducted quickly using secondary data as a way to investigate whether there may be a relationship between two variables worthy of further study using another research method.

- Cause & effect - correlations cannot give you information about cause and effect - they can only tell you a relationship exists, but not how or why.
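To make the idea of a correlation coefficient concrete, here is a minimal Python sketch (the co-variables and scores are invented purely for illustration) that calculates Pearson's r for two hypothetical co-variables and gives a value close to +1, i.e. a strong positive correlation.

    # Minimal sketch: Pearson's r for two hypothetical co-variables
    # (invented revision-hours and test-score data).
    import statistics

    hours_revision = [2, 4, 6, 8, 10]       # co-variable 1
    test_score = [35, 50, 55, 70, 80]       # co-variable 2

    mean_x = statistics.mean(hours_revision)
    mean_y = statistics.mean(test_score)

    # r = covariance(x, y) / (sd(x) * sd(y))
    covariance = sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(hours_revision, test_score)) / (len(hours_revision) - 1)
    r = covariance / (statistics.stdev(hours_revision) * statistics.stdev(test_score))

    print(round(r, 2))   # about 0.99 - a strong positive correlation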


Case studies & content analysis

Case study - a research investigation that involves a detailed study of a single individual, institution or event. Data can be qualitative or quantitative.

Evaluation:

+ Detailed - the data gathered is detailed and in-depth, so more valid conclusions can be drawn.

- Sample - the sample is limited and not representative of the wider population.

Content analysis - a kind of observational study in which behaviour is observed indirectly in written or verbal material. Can be done by thematic analysis.

Evaluation:

+ Ecological validity - high in ecological validity as it analyses real communications.

- Subjective - data collection may be subjective as it relies on the researcher's interpretation.


Case studies & content analysis

Meta-analysis - a researcher looks at the findings from a number of different studies and produces a statistic to represent the overall effect.

Effect size - a measure of the strength of the relationship between two variables.

Review - a consideration of a number of studies that have investigated the same topic in order to reach a general conclusion about a particular hypothesis.


Sampling techniques

Opportunity sample - a sample of participants produced by selecting people who are most easily available at the time of the study.

Strength: easiest method because you just use the first suitable participants you can find.

Limitation: biased because the sample is drawn from a small part of the population.

Random sample - a sample of participants produced by using a random technique such that every member of the target population being tested has an equal chance of being selected.

Strength: unbiased because all members of the target population have an equal chance of selection.

Limitation: need to have a list of all members of the population and then contact all of those selected, which may take some time.


Sampling techniques

Stratified sample - a sample of participants produced by identifying subgroups according to their frequency in the population. Participants are then selected randomly from the subgroups.

Strength: likely to be more representative than other methods because there is a proportional and randomly selected representation of subgroups.

Limitation: very time-consuming to identify subgroups, then randomly select participants and contact them.

Systematic sample - a sample obtained by selecting every nth person.

Strength: unbiased as participants are selected using an objective system.

Limitation: not truly unbiased unless you select a number using a random method and start with this person, and then select every nth person.
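The sketch below is a minimal Python illustration of the random, systematic and stratified techniques described above; the population of 100 participants and the 60/40 subgroup split are invented purely for the example.

    # Illustrative sampling sketch (hypothetical population of 100 people).
    import random

    population = [f"P{i}" for i in range(1, 101)]

    # Random sample: every member has an equal chance of selection.
    random_sample = random.sample(population, 10)

    # Systematic sample: pick a random starting person, then every nth person.
    n = 10
    start = random.randrange(n)
    systematic_sample = population[start::n]

    # Stratified sample: select randomly from each subgroup in proportion
    # to its frequency in the population (made-up 60/40 split here).
    strata = {"subgroup_a": population[:60], "subgroup_b": population[60:]}
    sample_size = 10
    stratified_sample = []
    for subgroup in strata.values():
        k = round(sample_size * len(subgroup) / len(population))
        stratified_sample.extend(random.sample(subgroup, k))

    print(random_sample, systematic_sample, stratified_sample, sep="\n")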


Sampling techniques

Volunteer sample - a sample of participants that relies solely on volunteers to make up the sample.

Strength: gives access to a variety of participants which may make the sample more representative and less biased. 

Limitation: biased because volunteers are likely to be more highly motivated to be helpful and/or have extra time on their hands, or may need the money offered for participation.

Bias - a systematic distortion.

Generalisation - applying the findings of a particular study to the population.


Experimental design

Independent groups - involves different groups doing each condition.

Evaluation:

+ Demand characteristics - participants are less likely to guess the aim and change their behaviour accordingly.

- Individual differences - as the groups contain different people, their individual differences might influence the result.

Repeated measures - involves one group doing all conditions.

Evaluation:

+ Individual differences - by using the same participants in all conditions, there are no individual differences to act as a confounding variable.

- Order effects - by doing the experiment more than once in different conditions, the participants may be affected by order effects.


Experimental design

Matched pairs - different groups in each condition, but the groups are matched on key factors.

Evaluation:

+ Participant variables - by matching the groups in each condition on key participant variables, the influence of individual differences should be significantly reduced.

- Individual differences - despite some control, it is impossible to remove all individual differences.


Control

Counterbalancing:

  • Group of participants are split into 2 smaller groups.
  • Half do condition A then B and the other half do condition B then A.
  • This means the potential effects of doing one condition after another in a repeated measures design will be counteracted.

Random allocation: participants can be randomly allocated to one of the conditions.

Standardisation: a process where you try to ensure all your participants experience the research process in the same way.

Randomisation: researchers may choose to randomise parts of the procedure to remove any bias by making it all 'due to chance'.

Pilot study: a small-scale trial run of the experiment. This allows you to check that your design or method will work.
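As a rough illustration of random allocation and counterbalancing (the participant names are invented), the Python sketch below shuffles a group and splits it so that half complete condition A then B and half complete B then A.

    # Sketch: random allocation plus counterbalancing (hypothetical names).
    import random

    participants = ["Ali", "Beth", "Cara", "Dan", "Eve", "Finn"]
    random.shuffle(participants)        # random allocation removes researcher bias

    half = len(participants) // 2
    order_ab = participants[:half]      # these do condition A first, then B
    order_ba = participants[half:]      # these do condition B first, then A

    print("A then B:", order_ab)
    print("B then A:", order_ba)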


Observational design

Observational research involves watching and recording behaviour that is relevant to a particular research aim.

Behavioural categories: before starting, the researcher decides which behaviour is relevant to the research question and sets up a tally chart to record it. This consists of different categories of target behaviour that the researcher ticks when they see it occurring.

Evaluation:

+ Objective - these must be clear and objective so that anyone could easily identify when the target behaviour has happened.

+ Relevance - they should be comprehensive in covering all possible behaviour that is relevant to the research aim.


Observational design

Event sampling - the observer watches for the target behaviours in the sample and simply records all instances of the behaviour in the appropriate column when they happen.

Evaluation:

- Accuracy - this can be difficult to do accurately when there is a lot of action to record, therefore reducing reliability of the data.

Time sampling - the observer watches and records all the occurrences of relevant behaviour at set, or randomly set, time intervals.

Evaluation:

- Outside the time frame - it is possible to miss important events because they happen outside of the time frame for recording behaviour.
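A small Python sketch of event sampling is shown below; the behavioural categories and the stream of observed behaviours are made up for the example, and a real observation would use a tally chart instead.

    # Event sampling sketch: count every instance of each target behaviour
    # (hypothetical categories and observations).
    from collections import Counter

    categories = {"smiling", "talking", "playing alone"}
    observed = ["smiling", "talking", "smiling", "playing alone", "smiling"]

    tally = Counter(b for b in observed if b in categories)
    print(tally)    # Counter({'smiling': 3, 'talking': 1, 'playing alone': 1})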


Ethics in research

Informed consent: participants should be aware of their involvement in the research and, wherever possible, know the full aim of the investigation before they take part.

Deception: participants should be aware of all elements of the investigation before they take part, and should not be lied to during the process of the research. 

Confidentiality: the names and personal details of participants should not be revealed to others beyond the researchers.

Protection from harm: participants can be harmed by participating in research if they are put under stress, embarrassed, frustrated, hurt or exposed to anything that changes their mental or physical state.

Debriefing: after taking part in research, the participants should be informed about exactly what the investigation is aiming to do. They should also be given the chance to ask questions.


Peer review

The peer review process:

1. Editor sends copies to several expert reviewers.

2. Reviewers independently read the paper.

3. Editor decides what to do next.

Evaluation:

+ Barrier - peer review acts as a barrier, stopping flawed, fraudulent and foolish research becoming part of the public understanding of psychology.

- Bias - it is impossible to ensure that all reviews are done in a totally unbiased way because the reviewers may have a vested interest in promoting or suppressing some studies that agree with or refute their own research.


Implications of psychological research for the economy

Clinical psychology:

  • Drug treatments.
  • Talking cures.
  • Cognitive behavioural therapies.

Occupational psychology:

  • Stress management programmes.
  • Changes in shift patterns.
  • Managing human resources.

Health psychology: 

  • Treating addiction.
  • Running health promotion campaigns.

Validity

Validity - the extent to which the research actually measures and tests what it claims to.

Internal validity - the extent to which the results of the study are due to the tested variable rather than extraneous or confounding variables.

External validity - the extent to which the results of the study can be generalised to other people, other settings and across time.

Assessing:

  • Face validity.
  • Concurrent validity.
  • Predictive validity.
  • Temporal validity.

Improving:

  • Internal validity - employing tighter controls on extraneous variables.
  • External validity - developing realistic tests and using natural settings.

Reliability

Reliability - the extent to which the research is consistent. If the research is repeated in the same way and the findings are the same, the results are said to be reliable.

Internal reliability - concerned with internal consistency; it is demonstrated in the way that the procedure is applied and the measurements are used.

External reliability - about how consistent a test is over time.

Assessing:

  • The test-retest method.
  • Inter-observer reliability.
  • The split-half method.

Improving:

  • Increasing the objectivity.
  • Standardising procedures.

Features of science

Scientific method:

  • Observation of phenomenon leads to theories that explain it.
  • Hypotheses based on theories are tested using empirical methods.
  • Theory is reviewed in light of evidence.

Empirical methods - these are data-collecting techniques based on sensory information; actual evidence rather than thoughts and ideas.

Objectivity - achieved when the data and its interpretation are free from bias.

Replicability - researchers repeat procedures and test them to see if the same results occur; otherwise findings could be the result of a flawed process.

Theory construction:

  • Initially, theories seek to explain observed phenomena (induction).
  • Theories enable predictions which are tested, and the results are used to refine the theory (deduction).

Features of science

Falsifiability - theories should generate testable predictions which can be proven wrong. This means that the process used to test the hypothesis and the data must be objective and exist in a way that can be tested.

Hypothesis testing - a hypothesis is a testable prediction based on a theory.

Paradigm - a shared set of assumptions about the content and methods of a particular discipline.

Paradigm shift - occurs when the dominant paradigm is replaced with a new one.


Report writing

Abstract: the purpose of the abstract is to summarise the research so other researchers can quickly decide if it is relevant without having to read the entire report. The abstract briefly outlines the entire investigation so the reader can see what the aim of the research was, the theoretical background, the method, the results and the conclusions drawn from the result, all within one short paragraph.

Introduction: outlines why the study was done.

The method/procedure:

  • Method - including design decisions and identification of variables.
  • Participants - sampling method, sample size and breakdown, and allocation to conditions if relevant.
  • Apparatus/materials - any technical equipment needed to run the study.
  • Standardised procedure - step-by-step instructions, including when and where the study took place and instructions to the participants.
  • Controls - details of how issues of bias were dealt with.

Report writing

Results: summarises the data in meaningful tables and charts, supported by written descriptions of the conclusions drawn from the data, including details of the findings of any inferential tests used.

Discussion: links the findings and conclusions of this study to the background theory and research.

References: a list of all the references cited in the writing of the report.


Types of data

Primary data - information gathered by the researcher first hand in order to answer their specific research question.

  • Designing a research study.
  • Piloting it.
  • Getting a sample.
  • Gathering data.
  • Analysis.

Evaluation:

+ Reliable - it is likely to be more reliable and valid than secondary data due to high levels of control available.

- Time and effort - it costs a lot in terms of time and effort.


Types of data

Secondary data - information that has already been gathered for other research purposes. The relevant data is identified and analysed in a way that enables the researcher to draw conclusions relevant to their research aim.

Evaluation:

+ Several sources - data is drawn from several sources so can provide greater insight into the research.

- Less reliable - it may be less reliable than primary data.


Types of data

Meta-analyses: this is a method that combines data from several studies that have the same research aim. It uses secondary data. The data from each study is pooled and re-analysed using statistical techniques that allow a conclusion to be drawn.

Qualitative data:

  • In the form of words and descriptions.
  • Self-report methods.
  • Behaviour is explored, with an attempt made to interpret it.
  • Can be converted to quantitative data.

Evaluation of qualitative data:

+ In-depth - it is likely to provide an in-depth exploration of a topic.

- Time-consuming - it takes a lot of time to gather the data and analyse it.


Types of data

Quantitative data:

  • Numerical data.
  • Experiments and correlations gather numerical data, as do observations and closed questions in self-report methods.
  • Behaviour is measured in terms of how much, and is quantified so comparisons can be made by analysing the dataset using appropriate descriptive statistics and graphical representations.

Evaluation of quantitative data:

+ Easy - it is easy to analyse and make comparisons between groups.

- Superficial measures - it is likely to only give superficial measures.


Measures of central tendency

The mean - add all the data items together and divide by the number of data items used.

Evaluation:

+ Representative - likely to be most representative as all scores are used.

- Large range - it can be less representative if there are extreme scores.

The mode - the most frequently occurring data item.

Evaluation:

+ Extreme scores - unaffected by any extreme scores.

- Data - it does not use all of the data.


Measures of central tendency

The median - rank all the data items from smallest to largest in size and pick the middle value.

Evaluation:

+ Easy - easier to calculate compared to the mean.

- Less representative - it does not use all the scores.
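For a quick worked example of all three measures (using an invented set of memory-test scores), the Python below shows how an extreme score pulls the mean up while leaving the median and mode unaffected.

    # Worked example: mean, median and mode of an invented score set.
    import statistics

    scores = [3, 5, 5, 6, 7, 8, 20]      # 20 is an extreme score

    print(statistics.mean(scores))       # about 7.7 - pulled up by the extreme score
    print(statistics.median(scores))     # 6 - the middle value when ranked
    print(statistics.mode(scores))       # 5 - the most frequent value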


Measures of dispersion

The range - this is worked out by subtracting the lowest value in the dataset from the highest value. The higher the range, the more variability there is likely to be in the dataset.

Evaluation:

+ Easy - easy to calculate.

- Extreme values - it may not be representative of the data if there are extreme values at the top/bottom of the dataset.

The standard deviation - this tells you the spread of data around the mean and allows you to see the relationships between scores.

Evaluation:

+ Useful - it provides useful information about how individual scores relate to each other and to the mean.

- Hard - it is harder to calculate than the range.
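Using the same kind of invented score set as above, the short Python sketch below works out both measures of dispersion.

    # Worked example: range and standard deviation of an invented score set.
    import statistics

    scores = [3, 5, 5, 6, 7, 8, 20]

    value_range = max(scores) - min(scores)   # 20 - 3 = 17
    sd = statistics.stdev(scores)             # spread of the scores around the mean

    print(value_range, round(sd, 2))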


Maths skills

Percentages:

  • The proportion of the total calculated out of 100.
  • Divide the number you wish to express as a % by the total score possible and multiply by 100.

Fractions:

  • Part of a whole number.
  • Divide the score achieved by the total score, and represent as: score achieved/total possible.

Ratios:

  • Tells you how many of one thing there are in comparison to another thing.
  • Part to part - the two numbers add up to the whole.
  • Part to whole - the part is expressed in relation to the whole.
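A short worked example of the three calculations above, assuming an invented score of 18 out of 40:

    # Worked arithmetic: percentage, fraction and ratio for a score of 18/40.
    score, total = 18, 40

    percentage = score / total * 100                   # 45.0 (%)
    fraction = f"{score}/{total}"                      # 18/40, which simplifies to 9/20
    part_to_part = f"{score}:{total - score}"          # 18:22 - correct to incorrect

    print(percentage, fraction, part_to_part)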

Maths skills

Mathematical symbols:

  • = equal to.
  • < less than.
  • > greater than.
  • Σ sum of.

Estimating results:

  • Helps you to quickly identify any trends in the data.

Order of magnitude:

  • A way of expressing a number by focusing on its overall size (powers of 10).

Significant figures:

  • A way of rounding scores to make the figure easier to understand. The figure is rounded to a given number of digits, counting from the first non-zero digit.

Displaying data

Data tables: should contain descriptive statistics, such as measures of central tendency and dispersions; for some data, it might be limited to percentage calculations.

Scattergrams: only used to display correlational data; the two co-variables are plotted against each other, one on each axis.

Bar charts: categorical data for comparison.

Histograms: used when the x-axis is showing continuous data while the y-axis shows the frequency of occurrence of that data.

Pie charts: show the frequency of categories as percentages.

Frequency polygons: have continuous data along the x-axis, but polygons can show the frequency of scores for two or more variables.


Distributions

Normal distribution:

  • The mean, median and mode will be at the same point.
  • The data is symmetrical about the mean.
  • The shape of the line on the graph is bell shaped.

Skewed distribution:

  • Non-symmetrical because the scores are not distributed equally on either side of the mean.
  • They are common when only a few measures have been taken.
  • Positive: more scores are concentrated to the left of the graph.
  • Negative: most scores fall above the mean and the peak of the chart is to the right.

Levels of measurement

Nominal data - frequency data. It is gathered by counting the frequency of occurrence of the target behaviour. Tally charts are used to record the data, which is summarised in a contingency table.

  • Sign test.
  • Chi-squared test.

Ordinal data - continuous data because it represents scores along a scale. It allows you to rank order participant responses along the scale.

  • Mann-Whitney test.
  • Wilcoxon test.

Interval data - continuous data, but the scale of measurement features exact and equal intervals between points on the scale. It tells you how much each measure represents.

  • T-tests.
  • Pearson's r test.

Inferential testing

Use the 0.05 (5%) level of significance.

Process:

  • Gather data and put it through the appropriate inferential test to obtain an observed or calculated value.
  • For every test, there is a critical value table. Using information on design and hypotheses, find the critical value appropriate to your study.
  • Compare your observed value from the test to the critical value on the table. Decide whether your results are significant or not.

Inferential testing

How to do a sign test?

  • For each participant, subtract their scores on measure 1 from measure 2.
  • Add a plus or minus sign to indicate the direction of the difference.
  • Omit data where there is no difference.
  • Count the number of the least frequently occurring signs.
  • This number is your calculated or observed value.
  • This is called s.

Using critical values tables:

  • Need to know whether the hypothesis is directional or non-directional.
  • Need to know the number of scores used to calculate s and the probability level you are aiming for.
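The Python sketch below walks through the sign test steps listed above, using invented before/after scores for eight participants; s would then be compared with the critical value for the resulting n.

    # Sign test sketch (hypothetical before/after scores).
    before = [5, 7, 4, 6, 8, 5, 6, 7]
    after = [7, 8, 4, 8, 7, 6, 8, 9]

    # Keep the sign of each difference, omitting participants with no change.
    signs = ["+" if a > b else "-" for b, a in zip(before, after) if a != b]

    s = min(signs.count("+"), signs.count("-"))   # the less frequent sign = observed value
    n = len(signs)                                # number of scores used for the critical value

    print(f"s = {s}, n = {n}")   # compare s with the critical value for n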

Inferential testing

Inferential testing and significance:

  • Purpose is to allow for generalisation beyond the sample to the rest of the target population, by finding out whether the observed difference or relationship is significant or not.
  • It takes into account the type of hypothesis being tested, the design of the study and the level of data gathered.
  • The right test is then selected and applied to the data in order to find the test statistic.
  • Significance is determined by comparing the test statistic to the critical value on that test's critical value table in order to calculate the probability that the results occurred by chance alone.

Inferential testing

Type I error - this is an optimistic decision leading to the rejection of the null hypothesis when in fact the results obtained were due to chance.

Type II error - this is where you are too cautious and accept the null hypothesis when in fact the effect is real.

The sign test and chi-squared are used when nominal data has been gathered and you are testing for differences or associations between variables.

Mann-Whitney and Wilcoxon test ordinal data and test for difference between conditions.

The t-tests are used for interval or parametric data and are the most powerful tests of difference.

Spearman's rho and Pearson's r are used to test correlational data. It is not possible to run a correlation on nominal data.
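Purely as a summary aid (the function name and structure are my own, not a standard routine), the sketch below restates the test choices above as a small Python helper.

    # Hypothetical helper: choose a statistical test from the level of data,
    # the purpose (difference or correlation) and the design.
    def choose_test(level, purpose, design=None):
        if purpose == "correlation":
            # No correlation is possible on nominal data, so it is not listed.
            return {"ordinal": "Spearman's rho", "interval": "Pearson's r"}[level]
        if level == "nominal":
            return "Sign test" if design == "related" else "Chi-squared"
        if level == "ordinal":
            return "Wilcoxon" if design == "related" else "Mann-Whitney"
        return "Related t-test" if design == "related" else "Unrelated t-test"

    print(choose_test("ordinal", "difference", design="related"))   # Wilcoxon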


Using critical value tables

Interpretation of significance:

Statistical analysis using the appropriate inferential test produces an observed or calculated value.

Each test has an accompanying table of critical values which is used to determine whether the findings of the study are significant. This is done by comparing the observed value with the critical value on the table for the chosen level of significance.

In each test, you need to know if the observed value has to be equal to or more than the critical value or equal to or less than the critical value.

