- Qualitative Data: Is in the form of words and uses the description and meaning of behaviour.
- Quantitative Data: Is in the form of numbers and helps to measure studies on a numerical basis.
- Laboratory Experiment: Artificial environment with tight controls over variables. Advantages: The tight control makes it easier to draw conclusions about cause and effect; easy to replicate, and can be cheaper and quicker. Disadvantages: Participants who know they are being studied may change their behaviour. Low realism and results are difficult to generalise.
- Field Experiment: Natural environment with the independent variable manipulated by researchers. Advantages: People may behave more naturally/realistically and results are easier to generalise. Disadvantages: Difficult to replicate and can be time consuming and costly.
- Natural Experiment: Natural changes in the independent variable are used - it isn't manipulated. Advantages: Less chance of demand characteristics biasing the results and better ethics. Disadvantages: The independent variable isn't controlled by the researcher and there is no control over the allocation of participants.
- Quasi Experiment: Any experiment that does not fulfil all the conditions of a lab experiment, even if carried out in controlled conditions - usually the IV can't be manipulated.
- Interview: Advantages: It's easier to tackle sensitive topics and you can look at more complex issues. Disadvantages: It takes much more time and effort and costs a lot.
- Questionnaire: Advantages: It doesn't cost much and is quick, with less influence from the researcher. Disadvantages: Response rates can be low and people may not fully understand the questions.
- Observation: Advantages: It is more likely to be valid and has more value as a primary research tool. Disadvantages: It might cause the observer effect and can be quite costly.
- Case Study: Advantages: It can be good for challenging an existing theory and the data can be very interesting. Disadvantages: Quite low reliability and findings can be subjective.
- Correlational Analysis: Advantages: It can measure the strength of relationships and is valuable for exploratory research. Disadvantages: It cannot measure non-linear relationships and it is not possible to establish cause and effect.
- The British Psychological Society sets the rules on what psychologists can and can't do in their research. It's important to have this because otherwise psychologists could begin to do things that are morally wrong.
- A cost-benefit analysis is a systematic approach to estimating the strengths and weaknesses of a study - weighing the potential costs to participants against the likely value of the findings.
- It is only acceptable to deceive participants when knowing the truth would change their behaviour and so distort the results.
- Presumptive consent is using a small sample to see if the wider population would consent.
- Prior general consent is obtaining consent beforehand to see if the participant would take part in a study that included deception.
- Debriefing is a short interview that takes place between researchers and participants immediately following their participation.
- Informed Consent: An ethical issue is that if a vulnerable person gives consent they could be taken advantage of. This can be dealt with by informing parents when the participant is under 16, or a legal representative for other vulnerable people.
- Protection from Harm: An ethical issue is that if the participant is harmed mentally or physically, the researcher could get in a lot of trouble. To prevent this, the researcher should make sure the participant will be able to deal with all aspects of the study.
- Anonymity/Confidentiality: Breaching confidentiality is against the Data Protection Act 1998, so the researcher could get sued. The researcher should ask whether the participant wants their part in the research to be confidential or not.
- Right to Withdraw: If a participant wants to withdraw but is not allowed, this could harm them mentally or physically, which could get the researcher in trouble. The researcher should tell participants straight away that they can leave whenever they want.
- Deception: An ethical issue is that deception is unacceptable because participants could become uneasy, which would be a problem for the researcher. The researcher should inform the participant of everything that will happen.
- A target population is a group of people who share a set of characteristics that the researcher wants to find out about. It is important that the chosen sample reflects the target population; otherwise the study is a waste of time, because the results would only apply to the specific sample studied and couldn't be generalised.
- Random Sample: Every person in the target population has an equal chance of being selected, so a list of every member of the target population is required and the sample must be selected in an unbiased way.
- Random number tables: Contain strings of numbers in which each number has the same chance of being selected, independent of the other numbers. They are often found in statistics textbooks.
- Computer selection: A computer can generate an endless string of numbers which have no relationship to one another as a sequence. Each participant's name is given a number and a random number generator program is used to produce the required sample size.
- Manual selection: The researcher puts each name on a separate slip of paper and places them in a container, then draws the required sample size blind (without seeing the slips). The slips should all be the same size.
- Even if a sample is truly random, it may not be representative. The randomisation may, by chance, select participants with similar characteristics that don't reflect the target population - another reason to replicate studies. There are practical limitations too: the larger the target population, the more difficult it is to select the names, as this takes a lot of time and effort. There is also no guarantee those selected will take part; they may refuse.
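The computer-selection method described above can be sketched in Python; the population list, names and sample size here are invented for illustration:

```python
import random

# Hypothetical target population: every member must be listed
# before a random sample can be drawn.
target_population = [f"Participant {i}" for i in range(1, 101)]

random.seed(42)  # seeded only so the example is repeatable

# random.sample gives every member an equal chance of selection
# and never picks the same person twice.
sample = random.sample(target_population, k=10)
print(sample)
```

In practice the seed would be omitted so the selection is unpredictable.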
- Opportunity Sample: This consists of taking the sample from people who are available at the time of the study and is a non-random method. It may simply consist of choosing the first 20 students in a college canteen who agree to fill in a questionnaire. It is popular because it is easy in terms of time and money. Sometimes, with natural experiments, opportunity sampling has to be used as the researcher has no control over who is studied.
- Issues are that it is likely to produce a biased sample, as it is easy for the researcher to choose people from their own social/cultural group. The sample would therefore not be representative, as it may have different qualities to people in general; as a consequence, the research shouldn't be generalised to the wider population.
- Volunteer Sample: Consists of participants becoming part of a study because they respond to a request by the researcher. This could be in the form of an advert in a newspaper or a poster on a public building. It is a non-random method. It is quick and relatively easy to do and can reach a large variety of participants - they decide whether they join or not.
- Issues are that the type of participants who volunteer may not be representative of the target population for a number of reasons e.g. they may be more obedient, more motivated to take part in studies etc.
- Validity concerns how accurate and truthful the research is. There are two types:
- Internal validity considers whether the study really is testing what it says it is; for instance, an experiment is internally valid if it was the IV that caused the DV to change. It can be lowered by:
- Poor control of extraneous variables, as they may have caused the DV to change.
- The presence of demand characteristics, where participants behave in the way they think is expected of them, which means the study might not really be testing what it intends to.
- Researcher/investigator effects, where the researcher affects the outcome of the study, either through their own behaviour (investigator effect) or by setting up the experiment to conform to their own beliefs (researcher bias).
- Order effects - in repeated measures, the change in the DV could be caused by the effect of having previously done the first condition e.g. participants perform better because they know what will happen.
- Individual differences - in an independent measures experiment, it may be individual differences between the participants in each condition that are responsible for the changes in the DV.
Assessing Internal Validity
- Face validity: The study or measure is scrutinised to see if, on the surface, it is an accurate reflection of what is trying to be measured.
- Concurrent Validity: Two sets of scores are obtained, one from the new test and one from an alternative test that is known to be valid; the results of both tests are correlated and a high positive correlation indicates that the new test has concurrent validity.
- Predictive Validity: This assesses how well the result of a measure predicts something at another time e.g. whether a mock exam result will predict the real result.
- Ways to improve internal validity are:
- Do the study in lab conditions so that extraneous variables can't affect the results;
- Don't use the same participants because they could show order effects;
- Don't use different participants every time because their individual differences could influence the study's results.
- External validity looks at the extent to which the findings of the research can be generalised across people, time and places. If a study lacks internal validity, it automatically lacks external validity.
- There are three types of external validity:
- Population Validity: The extent to which research findings can be generalised to the target population or the wider population.
- Temporal Validity: The extent to which research findings can be generalised to other periods of time.
- Ecological Validity: The extent to which research findings can be generalised to other settings. If a study has mundane realism it may have ecological validity, but if the setting of the study is very specific it may not be possible to generalise to other settings. A study can have external validity if it has experimental realism - this is when the participants believe that the situation they are in is real.
- Ways to assess this validity: if the study has a big sample, external validity should be higher; if it looks at lots of different cultures, it should be higher; if it was done more recently, it should be higher.
- A good way to judge this validity is to look at replications of the study and see if similar results were obtained.
- Reliability refers to whether there is consistency in the research.
- Researcher reliability: Whether all the researchers involved in the study are behaving consistently. In observational studies this is measured through inter-rater reliability.
- Assessing researcher reliability: Repeat the same study again with different researchers, if the researcher reliability is high then similar results will be gathered.
- Assessing inter-rater reliability: This can be done by comparing the observation records of the different researchers; if they are similar, then inter-rater reliability will be high.
- Improving researcher reliability: You could make sure that all the researchers are given the same standardised instructions, so they all say the same thing and make sure they agree fully.
- Improving inter-rater reliability: Make sure that the behaviours/acts that are observed are in specific categories, so that one person won't put one behaviour in one category and another researcher puts it in another.
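One simple way to quantify how similar two observers' records are, as described above, is percent agreement between their category codes. A minimal sketch with invented behavioural categories and observation data:

```python
# Each list holds the category one observer assigned to the same
# sequence of observed behaviours (hypothetical data).
rater_a = ["aggressive", "play", "play", "aggressive", "neutral", "play"]
rater_b = ["aggressive", "play", "neutral", "aggressive", "neutral", "play"]

# Count how often the two observers coded the same behaviour identically.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a) * 100
print(f"{percent_agreement:.1f}% agreement")  # 5 of 6 codes match
```

A low figure would suggest the behavioural categories need tightening, as the bullet above recommends.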
- Internal Reliability: Assesses the consistency of results across items within a test. The extent to which a measure is consistent within itself.
- Assessing Internal Reliability: Split-half method, where the items are randomly divided into two halves. The participant's score on one half of the test is compared to the other to see if they are similar.
- Improving Internal Reliability: If specific items didn't tend to get the same score on both halves of the test, they could be changed, because they may be lacking internal reliability.
- External Reliability: Refers to the extent to which a measure varies from one use to another e.g. if you repeat an experiment you should get similar results.
- Assessing External Reliability: This could be done with the test-retest method, where you give the same participant the same questions on two different occasions.
- Improving External Reliability: You would need to make sure the questions aren't just about participants' thoughts/feelings on that day, so that the answers hold in the long term.
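The split-half method above can be sketched in Python. The item scores are invented, and the small Pearson helper is written out by hand rather than taken from a library:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical test: each row is one participant's answers to 6 items.
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 1],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 3],
]
# Split the items into odd and even halves and total each half.
odd_scores = [sum(row[0::2]) for row in responses]
even_scores = [sum(row[1::2]) for row in responses]
print(pearson(odd_scores, even_scores))  # close to +1 -> high internal reliability
```

A correlation near +1 between the halves suggests the test is consistent within itself.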
- A pilot study is a small scale trial study that is carried out before the main study begins.
- You may carry this out because:
- You want to identify any extraneous variables;
- You want to test whether participants guess the aims of the study (demand characteristics);
- You want to check that procedures are adequate e.g. you've allowed participants time to complete the study;
- You want to check that participants understand or accurately interpret questions.
- A pilot study also allows us to see whether we are likely to achieve a significant result. Research projects are costly and time consuming, and therefore it is wise to test whether you are likely to find anything interesting before going ahead.
- There are two different types of experimental hypothesis: one tailed (directional) and two tailed (non-directional). A directional hypothesis states which group will score higher or lower e.g. "X will be higher than Y". A non-directional hypothesis only predicts a difference between the two conditions, but won't say which group will score higher or lower.
- Independent Groups: Different participants are used for each condition of the experiment. Advantages: No order effects. Disadvantages: Individual differences can be an issue.
- Repeated Measures: Same participants are used for all the conditions of the experiment. Advantages: Individual differences are controlled. Disadvantages: There is an order effect.
- Matched Pairs: Different participants are used for each part of the experiment, but they are matched for relevant characteristics e.g. each person in group 1 is matched for a specific characteristic with a person in group 2. Advantages: It controls individual differences and avoids order effects. Disadvantages: It is quite time consuming.
- If individual differences or order effects affect the result of the experiment they will decrease the validity and reliability.
- To minimise individual differences we usually use random allocation.
- To minimise order effects we usually use the method of counterbalancing by administering the various procedures in different sequences.
- Histograms: They show the frequency of different scores on a continuous scale e.g. scores on a memory test. The range of possible scores is represented in class intervals on the x axis. The number of participants achieving the scores in a particular class interval is plotted on the y axis. The bars are joined together to show that a continuous set of scores on one variable is being represented. Only one set of data can be displayed e.g. scores on a memory test.
- Bar Charts: Show frequencies of scores in separate categories e.g. how many people have blue eyes and how many have brown eyes. The categories are plotted on the x axis and the total number in each category is plotted on the y axis. Bars are not joined together, to show that they are displaying discrete categories.
- Scattergrams: Used to display correlational data. It shows the strength and direction of the relationship between two variables. One variable is plotted on the x axis and one variable is plotted on the y axis. A cross is placed at the point where those two scores meet.
- Mean: Strengths - Uses all the numbers in the data set. Weaknesses - Affected by outliers and may not be a number that appears in the data set.
- Median: Strengths - Not affected by outliers. Weaknesses - Doesn't take into account all of the data and may not be a number that appears in the data set.
- Mode: Strengths - Will always be a number in the data set and isn't affected by outliers. Weaknesses - There may be more than one of them.
- Measures of dispersion tell us how the data is spread. The main measures are the range and standard deviation.
- The range: It involves subtracting the lowest score from the highest score. It is most useful when assessing how representative the median is as a typical score. The higher the range, the less representative the median is, because it indicates the scores are spread widely.
- Standard Deviation: This gives us the average distance of each score from the mean and therefore, tells us something about how representative the mean is.
- A high standard deviation would indicate that the mean isn't representative, because it indicates a high average distance between each score and the mean; the scores are spread out from the mean.
- A low standard deviation would indicate that the mean is representative because it indicates a low average distance between each score (they are more tightly clustered around the mean).
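The measures above can be computed with Python's statistics module. The scores are invented, with one outlier (41) included to show its effect on the mean:

```python
import statistics

scores = [12, 15, 15, 16, 18, 20, 41]  # hypothetical memory-test scores

mean = statistics.mean(scores)      # uses every score; pulled up by the outlier 41
median = statistics.median(scores)  # middle score; unaffected by the outlier
mode = statistics.mode(scores)      # most frequent score
score_range = max(scores) - min(scores)
sd = statistics.pstdev(scores)      # population standard deviation

print(mean, median, mode, score_range, round(sd, 2))
```

Here the outlier drags the mean above all but one of the scores, while the median stays typical; the large standard deviation flags that the mean isn't very representative.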
- Features of a good questionnaire: Clear questions that are easy to understand; questions that don't lead respondents to give a particular answer; avoids making assumptions about respondents; avoids questions that are too personal; the questionnaire is piloted to make sure questions are understood and interpreted correctly; and if using multiple choice, an adequate choice of responses should be given.
- Features of a good interview: The interview should have a specific purpose/aim. It should be carefully planned/piloted. The interviewer should aim to establish a rapport with the interviewee.
- Features of a good observation: There should be clear aims; behavioural categories should be developed before it begins to ensure consistency between observers; observers should be carefully trained before it begins so that they are clear about what counts as an example of each behavioural category; the timing of each observation should be standardised; video recording allows observation reports to be checked and pilot categories and checklists before observations begin.
- Correlational hypotheses predict a relationship between two variables not a difference, and therefore they are worded differently to experimental hypotheses.
- A directional hypothesis for a correlation states whether the relationship will be a positive or negative correlation. A non-directional hypothesis simply states there will be a correlation, without saying which direction it will be.
- Correlations are designed to investigate the strength and direction of a relationship between two variables. The strength of the correlation is expressed by the correlation coefficient. This is always a figure between +1 and -1, where +1 indicates a perfect positive correlation and -1 indicates a perfect negative correlation. It will be 0 if there is no correlation between the two variables.
- The closer to 0, the weaker the correlation; the closer to +1 or -1, the stronger the correlation.
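Spearman's Rho (the correlation test used later in these notes) can be sketched by ranking each variable and applying Pearson's formula to the ranks. The data is invented and the helper assumes no tied scores, which a full implementation would have to handle:

```python
from math import sqrt

def ranks(values):
    """Rank scores from 1 (lowest) upwards; assumes no tied scores."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson's r computed on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical co-variables: hours revised and test score.
hours = [2, 9, 5, 7, 1]
score = [30, 90, 45, 80, 20]
print(spearman_rho(hours, score))  # ~ +1: a perfect positive correlation
```

Because every participant's rank on one variable matches their rank on the other, the coefficient comes out at the top of the +1 to -1 scale described above.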
- Content Analysis: If a researcher wants to convert qualitative data into quantitative data they use this process. It is carried out like an observation, because it involves developing categories and noting how many instances of each are observed in the material used.
- The difference being that rather than directly observing behaviour you collect materials that have already been produced and use them.
- The main stages of content analysis are:
- Draw up a coding system/checklist and choose categories that fit your hypothesis;
- Do a pilot study to check this;
- Select and collect your source material and design the data collection method;
- Code the data using your categories;
- Record how many times each behaviour is seen using a tally chart;
- Analyse data using bar graphs/frequency distribution table;
- Lastly, draw conclusions.
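The coding and tallying stages above can be sketched with a Counter. The coding categories and coded observations are invented for illustration:

```python
from collections import Counter

# Hypothetical coding system for TV adverts, applied to a list of
# already-coded observations (code the data, then tally it).
categories = ["male lead", "female lead", "humour", "celebrity"]
coded_data = ["male lead", "humour", "male lead", "female lead",
              "humour", "humour", "celebrity", "male lead"]

tally = Counter(coded_data)  # the tally chart
for category in categories:
    print(f"{category}: {tally[category]}")
```

The resulting counts are the quantitative data that would then go into a bar graph or frequency distribution table.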
- Replicability means being able to repeat a study and get similar results.
- It helps guard against scientific fraud because if a study is done more than once you should be able to tell if there is any fraud in the first study.
- It allows scientists to check whether a finding was a one-off, because if the findings are completely different when repeated this suggests it was a one-off occurrence.
- Tightly controlled lab conditions increase replicability because this should mean that findings are more similar because there shouldn't be any extraneous variables that influence the study. Also there would be more detailed directions, so the studies would be carried out more accurately.
- Objectivity means basing conclusions on information gained through the senses rather than on personal opinion - it gets rid of bias.
- There is an understanding that true objective science is impossible as all people have beliefs and expectations and that these influence the observations they make and could introduce bias into their investigations.
- Attempts should be made to remain objective: this entails tight control and double blind techniques (where neither the participants nor the researchers running the study know which condition each participant is in).
- Theories are used to explain certain behaviours. A theory can't just be made up without any justification as this would not be objective, and therefore wouldn't be scientific.
- A theory will be constructed as the result of either observation, or from past research.
- Once a scientific theory is constructed it must be subjected to rigorous testing to see whether the gathered evidence supports or challenges the theory.
- Scientific progress is made when theories are developed, systematically tested using empirical research methods, leading those theories to be validated, modified or rejected.
- In the introduction, a psychologist will introduce the topic of the investigation. They will write about any observations that led to the theory. They will use this information to justify the direction of their hypothesis. The aim and hypothesis are given at the end of the introduction.
- A hypothesis is a prediction stating what you expect to happen which you can test.
- A researcher will also create a null hypothesis, which predicts there will be no significant difference between conditions of variables. It is included to eliminate bias, because it allows for the possibility that results are down to chance. If the results of the study aren't strong enough, the psychologist has to accept the null hypothesis.
- Science relies on empirical methods of observation and measurement.
- These methods rely on direct sensory experience (phenomena that can be directly observed).
- Careful observation and measurement are needed to generate empirical evidence.
- Only that which is publicly observable and can be agreed on by others can be validated as knowledge.
- Opinion, intuition, and beliefs are not empirical and therefore not scientific.
- Major features of science:
- Theory Construction;
- Hypothesis Testing;
- The use of empirical methods.
- For research to be widely accepted in psychology it has to be published in an academic journal.
- To safeguard the quality of published research, all reputable academic journals employ a robust review process for all research papers before they are considered for publication.
- The paper is sent to external reviewers who read the draft to check on: originality of the research; the appropriateness of the research design; ethical issues; the sampling technique used; potential sources of bias; the operationalisation and control of key variables; the reliability and interpretation of the findings; and the appropriateness of conclusions drawn.
- Recommendations are then made as to whether the paper is accepted.
- Peer review helps to ensure that any research paper published in a well respected journal can be taken seriously by fellow researchers.
- If peer reviewers are biased, this could cause opposing theories to be rejected, which isn't fair. Also, friends may favour friends, which could make the process subjective.
- If research only shows a small difference it is less likely to get published, even if the research is important. We need to know all the results to understand the bigger picture. Well known psychologists are more likely to get published compared to less well known ones which means important research may not get published.
- The mean average uses all available numerical data and is the most powerful measure of central tendency. The mode is the most commonly occurring number in the data set. The greater the range, the more spread out the scores will be.
- Bar Charts: Usually used when the data is in categories (nominal data), when the data is in rank order (ordinal data) or to illustrate the average scores from different samples.
- Histograms: These tend to be used when data is continuous. Class intervals are created and represented along the x axis, and the frequency of scores in each class is represented on the y axis. Used with interval data.
- Frequency Polygons: Continuous data in class intervals are represented. Useful when two or more data sets need representing on the same graph. It is drawn by linking the frequencies at the mid points of the class intervals. The lines must be joined up to the x axis at the beginning and at the end of the line.
- Scattergrams: These are used when plotting two co-variables against each other. Can help to determine the direction of correlation; establish the strength of the correlation and analyse non-linear relationships.
Level of Significance
- It refers to the percentage chance a researcher is prepared to take of rejecting the null hypothesis when in fact it should have been retained. In other words, the chance of accepting that the IV causes the change in the DV when it was really down to chance. Probability values may be written as percentages or decimals e.g. the 5% level of significance can be written as p = 0.05 (where p = probability of the null hypothesis being true).
- p = 0.05 means there is a 1 in 20 chance of the results having occurred by chance alone.
- If p < 0.05 this means that the probability of the null hypothesis being true is less than 5%.
- If p ≤ 0.025 this means that the probability of the null hypothesis being true is equal to or less than 2.5 in 100 or 2.5%.
- If p < 0.01 this means that the probability is less than 1 in 100.
- p = 0.05 is the usual level chosen. However, when research is socially sensitive, and it is therefore important not to accept a result as significant if it could be caused by chance, 0.01 is used.
- Purpose of statistical tests: They enable researchers to calculate a statistic from the data they have collected and compare it to a critical value given in a table - this allows them to decide whether the result is significant at the chosen level.
Type 1 and 2 Errors
- By using significance levels, we are always at risk of either rejecting the null hypothesis when it is true or retaining the null hypothesis when it is false.
- A type 1 error occurs when a null hypothesis is rejected when it shouldn't be. The likelihood of a type 1 error mirrors the level of significance chosen e.g. a 5% chance at p = 0.05.
- A type 1 error is also known as a false positive.
- A type 2 error occurs when a null hypothesis is retained when it shouldn't have been.
- It is when we accept that the results are down to chance, when in fact they are not.
- It is referred to as a false negative and is more likely to occur with a more stringent (lower) significance level.
- p = 0.05 tends to be chosen because it balances the chance of type 1 and type 2 errors.
- If p = 0.1 is employed, more type 1 errors occur.
- If p = 0.01 is employed, more type 2 errors occur.
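The link between the significance level and type 1 errors can be illustrated by simulation: below, the null hypothesis is true by construction, yet some samples still look "significant". The sign-test-style rejection region (15 or more, or 5 or fewer, improvers out of 20) is a simplification chosen so the false-positive rate sits near 5%:

```python
import random

random.seed(1)  # seeded only so the example is repeatable

def looks_significant(n=20):
    """One simulated study where the null hypothesis is TRUE:
    each of n participants 'improves' with probability 0.5."""
    improved = sum(random.random() < 0.5 for _ in range(n))
    # two-tailed rejection region, roughly the p < 0.05 level
    return improved >= 15 or improved <= 5

runs = 10_000
false_positives = sum(looks_significant() for _ in range(runs))
print(false_positives / runs)  # roughly 0.04: the type 1 error rate
```

Every rejection here is a type 1 error, since the null hypothesis really is true; the observed rate tracks the chosen significance level.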
- The four tests are: Chi-squared; Wilcoxon T; Mann-Whitney U and Spearman's Rho.
- The choice of test depends on whether the researcher is testing for a difference between groups or a correlation between two variables; the level of measurement; and, in tests of difference, whether the design is independent groups, repeated measures or matched pairs.
- Lab, Field, Natural and Quasi experiments are all testing for difference.
- Correlational research is trying to show how two co-variables are linked.
- Tests of difference: Chi-squared, Wilcoxon T and Mann-Whitney U.
- Correlation Tests: Spearman's Rho.
- Nominal data uses chi-squared.
- Ordinal data uses Spearman's Rho, Mann-Whitney U and Wilcoxon.
- Treated at ordinal level (even if the data is interval): Wilcoxon T; Mann-Whitney U and Spearman's Rho.
- Independent Groups use chi-squared and Mann-Whitney U.
- Repeated Measures/Matched Pairs both use Wilcoxon T.
- Correlations don't have an experimental design, so this consideration doesn't apply to Spearman's Rho.
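The selection rules above can be written as a simple lookup table; the key structure (testing for, level of measurement, design) is just one way of encoding them:

```python
# Keys are (testing_for, level_of_measurement, design);
# correlations have no design, so that slot is None.
TEST_TABLE = {
    ("difference", "nominal", "independent groups"): "Chi-squared",
    ("difference", "ordinal", "independent groups"): "Mann-Whitney U",
    ("difference", "ordinal", "repeated measures"): "Wilcoxon T",
    ("difference", "ordinal", "matched pairs"): "Wilcoxon T",
    ("correlation", "ordinal", None): "Spearman's Rho",
}

def choose_test(testing_for, level, design=None):
    """Look up the appropriate statistical test for a study."""
    return TEST_TABLE[(testing_for, level, design)]

print(choose_test("difference", "nominal", "independent groups"))  # Chi-squared
print(choose_test("correlation", "ordinal"))                       # Spearman's Rho
```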
- The critical value varies according to:
- 1) The number of participants you used (known as N) apart from Chi-squared where it is the degrees of freedom (df) which for your purposes will always be 1.
- 2) Whether the research hypothesis was one tailed (directional) or two tailed (non-directional).
- 3) The level of significance you are choosing which will usually be p = 0.05.
- For Chi-squared and Spearman's Rho, the observed value has to be equal to or greater than the critical value to be significant. For Mann-Whitney U and Wilcoxon T, the observed value has to be equal to or less than the critical value to be significant.
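A worked sketch of comparing an observed value with a critical value, using a chi-squared calculation on invented data (two categories, so df = 1; the critical value 3.84 at p = 0.05 comes from standard tables):

```python
# 100 participants choose between two options; the null hypothesis
# expects a 50/50 split (hypothetical data).
observed = [60, 40]
expected = [50, 50]

# Chi-squared: sum of (observed - expected)^2 / expected.
chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
critical_value = 3.84  # chi-squared, df = 1, p = 0.05, from tables

print(chi_squared)                    # 4.0
print(chi_squared >= critical_value)  # True -> reject the null hypothesis
```

Because chi-squared is a test where the observed value must be equal to or greater than the critical value, 4.0 against 3.84 means the result is significant at p = 0.05.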
- Investigative Reports: In this order you would need to write:
- Descriptive statistics (table and interpretations); graphical representation of data; inferential statistics; a conclusion based on this; discussion of findings (relation to previous research, problems with the research and suggestions for further research).
Analysis and Interpretation
- Content Analysis: A systematic research technique for analysing transcripts of interviews and documents, for example advertisements; films; TV; children's books; magazines and websites.
- Care is needed to ensure that the categories are discrete and don't overlap.
- Some are concerned that the richness and complexity of qualitative data is lost through content analysis and this is reductionist.
- Turning qualitative data into hard data may be a problem as the researcher could let their bias influence the study.
- Content analysis allows for greater reliability checking: because of the coding system, it is easier to repeat the studies.
- Qualitative data benefits from being converted into quantitative data in terms of statistical procedures which can be used to identify patterns in the data.
- Thematic analysis is a method for identifying, analysing and reporting patterns or themes in data.
- It minimally organises and describes your data set in rich detail. However, it can go further than this and interprets various aspects of the research topic.
- The 6 stages are: 1) Transcribe the data if needed, reading it thoroughly. 2) Divide the text using a forward slash whenever the subject changes. 3) Search the text for meanings that are similar and group them together. 4) Keep adjusting the groups as you continue to sort through. 5) Once you have identified all the themes, define and name them. 6) Write up the report - present a case for each theme and provide some supporting quotations from the text.
- Accessibility: It is one of the more accessible ways of analysing data. This means reports are usually more understandable, which makes it easier for psychologists to share their research.
- Subjectivity: There is a risk that different researchers could interpret the data in different ways. This means people may lose confidence in the findings that are gathered.
- Quality: It is hard to assess the quality because control, objectivity and replicability are missing, as it is qualitative research. To address this, it is suggested that we use different criteria for analysing the data.
- Separate it into 4 sections - participants, design, materials and procedure.
- Under participants, write about your sample and how you will go about choosing it (e.g. volunteer) and why. If the question hints at the demographic you should use, go for it, but in most answers it's up to you to pick how many participants you're actually going to have etc.
- Under design, write your hypothesis (and whether it's directional etc.), the independent and dependent variables in your study, what type of experiment it is going to be, or whether it's a correlational study, and maybe explain some measures you will take to try to cancel out extraneous variables etc.
- Under materials, you don't have to put much. E.g. if you're analysing kids' behaviour on a playground, you may need a video camera to record and analyse later, a content analysis sheet, and coloured bibs so you can differentiate your participants from the other children.
- Under procedure, go through how many days the experiment will take, the exact steps involved (including ethical matters, consent etc., and then the actual study), how the data is collected, whether you're using multiple observers at a time to increase inter-rater reliability, things like that. End it with a debriefing, remember! If you have time, also go through how you will plot your results, and maybe even which statistical test you'd use.