A2 Psychology Research Methods


Reporting Psychological Investigations (5)

Referencing 

Full details of any source material the researcher drew upon or cited in the report must be referenced.

The name of the journal and the title of the book appear in italics, along with the issue number.


Reporting Psychological Investigations (1)

The first section of a journal article is a short summary (the abstract) that includes the major elements: aims, hypotheses, method, procedure, results and conclusions.

Psychologists read lots of abstracts to identify investigations that are worthy of further examination.

Introduction- This is a review of the general area of investigation, detailing relevant theories, concepts and studies that are related to the current study.

The research review should follow a logical progression: the beginning should be broad and should become gradually more specific until the aims and hypotheses are presented.


Reporting Psychological Investigations (2)

Method: should include enough detail so that other researchers can replicate the study.

  • Design- should be clearly stated, e.g. independent groups, naturalistic observation, with a reason and justification given for the choice
  • Sample- information about the people involved in the study: how many there were, the sampling method and the target population
  • Apparatus/materials- detail of the materials used and any assessment instruments
  • Procedure- a list of everything that happened from start to finish: everything said to participants, briefing, debriefing, standardised instructions
  • Ethics- an explanation of how ethical issues were addressed in the study

Reporting Psychological Investigations (3)

Results- summarise the key findings from the investigation. These include descriptive statistics, e.g. tables, charts and graphs.

Inferential statistics should include reference to the choice of statistical test, the calculated and critical values, the level of significance and the final outcome, i.e. which hypothesis was accepted and which was rejected.

Raw data and calculations appear in an appendix rather than the main body of the report.

If qualitative methods are used then the results/findings are likely to involve themes and/or categories.


Coding and Quantitative Data / Thematic Analysis and Qualitative Data

Coding is the initial stage of content analysis. 

The data being analysed may be extensive, so the information needs to be categorised into meaningful units.

Example: Counting up the number of times a particular word or phrase appears.
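The counting step above can be sketched in code. A minimal sketch, assuming hypothetical coding units and an invented transcript (the function name and data are illustrative, not from any particular study):

```python
from collections import Counter
import re

def code_transcript(text, coding_units):
    """Tally how often each pre-defined coding unit (a word) appears in the text."""
    words = re.findall(r"[a-z']+", text.lower())  # normalise case, split into words
    counts = Counter(words)
    return {unit: counts[unit] for unit in coding_units}

transcript = "I felt anxious, then calm, then anxious again."
print(code_transcript(transcript, ["anxious", "calm"]))
# {'anxious': 2, 'calm': 1}
```

In real content analysis the coding units would be agreed in advance and might be phrases or categories rather than single words.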

Content analysis can also generate qualitative data; an example is thematic analysis.

Themes only emerge once the data has been coded.

A theme in content analysis refers to any idea that is recurrent, i.e. keeps cropping up.

Once the researcher is satisfied with the themes, they may collect a new set of data to test the validity of the themes and categories.


Content Analysis

A type of observational research in which people are studied indirectly via the communications they have produced.

The types of communication subject to content analysis include written material, spoken material and examples from the media.

The aim of content analysis is to summarise and describe the communication in a systematic way so that overall conclusions can be drawn.


Improving Validity (Experimental Research)

Using a control group can improve validity as the researcher is able to assess whether the changes in the DV were due to the effect of the IV.

Standardising procedures will minimise the impact of participant reactivity and investigator effects on the validity of the outcome.

The use of single-blind and double-blind procedures is designed to achieve the same aim. 

Single-blind: participants are unaware of the aims of the study, which reduces the effect of demand characteristics on their behaviour.

Double-blind: a third party conducts the investigation without knowing its main purpose, which reduces demand characteristics and investigator effects. This in turn improves validity.


Hypotheses

  • Hypotheses are written before an investigation is carried out
  • The alternative hypothesis states that something will happen
  • Hypotheses can be directional or non-directional

Directional hypothesis- states the direction of the difference or relationship, e.g. more or less, positive or negative. Example: Black people can run faster than white people.

Non-directional hypothesis- states that there is a difference but does not specify its direction, e.g. your ethnicity will affect how fast you can run.

Null hypothesis- states there is no difference (or relationship) between conditions, e.g. there is no correlation between how fast a person can run and their ethnicity.


Case Studies

A case study is a detailed, in-depth analysis of an individual, group or institution.

Involves analysis of unusual individuals or events such as someone with a rare disorder. 

Case studies produce qualitative data.

A case history of the individual is constructed using interviews, observations or questionnaires.

Quantitative data can be produced if the person is subject to experimental or psychological testing to assess what they are capable of.

Case studies are longitudinal, and additional information can be gathered from family, friends and the individual themselves.


Case Study Evaluation

Strengths: offer rich, detailed insights that shed light on very unusual and atypical forms of behaviour.

Contribute to our understanding of 'normal' functioning

Generate hypotheses for future study 

Limitations: hard to generalise results when dealing with a small sample size (case studies are small samples).

The information in the final report is based on the subjective selection and interpretation of the researcher; it could be interpreted differently by another.

Information from friends and family may be prone to inaccuracy because of memory decay, especially if childhood stories are told.

Evidence from case studies is therefore low in validity.


Improving Reliability (Experiments)

Experiments: lab experiments are described as reliable because researchers have strict control over the procedure, for example the instructions that participants receive and the conditions in which they are tested.

Control is more achievable in a lab than in the field.

One thing that can affect the reliability of findings is if pp's were tested under slightly different conditions each time they were tested. 


Reporting Psychological Investigations (4)

Discussion

The researcher summarises the results/findings in verbal rather than statistical form. These are discussed in the context of the evidence presented in the introduction.

Limitations must be taken into account and discussed; with reference to the method and the sample, there should be suggestions of how the limitations might be addressed in future studies.

Wider implications are considered; these may include real-world applications of what has been discovered and the contribution the investigation has made to the existing knowledge base within the field.


Improving Reliability (Observations)

Observations: reliability can be improved by making sure behavioural categories have been properly operationalised and that they are measurable/self-evident.

Categories shouldn't overlap and all possible behaviours should be covered on the checklist.

If categories are not well operationalised, or are overlapping or absent, different observers have to make their own judgements about what to record and where.

This can in turn lead to differing and inconsistent records.


Inter-observer Reliability

Everyone has their own unique way of seeing the world. It is the same with observational research: one researcher's interpretation of events differs from someone else's. This introduces subjectivity bias.

To improve reliability it is recommended that observers conduct their observations at least in pairs, but inter-observer reliability must be established.

A small-scale run (pilot study) of the observation should be carried out to check that observers are applying the behavioural categories in the same way. Inter-observer reliability may be reported at the end of the study to show that the data collected were reliable.

Observers watch same events/sequence of events and record independently.

Data should then be correlated to assess its reliability.


Improving Reliability (Questionnaires)

Questionnaires: the reliability of questionnaires should be measured using the test-retest method.

A questionnaire that has low test-retest reliability may require some items to be removed or rewritten.

For example, if questions are complex they may be interpreted differently by the same person on different occasions. A solution is to replace open questions with closed, fixed-choice ones, which may be less ambiguous.


Improving Reliability (Interviews)

Interviews: the best way of ensuring reliability is to use the same interviewer each time.

If this is not possible or practical, all interviewers must be properly trained so that no interviewer asks questions that are too leading or ambiguous.

Structured interviews avoid this, as the interviewer's behaviour is more controlled because fixed questions are used.

Unstructured interviews, however, are more 'free-flowing' and are therefore less likely to be reliable.


Validity

Validity refers to whether a psychological test, observation or experiment produces a result that is legitimate.

Is it genuine and does it represent what is out there in the real world?

Does the researcher measure what they intended to measure?

Can the research be generalised beyond the setting in which it was carried out?


Assessment of validity

Face Validity: a basic form of validity; whether a test, scale or measure appears to measure what it is meant to.

This is assessed by simply looking at the measurement, or by passing it to an expert to check.

Concurrent Validity: demonstrated when the results obtained from a test or scale closely match those obtained on another recognised, well-established test.

You compare your results to a well-known test to check for validity.

Close agreement between the two tests would indicate that the new test has high concurrent validity.


Reliability

Reliability is a measure of consistency. 

If a particular measurement can be repeated then that measurement is described as reliable.

Ways of testing reliability

Test-retest: involves administering the same test or questionnaire to the same person or people on different occasions.

If the measure is reliable then the results obtained should be the same, or at least similar, each time it is administered.

There must be enough time between test and retest to ensure participants cannot recall their previous answers, but not so long that their attitudes, opinions or beliefs have changed.


Internal Validity

Refers to whether the effects observed in an experiment are due to the manipulation of the IV and not another factor.

A threat to internal validity is participants responding to demand characteristics and acting in the way they think is expected.

For example, in Milgram's study some participants claimed to be 'playing along' and knew that they were not administering real shocks; they were responding to the demands of the situation.


External Validity

External validity relates to factors outside of the investigation i.e. generalising to other settings, populations and eras.

Ecological Validity: generalising findings from one setting to another, most particularly to 'everyday life'.

If the task used to measure the DV in an experiment is not 'like everyday life' (i.e. it lacks mundane realism), this can lower ecological validity.

All aspects of research must be looked at in order to decide whether findings can be generalised beyond the particular research setting.


Content Analysis Evaluation

Strengths: content analysis circumvents (gets around) many ethical issues because most of the material to be analysed already exists in the public domain, for example ads, articles and films.

There are then few issues obtaining permission, even where the communication is of a sensitive nature, and such naturally occurring material can be high in external validity.

Content analysis is also flexible, as it produces both qualitative and quantitative data depending on the aims of the research.

Limitations: people tend to be studied indirectly, so the communication they produce is usually analysed outside the context within which it occurred.

The researcher may then attribute opinions and motivations to the speaker or writer that were not originally intended.


Temporal Validity

Whether findings from a study or concept within a theory hold true over time.

Example: critics suggest that the high conformity rates in Asch's study arose because the original experiment was conducted during a particularly conformist era in recent American history (the 1950s).

Freud's concepts, such as the idea that females experience penis envy, are criticised as outdated, sexist and a reflection of the patriarchal Victorian society in which he lived.


Improving Validity (Questionnaires)

Many questionnaires and psychological tests incorporate a lie scale within the questions to assess the consistency of a respondent's responses and to control for the effects of social desirability bias.

Validity can be enhanced further by assuring respondents that all data they submit will remain anonymous.
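A lie scale of this sort can be sketched as a consistency check between reverse-coded item pairs. Everything below (the item names, the 1-5 scoring, the 1-point tolerance) is invented for illustration:

```python
# Hypothetical paired items: each pair asks essentially the same thing, one
# phrased positively and one negatively, scored 1-5 and reverse-coded (6 - score).
def consistency_score(responses, pairs):
    """Count pairs where the reverse-coded answers disagree by more than 1 point."""
    inconsistent = 0
    for item, reversed_item in pairs:
        if abs(responses[item] - (6 - responses[reversed_item])) > 1:
            inconsistent += 1
    return inconsistent

responses = {"q1": 5, "q7": 1, "q3": 4, "q9": 4}  # q7 reverse-codes q1; q9 reverse-codes q3
pairs = [("q1", "q7"), ("q3", "q9")]
print(consistency_score(responses, pairs))  # 1: the q3/q9 pair is inconsistent
```

A high inconsistency count would suggest the respondent is answering in a socially desirable rather than truthful way, so their data might be treated with caution.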


Improving Validity (Observations)

Observational research may produce findings that have high ecological validity as there is minimal intervention by the researcher.

In covert observations the behaviour of the person observed is more likely to be natural/authentic because the observer remains undetected.

Behavioural categories that are too broad, overlapping or ambiguous may have a negative impact on the validity of the data collected.


Improving Validity (Qualitative Methods)

Qualitative methods are thought to have higher levels of ecological validity than quantitative ones.

Qualitative data has higher ecological validity because it is more in-depth and detailed, for example in case studies and interviews, and so is better able to reflect the participant's reality.

The researcher still has to demonstrate the interpretative validity of their conclusions: the extent to which the researcher's interpretations of events match those of the participants.

Validity is enhanced further through triangulation, the use of a number of different sources as evidence, for example data compiled through interviews with family and friends, personal diaries, observations, questionnaires, etc.


Levels of Significance

Statistical tests work on the basis of probability rather than certainty.

The significance level is the point at which a researcher can claim to have discovered a significant difference or correlation within the data.

The point at which the researcher can reject the null hypothesis and accept the alternative hypothesis.

The usual level of significance in psychology is 0.05, or 5%.

This means the probability that the observed effect (result) occurred by chance is equal to or less than 5%.

Psychologists can never be 100% certain about a particular result as they have not tested all members of the population.


Use of statistical tables

Once a statistical test has been calculated, the result is a number known as the calculated value (or observed value).

To check for statistical significance, the calculated value must be compared with a critical value: the number that tells us whether to reject the null hypothesis or accept it.

Each statistical test has its own set of critical values, developed by statisticians.
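The comparison itself is a simple rule, though its direction depends on the test: for some tests (e.g. chi-squared, Spearman's rho) the calculated value must be greater than or equal to the critical value to be significant, while for others (e.g. the sign test, Mann-Whitney U) it must be less than or equal. A sketch, with invented numbers for illustration:

```python
def is_significant(calculated, critical, higher_is_significant=True):
    """Compare a calculated value against the critical value from the table."""
    if higher_is_significant:
        return calculated >= critical  # e.g. chi-squared, Spearman's rho
    return calculated <= critical      # e.g. sign test, Mann-Whitney U

# Hypothetical sign-test-style check, where LOWER calculated values are significant:
print(is_significant(2, 3, higher_is_significant=False))  # True: reject the null hypothesis
```

In practice the critical value would be looked up in the test's table using the three criteria described on the next card (tails, N or df, and significance level).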


Using Tables of Critical Values

There are three criteria a researcher follows in order to know which critical value to use.

Whether the test is one-tailed or two-tailed: a one-tailed test is used if the hypothesis was directional; a two-tailed test is used if the hypothesis was non-directional.

The number of participants in the study, which appears as the N value on the table. For some tests, degrees of freedom (df) are calculated instead.

The level of significance (or p value). 0.05 is the standard level of significance in psychological research.


Lower Levels of Significance

A more stringent level of significance, 0.01, can be used in studies where there may be a human cost, for example drug trials.

If there is a large difference between the calculated and critical values, the researcher should check more stringent levels, as the lower the p value, the more statistically significant the result.


Type 1 and Type 2 Errors

As researchers can never be 100% certain that they have found statistical significance, it is possible that the wrong hypothesis is accepted.

A Type 1 error: when the null hypothesis is rejected and the alternative hypothesis is accepted, but it should have been the other way round because in reality the null hypothesis is 'true'. Also referred to as an optimistic error or false positive, as the researcher claims to have found a significant difference or correlation when one does not exist.

A Type 2 error: when the null hypothesis is accepted but it should have been the alternative hypothesis, because in reality the alternative hypothesis is 'true'. This is known as a pessimistic error or false negative.

A Type 1 error is more likely to occur if the significance level is too lenient (too high), e.g. 0.10 or 10%.

A Type 2 error is more likely to occur if the significance level is too stringent (too low), e.g. 0.01 or 1%.
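A small simulation can illustrate the Type 1 error rate: even when the null hypothesis is true (here, a fair coin, so there is genuinely no effect), an extreme-looking result will occasionally cross the significance threshold. All numbers are invented for illustration:

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

def looks_significant(n_flips=20, threshold=15):
    """One 'study' under a true null: a fair coin, declared 'significant'
    if it produces 15+ heads out of 20 (roughly a 2% chance)."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads >= threshold

trials = 10_000
false_positives = sum(looks_significant() for _ in range(trials))
print(false_positives / trials)  # the long-run Type 1 error rate, about 0.02 here
```

Lowering the threshold (a more lenient significance level) would raise this false-positive rate, while raising it (a more stringent level) would lower it at the cost of more Type 2 errors.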

