# research methods pt 2

- Created by: Ikra Amin
- Created on: 16-04-15 22:45

## Observational methods

the observational method can be used in a number of ways, ranging from controlled observations in labs to naturalistic observations in a natural environment.

aim of controlled observation: control variables that might influence behaviour

aim of naturalistic observation: produce data with higher ecological validity by observing naturally occurring behaviour

observations may be participant or non-participant (researcher not personally involved but observing as an outsider) depending on whether the observer interacts with the observed or not

can be overt/disclosed (p's know they're being observed) or covert/undisclosed (they do not know), depending on whether p's know they are being observed or not

naturalistic: no rigid control or manipulation, but allows the researcher to access more natural, freely occurring behaviour; tends to be field based and yields data that has high ecological validity

## Structured vs unstructured observations

Unstructured:

- spontaneously noting behaviours/events the observer regards as important as they happen (rather than having a pre-planned list of specific behaviours that are expected)
- qualitative/descriptive data most likely (in contrast to quantitative/numerical)

Structured:

- use behavioural categories or other means to consistently record data
- rating behaviours on scales or coding behaviours according to predefined principles are other ways data can be gathered in a structured way in an observation
- essential that observers are familiar with and practised using whatever structure has been agreed on. this ensures agreement between observers - inter-observer reliability

## + & - of observations

positives

- naturalistic = ecologically valid
- lab observation = easy access to p's and equipment
- when unstructured, observations allow the researcher to access data without the constraints of predetermined checklists, and this caters for unexpected behaviours
- unstructured = descriptive data yielded that is not reductionist
- structured = data gathered in a consistent manner = reliable data and a standardised process, so it can be replicated to see if future findings are consistent (if they are, it's externally reliable)

weaknesses

- do not allow us to establish cause & effect relationship between variables when used as a method of investigation
- researchers may misinterpret what they see which = invalid & unreliable data
- establishing high inter-observer reliability takes time & may never be perfect
- only know what is happening, not why it is happening

## Behaviour sampling methods

Ways of recording behaviours

Event sampling: Key behavioural events are recorded every time they occur (e.g. if the behaviour being investigated is 'fighting', only fights would be recorded, and what aspects of the event were to be recorded would have to be predetermined)

Time sampling: Behaviour is recorded at specified time intervals, e.g. record what a child's doing during the 1st 10 seconds of every 3 minutes for 1 hour

Point sampling: Observing p's serially, i.e. watching one person for 30 secs and recording what they do, then watching the next etc.
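
The time-sampling schedule above can be sketched in code - a minimal sketch using the child-observation example (the function name and defaults are my own, not from any named study):

```python
# Sketch: generate time-sampling windows for "record the first 10 seconds
# of every 3 minutes for 1 hour".

def time_sampling_windows(interval_s=180, window_s=10, total_s=3600):
    """Return (start, end) times in seconds for each observation window."""
    return [(t, t + window_s) for t in range(0, total_s, interval_s)]

windows = time_sampling_windows()
print(len(windows))            # 20 windows in the hour
print(windows[0], windows[1])  # (0, 10) (180, 190)
```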

when carrying out observations you need to create behavioural categories which are specific, not vague. this way you're likely to record objective data reliably. A behavioural checklist with clearly operationalised categories is a good idea and the use of more than 1 observer can allow inter-observer reliability.

## naturalistic observation

advantages

- natural, real behaviour is being observed
- if p's are unaware they're being watched their behaviours are more natural
- high ecological validity

disadvantages

- little control over potentially confounding variables
- covert/undisclosed can be unethical as p's unable to give consent
- no control over allocations of p's to groups
- IV not controlled

## controlled observations

advantages

- control over confound variables
- good access to participants and knowing what you're looking for

disadvantages

- control can affect behaviour which in turn affects data/results

## participant observations

advantages

- easier to understand the observees' behaviour
- participant and observer more likely to become personally involved so easier to access data

disadvantages

- hard to record observations (often done retrospectively and therefore unreliable)
- observer bias - observer records data in a subjective way so may project what they expect onto what's being observed

## non-participant observations

advantages

- observations can be made as they happen and are more reliable
- less likely to be biased in their observations as they are not as close to the p's

disadvantages

- behaviour may be recorded but the meaning behind it is not known

## self-report methods

includes: Questionnaires, interviews (structured, unstructured or semi structured), diaries

Interviews strengths:

- Allow researcher to clarify q's if they're ambiguous (not clear) because they're asking the q's directly to the participant so can respond to their queries
- Allow relationship to build up between the p & interviewer especially when face to face which may encourage p's to be honest & open
- semi-structured ones allow the researcher to digress or probe further if the need arises because of their less structured format

Interview weaknesses:

- P's may be reluctant to open up or tell the truth. This may be awkward if the subject matter is sensitive.
- Takes longer to gather data as each p is asked individually

## questionnaire

advantages

- Take less time to gather data because a group of p's can be questioned simultaneously
- Easy to standardise and replicate
- P's may be more honest if the subject matter is sensitive because they answer anonymously
- Researchers don't need to be present throughout so p's may complete the questionnaire when and where it is convenient for them

disadvantages

- quantitative data generated by fixed choice questions provides somewhat false impression of precision
- may appear less personal so it is more difficult to establish trust and warmth between the p and researcher
- researchers are often absent when p's answer, so any ambiguous questions can't be clarified and unexpected responses cannot be expanded upon

## self report method cont

overall, the self-report method/measure

- allows subjective input
- allows the collection of both quantitative and qualitative data
- allows research into areas which might be difficult to research in any other way

but, self reports may not produce valid findings, if

- p's do not respond truthfully
- do not fully understand the questions
- are influenced by social desirability effects or respond to demand characteristics
- if p's are asked the 'wrong' questions

## Validity and self reports

face validity: whether the questions being asked appear to correspond to what is being investigated - do the questions look, on the surface, like they are measuring what they ought to?

population validity: whether the sample is broad enough to ensure that findings generalise to other populations

concurrent validity: whether a measure corresponds to another, previously established valid measure of the same variable - p's would fill in both questionnaires and the 2 scores should be positively correlated

ecological validity: questions in the interview/questionnaire should relate to experiences that are real for the participant and that relate to aspects of their everyday lives

## Threats to internal validity

Social desirability effect - when p's answer dishonestly to avoid creating a bad impression.

Ambiguous questions - p's may not fully understand what they are being asked (more likely in questionnaires)

Response bias - more likely if the questionnaire is lengthy and the same question type is used throughout

Forced choice format - p's may not be able to express how they really feel or what they really do because the option is unavailable in the narrow range of options provided

Demand characteristics/screw-you effect - p's may be able to work out what is expected of them in terms of their response

Researcher bias -researchers find results they expect to find (e.g. leading questions, or non-verbal cues like a smile or raised eyebrow)

Characteristics of the researcher - may influence the way p's respond (e.g. gender - p's respond more positively to members of the opposite sex who they find attractive) etc

## Reliability and self reports

another word for reliability is consistency. in terms of self reports this can be considered in several ways:

- internal reliability - whether the self report is consistent within itself - whether the questions within it are consistent and measure the same variable throughout. a way to assess the internal reliability of a questionnaire is to use the split-half approach, which involves splitting the q's into 2 groups at random - each comprising half the questionnaire. if the questionnaire is internally reliable each half should yield similar scores.
- external reliability - refers to whether the self report stays the same from one use to another. we would expect a questionnaire measuring a psychological variable that is relatively stable to yield a consistent score when applied to the same person over a short period of time. if not, it lacks external reliability. a way to assess the external reliability of a self report is to question p's once and again some time later. the 2 sets of scores would then be correlated to determine the extent to which pairs of scores from the same person at 2 diff times are similar (test-retest approach).

reliability affected by how q's are asked & how they're responded to by p's or scored/interpreted by the researcher

## How the following improves validity/reliability

- Avoid interviews - questionnaires are easier to standardise, so responses should be more consistent (more reliable), and p's may respond more honestly (more valid data)
- Avoid asking p's to give info that clearly identifies them as respondents - increases validity & reliability as they'll be more honest if its anonymous
- consider the characteristics of the researcher in terms of gender, ethnicity, formality etc - affects validity as they may influence the way p's respond
- if using interviews, train interviewers to ask questions in a standardised way - improves reliability if same interviewer behaves in same way with every p
- avoid asking respondents to give their names or complete questionnaires in front of others - increases validity and reliability as p's won't feel pressure to behave in a certain way and may answer more honestly
- use structured interviews as opposed to semi/unstructured interviews - increases validity as they know what they want from the interview and each researcher asks same things
- train the researcher carefully to avoid them giving either intentional or unintentional cues to what they are expecting to find - increases validity as the researcher's expectations won't influence results

## cont....

- avoid face to face interviews - improves validity p's less likely to respond to social desirability effect
- avoid scoring questionnaires in the same direction, e.g. if investigating healthy eating, don't always have the healthy eating box on the extreme left - increases validity as will reduce chances of demand characteristics
- avoid using too many open-ended questions - reliability may be affected, as responses will differ between p's and so cannot be recorded consistently
- avoid too many fixed choice questions - validity affected as p's may not be able to express how they really feel due to narrow choice of questions
- avoid using semantic differential type scales with no numerical values - affects reliability and validity as p's may not fully understand the scale so may answer incorrectly or misinterpret it
- ensure questions make sense and are not ambiguous in meaning - increases validity as p's understand q's properly so can answer correctly. also increases reliability as all questions will make sense so p's won't be confused.

## Case studies

an in-depth study of just one person or small groups of people. data may be collected by interviews, experiments and observations. sometimes cases are regularly reviewed over a period of time, hence providing longitudinal information

advantages

- high ecological validity as they provide large amounts of potentially accurate info that would be unobtainable in any other way.
- appropriate for the investigation of exceptional cases
- in depth studies may be used to challenge accepted theories
- individual case studies may provide the opportunity to pool data at a later date in order to carry out some form of quantitative analysis
- case studies provide an insight often impossible with other methods. the richness of data enhances our overall psychological knowledge without necessarily testing a specific hypothesis

## disadvantages of case study

- results are unlikely to be generalisable to a broader population as each case study is unique
- since case studies cannot be replicated, reliability may be low
- the researchers own subjective feelings may influence the data gathering as well as the final report
- if the study is retrospective, recollections may be inaccurate or subject to deliberate deception
- it is impossible to control potential confounding variables
- if the case study is on 1 person and they withdraw, all data must be withdrawn and the case study cannot progress any further

## Content analysis

Analysis of qualitative data, e.g. media, books, diaries, films, interviews, pictures etc. by identifying themes and categories, which can then be used in a systematic manner to produce quantitative data.

qual data --> quan data

2 parts to the process:

- an interpretive aspect which involves deciding which categories are meaningful for study in terms of what you are investigating
- a mechanical aspect, which involves organising and subdividing the material, counting how frequently words, images, ideas, actions etc occur

## cont

Steps in content analysis - 5 mark question for how you carry it out & link to scenario in the exam

- read/watch/look at/listen to the material to be analysed, and identify recurrent themes or categories
- give examples of your categories, which fit in with the scenario
- re read (etc) the material and record how many times the categories or themes recur by placing into a tally chart. make it clear that you will be re reading or watching for a second time
- count the number of tallies recorded for each category
- analyse the findings, using the quantitative data. this may involve comparison, before/after analysis, descriptive analysis etc. to help draw conclusions.
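
The tallying step of the mechanical aspect can be sketched in Python - the categories and coded units below are invented for illustration:

```python
# Minimal sketch of the counting stage of content analysis.
from collections import Counter

# Hypothetical behavioural categories agreed on beforehand
categories = ["aggression", "helping", "humour"]

# Each item is one coded unit of the material (e.g. one scene or sentence)
coded_material = ["aggression", "humour", "aggression",
                  "helping", "aggression", "humour"]

tally = Counter(unit for unit in coded_material if unit in categories)
print(tally["aggression"])  # 3
print(tally["humour"])      # 2
```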

How reliability and investigator bias can be a problem for content analysis:

investigator bias results in subjective interpretation, so different researchers may record things differently. people might also rush, reducing consistency.

Investigator bias - you're going to choose themes that fit, ie what you expect

## Reliability

Reliability is based on consistency - how consistently a method measures.

Internal reliability = consistency within the method/how consistently a method measures within itself. It is concerned with the consistency within a test - e.g. attitude scales, psychometric tests. **(How you are collecting data)**

Within an observational study two or more observers are usually used to control for subjectivity, ie personal bias in the observations. Problems with reliability arise because it can be difficult to categorise complex behaviour into observation criteria.

External reliability = consistency between uses of the method (produce similar results when repeated at a later date)

External reliability is the ability to produce the same results every time a test is carried out

Both internal and external reliability can be checked using correlational techniques.

## Techniques to check internal reliability

Split half method/technique - used to establish the internal reliability of psychological tests by splitting the test in half and correlating the results of one half with the other (e.g. the odd-numbered items with the even-numbered items) to see how similar they are. a high positive correlation would support internal consistency and thus reliability
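
A minimal sketch of the split-half calculation, using made-up questionnaire scores and a plain implementation of Pearson's correlation coefficient:

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One pair of totals per participant: odd-numbered vs even-numbered items
odd_half  = [12, 9, 15, 7, 11]
even_half = [11, 10, 14, 8, 12]

r = pearson(odd_half, even_half)
print(round(r, 2))  # a high positive r supports internal reliability
```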

Inter-rater reliability/inter-observer reliability - used to test the accuracy of the observations. if the same behaviour is rated the same by 2 different observers then the observations are reliable. observers must be well trained and have precise, clear observation criteria to ensure consistency between them.
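
One simple way to quantify inter-observer reliability is the percentage of intervals on which two observers assigned the same category - a sketch with invented codes (correlational checks are also used, as noted above):

```python
# One code per observation interval from each of two observers (invented data)
observer_a = ["play", "fight", "play", "rest", "fight", "play"]
observer_b = ["play", "fight", "rest", "rest", "fight", "play"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = 100 * agreements / len(observer_a)
print(agreements)                    # 5 of 6 intervals matched
print(round(percent_agreement, 1))   # 83.3
```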

## Techniques to check external reliability

Test-retest reliability (same test on the same people at a later date) - ie replicating the original research. Meta analyses draw on this when they compare the findings from different studies that have tested the same hypothesis. Strong consistency between the different findings (ie reliability) indicates validity.

## Improving reliability

reliability depends on the accuracy of any measurement. inaccurate measurements or inaccurate data recording will reduce reliability. ways to reduce inaccuracy:

- **when possible, take more than 1 measurement from each p** - calculating an average score over 3 trials, for instance, will reduce the impact of an anomalous score
- **pilot studies** (especially for observations) can be used to check that the proposed method of measurement works properly
- when more than 1 researcher is used in a study, the way in which they collect and record data should be **standardised** - this may require a period of training
- ensure that the measurement is as objective as possible, ie interpretation by both the p (regarding what they're required to do) and the researcher (regarding what the p has actually done or said) should be kept to a minimum (ie **any categories are operationalised**)

## reliability

reliability refers to consistency amongst results and also ability to be repeated

internal reliability: how consistently a method measures constructs within itself. can be checked by the split half method, in which results from half the measures are correlated with results from the other half. a high positive correlation indicates high internal reliability.

external reliability: how consistently a method measures constructs over time when repeated. checked by the test-retest method, where results of the test conducted on 1 occasion are correlated with results of the test carried out at a later date. a low positive correlation would indicate low external reliability.

inter rater reliability: testing the accuracy of observations. 2 or more observers getting similar results

can reduce reliability by: not piloting study/no training of observers; not taking more than 1 measurement from p's; using different methods of data collection; not having operationalised categories; subjective measurements

enhance reliability: Pilot studies; taking more than 1 measurement from each p; standardising data collection methods

## Data analysis and reporting on investigations

Data analysis and statistical techniques

statistical analysis is a method of summarising and analysing data for the purpose of drawing conclusions

carrying out psychological research often involves collecting a lot of data.

we can make a distinction between descriptive and inferential stats

descriptive: offer a way to summarise our data - mean, median, mode

inferential: allow us to make a conclusion related to our hypothesis -> infers a conclusion. How significant something is. Linked to probability and significance levels

## descriptive statistics

give us a way to summarise and describe our data but do not allow us to make a conclusion related to our hypothesis. descriptive stats incl. graphs, tables, measures of central tendency and measures of dispersion (SD & range). in order to decide which summaries are most suited to our data we need to understand the levels of data:

Nominal: data consists of numbers of participants falling into VARIOUS CATEGORIES (e.g. number of students who consider themselves overweight, underweight or just right) -> each p contributes to a category count rather than having an individual score

Ordinal: DATA CAN BE PLACED IN ORDER OF SIZE, ie it can be ranked from lowest to highest. ordinal data are often measured on scales of unequal intervals, eg scores allocated on Strictly Come Dancing allow p's to be placed in rank order, but it cannot be assumed that the interval between each test score is equal

Interval: DATA CAN BE MEASURED ON A SCALE OF EQUAL INTERVALS, e.g. distance, time, temperature = set scale = interval data

Need to know ^^^^ to pick a statistical test

## Measures of dispersion

- Measures of dispersion measure the variability within the data distribution, aka how spread out the scores are
- the variation ratio complements the mode because it is the proportion of non modal scores and so is suitable for nominal data
- the range is the difference between the smallest and the largest value in a set of scores. However, anomalies can distort the data
- Sophisticated measure of dispersion: Standard deviation, which tells us how much, on average, scores differ from the mean
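
The range and SD can be computed with Python's statistics module - the scores below are invented, with one anomalous value to show how it distorts the range:

```python
import statistics

scores = [4, 7, 7, 8, 9, 10, 25]   # 25 is an anomalous score

data_range = max(scores) - min(scores)
sd = statistics.stdev(scores)       # sample standard deviation
mode = statistics.mode(scores)

print(data_range)    # 21 - the anomaly inflates the range
print(round(sd, 1))  # the SD is also pulled up by the anomaly
print(mode)          # 7
```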

Appropriate measures of central tendency and dispersion based on the level of measurement:

- Level of measurement: Nominal. Measure of central tendency: Mode. Measure of dispersion: variation ratio
- Level of measurement: Ordinal. Measure of central tendency: Median. Measure of dispersion: Range
- Level of measurement: Interval/ratio. Measure of central tendency: Mean. Measure of dispersion: SD

Look at pg 57 in R/M for graphical descriptive stats

## Normal distribution percentages

learn the percentages for each SD band --> the curve is symmetrical, so the 3 bands on the left mirror those on the right (beyond -3 SD the % is 0.1%, and beyond +3 SD the % is also 0.1%)
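
The band percentages can be recovered from the cumulative distribution function of a standard normal, e.g. with Python's `statistics.NormalDist`:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, SD 1

within_1sd = z.cdf(1) - z.cdf(-1)   # area between -1 SD and +1 SD
beyond_3sd = 1 - z.cdf(3)           # one tail, past +3 SD

print(round(100 * within_1sd, 1))   # 68.3 (% of scores within +/-1 SD)
print(round(100 * beyond_3sd, 1))   # 0.1 (% in each tail beyond 3 SD)
```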

## Histograms

Show data for all categories, even those with 0 values. The column width for each category interval is equal so the area of the column is proportional to the number of cases it contains of the sample.

Continuous data, so there are no gaps between the columns.

## Frequency polygon aka LINE GRAPH

Also known as a line graph, the frequency polygon is similar to the histogram, except that it allows 2 or more sets of data to be shown on the same graph.

## Tables

- Effective way of summarising a large amount of data, eg the measures of central tendency & dispersion can be provided in the one table.
- Tables have advantage of being very precise, e.g. figures are readily apparent whereas graphs might only allow approximate figures to be worked out.
- However, tables can be harder to interpret than a graph because it is more difficult to visualise the data
- Often tables are used when there are just 2 conditions to look at, as the figures are easy to see at a glance
- When a table shows data for a total of 4 categories, differentiated according to 2 different variables, this is known as a 2 x 2 contingency table

E.g. 4 cells of info (4 numbers)

## Probability and significance

Probability - a number between 0 and 1, where 0 means an event def will not happen and 1 means an event def will happen. the reason that probability (p) is between 0 and 1 is the way probability is calculated: the number of particular outcomes is divided by the number of possible outcomes. Probability = number of particular outcomes/number of possible outcomes

For example, what is the probability of getting a head (1 particular outcome) when you toss a coin? the result could be head or tail, giving 2 possible outcomes. thus, the probability of getting a head is 1 divided by 2, ie 0.5

Probability is sometimes expressed as a % rather than a decimal. this involves multiplying by 100, so 0.5 becomes 50%
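
The coin-toss calculation above as a one-line function (the function name is my own):

```python
# Sketch of the formula: particular outcomes / possible outcomes.
def probability(particular_outcomes, possible_outcomes):
    return particular_outcomes / possible_outcomes

p_head = probability(1, 2)  # one head out of two possible outcomes
print(p_head)               # 0.5
print(p_head * 100)         # 50.0 (as a percentage)
```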

## cont..Statistical significance

Testing for statistical significance involves the use of inferential tests, ie tests which allow us to infer the role of chance/random variability in producing the observed results. only if there is a low possibility that chance factors were responsible for any observed difference, correlation or association in the variables tested can results be considered to be significant.

How significant do results have to be?

An effect, for example, a difference, has to be big enough and consistent enough to conclude that a result is significant and probably not due to chance factors. This means the research hypothesis has been supported.

Significance levels are expressed in terms of the probability that results were produced by chance, and are usually written as a decimal in the form p<0.05, where p stands for the probability that chance factors are responsible for the results.

generally, 5% level of significance (p<0.05) is appropriate. if a result is significant at this level it can be said that even if the null hypothesis were true there would be less than 5% likelihood that the difference or association would be produced due to chance factors alone ( in other words, there is a 1 in 20 chance it was a fluke result)

## cont..

There are many other possible levels of probability but the p<0.05 seems reasonable since:

- significance levels of p<0.5 (a 50% or 50:50 probability that chance factors were responsible) or p<0.3 (a 30% or roughly 1 in 3 probability that chance factors were responsible) are too lenient (not too strict), and may lead to a type one error, i.e. falsely rejecting the null hypothesis.
- significance levels of p<0.01 (a 1% or 1 in 100 probability that chance factors were responsible) or p<0.001 (a 0.1% or 1 in 1000 probability that chance factors were responsible) are regarded as too strict or stringent - a strong effect is likely to be ignored leading to a type two error, where a null hypothesis is falsely accepted.

## Type 1 & 2 errors

Type 1 error: occurs when a researcher rejects the null hypothesis in favour of the alternative/experimental hypothesis, even though the findings are actually due to chance

Type 2 error: occurs when the null hypothesis is accepted, but actually the alternative hypothesis is correct

Type 1 error factors include:

- level of significance too lenient
- poor experimental design
- problem of confounding variables

Type 2 error factors include:

- Level of significance too stringent
- poor experimental design
- problem of confounding variables

## Inferential statistical tests

inferential statistical tests provide a calculated value based on the results of the investigation

this value can then be compared to a critical value (a value that statisticians have estimated to represent a significant effect) to determine whether the results are significant

the critical value depends upon the level of significance required ( p<0.05, p<0.01, etc) and other factors such as the number of subjects used in the test and whether the hypothesis is one or two tailed

inferential stats allow us to INFER that the EFFECT GAINED from the results on a SAMPLE of subjects is PROBABLY TYPICAL of the TARGET POPULATION the sample is drawn from

## cont..

In order to consider inferential statistics further you need to know about hypotheses

Null hypothesis: ie a prediction that there is no significant difference (or association) between operationalised variables (there will be no significant difference in the number of words recalled from a simple memory test (operationalised DV) by people over 60 (operationalised IV) compared to those under 20)

The alternative (alternate/research) hypothesis takes 2 forms: proposes the expected outcome of study.

- directional (one tailed) - ie a prediction that a difference or association between variables will work in one particular direction (p's completing a word search in silence (operationalised condition of IV) will take significantly less time (operationalised DV) than those completing the word search whilst listening to music (2nd operationalised condition of IV))
- non-directional (two tailed) - ie a prediction that a difference or association will exist, but you don't know which direction it will go (there will be a significant difference in the number of words recalled (operationalised DV) by people over 60 compared to those under 20 (the two operationalised conditions of the IV))

## Choosing a statistical test

2 groups of stats tests non parametric & parametric

- non parametric: simpler to carry out and more versatile

4 tests needed to analyse non parametric quantitative data are:

- spearmans rho (spearmans rank order correlation)
- mann-whitney U test
- wilcoxons signed ranks
- chi squared

The decision regarding which test to use is based on:

- am i investigating a difference or a relationship
- what level of data do i have, nominal, ordinal or interval
- what type of design did i use, repeated measures, matched pairs or independent groups

Look on pg 70 R/M for flow diagram

## Using inferential tests

justifying your choice of test

when justifying you have to explain why you're using that test. e.g. in order to assess the significance of these findings it was necessary to use a stats test. in this study the appropriate test would be a mann-whitney U test because: a test of difference was required, the data were at least at an ordinal level and the design was independent measures

calculating the observed value: next step is to perform the calculations. the outcome of a stats test is a number called the observed value

using a table of significance to compare the observed and critical values

to determine whether the observed value is significant, we consult an appropriate table of significance. comparing the critical value in the table with the observed value enables you to decide whether to accept/reject the null hypothesis. the table will tell you if the calculated value has to be less than or more than the critical value for significance to be achieved. for the exam you need to know if the observed or calculated value needs to be lower or higher than the critical value. this differs depending on the stats test

## Comparing the calculated and critical values

Chi-squared test (χ²) - calculated value must be GREATER than or equal to the critical value (if it is, the result is significant and you reject the null hypothesis)

Mann-Whitney U (U) - calculated value must be LESS than or equal to the critical value (if it is, the result is significant and you reject the null hypothesis)

Wilcoxon signed ranks (T) - calculated value must be LESS than or equal to the critical value (if it is, the result is significant and you reject the null hypothesis)

Spearman's rho (rho) - calculated value must be GREATER than or equal to the critical value (if it is, the result is significant and you reject the null hypothesis)
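The four rules above differ only in the direction of the comparison, which a short sketch makes explicit (a hypothetical helper; the test labels are mine):

```python
# The four comparison rules in one place: chi-squared and Spearman's rho
# need observed >= critical; Mann-Whitney U and Wilcoxon T need <=.
def is_significant(test, observed, critical):
    if test in ("chi-squared", "spearman"):
        return observed >= critical  # significant when observed >= critical
    if test in ("mann-whitney", "wilcoxon"):
        return observed <= critical  # significant when observed <= critical
    raise ValueError("unknown test: " + test)
```

So `is_significant("spearman", 0.703, 0.564)` is significant (reject the null), while `is_significant("mann-whitney", 30, 27)` is not (accept the null).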

Reporting the result

the final step is to record the outcome of this whole process. you should include the following info in a statement of significance: the level of significance, the critical and observed values, whether the hypothesis was directional or non directional and whether it was accepted or rejected

## Examples

e.g. 1

for 10 participants, the critical value for rho is 0.564 at the 5% level of significance (p<0.05, one tailed). as the observed value of rho is 0.703, this is greater than the critical value and so there is less than a 5% probability that the result is due to chance. the null hypothesis can be rejected and the alternative hypothesis is accepted.

e.g. 2

the calculated value of U=30, which is greater than the critical value of U=27, where N1=10, N2=10, at p<0.05 (for a one tailed test). as the observed value needs to be less than or equal to the critical value to demonstrate significance, there is a greater than 5% probability that the result is due to chance. this means we must accept the null hypothesis and reject the alternative hypothesis.
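The rho in e.g. 1 comes from Spearman's rank-difference formula, rho = 1 - 6Σd²/(n(n² - 1)), where d is the difference between each participant's two ranks. A pure-Python sketch (the function is mine, and the formula is exact only when there are no tied ranks):

```python
# Spearman's rho via the rank-difference formula:
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
def spearman_rho(xs, ys):
    def ranks(vals):
        # rank 1 = smallest value; tied values share their average rank
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(vals):
            j = i
            while j + 1 < len(vals) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Two scores that rise together give rho = 1.0; perfectly opposed rankings give rho = -1.0. The observed rho would then be compared against the critical value from the table, as in e.g. 1.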

## Analysis and interpretation of qualitative data

Qualitative research takes an interpretative approach to the understanding of human behaviour, and can take many forms. If you are asked a question in the exam on how you would RECORD the data from interviews etc, these are the possible answers that you could note:

- written records, eg notes or transcripts
- audio or video recordings
- direct quotations from participants

Need to be able to JUSTIFY why your chosen way is the most suitable for recording data in the given scenario.

Ways to collect qualitative data:

- interviews
- observations
- review of documents

## cont.

2 ways to analyse qualitative data:

- turn the qualitative data into quantitative data, usually using some form of rating system or by coding and categorising, e.g. carry out a content analysis
- purely qualitative studies deal with the gathered data as qualitative. there is usually no statistical analysis and the data are reported in purely verbal terms. this qualitative approach is generally taken by those who are strongly opposed to the reduction of qualitative data to quantitative values, on the grounds that too much human meaning is lost in the process.
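The first route (e.g. a content analysis) can be sketched minimally: tally how often each predefined coding category appears in a transcript, turning the qualitative record into counts. The categories and transcript below are invented for illustration.

```python
from collections import Counter

# Minimal content-analysis sketch: count predefined coding categories
# in a transcript, yielding quantitative data from qualitative text.
def content_analysis(transcript, categories):
    words = transcript.lower().split()
    tallies = Counter(w for w in words if w in categories)
    # report a count for every category, including zeros
    return {c: tallies.get(c, 0) for c in categories}
```

For example, `content_analysis("I felt happy then sad then happy again", ["happy", "sad", "angry"])` yields `{"happy": 2, "sad": 1, "angry": 0}`, which could then be analysed statistically (e.g. with chi-squared).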

the 1st stage in analysis involves organising the data. if the data involved recording speech this means creating a transcript.

some types of analysis require only an accurate transcription of the speech, others require details of pauses, intonations, hesitations, etc. the researcher then needs to know the data thoroughly, perhaps by reading it over several times before attempting analysis.

Next the data is often coded; however, the reason for and emphasis of the coding depends upon the type of analysis.

## cont...

There are a number of ways of analysing the data qualitatively, including:

- interpretive phenomenological analysis
- grounded theory
- discourse analysis

interpretive phenomenological analysis explores how participants make sense of the world. it involves interpreting the meaning that particular events or experiences have for participants (hence phenomenological)

grounded theory starts with coding each line of text. on further analysis the codes start to combine into larger constructs. these constructs can, in turn, be explored and links between them studied. any theories that emerge are grounded in the data

discourse analysis investigates the social context of discourse and the interaction between speakers

each of these 3 approaches is inductive. theory develops from the data during the analysis and researchers avoid having prior assumptions. in contrast, most quantitative research involves the application or investigation of existing theory.

## cont

finally, qualitative researchers acknowledge the need for reflexivity - the recognition that a researcher's attitudes, biases, etc have an unavoidable influence on the research they are conducting, ie qualitative research is always to some degree based on subjective interpretation. the impact of reflexivity cannot be avoided but it can be monitored and reported.

Evaluation of qualitative data

quantitative data tends to be evaluated in terms of reliability and validity. however, these may be difficult to apply to qualitative research. qualitative data is based on the researcher's interpretation of the p's subjective experience and is usually an analysis of that experience for that person, at that time, within their own social context. hence, data tends to have high external validity, but low internal validity and reliability. qualitative researchers would go further to point out that since they do not adopt a realist viewpoint, trying to apply concepts of reliability and validity becomes meaningless.

## evaluation of qualitative data continued.

Robson (2002) suggests that the concept of trustworthiness is a better way of thinking about qualitative data. one way of establishing trustworthiness is to use an external audit (Smith, 2003). this involves a check of the documentation, from transcript to final analysis, by an external party. the documentation should include notes about how any decisions or choices were made, and it acts as an audit trail. other criteria for evaluating qualitative research might include:

- transferability (can the insights from the research be transferred to help understand similar situations or experiences?)
- negative case analysis (exploring cases that do not fit the emerging concepts)
- reflexivity (Henwood and Pidgeon, 1992)

## Writing hypotheses

in order to measure the outcome of a piece of research the aim must be expressed as a hypothesis, ie as a testable statement. to create a testable hypothesis both IV and DV must be operationalised, ie expressed in such a way that they can be measured or manipulated.

Null hypothesis

this is the starting point for all investigations - it is only if we can disprove the null hypothesis that we can consider whether it's appropriate to support the predicted outcome.

a null hypothesis is always NON DIRECTIONAL (two tailed) and predicts there will be NO difference or correlation between 2 sets of data

There will be no significant difference in the number of words recalled from a simple memory test (operationalised DV) by people over 60 (operationalised IV) compared to those under 20 (two operationalised conditions of IV)

## the alternative hypothesis

this is a testable statement that proposes the expected outcome of the study. may be directional (one tailed) or non directional (two tailed)

directional: predicts the direction of results, ie says that 1 condition will produce better/faster/higher results than the other.

non directional: predicts that there will be a difference in 2 sets of results, but goes no further than that.

creating non directional experimental hypothesis

there will be a significant difference in the operationalised DV (time taken to solve 10 anagrams) between the two operationalised conditions of the IV (ps listening to music compared to those working in silence)

## creating a directional hypothesis

this suggests that 1 group will achieve a higher/better/faster score than the other. best to write it by starting with the condition you think will do better.

ps completing a wordsearch in silence (one operationalised condition of the IV) will take significantly less time (operationalised DV) than those completing the wordsearch whilst listening to music (2nd operationalised condition of the IV)

ps subjected to a cognitive interview (one operationalised condition of IV) will answer significantly more questions accurately (operationalised DV) than ps subjected to a normal interview (2nd operationalised condition of the IV)

## Creating a correlational hypothesis

a correlation refers to the relationship between 2 variables (not the difference between those variables). can be directional or non directional

NON DIRECTIONAL correlational hypothesis

- there will be a significant correlation/relationship between variable one (hours of sunshine in a day) and self reported rating of happiness at the end of that day (operationalised variable 2)
- there will be a significant relationship between the number of units of alcohol consumed and an individual's reaction time as measured by a computer programme

DIRECTIONAL correlational hypothesis

there will be a significant positive/negative correlation between variable one (hours of sunshine in a day) and self reported ratings of happiness at the end of that day (operationalised variable 2)

NULL hypothesis correlational - always non directional

there will be no significant correlation/relationship between variable 1 (operationalised) and variable 2 (operationalised)
