Empirical Evidence and Approach
Psychologists rely on scientific methods of acquiring knowledge to achieve their goals; the data gathered in this way is called empirical evidence.
An empirical approach assumes that observations are not influenced by emotions or personal opinion; they are objective. It’s very difficult to do this in psychology, so psychologists have to be extremely careful about how they conduct their research.
How science works
Observe human behaviour
Develop explanations and hypotheses
- Validity: extent to which something measures what it is supposed to measure. Involves issues of control, realism and generalisability. A study can have high realism but lack generalisability.
- Control: The extent to which any variable is held constant/regulated by a R. It is important to control as many relevant EVs as possible, otherwise results would be meaningless: the R may not actually have tested what they intended to, and instead the influence of another variable, not the IV, has been tested.
- Mundane Realism: Refers to how a study mirrors the real world. The simulated task environment is realistic to the degree to which experiences encountered in the environment will occur in the real world
- Generalisability: Just because a study is conducted in a natural environment does not mean you can generalise the findings to the real world, e.g. if only US uni students are used then, even though the experiment may be in a natural setting, the results can’t be generalised to all ages/cultures.
- Internal validity (about control and realism): extent to which a study measures what it set out to measure/degree to which the observed effect was due to the experimental manipulation rather than other factors, such as EVs, which may affect results.
- External validity (about generalisability): The degree to which the findings can be generalised to other settings (ecological validity), to other groups of people (population validity) or over time/to any era (historical validity). Can be affected by representativeness of the sample.
- External validity is affected by internal validity – you cannot generalise the results of a study that was low in internal validity.
- Sample validity: The extent to which the P’s represent people outside the R situation.
Reliability: A measure of consistency, both within a set of scores or items (internal reliability) and over time, such that it is possible to get the same results on subsequent occasions when the measure is used (external reliability).
Internal reliability: Whether a test is consistent within itself.
External reliability: Whether a test measures consistently over time.
The reliability of an experiment can be determined through replication.
What is an experiment?
A scientific procedure undertaken to test a hypothesis.
Must have an IV and DV to be an experiment
Investigator and experimenter definition and bias
Investigator - Designs the experiment/study
Experimenter - Carries out the experiment/study
Investigator/experimenter/R bias: Effect of the investigator/experimenter expectations on a P’s behaviour and thus on the results. E.g. fast and slow learner rats in a maze
Experiments - Laboratory Experiments
An experiment conducted in a special environment where variables can be carefully controlled.
+ Variables are easier to control in a lab compared to a natural setting
+ High degree of control: since EVs are minimised, cause and effect can be determined if variables are carefully controlled
+ If care is taken in design and conduct, and the study has been reported accurately, it can be easily replicated (reliability)
- Artificial contrived situation where P’s may not behave as they do in everyday life because of a lack of mundane realism, P effects, investigator effects and demand characteristics – reduce internal validity
- Low ecological validity as it is difficult to generalise the results if the tasks given to the P’s are not like in real life (lacks mundane realism)
- Not everything can be investigated using a lab
- Ethics (is an issue) - deception
Experiments - Field Experiments
An experiment conducted in a more natural environment; however, the IV is still deliberately manipulated by the researcher.
+ Can establish causal relationships by manipulating the IV and measuring its effect
+ Less artificial so higher mundane realism and thus higher internal validity.
+ Avoids P effects and demand characteristics (because the P’s may not be aware they are in an experiment), which may increase internal validity.
- There still may be demand characteristics e.g. the way an IV is operationalised may convey the experimental hypothesis to P’s.
- Low internal validity as there is little control of extraneous variables, and since conditions will never be the same again it is difficult to replicate.
- Ethics (is an issue) - P’s may not have agreed to take part (not aware of participating) e.g. informed consent
Experiments - Natural Experiments
A research method where the experimenter can’t manipulate the IV directly but where it varies naturally and the effect can be observed on a dependent variable.
+ Enables psychologists to study real life problems (increased mundane realism and validity).
+ Allows R where the IV can’t be manipulated for ethical or practical reasons.
+ High ecological validity – the setting is in a natural environment, so the data can usually be generalised.
+ R has little/no involvement with the situation, so if P's are unaware of being observed there are few demand characteristics and reduced R bias. But P’s may be aware, giving P effects, investigator effects and demand characteristics.
- Many extraneous/confounding variables (e.g. lack of random allocation to conditions, and the sample may have unique characteristics), so low internal validity and the results can't be generalised.
- Cannot demonstrate causal relationships because IV not directly manipulated.
- Can only be used where conditions vary naturally (and this happens rarely)
- Impossible to replicate in order to check validity and reliability / Ethics (an issue) - protection from harm if a sensitive subject
(but are not really experiments)
Difference studies: 2 groups of P’s are compared in terms of a DV (e.g. males vs. females) this is not a true experiment as the apparent IV hasn’t been manipulated.
Quasi-experiments: Studies that are ‘almost’ experiments but lack 1 or more features of a true experiment, such as full experimenter control over the IV (so natural experiments are quasi experiments) and random allocation of P’s to conditions meaning that cannot claim to demonstrate causal relationships.
Designing an Investigation
The following factors are important to consider when designing an investigation:
Target population: group that the R is interested in and from whom the sample is drawn and generalisations can be made.
Pilot study: A test run on a few P’s enabling you to check for design faults and to see if there could be any improvements to the study before carrying out the investigation on a large scale; this is a routine procedure.
Confederate: Person (not a P) assigned by the experimenter to behave in a certain way to affect the experiment. May be used as IV.
(Correlation is not a R method so don’t say a correlational study but a correlational analysis)
Correlation/correlational analysis: Determines the extent of a relationship between 2 variables. Usually a linear correlation is predicted, but the relationship can be curvilinear (e.g. the Yerkes-Dodson law).
Positive correlation: 2 variables increase together
Negative correlation: As one variable increases, the other decreases; the tighter the points cluster around a single straight line, the stronger the correlation
Zero correlation: No relationship (causal relationships can be ruled out if no correlation exists)
Correlation doesn't mean causation
Correlation doesn’t mean that one variable caused the other to change; only an experiment reveals causal relationships between variables. Causality is only one of three possible explanations for a correlation:
1. The relationship is causal (1 variable caused the other to change).
2. The relationship is due to chance (2 variables just happen to be statistically related).
3. There is a third factor involved (another variable is causing the relationship).
So causal relationships can be ruled out if no correlation exists.
Because 2 variables rise and fall together doesn’t mean they cause each other e.g. intelligence increases with age (intelligence and age are the co-variables) but age doesn’t cause intelligence.
Correlational Analysis - +'s and -'s
+ Used when unethical/impractical to manipulate variables / They can indicate trends leading to further R.
+ Allows a R to measure relationships between naturally occurring variables e.g. height and intelligence.
+ If correlation is significant then further R is justified and if not you can rule out causal relationship.
+ As with experiments the procedures can be repeated again and findings can be confirmed.
- People often misinterpret correlations and assume that a cause and effect have been found; it is not possible to draw conclusions about cause and effect.
- The coefficient may suggest there is no relationship between the variables because it is near 0, but it may be hiding a curvilinear relationship, or one which shows more than one group in the data. If you calculated the correlation coefficient for a curvilinear relationship you would find something close to 0: half the time the relationship is positive and the rest of the time it is negative, so together they cancel each other out and give a 0 correlation.
- There may be other unknown variables (intervening variables) which can explain link between co-variables.
- As with experiments may lack internal/external validity e.g. method used to measure IQ may lack validity or sample used may lack generalisability.
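The cancellation point above can be checked numerically. This is a minimal Python sketch using invented data; `pearson_r` is a hand-rolled Pearson coefficient, not taken from any particular study:

```python
# Illustrates why a curvilinear relationship can give a near-zero
# correlation coefficient even though the variables are clearly related.

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

xs = [1, 2, 3, 4, 5, 6, 7]

# Linear relationship: r is +1.
linear = [2, 4, 6, 8, 10, 12, 14]

# Curvilinear (inverted-U, like the Yerkes-Dodson law): rises then falls,
# so the positive and negative halves cancel out.
curvilinear = [1, 4, 7, 9, 7, 4, 1]

print(round(pearson_r(xs, linear), 2))       # 1.0
print(round(pearson_r(xs, curvilinear), 2))  # 0.0
```

The second coefficient is 0 despite a strong (inverted-U) relationship, which is exactly why a near-zero coefficient alone cannot rule out a curvilinear link.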
Correlational Analysis - Scattergraphs
Graphical presentation of the relationship (correlation) between 2 sets of scores.
The scatter of the dots indicates the degree of correlation between the co-variables.
Drawing a scattergraph involves plotting 2 scores: one score is measured along the horizontal axis and the other along the vertical axis, and where the 2 values intersect an X plotting point is placed. If the graph doesn’t show the type of correlation clearly, draw a line of best fit (it doesn’t have to pass through any particular number of Xs); otherwise leave it out. The pattern of points plotted on the scattergraph represents particular types of correlation.
Hypothesis - Correlation
When conducting a study using a correlational analysis you need to produce a correlational hypothesis, which states the expected relationship between co-variables.
E.g. if age and beauty are the co-variables and the study expects to find a relationship between them, possible hypotheses might be:
- Age and Beauty are positively correlated (Directional)
- As people get older they are rated as more beautiful (Directional)
- Age and beauty are correlated (Non-directional)
Observations - Naturalistic
Observation: Systematically watching and recording what people say and do i.e. how they behave.
Data gathered through observation is highly descriptive and will not offer an explanation for what has been recorded. It’s the job of the R to make sense of the data, sorting it so that any evidence relevant to the hypothesis is presented clearly.
Naturalistic: A research method in a naturalistic setting where the investigator doesn’t interfere but observes the behaviour in question, though this is likely to involve the use of structured observation. Before starting the study, observers try to become familiar to those they’re observing, to minimise the effect their presence has.
Observations (Naturalistic) - +'s and -'s
+ Few demand characteristics as P’s don't know they're being studied and are not in a false situation = higher internal validity compared to questionnaires/interviews, as what P's say they do may differ from what they actually do.
+ Info collected more detailed/provides a fuller pic of behaviour than the info collected in a laboratory.
+ High ecological validity since the behaviour occurs in its true form in a natural setting.
+ Can be used when other methods not possible e.g. might be unethical/ P’s unwilling to fill in questionnaire.
- Risk of observer bias as unlikely that R can remain completely objective, reduces reliability of data gathered.
- Control of the environment is not possible and confounding variables are introduced, making it near impossible to determine causal relationships
- Replication would be difficult.
- Ethics are a big problem, especially in naturalistic observation. Not knowing you are being watched (e.g. 1-way mirrors) raises issues of privacy, informed consent and confidentiality.
- Tend to be small scale so group studied may not be representative of the population - lack pop. validity.
- Lots of planning: choosing variables to operationalise, creating behavioural categories and devising a recording method.
Observations - Controlled
Controlled: Observations that take place where some variables in the P’s environment are controlled and manipulated by the experimenter.
Often used in field experiments, as an IV is being tested and control is possible; control is exerted so that the behaviour is easier to observe, e.g. P’s watch an aggressive film or not.
+ By controlling some variables, it is possible for the R’s to draw conclusions from their observation and is also easier to establish cause and effect.
- An unfamiliar setting may affect participant’s behaviour, making it less natural (ecological validity)
- P’s may be aware of being observed creating demand characteristics
Observations - Structured or Unstructured
Structured (systematic) observation: An observer uses various ‘systems’ to organise observations such as behavioural categories and sampling procedures.
+Gather relevant data as you know what you are looking for
-Interesting behaviour could go unnoticed as you are not looking for it
Unstructured observations: An observer records all relevant behaviour but has no system. This technique may be chosen as behaviour to be studied is likely to be unpredictable.
Observations (Structured) - Behavioural Categories
Need to operationalise the observation/behaviour to create behavioural categories i.e. a set of components
Behavioural categories: Dividing target behaviour in to a subset of behaviours. This can be done using a behaviour checklist or coding system. This improves reliability.
Behavioural Categories should:
- Be objective - record actions don’t make inferences
- Cover all possible component behaviours – don’t cover things which are not necessary
- Be mutually exclusive – don’t mark 2 categories at once, i.e. hitting and shoving
Observations (Structured) - Sampling Procedures
Sampling Procedures: If conducting continuous observation R records every instance of behaviour in detail. In many situations not possible as it creates too much data.
So could use
Event sampling: An observational technique in which a count is kept of the number of times certain behaviour (event) occurs.
Time sampling: An observational technique in which the observer records behaviours in a given time frame, e.g. every 30 seconds; you may select more than 1 category from a checklist.
Observations - P and Non-P
Non-Participant: the experimenter does not become part of the group being observed
+ R can remain objective throughout
- The R loses a sense of the group dynamics by staying separate
Participant: Observer becomes one of the groups of P’s he wishes to observe. Observer may tell the others they will be observed (an overt observation), or may pretend to be one of the group and not inform them that they are being observed (a covert observation).
+ Can observe P's in a natural setting (high ecol validity) and gain understanding of causes of their behaviour.
+ The R develops a relationship with group - gain a greater understanding of the groups behaviour
- Remembering accurately may be difficult as unable to take notes.
- Observer loses objectivity - may interpret or record information in a biased way.
- P’s may act differently if they know a R is amongst them
- Ethical guidelines such as deception, consent and confidentiality may not be maintained.
Observations - Overt or Covert
Overt: people being observed know they're being watched or studied. But knowing that behaviour is being observed is likely to alter P's behaviour.
Covert/Undisclosed observation: Participants are unaware they are being watched, e.g. via a one-way mirror.
Observers try to be as unobtrusive as possible (to minimise the Hawthorne effect), though this has ethical implications
Designing Observational Research
Are you using observation as a method or a technique?
Controlled or Naturalistic?
Overt or Covert?
Structured or Unstructured?
If Structured - what sampling procedures/ behavioural categories and methods to observe i.e. behavioural checklist, coding system or rating system?
Coding system: Systematic method for recording observations in which individual behaviours are given a code for ease of recording e.g. PLYO – playing when with owner.
Behaviour Checklist: A list of the behaviours to be recorded during an observational study
Rating System: E.g. Early Child Environment Rating Scale. Records observations of child’s early environment and rates items on a 7 point scale (1 is inadequate and 7 is excellent.) This is then related to other developmental outcomes such as school success.
Evaluating Observational Research - Validity
External Validity - Likely to be high as they involve more natural behaviour
Population Validity - May be a problem, e.g. if children are only observed in middle-class homes we can't generalise the findings to children from other backgrounds
Internal Validity - Observations will not be valid if the coding system is flawed
Observer bias: what someone observes is affected by their expectations (they may see what they expect to see). This reduces the objectivity and validity of the R. With 2 or more observers, reliability can be checked using inter-rater reliability, and a pilot study can be conducted.
Improving validity: Carry out the R in varied settings with varied P’s, and use more than 1 observer to reduce observer bias, averaging data across observers (to balance out any biases)
Ethical issues: This type of R is acceptable where those observed would expect to be observed by strangers. However, R’s should be aware that it’s not acceptable to intrude upon the privacy of individuals who, even whilst in a public space, may believe they are unobserved.
- In studies where P's are observed without their knowledge there are issues relating to informed consent.
- Observations involve invasion of privacy (1-way mirrors involve deception), so P confidentiality should be respected.
Evaluating Observational Research - Reliability
Refers to whether something is consistent. Any tool used to measure, e.g. observations or interviews, must be reliable (it should produce the same result on every occasion - if it doesn't, we must check that the thing measured has changed and not our measuring tool).
Reliability of Observations
Inter-rater reliability: Extent to which 2 observers agree. To ensure reliability, have at least two observers watching and recording what they see in the same way. Judges will often score observations in categories and the % of agreement between the judges will be calculated. E.g. if judges score observations in the same category 8 times out of 10, there’s an 80% inter-rater reliability rate. If total agreements/total observations is over 80%, the measure is considered reliable.
Observers must be trained in the use of coding systems/behaviour checklists, and they must practise using them and discuss their observations. The investigator can then check how reliable they are.
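The agreement calculation above can be sketched as follows (a hypothetical Python example; the two observers' category codes are invented for illustration):

```python
# Two observers code the same 10 behaviour samples using the same
# behavioural categories; 80%+ agreement is treated as reliable.

def inter_rater_reliability(obs_a, obs_b):
    """Percentage of observations the two observers coded identically."""
    agreements = sum(1 for a, b in zip(obs_a, obs_b) if a == b)
    return 100 * agreements / len(obs_a)

observer_1 = ["hit", "play", "shove", "play", "hit", "play", "hit", "shove", "play", "hit"]
observer_2 = ["hit", "play", "shove", "hit",  "hit", "play", "hit", "shove", "play", "shove"]

rate = inter_rater_reliability(observer_1, observer_2)
print(rate)        # 80.0 -> 8 agreements out of 10
print(rate >= 80)  # True -> treat the coding system as reliable
```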
Evaluating Observational Research
You can have a study that is reliable but lacks validity
E.g. if an observer uses a behaviour checklist which is not very thorough, and sometimes the target individual does things which can't be recorded, the observation may be perfectly reliable but lack validity because the behaviour checklist was poor.
Self-report - Questionnaire +'s and -'s
Questionnaire: Data is collected through use of written questions.
+ Can be easily repeated so data can be collected from large no’s of people quickly, cheaply and easily so is efficient.
+ P’s are anonymous, so they are more willing to reveal personal info/be more truthful than in an interview; a reliable method of gathering data.
+ People who are geographically distant can be studied.
- Answers may not be truthful because of leading q’s and social desirability bias or may just deliberately give the wrong answers.
- Difficult to obtain a representative sample, as it is difficult to identify all members of a population and there is no guarantee that all will agree to take part in the study. It may be that those who do agree make up a biased sample, because only certain types of people fill in questionnaires, e.g. literate people who are willing to spend time filling it in and returning it.
- Survey data is highly descriptive, so it’s difficult to establish causal relationships, and the ability to infer causal relationships will be limited by the quality of the questionnaire.
Questionnaire - Designing one
Designing a simple questionnaire:
1. Define the objectives of the study: Decide upon a R area – “attitudes to time management in students.”
2. Formulate one or more hypotheses
3. Identify a population: Determine a way of selecting a sample from this population.
4. Create a good questionnaire: Select appropriate types of q’s and avoid the pitfalls of question wording.
5. Before administering the questionnaire: Do a pilot study, which allows you to gain feedback about the length of time it takes to complete. Check the q’s are clear and ensure it is providing you with the kind of data you can analyse.
6. Adjust the questionnaire: Use results from the pilot study to make your R all the more effective.
Questionnaires - Sampling Technique
Sampling technique: Use stratified or quota sampling.
Stratified sampling: Divides the target population into groups; people are sampled from each group in the same proportions as they appear in the population. If selection within each group is done randomly it is a stratified sample; if by another method, e.g. opportunity sampling, it is a quota sample.
+ More representative than opportunity sample because there is an equal representation of sub groups.
- Although the sample represents sub-group each quota taken may be biased in others ways e.g. if you use opportunity sampling you only have access to certain sections of the target population.
DON’T confuse a systematic sample with a random sample - selecting every 10th person is NOT random, it’s a systematic method of selection. However, if you select a starting number using a random method and then select every 10th person after that, this would be a random sample.
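The proportions idea behind stratified/quota sampling can be sketched numerically (a hypothetical Python example; the student population, group names and sizes are invented):

```python
import random

# Stratified sampling: the sample contains each sub-group in the same
# proportion as the target population.

population = {"year_1": 300, "year_2": 200, "year_3": 100}  # 600 students
sample_size = 60

def stratified_quotas(population, sample_size):
    """How many P's to take from each sub-group to keep proportions."""
    total = sum(population.values())
    return {group: round(sample_size * n / total)
            for group, n in population.items()}

quotas = stratified_quotas(population, sample_size)
print(quotas)  # {'year_1': 30, 'year_2': 20, 'year_3': 10}

# Stratified sample: choose randomly WITHIN each group.
# (Quota sampling would fill the same quotas non-randomly,
# e.g. by opportunity sampling.)
year_1_students = [f"Y1-{i}" for i in range(population["year_1"])]
chosen = random.sample(year_1_students, quotas["year_1"])
print(len(chosen))  # 30
```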
Questionnaires - Writing good questionnaires
Use (First set of points) and Avoid (Second set of points)
- Filler q’s: Irrelevant q’s distract the respondent from the main purpose of the study and may reduce demand characteristics.
- Sequence of q’s: Start with easy q’s, saving ones that make a P feel anxious/defensive until the end, when they are relaxed.
- Analysis: Q's need to be written so that the answers are easy to analyse. Use a mix of closed and open q’s: closed q’s are easier to analyse, but P’s may be forced to select untrue answers that don't represent their real behaviour/thoughts.
- Sampling technique: i.e. how to select respondents. Questionnaires often use stratified or quota sampling.
- Pilot study: so the questionnaire can be refined if difficulties are found.
- Lack of clarity: Q’s should be understandable and mean the same thing to all P’s. They should be written in clear language, avoiding ambiguity and this can be done by operationalising certain terms.
- Embarrassing q’s: Q’s that focus on private matters should be avoided since as questions become more personal the likelihood of unanswered/wrongly answered q’s increases.
- Bias: Any bias may lead the P to be more likely to give a certain answer. The biggest problem is social desirability bias, as P’s will often answer q’s in a way that shows them in a better light.
- Leading q’s: Leading q’s can encourage a certain response from P’s.
Questionnaires - Types of Q's
Analysis: Q’s need to be written so they are easy to analyse. Use a mix of closed and open questions: closed q’s are easier to analyse, but P’s may be forced to select answers which are not true to them and so don't represent their real behaviour or thoughts.
Open q’s: Q’s that invite the respondent to provide their own answer rather than select one provided, therefore producing qualitative data.
Closed q’s: Have a range of answers from which P’s choose one, producing quantitative data, and so are easier to analyse than open q’s.
Rank order: E.g. rate/rank range of options from 1 to 5 with 1 the most favourite.
Likert scale q’s: A statement where P’s indicate strength of agreement/disagreement using numbers from 1 (strongly agree) to 5 (strongly disagree).
Checklist q’s: List of terms is provided in which P’s tick those that apply.
Dichotomous q’s: Q’s which offer two choices e.g. Yes/No.
Semantic differential q’s: Q’s with 2 bi-polar words where P’s respond by indicating a point between them representing their strength of feeling, e.g. Clean _:_:_:_:_:_:_ Dirty.
Self-report Techniques - Structured Interview +'s
Interview: R method that involves face-to-face interaction with a P and results in the collection of data.
Formal interviews (structured): pre-determined q's, i.e. a questionnaire that is delivered face to face.
+ Can be easily repeated as the questions are standardised.
+ Less interviewing skill is required.
+ Simple to administer and lots of data collected cheaply and quickly.
+ Large sample can be obtained without difficulty depending on the subject of the questionnaire so data can often be generalised.
+ Can provide a great deal of insight into complicated and difficult individual cases if carried out carefully by a skilled interviewer.
+ Easier to analyse than an unstructured interview because answers are more predictable.
Self-report Techniques - Structured Interview -'s
- The interviewer’s expectations, communicated unconsciously, may influence the interviewee’s answers (called interviewer bias).
- Reliability may be affected by low inter-interviewer reliability (if one interviewer behaves differently to the next, results change).
- May be difficult to find the right sample, and some P’s may leave as they find it too lengthy or too difficult to sit through and so don't complete it; in these cases the data may need to be rejected from the study.
- If the interviewer is not sufficiently skilled, the P’s responses may not be relaxed and the data will be false/of little use.
Self-report Techniques - Semi-structured
Semi-structured (partially planned): There are no fixed questions but the interview is guided, perhaps by a predetermined set of topics to be covered. The order in which these topics are covered, or the way in which they are addressed, can vary across P’s.
+ R can be flexible in what questions are asked.
+ Can gain in-depth and accurate information from respondents.
- More difficult to compare answers.
Self-report - Unstructured Interview +'s/-'s
Unstructured interviews (informal): The most informal and in-depth technique; there is less structure as new questions are developed along the way. They enable the interviewer to re-phrase questions if necessary, to ask follow-up questions, or to clarify answers that are ambiguous or contradictory. The interviewer may set the topic but the interviewee is free to dictate the content by taking the conversation in any direction they wish.
+Provides more detailed in depth information than structured interview.
+Can access information which may not be available from predetermined questions.
- More affected by interviewer bias: as the interviewer is developing questions on the spot, the questions may be less objective.
- Requires well-trained interviewers, which makes it more expensive to produce reliable interviews.
Evaluating Self Report Techniques - Validity
- External validity of self-report techniques: the extent to which the findings can be generalised to other situations and people. A major factor will be the representativeness of the sample used to collect the data.
- Internal Validity of self report techniques is related to the issue of whether the questionnaire or interview (or psychological test) really measures what it is intended to measure.
There are several ways to assess this; the most common are:
- Face validity: Does the test look as if it is measuring what the R intended to measure? E.g. are the q's related to the topic?
- Concurrent validity: Established by comparing the current questionnaire or test with a previously established test on the same topic. P's take both tests and then the two test scores are compared.
Improving validity: If such measures of validity are low, then:
- External validity: Use a more appropriate sampling method to improve population validity, as the findings can then be generalised to a wider population.
- Internal validity: If one or more measures of internal validity is low the items on the questionnaire/interview need to be revised in order to produce better matched scores on the new test and the established one.
- Ensuring Validity -> Predictive validity: Checking validity by seeing if future behaviour is consistent with what we could predict based on our test
Evaluating Self Report Techniques - Reliability
- Internal reliability: measure of the extent to which something is consistent within itself. For example, all the q’s on an IQ test (which is a kind of questionnaire) should be measuring the same thing. May not be relevant to all questionnaires because sometimes internal consistency is not important, e.g. a questionnaire about day-care experiences may look at many different aspects of day care and its effects.
- External Reliability: measure of consistency over several different occasions. For example, if an interviewer conducted an interview, and then conducted the same interview with the same interviewee a week later, the outcome should be the same
- Reliability also concerns whether two interviewers produce the same outcome. This is called inter-interviewer reliability.
- Assessing Reliability - >
- Internal reliability - Split-half reliability: Split the test into 2 halves and carry out a correlation on the 2 halves to ensure both halves of the test are of equal difficulty; if they are, the test is reliable. E.g. a single group of P's all take the test at once and their answers are split in half - this could be done by comparing answers to the odd-numbered q’s with answers to the even-numbered q’s. Each individual's scores on the two halves of the test should be very similar, and the 2 sets of scores can be compared by calculating a correlation coefficient.
- External reliability - Test-retest reliability: The test is administered twice, with a gap in between. If the same P’s score similar results on both occasions, the method is reliable. Generally used for factors that are stable over time, e.g. intelligence. Reliability is usually higher when little time has passed between tests. If a test produces scores, these can be compared by calculating a correlation coefficient.
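The odd/even split described above can be sketched numerically (a hypothetical Python example; the item scores are invented, and `pearson_r` is a hand-rolled Pearson coefficient):

```python
# Split-half reliability: one group's answers to a 10-item test are split
# into odd- and even-numbered items and the two half-scores correlated
# across participants.

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sdx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sdy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sdx * sdy)

# Each row = one participant's item scores (1 = correct, 0 = wrong).
answers = [
    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
]

odd_halves  = [sum(row[0::2]) for row in answers]  # items 1,3,5,7,9
even_halves = [sum(row[1::2]) for row in answers]  # items 2,4,6,8,10

r = pearson_r(odd_halves, even_halves)
print(round(r, 2))  # 0.8 for this invented data: a high coefficient
                    # suggests the two halves measure the same thing
```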
Evaluating Self-report Techniques-Imp. Reliability
Improving internal reliability: It’s possible to improve internal reliability by removing those items which are most inconsistent. The only way to do this is by trial and error - remove one test item and see if the split-half correlation coefficient improves; if it does, the removed item should be permanently left out.
Evaluating Self-report Techniques - Ethical Issues
- Deception about the true R aims may sometimes be necessary in order to collect truthful data.
- Psychological harm- Respondents may feel distressed by certain questions/having to think about certain sensitive topics
- Privacy Q’s may be related to sensitive and personal issues invading an individual’s privacy
- Confidentiality must be respected: names and personal details should not be revealed without permission. No personal data should be stored.
Research Method or Technique
Research Method: No manipulation of variables by the R, so is used to collect data about what people do and why, e.g. naturalistic observation.
Research Technique: Used to gather data in (lab) experiments, e.g. show 1 group an aggressive film then observe to see if behaviour changes; the other group is not shown the aggressive film (used as a control). E.g. an experimental study could use a questionnaire as a R technique to assess the DV, with analysis comparing the 2 groups.
Definition of a Case Study
A Research method that involves a detailed study of a single individual, incident, institution or event.
Primary data: Gained directly by the Researcher from interviews, assessments and observations.
Secondary data: Data that has already been collected e.g. medical records or other studies by other Researchers.
Case Study - +'s and -'s
+ Rich, in-depth data can be gathered, so info that may be overlooked using other methods is likely to be identified. Useful for understanding the subtleties and complexities of an individual’s behaviour.
+ Can be used to investigate instances of human behaviour/experiences that are rare, which perhaps could not be generated experimentally for ethical reasons.
+ Data from several people can be pooled and analysed, e.g. brain-damaged patients, allowing a greater understanding of the causes of the symptoms they share.
+ Complex interaction of factors can be studied in contrast with experiments where many V's held constant.
- It’s difficult to generalise from individual cases as each has unique characteristics.
- Use recollection of past events as part of the case history -unreliable.
- R’s may lack objectivity as they get to know the case, or because theoretical bias may lead them to overlook aspects of the findings.
- Ethics: confidentiality - cases easily identifiable due to unique characteristics even if real names not given.
- Time consuming, and the relationship between the R and the individual makes it difficult to rely on the objectivity of the data.
Content Analysis - +'s and -'s
A kind of observational study in which behaviour is observed indirectly in written or verbal material such as paintings, interviews, books or TV.
It’s indirect (since you are observing people through the artefacts they produce)
+ High ecological validity because it’s based on observations of what people actually do: real communications which are current and relevant e.g. recent newspapers.
+ Findings can be replicated because data sources are public/retained.
- Observer bias reduces the objectivity and validity of the findings.
- Likely to be culture-biased because interpretation of content will be affected by language/culture of observer and behavioural categories used.
Processes involved in Content Analysis
The R has to make 2 decisions:
Sampling method: What material to sample and how frequently? (which TV channels, how many programmes and what length of time)
Behavioural categories to be used:
Quantitative analysis: Examples in each category are counted, e.g. content analysis of teen behaviour from letters in a teen magazine: count the no. of letters about each category developed e.g. bullying/sex/health.
Qualitative analysis: Examples in each category are described, e.g. content analysis of teen behaviour from letters in a teen magazine: quote from different letters about each category developed e.g. bullying/sex/health.
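The quantitative route above is just tallying coded material. A minimal sketch, with invented category codes for each letter:

```python
# Sketch of quantitative content analysis: tally how many letters fall
# into each behavioural category. The coded letters below are invented.
from collections import Counter

# Category assigned to each magazine letter by the R
coded_letters = ["bullying", "health", "sex", "bullying",
                 "health", "bullying", "sex", "health"]

tallies = Counter(coded_letters)
print(tallies.most_common())  # frequency count per category
```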
Processes of Content Analysis (Qualitative) - after interviewing P's about family involvement in school
- All answers to the same questions put together
- Each statement developed into a briefer statement and given a code
- Statements compared with others and categorised (so statements with similar content placed together)
- Categories grouped into larger units producing main categories e.g. support/enablement
Aims and Hypothesis
- Aim: Intended purpose of investigation
- Hypothesis: Precise, testable statement written in future tense about target pop.
- Operationalised Hypothesis: Make it testable, so it can be repeated, increasing reliability of findings. Must operationalise the variables (IV and DV–ensuring they are in a form that can be easily tested) – e.g. how you are going to measure your IV/DV. E.g. 'The scores obtained on a memory test by 10 females aged 16-24 will be higher than the scores obtained by 10 males aged 16-24'.
- Research hypothesis: Proposed at the start of R and is often based on theory
- Experimental (experiments, H1)/Alternative Hypothesis (observations/opinions, HA): The prediction you are making e.g. evidence there is a significant relationship/ difference between two sets of data.
- Null hypothesis (Ho): Backup hypothesis; statement of no difference/relationship. If data doesn’t support Ho, reject it and go with HA instead.
- Directional hypothesis (One-tailed Hypothesis): States the expected direction of the predicted difference b/ween two conditions or two groups of P’s e.g. P’s who do hmwk without music produce better results than P’s who do hmwk with music. Previous R/pilot study may suggest a direction for findings. Easier to reject than a non-directional, so R that proves a directional hypothesis is regarded highly.
- Non-directional hypothesis (Two-tailed Hypothesis): Predicts that there will be a difference but not the direction of the difference between two conditions or groups of participants ; e.g. P’s who do homework without music will produce different results to P’s who do homework with music. (Use if you don’t know the answer to the problem or think something might happen – next piece of R you may choose directional.)
Variable: Something which is observed, measured, controlled or manipulated.
Operationalised Variables: how they will be specifically measured in the study.
Independent variable (IV): Variable the experimenter manipulates – assumed to have direct effect on the DV.
Dependent variable (DV): Variable that is measured, after making changes to the IV.
Extraneous variables (EV’s): Variables other than the IV that may affect the DV e.g. temp. If an EV affects all conditions equally, confounding/bias does not occur; but if an EV isn't controlled and can provide an alternative explanation for the effect, it is a confounding variable.
EV's - Situational Variables 1
Situational variables: Features of a R situation that may have influenced P behaviour and act as EV’S.
They should be controlled to ensure they are the same for all P’s.
Order effects: improved P performance may be due to practice (an EV) rather than the IV.
Time of day, temperature & noise: Only affect the DV if the environmental factor affects performance (e.g. if the task is cognitive, time of day may be significant as more P’s are alert in the morning) and if it varies systematically with the IV (P’s in group 1 are tested in the morning and group 2 in the afternoon). But if some of each group are tested in the morning and others in the afternoon, then time of day would not be an EV as it would not have a systematic effect on the DV.
EV's - Situational Variables 2
Investigator effects: Anything the investigator does which has an effect on P’s performance in a study, other than what was intended (e.g. the investigator’s expectations about a study or the P’s reaction to the behaviour/appearance of an investigator.) E.g. the way the investigator responds to a P may encourage some P’s more than others e.g. male R’s more encouraging with female P’s . Includes - >
- Direct Effects: (consequence of the investigator interacting with the P) and indirect effects (consequence of the investigator designing the study.) Effects of this are greater in non-experimental investigations such as interviews than observations.
- Indirect Effects:Investigator experimental design effects: Investigator may operationalise the measurement variables in such a way that the desired result is more likely or may limit the duration of the study for the same reason.
- Investigator loose procedure effect: Investigator may not clearly specify the standardised instructions and/or procedures, leaving room for the results to be influenced by the experimenter. These should be controlled to ensure they’re the same for all P’s.
- Demand Characteristics: Cue makes P’s aware of what the R expects to find/how P’s are expected to behave. Can change the outcome of a study because P’s change behaviour to conform to expectations.
- Screw You Effect – P knows what is happening in the exp and purposely refrains from showing any interest in it.
- Please You Effect – When a P knows what is happening in an exp and tries to alter results to please the experimenter even though realistically they won’t act that way.
Controlling Situational V's and P effects
Standardised Procedures: This means that each P is treated in exactly the same way, each doing exactly the same tasks, with the same materials, in exactly the same order. This reduces the variables in the procedure.
Standardised Procedures include Standardised instructions: Each P must be given exactly the same instructions, ideally by the same person and in the same way; otherwise this could affect the results. You can ensure standardisation by providing written instructions, which are simple and clear.
Double blind design: Neither the P’s nor experimenter is aware of the aims/important details of the study so have no expectations.
Controlling participant effects:
Single blind design: When P’s do not know the true aims of the study so cannot seek cues about the aims and react to them.
Experimental realism: The extent to which P’s become involved in an experiment and become less influenced by cues about how to behave e.g. by making the experimental task more engaging, P’s are less likely going to be looking for cues for how to behave.
Demand Characteristics and Investigator Effects
Demand Characteristics - A cue that makes P's aware of what the R expects to find or how P's are expected to behave. These can change the outcome of a study because P's will change their behaviour to conform to expectations.
Screw You Effect – P knows what is happening in the experiment and purposely refrains from showing any interest in it.
Please You Effect – When a P knows what is happening in an experiment and tries to alter results to please the experimenter even though realistically they won’t act that way.
Investigator Effects - Anything the investigator does which has an effect on P’s performance/outcome of the study, other than what was intended (e.g. the investigator's expectations about a study or the P’s reaction to the behaviour/appearance of an investigator.) Includes both direct (consequence of the investigator interacting with the P) and indirect effects (consequence of the investigator designing the study.) Effects of this are greater in non-experimental investigations such as interviews than observations.
Indirect Effects: Investigator experimental design effects: Investigator may operationalise the measurement variables in such a way that the desired result is more likely or may limit the duration of the study for the same reason.
Investigator loose procedure effect: Investigator may not clearly specify the standardised instructions and/or procedures, leaving room for the results to be influenced by the experimenter. These should be controlled to ensure they’re the same for all P’s.
What is Experimental Design?
A set of procedures used to control the influence of factors such as P variables in an experiment.
The 3 experimental Designs are -
Experimental Design - Independent Groups Design
2 or more separate groups. P’s randomly allocated to one of the conditions. Each group tested in a different condition (1 of them being the control).
+ Avoids order effects
- Potential for error resulting from individual differences/P variables between the groups of P’s taking part in the different conditions.
- More time consuming/expensive - twice as many P’s are needed than with the repeated measures design.
How can these limitations be overcome
- P variables can be overcome if the sample size is large enough and if P’s are randomly allocated (theoretically distributes P variables evenly.) Random Allocation - allocating P's to experimental groups or conditions using random techniques.
- Spend more time and money
Experimental Design - Repeated Measures
Each P takes part in every condition under test.
+ Individual differences/P variables between P’s are removed as a potential confounding variable.
+ Fewer P’s required, since data for all conditions are collected from same P’s which is quicker and cheaper.
- Range of potential uses is smaller than for independent groups design E.g. reading schemes
- 1 condition may be harder than another (EV) - affect accuracy of results.
- On 2nd test, P’s may have guessed the experimental aims, which may influence answers - P effect
- Order effect: EV arising from the order in which conditions are presented. May affect performance through getting better with practice (learning effect) or getting worse through boredom/tiredness (fatigue effect).
How can these limitations be overcome
- Make equivalent tests to make both conditions equal.
- Order effects can be controlled by counterbalancing
- Use single blind - a type of R design in which the P is not aware of the R aims or of which condition of the experiment they are receiving.
Order effects can be controlled by counterbalancing: alternating the order in which P’s perform in different conditions of an experiment to balance the effects across both conditions or by having a sufficient time delay between 2 conditions.
Ensures that each condition is tested first or second in equal amounts, AB/BA or ABBA.
A- morning test B- afternoon test
- E.g. Group 1 each P does 'A' then 'B'
- Group 2 each P does 'B' then 'A' - Comparison made for each P on their performance in the two conditions (morning and afternoon)
- ABBA - All P's take part in each condition twice
- Trial 1 - Condition A (morning)
- Trial 2 - Condition B (afternoon)
- Trial 3 - Condition B (afternoon)
- Trial 4 - Condition A (morning) - Compare scores on trials 1+4 with 2+3; still rep. measures as comparing the scores of the same P
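The AB/BA and ABBA schemes above can be sketched in a few lines; participant labels and conditions (A = morning test, B = afternoon test) are illustrative:

```python
# Sketch: counterbalancing condition order across P's.
# A = morning test, B = afternoon test; P labels are invented.
def ab_ba_orders(participants):
    """Alternate AB and BA so each order is used equally often."""
    return {p: ("AB" if i % 2 == 0 else "BA")
            for i, p in enumerate(participants)}

orders = ab_ba_orders(["P1", "P2", "P3", "P4"])
print(orders)  # P1/P3 do A then B; P2/P4 do B then A

# ABBA: every P completes each condition twice, in this trial order;
# scores on trials 1+4 are then compared with trials 2+3.
abba_trials = ["A", "B", "B", "A"]
print(abba_trials)
```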
Experimental Design - Matched Pairs Design
Pairs of P’s are matched in terms of key variables such as age/IQ. 1 member placed in the experimental group, the other in the control.
+ No order effects as P’s are only completing one condition.
+ Lowers P variables (although still not full control), so produces more valid results.
- Achieving matched pairs of P’s is a difficult/time-consuming task and may be too costly, as you must start with lots of P’s to ensure you can obtain matched pairs on key variables.
- Impossible to match people exactly, unless identical twins
- May not control all P variables as you can only match on variables known to be relevant, but others could be important.
How can these be overcome
- Restrict matching variables to make it easier.
- Use identical twin pairs; provides a good match.
- Conduct pilot study to consider key variables.
Experimental/Control Groups or Conditions
Independent Groups - have experimental and control groups (each P is assigned to 1 group)
Repeated Measures - have experimental and control conditions (each P experiences both conditions)
Ethical Issues: A conflict in what the R needs in order to conduct useful and meaningful R and the rights of P’s. Ethical issues are conflicts about what is acceptable.
Ethical Guidelines: Concrete, quasi-legal documents help to guide conduct by establishing principles for standard practice/competence (way of resolving the conflict)
These ethical principles are set by the BPS and failure to follow them can lead to psychologists being rejected from the society, their licences being revoked and their name and R blackened.
What are the ethical Issues?
- Informed Consent
- The Right to Withdraw
- Protection from Harm
- Privacy - we have a right to privacy; if this is invaded then confidentiality should be respected
Ethical Issues - Informed Consent
P’s have the right to be given comprehensive info concerning the nature/purpose of the R and their role in it, in order that they can make an informed decision about whether to participate and assess the risk factor.
How to deal with it:
- P’s asked to formally indicate their agreement to participate and this should be based on comprehensive info concerning nature/purpose of the R and their role in it.
- An alternative is to gain presumptive consent (ask a similar group of people whether they would consent; if so, it is presumed the actual P’s would too.)
- R’s can also offer the right to withdraw
- If anyone below legal age of consent is used in the R, then consent sought from the parents/guardians, same may apply for P’s with mental illness/learning difficulties/ old dementia patients they’re vulnerable so R must proceed very carefully.
- If a P is given info concerning the nature/purpose of the study then this may invalidate it, as P’s may change their behaviour.
- Even if R’s have sought and obtained informed consent, this doesn’t guarantee that P’s really do understand what they have let themselves in for.
- Problem with presumptive consent is that what people expect they will/will not mind can be different from actually experiencing it.
Ethical Issues - Deception
Where a P is not told the true aims of the study (e.g. what participation will involve) and thus cannot give truly informed consent, so great care and careful consideration must be given to the project/use of one-way mirrors etc.
How to deal with it:
- The need for deception should be approved by an ethics committee, weighing up the benefits (of the study) against costs (to P’s): cost benefit analysis.
- P’s should be debriefed i.e. told they have been lied to (deceived) for the need of an experiment and offered the opportunity to withhold their data.
- Cost-benefit decisions are flawed as they involve subjective judgements, and the costs aren’t always apparent until after.
- Debriefing can’t turn the clock back: P may still feel embarrassed/have lowered self-esteem/emotionally distressed for being lied to and if P’s are going to leave in a state different to when they entered they should not be involved.
Ethical Issues - The Right to Withdraw
How to deal with it:
P’s should have right to withdraw at any time regardless of payments if they’re uncomfortable and should have the right to refuse permission for the use of their data.
P’s may feel they shouldn’t withdraw as it will spoil the study.
In many studies P’s are paid/rewarded so may not feel able to withdraw.
Ethical Issues - Protection from Harm
R’s have responsibility to protect P’s from physical/mental harm. Normally, risk of harm should be no greater than encountered in everyday life. This includes confidentiality.
How to deal with it:
Avoid any risks greater than everyday life
Stop the study.
R’s are not always able to accurately predict the risks of taking part in a study.
Ethical Issues - Confidentiality
Confidentiality is a legal right; failure to keep details confidential means the R has failed to fully protect P’s from harm.
How to deal with it:
R’s should not record the names of any P’s; they should use numbers or fake names.
Sometimes possible to work out who P’s were on the basis of the info provided. So, in practice, confidentiality may not be possible.
Ethical Issues - Privacy
Is the ability of an individual or group to keep their lives and personal affairs out of public view, or to control the flow of information about themselves
How to deal with it
Don’t observe anyone without their informed consent unless it is in a public place.
P’s may be asked to give their retrospective consent or withhold their data.
No universal agreement about what constitutes a public place.
Not everyone may feel this is acceptable e.g. lovers on a park bench.
What is sampling?
It is about identifying a subset of a population that can be used to represent the group as a whole.
The 3 Sampling Techniques are -
- Random Sampling
- Opportunity Sampling
- Volunteer Sampling
Random Sampling
A sample of P’s is produced by using a random technique so every member of the target population has an equal chance of being selected.
+ Representative and unbiased because it is an equal chance method.
+ The R has no influence/control over who gets picked.
- May end up with a biased sample (e.g. more boys than girls as the sample is too small).
- Not everybody selected may be able to take part, so the sample may not be as random as you would have liked it to be.
- Time consuming and difficult to do if target population is big.
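The "equal chance" idea can be sketched with the standard library; the population names and seed below are invented, purely for illustration:

```python
# Sketch: random sampling so every member of the target population has
# an equal chance of selection. Population names/seed are invented.
import random

target_population = [f"student_{n}" for n in range(1, 101)]

random.seed(1)  # fixed seed just so the sketch is repeatable
sample = random.sample(target_population, k=10)  # no repeats, equal chance
print(sample)
```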
Opportunity Sampling
Uses people from the target population available at the time.
+ Simple and doesn’t have to be planned.
+ Cheap and not time-consuming.
- Biased as the sample is drawn from a small part of the target population, so may not include certain types of people.
- Not representative as only restricted to people who are available at that time.
Volunteer Sampling
R places an advert; P’s respond to the advert and volunteer to take part by contacting the R.
+ Very little time and effort required from R.
+ Access to a variety of P’s, which would make the sample more representative and less biased.
- Sample is biased/unrepresentative because P’s who volunteer are likely to be highly motivated and/or have extra time.
Quantitative Data
Data which includes numbers and statistics. There are 4 types:
1) Nominal: Data which is in categories.
2) Ordinal: Data which is ordered in some way e.g. list in order of liking football teams.
3) Interval: Data is measured using units of equal intervals e.g. counting correct answers.
4) Ratio: There is a true zero point, as in most measures of physical quantities.
+ Easier to analyse because it is quantifiable and can be summarised easily.
+ For many R’s this approach to investigating behaviour is the right one as it is regarded as the most scientific and limits the amount of interpretation and opinion and is therefore more objective.
+ Can produce neat conclusions as numerical data reduces the variety of possibilities.
- Oversimplifies reality and human experience (statistically significant but humanly insignificant).
Research techniques which produce this data: structured observations, case studies, content analysis, questionnaires/interviews (closed q's) and experiments.
Quantitative data Analysis
Quantitative data analysis: Any means of representing trends from numerical data. It can be analysed in 3 different ways:
Descriptive statistics: Allow us to reduce the data into a few numbers so others will not have to spend time reading the raw data trying to understand the results. They can’t do everything: they do not tell us what you did or whether findings are reliable, and the type or size of any relationship is not explained.
Examples are -
Measures of central tendency: descriptive statistic which provides info about a ‘typical’ response for a data set (averages).
Measures of dispersion: A descriptive statistic that provides information about how spread-out/dispersed a set of data is.
Visual display: i.e. graphs which provide a way of ‘eyeballing’ your data and seeing findings at a glance
Measures of Central Tendency - Median
Median: The middle value in an ordered set of numbers.
+ Less affected than the mean by extreme scores.
- Not as sensitive as the mean as not all values are reflected.
- Can be unrepresentative in a small set of scores or widely varying scores /doesn’t represent all scores.
Measures of Central Tendency - Mean
Mean: Arithmetic average of a group of scores, calculated by adding up all numbers and dividing by the number of numbers.
+ Most representative/powerful measure since it analyses all the data in its calculation.
- Can be misrepresentative if there are any extreme values (so good to use a measure of dispersion)
- May not make sense in the context of the set of numbers e.g. 2.4 children.
- Cannot be used with nominal data.
Measures of Central Tendency - Mode
Mode: most common number –> if 2 modes=bimodal and more than 2 modes=multimodal.
+ Useful when data is in categories i.e. nominal categories.
+ Easy to calculate.
+ Unaffected by occasional extreme scores.
- Not useful for describing data when there are several modes.
- Doesn’t take into account every score.
- Not useful for small data sets.
- Does not always provide a typical score e.g. a small set of numbers when the most frequent number occurs at either end of a set of scores and is far from the central score.
- Sometimes no mode so it’s best used when there are lots of no’s in the sets of data and there’s likely to be lots of tied scores.
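The three measures above can be sketched on one invented data set using the standard library, which shows at a glance how they differ:

```python
# Sketch: the 3 measures of central tendency on one invented data set.
from statistics import mean, median, mode

scores = [2, 4, 4, 5, 7, 9, 11]

print(mean(scores))    # 6.0 -> arithmetic average, uses every score
print(median(scores))  # 5   -> middle value of the ordered set
print(mode(scores))    # 4   -> most frequent score
```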
Measures of Dispersion - Standard Deviation
Standard deviation: shows the amount of variation in a data set, assessing the spread of data around the mean.
Tells us the quality of the mean in terms of how well it represents the rest of the scores.
A large SD tells us that the scores are widely spread out above and below the mean, suggesting the mean is not very representative of the rest of the scores. A small SD means the mean is representative of the scores from which it was calculated.
+ More precise/sensitive as all values are taken into account and is not heavily distorted by extreme scores.
- However this may hide some of the characteristics of the data set (e.g. extreme values).
- Complicated to calculate.
- Less meaningful if data not normally distributed.
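The "quality of the mean" point can be seen with two invented data sets that share the same mean but differ in spread:

```python
# Sketch: SD as a check on how well the mean represents the scores.
# Both invented data sets have the same mean of 10.
from statistics import mean, pstdev  # pstdev = population SD

tight = [9, 10, 10, 11]   # clustered: small SD, mean is representative
spread = [1, 5, 15, 19]   # dispersed: large SD, mean is less representative

print(mean(tight), round(pstdev(tight), 2))
print(mean(spread), round(pstdev(spread), 2))
```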
Measures of Dispersion - The 2 Ranges
The Range: The difference between the highest and lowest number (highest no. – smallest no.)
+ Easy to calculate.
+ Provides you with direct information.
- Affected by extreme values.
- Doesn’t take into account the number of observations in the data set.
- Tells us very little about the actual spread of scores e.g. how spread out or clustered they are.
Semi-interquartile Range: A measure of the spread of the middle 50% of scores, avoiding the extreme scores that may be in the top and bottom 25%. Usually used when the median is the measure of central tendency because of the similarities in their calculations, as both are based on the central scores.
+ Less sensitive/distorted to extreme scores than the range.
- Only uses half the data so much of the data doesn’t add anything to its calculation.
- Laborious to calculate by hand as it involves ranking etc.
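A sketch contrasting the two ranges on an invented data set with one extreme score (note `statistics.quantiles` uses one common quartile convention; textbooks vary slightly):

```python
# Sketch: range vs semi-interquartile range on an invented data set.
from statistics import quantiles

scores = [3, 5, 6, 7, 8, 9, 10, 12, 13, 40]  # 40 is an extreme score

value_range = max(scores) - min(scores)  # distorted by the 40
q1, _, q3 = quantiles(scores, n=4)       # quartile cut points
semi_iqr = (q3 - q1) / 2                 # middle 50%, less distorted

print(value_range, semi_iqr)
```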
Measures of Dispersion + Central Tendency
Which measure of dispersion to use with a measure of central tendency?
A measure of central tendency should always be accompanied by at least one measure of dispersion. The choice of which to use is down to careful consideration of the raw data that has been gathered however the table below is a rule of thumb.
When using a ….use a ….
Mean - Standard deviation
Median - Semi-Interquartile range/Range
Mode - Range
Presentation of Data Analysis - Visual Display 1
Tables: Display a clear summary of raw data (numbers which haven’t been treated in any way)
A simple and clear way to present data is to put them in tables. How you construct a table will depend on the kind of data gathered and the R method used. It’s usual to use measures of central tendency and dispersion in a table rather than raw numbers. Which you choose depends on a consideration of both the kind of data you have collected and the advantages and disadvantages of each option. A title explaining what the numbers are and what they represent is essential if the table is to effectively communicate these findings.
Graph: A method used to convey information pictorially and clearly and therefore care should be taken in choosing and drawing the graph e.g. label and title the graph and avoid putting looking good above being simple and clear.
- Data shouldn’t be presented in ways that are misleading, e.g. the distances between points on a vertical axis should be equal and the scale not exaggerated to distort the look of the graph. Since graphs are intended to summarise data, it’s not appropriate to plot individual P scores unless you are constructing a scattergraph. If data is gathered using the experimental method, the usual convention is to plot the DV on the vertical axis and the IV on the horizontal axis.
Visual Display - Line Graph and Bar Chart
Line graphs: Display numerical data but not categorised data. As with the bar chart, the y axis represents frequency but the values along the x must be continuous.
Bar charts: display data in categories with gaps in between the categories. The height of the bar represents the frequency.
- Used for nominal data (data in categories)/discrete data - when data fits into 1 category only e.g. lion.
- The x-axis (frequencies are usually on the y-axis) does not need to show a complete scale (if showing categories)
- There should be gaps between the bars.
- When drawing a bar chart, the vertical axis should show the score of the variable e.g. the mean/frequency, whilst the horizontal axis should show the individual categories/variables you measured. The bars on the horizontal axis should be drawn separately with equal widths and gaps. You need not show all the categories on the horizontal axis; it’s acceptable to just show those of interest as a comparison. However, being selective in this way can be misleading, so care must be taken, as only choosing to show certain categories doesn’t tell the whole story.
Visual Display - Histogram and Freq. Polygon
Histograms: use continuous data and so there are no gaps between the vertical bars; this indicates that the horizontal axis has a continuous measure rather than distinct categories. The vertical axis represents the frequency with which something has occurred. The points on the vertical axis should be equal, as should the width of the columns.
- Used for interval or ordinal data.
- No intervals (if data is grouped) are missed, even if they are empty. Class intervals are represented by their mid-point at the centre of each column. There are no gaps between columns.
Frequency polygons: Like a histogram but the midpoints joined by a straight continuous line, highlighting continuous nature of the variable on the x axis.
- Used for interval or ordinal data.
- All class intervals are represented.
- Instead of columns, a line is used to join the mid-point of each class interval.
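A bar chart for nominal data is just one separated bar per discrete category. A minimal text sketch (invented frequencies, not a real charting tool):

```python
# Minimal text "bar chart" for nominal (category) data.
# Categories and frequencies are invented for illustration.
def bar_rows(frequencies):
    """One row per discrete category; bar length = frequency."""
    return [f"{category:>6} | {'#' * freq}"
            for category, freq in frequencies.items()]

favourite_animal = {"lion": 5, "tiger": 3, "bear": 7}
for row in bar_rows(favourite_animal):
    print(row)
```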
Qualitative Data
Non-numerical data e.g. feelings. Can't be quantified, but can be turned into quantitative data by use of categories.
+ Rich/detailed data on emotions and opinions that may not be assessed using quantitative methods with closed q's.
+ Represents the true complexities of human behaviour
+ Useful for studies at individual level, to find in depth, the ways in which people think /feel (e.g. case studies).
+ Takes the point of view of P as their responses are not restricted in advance by the point of view of the R.
+ Provides rich details of how people behave because P's given free range to express themselves
- Is less controlled and structured compared to quantitative data, so is hard to assess reliability.
- Difficult and laborious to analyse, and difficult to see patterns in the data that would allow you to draw conclusions.
- Subjective analysis can be affected by personal expectations/beliefs (though quantitative methods may only appear to be objective and are equally affected by bias).
R techniques which produce this data: (unstructured) observations, content analysis, case studies, questionnaires/interviews (with open q's) e.g. unstructured interview.
Presentation of Qualitative Data
Qualitative data is challenging to summarise as there is lots of it, e.g. video recordings and large amounts of written material, but ways must be found of summarising the data to draw conclusions.
The first step is to categorise the data in some way:
Pre-existing Categories: i.e. the R decides on the categories before beginning the research
Emergent Categories: i.e. the categories/themes emerge when examining the data
Later the behavioural categories can be used to summarise the data
- The categories/themes could be listed
- Examples of behaviours within the category may be represented using quotes from P’s or descriptions of typical behaviours in that category
- Frequency of occurrences in each category can be counted and turned into quantitative data
- Finally a researcher can draw conclusions
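The counting step above (turning category frequencies into quantitative data) can be sketched as follows, using hypothetical coded interview responses:

```python
from collections import Counter

# Hypothetical coded responses: each P's answer assigned a category/theme
coded_responses = ["anxiety", "hope", "anxiety", "anger", "hope", "anxiety"]

# Counting occurrences per category turns qualitative codes into quantitative data
frequencies = Counter(coded_responses)
print(frequencies["anxiety"])  # 3
```

The resulting counts can then be presented in a table or chart and used to draw conclusions.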
Other research methods and techniques - 1
Multi-method approach: Combination of different techniques and methods.
Meta-analysis: R combines the findings from a number of different studies in order to reach a general conclusion about a particular hypothesis. The R uses effect size (a measure of the strength of the relationship between two variables) as the DV.
+ More reliable conclusions can be drawn.
- Research designs vary, so studies can never be truly comparable.
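One common effect size measure is Cohen's d: the difference between two group means divided by their pooled standard deviation. A minimal sketch with hypothetical scores (the data values are illustrative only):

```python
import statistics

# Hypothetical scores from two groups in one study (e.g. treatment vs control)
group_a = [12, 14, 15, 13, 16]
group_b = [10, 11, 9, 12, 10]

# Cohen's d: difference in means divided by the pooled standard deviation
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
d = (mean_a - mean_b) / pooled_sd
```

In a meta-analysis, an effect size like this would be computed for each included study and then averaged (often weighted by sample size) to reach an overall conclusion.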
Cross-cultural studies: Natural experiment in which the IV is different cultural practices and the DV is a behaviour, e.g. attachment.
+ Does allow R’s to see if some behaviours are universal.
- Imposed etic (tests developed in one country are used in another with different norms, which can affect results). Also, the group of P's may not be representative of the whole culture, yet generalisations are made.
Other research methods and techniques - 2
Longitudinal studies: Observation of the same individuals over a long period of time, usually aiming to compare the same individuals at different ages, to observe long-term effects.
+ High in validity: avoids reliance on retrospective recall, as people often cannot accurately remember past events when asked about them later.
- Attrition: (loss of P’s over time, leaving biased, small sample) is a problem.
- Also P’s are likely to become aware of aims.
- Subject to cohort effects (the cohort may have unique characteristics due to time-specific shared experiences), so the group studied may not be typical.
Cross-sectional studies: One group of P's of a young age is compared with another, older group, with the view of finding out the influence of age on the behaviour.
+ Quick and easy and relatively cheap way to gather data.
- P variables not controlled and cohort effects can mean the groups are different and so are difficult to compare to each other.
Other research methods and techniques - 3
Role play: Controlled observation in which P's are asked to imagine how they would behave in certain situations.
+ Enables R’s to study behaviours which might otherwise be unethical.
- It may not be an accurate and valid representation of how people would act.