Observations (Structured) - Behavioural Categories

Need to operationalise the target behaviour to create behavioural categories, i.e. a set of component behaviours

Behavioural categories: Dividing the target behaviour into a set of component behaviours. This can be done using a behaviour checklist or coding system. This improves reliability.

Behavioural Categories should:

  • Be objective - record actions, don't make inferences
  • Cover all possible component behaviours – don't include behaviours which are not necessary
  • Be mutually exclusive – you shouldn't need to mark 2 categories at once, e.g. hitting and shoving should not overlap

Observations (Structured) - Sampling Procedures

Sampling Procedures: In a continuous observation the R records every instance of the behaviour in detail. In many situations this is not possible as it creates too much data, so instead you could use:

Event sampling: An observational technique in which a count is kept of the number of times a certain behaviour (event) occurs.

Time sampling: An observational technique in which the observer records behaviour at fixed time intervals, e.g. every 30 seconds; you may select more than 1 category from the checklist at each interval.
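
A minimal sketch contrasting the two procedures, using an invented observation log and behaviour categories:

```python
from collections import Counter

# Hypothetical observation log: (seconds into the session, behaviour category).
log = [(5, "hit"), (12, "play"), (31, "shove"), (44, "play"), (58, "hit"), (75, "play")]

# Event sampling: keep a count of every instance of each target behaviour.
event_counts = Counter(behaviour for _, behaviour in log)
print(event_counts)  # Counter({'play': 3, 'hit': 2, 'shove': 1})

# Time sampling: record what is happening at fixed intervals, e.g. every 30 seconds.
def time_sample(log, interval=30, duration=90):
    samples = []
    for t in range(interval, duration + 1, interval):
        seen = [behaviour for time, behaviour in log if time <= t]
        samples.append(seen[-1] if seen else None)
    return samples

print(time_sample(log))  # ['play', 'hit', 'play']
```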


Observations - P and Non-P

Non-Participant: the experimenter does not become part of the group being observed

+ R can remain objective throughout

- The R loses a sense of the group dynamics by staying separate

Participant: The observer becomes one of the group of P's they wish to observe. The observer may tell the others they will be observed (an overt observation), or may pretend to be one of the group and not inform them that they are being observed (a covert observation).

+ Can observe P's in a natural setting (high ecological validity) and gain an understanding of the causes of their behaviour.

+ The R develops a relationship with the group and gains a greater understanding of the group's behaviour.

- Remembering accurately may be difficult, as the observer is unable to take notes.

- Observer loses objectivity - may interpret or record information in a biased way.

- P's may act differently if they know a R is amongst them.

- Ethical guidelines concerning deception, consent and confidentiality may not be maintained.


Observations - Overt or Covert

Overt/Disclosed Observation: The people being observed know they are being watched or studied, but knowing that behaviour is being observed is likely to alter P's behaviour.

Covert/Undisclosed Observation: Participants are unaware they are being watched, e.g. observed through a one-way mirror.

Observers try to be as unobtrusive as possible (to minimise the Hawthorne effect), though this has ethical implications.


Designing Observational Research

Are you using observation as a method or a technique?

Controlled or Naturalistic?

Overt or Covert? 

Structured or Unstructured?

If Structured - which sampling procedures, behavioural categories and methods of recording, i.e. a behaviour checklist, coding system or rating system?

Coding system: Systematic method for recording observations in which individual behaviours are given a code for ease of recording e.g. PLYO – playing when with owner.

Behaviour Checklist: A list of the behaviours to be recorded during an observational study

Rating System: E.g. the Early Child Environment Rating Scale records observations of a child's early environment and rates items on a 7-point scale (1 = inadequate, 7 = excellent). This is then related to other developmental outcomes such as school success.
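
A minimal sketch of how a coding system or behaviour checklist turns observations into countable data; the codes other than PLYO and the recorded session are invented for illustration:

```python
# Hypothetical coding system: short codes for ease of recording
# (PLYO is from the example above; the other codes are invented).
coding_system = {
    "PLYO": "playing when with owner",
    "PLYA": "playing alone",
    "REST": "resting / inactive",
}

# Codes recorded by the observer across one session.
recorded = ["PLYO", "REST", "PLYO", "PLYA", "REST", "PLYO"]

# Tally each coded behaviour; a behaviour checklist works the same way with full labels.
for code, label in coding_system.items():
    print(f"{code} ({label}): {recorded.count(code)}")
```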


Evaluating Observational Research - Validity

External Validity - Likely to be high as they involve more natural behaviour

Population Validity - May be a problem, e.g. if children are only observed in middle-class homes we can't generalise the findings to children from other backgrounds.

Internal Validity - Observations will not be valid if the coding system is flawed

Observer bias: What someone observes may be affected by their expectations (they may see what they expect to see). This reduces the objectivity and validity of the R; using 2 or more observers reduces this problem. Observer reliability can be checked by assessing inter-rater reliability and conducting a pilot study.

Improving validity: Carry out the R in varied settings with varied P's, and use more than 1 observer to reduce observer bias, averaging data across observers (to balance out any biases).

Ethical issues: This type of R is acceptable where those observed would expect to be observed by strangers. However, R's should be aware that it is not acceptable to intrude upon the privacy of individuals who, even in a public space, may believe they are unobserved.

  • In studies where P's are observed without their knowledge there are issues relating to informed consent.
  • Observations can involve an invasion of privacy (1-way mirrors involve deception), so P confidentiality should be respected.

Evaluating Observational Research - Reliability

Reliability refers to whether something is consistent. Any tool used to measure (e.g. observations or interviews) must be reliable, i.e. it should produce the same result on every occasion; if it doesn't, we must check that the thing being measured has changed, not our measuring tool.

Reliability of Observations

Inter-rater reliability: The extent to which 2 observers agree. To ensure reliability, have at least two observers watching and recording what they see in the same way. Judges will often score observations in categories and the % of agreement between the judges will be calculated. E.g. if the judges score observations in the same category 8 times out of 10, there is 80% inter-rater reliability. If total agreements ÷ total observations is over 80%, the observation is considered reliable.
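
A minimal sketch of the agreement calculation described above, using hypothetical category ratings from two observers:

```python
# Two observers' category judgements for the same 10 observations (made-up data).
observer_a = ["hit", "play", "play", "shove", "hit", "play", "rest", "hit", "play", "rest"]
observer_b = ["hit", "play", "shove", "shove", "hit", "play", "rest", "hit", "rest", "rest"]

# Count how often the two observers placed an observation in the same category.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = agreements / len(observer_a) * 100

print(f"{percent_agreement:.0f}% agreement")  # 8 of 10 agree -> 80%
# Using the rule of thumb above, 80% or more counts as acceptably reliable.
```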

Improving Reliability

Observers must be trained in the use of coding systems/behaviour checklists, and they must practise using them and discuss their observations. The investigator can then check how reliable they are.


Evaluating Observational Research

You can have a study that is reliable but lacks validity

E.g. if an observer uses a behaviour checklist which is not very thorough, and sometimes the target individual does things which can't be recorded, the observation may be perfectly reliable but lack validity because the behaviour checklist was poor.


Self-report - Questionnaire +'s and -'s

Questionnaire: Data is collected through use of written questions.

+ Can be easily repeated so data can be collected from large no’s of people quickly, cheaply and easily so is efficient.

+ P's are anonymous, so they are more willing to reveal personal info / are more truthful than in an interview; a reliable method of gathering data.

+ People who are geographically distant can be studied.

- Answers may not be truthful because of leading q's and social desirability bias, or P's may just deliberately give the wrong answers.

- Difficult to obtain a representative sample, as it is difficult to identify all members of a population and there is no guarantee that all will agree to take part in the study. It may be that those who do agree make up a biased sample, because only certain types of people fill in questionnaires, e.g. literate people who are willing to spend time filling it in and returning it.

- Survey data is highly descriptive, so it's difficult to establish causal relationships, and the ability to infer causal relationships will be limited by the quality of the questionnaire.


Questionnaire - Designing one

Designing a simple questionnaire:

1. Define the objectives of the study: Decide upon a R area, e.g. "attitudes to time management in students."

2. Formulate one or more hypotheses.

3. Identify a population: Determine a way of selecting a sample from this population.

4. Create a good questionnaire: Select appropriate types of q's and avoid the pitfalls of question wording.

5. Before administering the questionnaire: Do a pilot study; this allows you to gain feedback about the length of time it takes to complete, check that the q's are clear and ensure it is providing the kind of data you can analyse.

6. Adjust the questionnaire: Use the results from the pilot study to make your R all the more effective.


Questionnaires - Sampling Technique

Sampling technique: Use stratified or quota sampling.

Stratified sampling: Divides the target population into sub-groups, with people sampled from each group in the same proportions as they appear in the population. If selection within each group is done randomly it is a stratified sample; if it is done by another method, e.g. opportunity sampling, it is a quota sample (a minimal sketch follows the points below).

+ More representative than an opportunity sample because there is proportional representation of sub-groups.

- Although the sample represents the sub-groups, each quota taken may be biased in other ways, e.g. if you use opportunity sampling you only have access to certain sections of the target population.

DON'T confuse a systematic sample with a random sample - selecting every 10th person is NOT random, it's a systematic method of selection. However, if you select a starting number using a random method and then select every 10th person after that, this would be random sampling.
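
A minimal sketch of the distinction, using a hypothetical population of two year groups; random selection within each group gives a stratified sample, while filling the same proportions by opportunity sampling would give a quota sample:

```python
import random

# Hypothetical target population split into sub-groups (strata).
population = {
    "Year 12": list(range(600)),   # 600 students
    "Year 13": list(range(400)),   # 400 students
}
sample_size = 50
total = sum(len(group) for group in population.values())

stratified_sample = {}
for name, group in population.items():
    # Each stratum contributes in proportion to its share of the population.
    n = round(sample_size * len(group) / total)
    # Random selection within each stratum -> stratified sample;
    # filling the same quotas by opportunity sampling would make it a quota sample.
    stratified_sample[name] = random.sample(group, n)

print({name: len(chosen) for name, chosen in stratified_sample.items()})  # {'Year 12': 30, 'Year 13': 20}
```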


Questionnaires - Writing good questionnaires

Use:

  • Filler q's: Irrelevant q's distract the respondent from the main purpose of the study and may reduce demand characteristics.
  • Sequence of q's: Start with easy q's, saving ones that make a P feel anxious/defensive until the end, when they are relaxed.
  • Analysis: Q's need to be written so that the answers are easy to analyse. Use a mix of closed and open q's: closed q's are easier to analyse, but P's may be forced to select untrue answers which don't represent their real behaviour/thoughts.
  • Sampling technique: i.e. how to select respondents. Questionnaires often use stratified or quota sampling.
  • Pilot study: so the questionnaire can be refined, as difficulties may be found.

Avoid:

  • Lack of clarity: Q's should be understandable and mean the same thing to all P's. They should be written in clear language, avoiding ambiguity; this can be done by operationalising certain terms.
  • Embarrassing q's: Q's that focus on private matters should be avoided, since as questions become more personal the likelihood of unanswered/wrongly answered q's increases.
  • Bias: Any bias may lead the P to be more likely to give a certain answer. The biggest problem is social desirability bias, as P's will often answer q's in a way that shows them in a better light.
  • Leading q's: Leading q's can encourage a certain response from P's.

Questionnaires - Types of Q's

Analysis: Q's need to be written so they are easy to analyse. Use a mix of closed and open questions: closed q's are easier to analyse, but P's may be forced to select answers which are not true to them and so do not represent their real behaviour or thoughts.

Open q's: Q's that invite the respondent to provide their own answers rather than select one provided, therefore producing qualitative data.

Closed q's: Have a fixed range of answers from which P's choose one, producing quantitative data, and so are easier to analyse than open q's.

Rank order q's: E.g. rate/rank a range of options from 1 to 5, with 1 being the most favoured.

Likert scale q's: A statement where P's indicate their strength of agreement/disagreement using numbers, e.g. from 1 (strongly agree) to 5 (strongly disagree).

Checklist q’s: List of terms is provided in which P’s tick those that apply.

Dichotomous q’s: Q’s which offer two choices e.g. Yes/No.

Semantic differential q's: Q's with 2 bi-polar words, where P's are asked to respond by indicating a point between them representing their strength of feeling, e.g. Clean : : : : : : : Dirty.
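
Because closed q's produce quantitative data, analysis can be as simple as totalling scores. A minimal sketch with hypothetical Likert-scale and dichotomous responses (participant labels and scores invented for illustration):

```python
# Hypothetical responses from three P's to a short closed-question attitude scale.
# Likert items scored 1 (strongly agree) to 5 (strongly disagree); dichotomous item as Yes/No.
likert_responses = {
    "P1": [1, 2, 1, 3],
    "P2": [4, 5, 4, 4],
    "P3": [2, 2, 3, 2],
}
dichotomous_responses = {"P1": "Yes", "P2": "No", "P3": "Yes"}

# Total each P's Likert score and count Yes answers across the sample.
for participant, answers in likert_responses.items():
    print(participant, "attitude score:", sum(answers))

yes_count = sum(1 for answer in dichotomous_responses.values() if answer == "Yes")
print("Yes answers:", yes_count, "of", len(dichotomous_responses))
```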


Self-report Techniques - Structured Interview +'s

Interview: A R method that involves face-to-face interaction with the P and results in the collection of data.

Formal Interviews (structured): Pre-determined q's, i.e. a questionnaire that is delivered face to face.

+ Can be easily repeated as the questions are standardised.

+ Less interviewing skill is required.

+ Simple to administer and lots of data collected cheaply and quickly.

+ A large sample can be obtained without difficulty (depending on the subject of the questionnaire), so data can often be generalised.

+ Can provide a great deal of insight into complicated and difficult individual cases if carried out carefully by a skilled interviewer.

+ Easier to analyse than an unstructured interview because answers are more predictable.


Self-report Techniques - Structured Interview -'s

- The interviewer’s expectations, communicated unconsciously may influence the interviewee’s answers (called interviewer bias).

- Reliability may be affected by low inter-interviewer reliability (if one interviewer behaves differently to the next, results change).

- May be difficult to find the right sample, and some P's may leave as they find it too lengthy or too difficult to sit through and so don't complete it; in these cases the data may need to be rejected from the study.

- If the interviewer is not sufficiently skilled, the P's responses may not be relaxed and the data will be false or of little use.


Self-report Techniques - Semi-structured

Semi-structured (partially planned): There are no fixed questions but the interview is guided, perhaps by a predetermined set of topics to be covered. The order in which these topics are covered, or the way in which they are addressed, can vary across P's.

+ R can be flexible in what questions are asked.

+ Can gain in-depth and accurate information from respondents.

- More difficult to compare answers.

 


Self-report - Unstructured Interview +'s/-'s

Unstructured Interviews (informal): The most informal and in-depth technique. There is less structure, as new questions are developed along the way; they enable the interviewer to re-phrase questions if necessary, to ask follow-up questions or to clarify answers that are ambiguous or contradictory. The interviewer may set the topic, but the interviewee is free to dictate the content by taking the conversation in any direction they wish.

+ Provides more detailed, in-depth information than a structured interview.

+ Can access information which may not be available from predetermined questions.

- More affected by interviewer bias: as the interviewer is developing questions on the spot, the questions may be less objective.

- Requires well-trained interviewers, which makes it more expensive to produce reliable interviews.

 


Evaluating Self Report Techniques - Validity

  • External Validity of self-report techniques - the extent to which the findings can be generalised to other situations and people. A major factor will be the representativeness of the sample used to collect data.
  • Internal Validity of self report techniques is related to the issue of whether the questionnaire or interview (or psychological test)  really measures what it is intended to measure.

There are several ways to assess this; the most common are:

  • Face Validity - Does the test look as if it is measuring what the R intended it to measure, e.g. are the q's related to the topic?
  • Concurrent Validity - Established by comparing the current questionnaire or test with a previously established test on the same topic. P's take both tests and then the two test scores are compared (a minimal sketch follows these points).
  • Validity is improved by first assessing the validity of a technique. If such measures of validity are low then:
  • External Validity: Use a more appropriate sampling method to improve population validity, as the findings can then be generalised to a wider population.
  • Internal validity: If one or more measures of internal validity is low, the items on the questionnaire/interview need to be revised in order to produce better-matched scores on the new test and the established one.
  • Ensuring Validity -> Predictive validity: Checking validity by seeing if future behaviour is consistent with what we would predict based on our test.
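
A minimal sketch of the concurrent-validity check described above, assuming hypothetical scores for a new questionnaire and an established test, and using Pearson's correlation from Python's statistics module (Python 3.10+); a high positive correlation suggests the new measure assesses the same thing:

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical data: each P completes both the new questionnaire and an established test.
new_questionnaire = [22, 35, 28, 40, 31, 25]
established_test  = [20, 37, 30, 41, 29, 27]

r = correlation(new_questionnaire, established_test)
print(f"concurrent validity r = {r:.2f}")  # a high positive r supports concurrent validity
```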

Evaluating Self Report Techniques - Reliability

  • Internal reliability: A measure of the extent to which something is consistent within itself. For example, all the q's on an IQ test (which is a kind of questionnaire) should be measuring the same thing. May not be relevant to all questionnaires, because sometimes internal consistency is not important, e.g. a questionnaire about day-care experiences may look at many different aspects of day care and its effects.
  • External Reliability: measure of consistency over several different occasions. For example, if an interviewer conducted an interview, and then conducted the same interview with the same interviewee a week later, the outcome should be the same
  • Reliability also concerns whether two interviewers produce the same outcome. This is called inter-interviewer reliability.
  • Assessing Reliability ->
  • Internal reliability - Split-half reliability: Split the test into 2 halves and carry out a correlation on the 2 halves to ensure both halves of the test are of equal difficulty; if they are, the test is reliable. E.g. a single group of P's all take the test at once and their answers are split in half; this could be done by comparing answers to the odd-numbered q's with answers to the even-numbered q's. The individual scores on both halves of the test should be very similar, and the 2 scores can be compared by calculating a correlation coefficient (see the sketch after these points).
  • External reliability - Test re-test reliability: The test is administered twice, with a gap in between. If the same P's score similar results on both occasions, the method is reliable. Generally used for factors that are stable over time, e.g. intelligence. Reliability is usually higher when little time has passed between tests. If a test produces scores, these can be compared by calculating a correlation coefficient.
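
A minimal sketch of both checks, using hypothetical scores and Pearson's correlation from Python's statistics module (Python 3.10+); the data and variable names are invented for illustration:

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# Split-half (internal reliability): one administration, items split into odd vs even halves.
# Hypothetical item-level answers, one row per participant (1 = correct, 0 = incorrect).
item_scores = [
    [1, 0, 1, 1, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0, 1, 0, 0],
]
odd_halves  = [sum(p[0::2]) for p in item_scores]   # items 1, 3, 5, 7
even_halves = [sum(p[1::2]) for p in item_scores]   # items 2, 4, 6, 8
print("split-half r:", round(correlation(odd_halves, even_halves), 2))    # ~0.56

# Test re-test (external reliability): the same test given twice with a gap in between.
first_administration  = [12, 18, 9, 15, 11]
second_administration = [11, 19, 10, 14, 12]
print("test-retest r:", round(correlation(first_administration, second_administration), 2))  # ~0.95
```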

Evaluating Self-report Techniques - Improving Reliability

Improving internal reliability: It is possible to improve internal reliability by removing those items which are most inconsistent. The only way to do this is by trial and error: remove one test item and see if the split-half correlation coefficient improves; if it does, the removed item should be permanently left out.
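
A minimal sketch of this trial-and-error check, using hypothetical item scores and Pearson's correlation from Python's statistics module (Python 3.10+); the data and the choice of item 3 as the suspect item are invented for illustration:

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical item scores, one row per participant, one column per questionnaire item.
scores = [
    [1, 1, 1, 4],
    [2, 2, 1, 0],
    [3, 3, 3, 5],
    [4, 4, 4, 1],
    [5, 5, 5, 3],
]

def split_half_r(scores, dropped=()):
    """Correlate odd- and even-numbered item totals, ignoring any dropped items."""
    kept = [i for i in range(len(scores[0])) if i not in dropped]
    odd  = [sum(p[i] for i in kept[0::2]) for p in scores]
    even = [sum(p[i] for i in kept[1::2]) for p in scores]
    return correlation(odd, even)

baseline = split_half_r(scores)
without_item_3 = split_half_r(scores, dropped={3})  # try removing the item at index 3

print(f"all items: r = {baseline:.2f}")              # ~0.64
print(f"item 3 removed: r = {without_item_3:.2f}")   # ~0.99
if without_item_3 > baseline:
    print("Removal improved consistency: leave item 3 out permanently.")
```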

Ethical Issues

  • Deception about the true R aims may sometimes be necessary in order to collect truthful data.
  • Psychological harm - Respondents may feel distressed by certain questions or by having to think about certain sensitive topics.
  • Privacy - Q's may relate to sensitive and personal issues, invading an individual's privacy.
  • Confidentiality must be respected: names and personal details should not be revealed without permission, and no personal data should be stored.

Research Method or Technique

Questionnaire/Interview

Research Method: No manipulation of variables by the R so is used to collect data about what people do and why.

Research Technique: Used to gather data within an experiment, e.g. about attitudes; analysis would involve a comparison between 2 groups, i.e. in an experimental study the questionnaire is used as a R technique to assess the DV.

Observation

Research Method: No manipulation of variables by the R so is used to collect data about what people do and why e.g. naturalistic observation

Research Technique: Used to gather data in (lab) experiments, e.g. show one group an aggressive film and then observe to see if behaviour changes; the other group is not shown the aggressive film (used as a control).


Definition of a Case Study

A Research method that involves a detailed study of a single individual, incident, institution or event.

Primary data: Gained directly by the Researcher from interviews, assessments and observations.

Secondary data: Data that has already been collected e.g. medical records or other studies by other Researchers.
