Hypothesis and Variables
When psychologists carry out research, they are hoping to answer certain questions.
They must write the question in a formalised, testable way called a hypothesis.
This hypothesis is stated at the start of the experiment and predicts the outcome.
The null hypothesis states that there will be no difference between the conditions, or no relationship between two variables. We set out to disprove or reject this hypothesis.
The alternative (experimental) hypothesis states that there will be a difference between the conditions, or a relationship between two variables.
An alternative hypothesis can be one-tailed, where the direction of the predicted results is specified, or two-tailed, where a difference is predicted but no direction is given.
Operationalising variables means clearly defining how each variable will be used and exactly how the results will be measured.
Reliability and Validity
For example: observe the behaviour of men and women at traffic lights and count how many jump the light as it turns red. IV = sex of the driver (man or woman). DV = number jumping the red light. Way to operationalise = keep a tally chart, with a mark written in either the male or the female column. Alternative (experimental) hypothesis = 'men will be observed to jump red traffic lights more often than women'.
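The tally-chart step above can be sketched in code. The observation log below is hypothetical, made up purely to show how the operationalised DV (a count per column) is produced:

```python
from collections import Counter

# Hypothetical observation log: the sex recorded each time a driver
# jumped the red light during the observation period.
observations = ["male", "male", "female", "male", "female", "male"]

# The operationalised DV: one tally per column of the chart.
tally = Counter(observations)
print(tally["male"], tally["female"])  # prints: 4 2
```

Counting marks per column like this leaves no ambiguity in how the DV is measured, which is the point of operationalisation.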
Reliability refers to the consistency with which we measure something: if we repeat an operation many times, we should always get the same result.
Validity means the extent to which we actually measure what we set out to measure.
Reliability and validity must be maximised so that we can be sure that the DV is really affected by the IV and nothing else. To maximise validity within the experiment (internal validity), we do our best to control extraneous variables. We maximise external validity by ensuring that the conditions are as close to real life as possible, or at least by justifying why they are not. There must also be reliability between observers where two or more are rating some behaviour; without this, the findings they record could become subjective and therefore not a consistent representation of what has been observed.
Measures of reliability and validity are also used in the standardisation of tests. When new tests or new versions of tests are being developed, the items on the test must be carefully scrutinised to make sure that they provide a reliable and valid measure of the population being tested. It would be no use, for example, if a new IQ test gave different results when the same person completed it on two different occasions, nor would it be any use if it measured something else, such as general knowledge, rather than IQ.
Observer reliability measures the consistency with which two or more observers rate the same behaviour during observation studies. Data obtained from each observer are correlated to establish the degree of similarity in the scores.
Inter-rater reliability has been achieved when there is a significant positive correlation between the scores from each observer.
If there is no correlation, the observers receive further training in the techniques they are using, and the researcher ensures that the behaviour to be observed has been clearly defined or operationalised.
Test-retest reliability assesses the consistency of a test over time: participants complete the test on two different occasions, and the results should be the same on both.
Split-half reliability assesses the extent to which individual items in a test are consistent with other items in the same test. There are various ways to do this: compare results from odd- and even-numbered questions, compare results from the first half of the test with the second half, or randomly split the test into two parts. Psychologists use many different methods to test validity.
A variable is anything that may change or vary in some way and which can be categorised or measured. The control, manipulation and measurement of variables are central to psychological research, and psychologists must precisely define, or operationalise, variables if their research is to be scientifically credible.
The independent variable is the variable manipulated in an experiment; this manipulation is expected to affect the dependent variable, the variable measured in the experiment. The independent variable represents the input or cause, or is tested to see if it is the cause; the dependent variable represents the output or effect, or is tested to see if it is the effect. The experiment is the best method for controlling extraneous variables: variables over which the researcher has little control but which could affect the outcome of the experiment by affecting the dependent variable.
Extraneous variables come either from the participants themselves (participant variables) or from the conditions under which the participants are tested (situational variables).
Suppose we attempted to measure frustration levels in participants solving a difficult anagram and we picked participants from an English class going on in the next classroom. It is likely that many of these participants will be above average at solving anagrams and therefore will show less frustration. Such participant variables would therefore confound our results, giving us an unrealistic idea of the effects of anagram solving on frustration. Similarly, if we were to test groups at different times of the day, say one early in the morning and one late at night, we may have a situational variable, since people are generally more alert at certain times of the day and those tested at night may well be tired.
Demand characteristics arise when the participant picks up on cues that lead them to behave in a certain way.
Investigator effects arise when the investigator knows the desired outcome of the experiment; this can lead the investigator to influence a participant's behaviour, or even to record results in such a way that the desired effect is shown.
These variables are both threats to the internal validity of the experiment.
The target population is the population from which we draw a sample.
Sampling bias: we need our sample to be typical of the population about which we wish to generalise our results. If we decided to interview only managers in a local firm that was struggling to make a profit, we could not obtain a realistic view of the question we wanted to study.
A random sample is a sample in which every member of the target population has an equal chance of being included.
A quota sample reflects the exact proportions of specific characteristics as they occur in the target population.
Systematic sampling is when participants are picked on the basis of some system. A sample of five-year-olds, for example, could be selected by picking every fifth name on the register. This is not a random sample, since not everyone has an equal chance of selection, but it is relatively unbiased and often faster than random sampling. It is often called a quasi-random sample.
Opportunity sampling simply grabs people who happen to be near at the time.
A volunteer (self-selecting) sample is obtained by advertising for participants; the volunteers who turn up are a self-selecting sample, and a biased one.
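The random and systematic methods above can be sketched as follows. The class register of names is hypothetical, standing in for a real target population:

```python
import random

# Hypothetical class register standing in for a target population.
register = [f"pupil_{i}" for i in range(1, 26)]

# Random sample: every member has an equal chance of being included.
random_sample = random.sample(register, 5)

# Systematic (quasi-random) sample: every fifth name on the register.
systematic_sample = register[4::5]
print(systematic_sample)  # pupils 5, 10, 15, 20, 25
```

Note the difference: `random.sample` gives every pupil an equal chance of selection, while the systematic slice excludes anyone not in a fifth position, which is why it is only quasi-random.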
During the 1960s and 1970s, there were growing demands for explicit and detailed ethical guidelines for psychological research. Partly as a result of this debate, professional associations of psychologists in a number of countries published codes of conduct for research with participants. In 1979, the BPS published its 'Ethical principles for research on human subjects'; this was revised in 1990 and 1992, and republished in 1993 and 1998 with the title 'Ethical principles for conducting research with human participants'. Ethical considerations are now a major concern in research: the BPS (1998) states that 'in all circumstances, investigators must consider the ethical implications and psychological consequences for the participants in their research'. The investigation should be considered from the viewpoint of all participants, and threats to their psychological health, well-being, values or dignity should be eliminated. These codes are considered so important that psychological associations now have committees which are responsible for ethical issues and continuously monitor and update ethical guidelines. Universities also have ethics committees which examine research proposals to ensure they are in line with codes of conduct.