- Created by: kat_wright1983
- Created on: 13-09-18 09:42
Psychology as a science:
- Paradigm: Psychology is marked by too much internal disagreement to have a clear paradigm or paradigm shift.
- Testability: Not all theories can be tested using precise methods.
- Falsifiability: Theories cannot be considered scientific unless they admit the possibility of being proven wrong.
- Replicability: To be trusted, research findings must be repeated across different contexts and circumstances.
- Objectivity: All sources of personal bias are minimised so as not to distort or influence the research process.
- The empirical methods: Theories should be based on the gathering of evidence through direct observation and experience.
Probability and significance:
- Probability: The accepted level of probability in psychology is 0.05 (5%); a more stringent level of 0.01 (1%) is used where the cost of an error is high, such as in drug trials.
- Significance: A statistical term that tells us how confident we can be that a difference or correlation exists. A significant result means that the researcher can reject the null hypothesis.
- Writing a significance statement: State whether the observed value is significant at the given probability level for the test type (for the sign test, it must be lower than or equal to the critical value), and whether the hypothesis should be accepted or rejected.
The Critical Value:
· When the statistical test has been calculated, the researcher is left with the calculated value. This needs to be compared with a critical value to decide whether the result is significant or not.
· The critical values for a sign test are given in a table of critical values, and you need the following information to use it: 5% significance; the number of participants in the investigation (the n value) and whether the hypothesis is directional or non-directional.
· For the sign test, the calculated value has to be equal to or lower than the critical value for the result to be regarded as significant.
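The table values themselves come from the binomial distribution under the null hypothesis, where each sign is equally likely. As an illustrative sketch (not a replacement for the published table), the critical value is the largest S whose cumulative null probability stays within the significance level:

```python
from math import comb

def sign_test_critical_value(n, alpha=0.05, two_tailed=True):
    """Largest S whose cumulative probability under the null stays within alpha.
    Returns None when no value of S is significant (as tables mark with '-')."""
    def tail(s):
        # P(S <= s) when each of n signs is + or - with probability 0.5
        return sum(comb(n, k) for k in range(s + 1)) / 2 ** n
    best = None
    for s in range(n + 1):
        p = tail(s) * (2 if two_tailed else 1)  # double for a two-tailed test
        if p <= alpha:
            best = s
        else:
            break
    return best

cv = sign_test_critical_value(10)  # n = 10, non-directional, 5% significance
```

For n = 10 at 5% (two-tailed) this gives a critical value of 1, matching the standard table; published tables should still be used in an exam.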
Levels of Measurement:
- Nominal: Categorical, e.g. hair colour.
↳ Discrete, as each item can appear in only one category.
- Ordinal: Ordered data, e.g. ratings on a scale.
↳ “Unsafe” due to lack of precision.
↳ Does not have equal intervals between units.
↳ Raw scores are not used directly in statistical testing; they are converted to ranks.
- Interval: Numerical scales of equal and precise units, e.g. time, temperature or weight.
↳ The most precise and sophisticated form of data in psychology and is a necessary criterion for the use of parametric tests.
Errors and hypothesis types:
- Type I error: The incorrect rejection of a true null hypothesis (a false positive). This is more likely if the significance level is too lenient (too high, e.g. 0.1).
- Type II error: The failure to reject a false null hypothesis (a false negative). This is more likely if the significance level is too stringent (too low, e.g. 0.01).
- One-tailed test: Directional hypothesis.
- Two-tailed test: Non-directional hypothesis.
Types of tests:
· Parametric test: A group of inferential statistics that make certain assumptions about the parameters (characteristics) of the population from which the sample is drawn.
· Inferential statistics: A type of statistical analysis that permits one to make inferences (i.e. draw conclusions) about an underlying population from a sample of data.
The Sign Test:
The sign test: A statistical test used to analyse the difference in scores between related items (e.g. the same participant tested twice).
To use a sign test, you must: Be looking for a difference rather than an association; have used a repeated measures design; use data that is organised into categories (nominal data).
How to do a sign test:
1. Convert data to nominal. For numerical data, subtract one score from the other and record a + for a positive answer and a – for a negative one.
2. Add up the plusses and minuses.
3. Take the less common sign and call this S.
4. Get the critical value from the critical value table. To do this, you will need the value of N, the hypothesis type and the probability value.
5. For the S value to be significant, the observed value (S) must be lower than or equal to the critical value.
Observed value of S = frequency of least common difference sign
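The steps above can be sketched in code. The scores below are made up for illustration, and the critical value of 1 is a typical table value for N = 9 (ties dropped), non-directional, at 0.05 — check an actual table for real data:

```python
def sign_test(before, after, critical_value):
    """Return (S, significant) for paired scores using the sign test."""
    # Step 1: convert to nominal signs, dropping ties (a difference of 0)
    signs = []
    for b, a in zip(before, after):
        if a > b:
            signs.append("+")
        elif a < b:
            signs.append("-")
    # Steps 2-3: S is the frequency of the less common sign
    s = min(signs.count("+"), signs.count("-"))
    # Step 5: significant when S is lower than or equal to the critical value
    return s, s <= critical_value

# Hypothetical data: 10 participants tested twice (repeated measures)
before = [5, 7, 4, 6, 8, 5, 9, 6, 7, 4]
after  = [7, 8, 6, 5, 9, 7, 9, 8, 8, 6]
s, significant = sign_test(before, after, critical_value=1)
```

Here one pair ties (9, 9) and is dropped, leaving eight plusses and one minus, so S = 1, which is significant at the assumed critical value.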
Other statistical tests:
- Mann-Whitney U Test: A test for a significant difference between two sets of scores. Data should be at least ordinal level, using an unrelated design (independent groups).
- Wilcoxon: A test for a significant difference between two sets of scores. Data should be at least ordinal level, using a related design (repeated measures).
- Chi-Squared: A test for an association (difference) between two variables or conditions. Data should be nominal level, using an unrelated (independent groups) design.
↳ Degrees of freedom: (number of rows – 1) x (number of columns – 1)
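The degrees-of-freedom formula in code, applied to a hypothetical 2 x 3 contingency table:

```python
def degrees_of_freedom(rows, cols):
    """df for a chi-squared contingency table: (rows - 1) * (cols - 1)."""
    return (rows - 1) * (cols - 1)

# e.g. 2 conditions x 3 outcome categories
df = degrees_of_freedom(2, 3)  # (2 - 1) * (3 - 1) = 2
```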
- Spearman’s rho: A test for correlation between two sets of values. The test is selected when one or both variables are ordinal, though it can be used with interval data. The calculated value of rho must be equal to or more than the critical value for significance to be shown.
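A minimal sketch of the rho calculation using the shortcut formula rho = 1 − 6Σd²/(n(n² − 1)), where d is the difference between each pair of ranks. The shortcut assumes no tied ranks, and the two score lists are made up for illustration:

```python
def spearman_rho(x, y):
    """Spearman's rho via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    Assumes no tied ranks; tied data needs average ranks and the
    correlation-of-ranks form instead."""
    def ranks(values):
        order = sorted(values)
        return [order.index(v) + 1 for v in values]  # rank 1 = smallest
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical rankings of five participants by two measures
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # rho = 0.8
```

The calculated rho (here 0.8) would then be compared against the critical value from a Spearman table for n = 5 and the chosen significance level; rho must be equal to or more than that value to be significant.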
Sections of a scientific report:
1. Abstract: A short summary that includes all major elements of the investigation, including aims, hypothesis, method, results and conclusion.
2. Introduction: A literature review of the area of investigation, detailing relevant theories and studies.
3. Method: The method is split into sub-sections to ensure that it is replicable. These include: the experimental design and the reasons for it; the sampling method, the number of participants and the target population; the apparatus/materials used; and the standardised procedure, including briefing, debriefing and an evaluation of ethics.
4. Results: Should feature the key findings of the investigation. This is likely to include descriptive statistics such as measures of central tendency and dispersion. Inferential statistics should show the statistical test used, the calculated and critical values, and the level of significance. Results should state whether the hypothesis is retained or rejected.
5. Discussion: Verbal summary of the findings; discussion of the evaluation and wider implications of the research.
6. References: Bibliography of all material cited or drawn upon in the report.