# Statistical Testing

## Independent t-test

Developed by William Sealy Gosset (publishing as 'Student') and used for between-groups, i.e. unrelated-groups, data.

t = (observed difference between sample means − expected difference between population means under H0) / (estimated standard error of the difference between sample means)

Requires knowledge of each group's mean, standard deviation, and sample size.

Assumes:

• Data are normally distributed
• Data are interval or ratio in nature
• Groups are independent of each other
• The variances of the groups are roughly equal (homogeneity of variance)
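
As a sketch with hypothetical scores (assuming `scipy` is available), an independent t-test can be run as follows:

```python
from scipy import stats

# Hypothetical scores for two independent groups (illustrative values only)
group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [18, 20, 17, 21, 19, 16, 22, 18]

# ttest_ind assumes equal variances by default (set equal_var=False otherwise)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
```

The function returns the t statistic and a two-tailed p value.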

## Paired Samples t-test

Used for within-groups designs - e.g. repeated measures, within-subjects, paired means.

Is more powerful than the independent t-test, so is more likely to find a significant effect if one is present.

t = [Σ(x1 − x2) / N] / SE
(the mean of the paired differences divided by the standard error of the differences)

Assumes:

• Interval or ratio scale data
• The sample of pairs is drawn at random from the population
• The differences between paired scores are normally distributed
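
A minimal sketch with hypothetical before/after scores for the same participants, assuming `scipy`:

```python
from scipy import stats

# Hypothetical pre- and post-intervention scores, one pair per participant
before = [20, 22, 19, 24, 25, 21, 23, 20]
after = [23, 25, 20, 27, 29, 24, 26, 23]

# ttest_rel works on the per-pair differences
t_stat, p_value = stats.ttest_rel(before, after)
```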

## Single Sample t-test

Compares a single sample of scores with a specific test value rather than another set of scores.
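
A sketch with a hypothetical sample tested against a fixed value of 100 (e.g. a known population mean), assuming `scipy`:

```python
from scipy import stats

# Hypothetical sample compared against a fixed test value of 100
scores = [106, 103, 110, 105, 104, 108, 112, 104]
t_stat, p_value = stats.ttest_1samp(scores, popmean=100)
```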


## Chi-Squared Test

Used when analysing categorical data.

Calculates how often a particular observation falls into a specific category, and compares this to how many would be expected in each category on the basis of chance.

Null hypothesis: the observed frequencies match those expected by chance
Alternative hypothesis: the observed frequencies in each category differ significantly from chance

Assumes:

• independence: each participant contributes to only one category
• expected frequency: each category should have an expected count of at least one, and no more than 20% of categories should have an expected frequency of less than 5
• if this assumption is violated, statistical power is reduced and may fall below the conventionally acceptable level (80%)

One IV (goodness of fit): χ² = Σ (observed value − expected value)² / expected value
Two IVs (contingency table): expected value per cell = (row total × column total) / total observations
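
Both forms can be sketched with hypothetical counts, assuming `scipy`:

```python
from scipy import stats

# One IV: do 60 participants choose among three options at chance (20 expected each)?
observed = [30, 14, 16]
chi2_stat, p_value = stats.chisquare(observed, f_exp=[20, 20, 20])

# Two IVs: a 2x2 contingency table; expected counts come from
# (row total x column total) / total observations
table = [[30, 10],
         [20, 40]]
chi2_2, p_2, dof, expected = stats.chi2_contingency(table)
```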


## Covariance

Used when it is not ethical or practical to manipulate an IV to measure its effect on a DV.

Measures the average of the cross-product deviations between two variables.

• COV = Σ(x − x̄)(y − ȳ) / N        (for populations)

• COV = Σ(x − x̄)(y − ȳ) / (N − 1)     (for samples)

Interpretation:

• if COV is positive, x and y both increase together
• if COV is negative, x goes up as y goes down
• if COV = 0, then there is no linear relationship

However, the size of the covariance is affected by the variances of the variables and by the scale of measurement used.
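
The two formulas can be checked numerically with hypothetical data, assuming `numpy`:

```python
import numpy as np

# Hypothetical paired measurements (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # y rises with x, so COV is positive

# Sample covariance: sum of cross-product deviations divided by N - 1
cov_sample = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

# np.cov returns the full covariance matrix; the off-diagonal entry matches
cov_matrix = np.cov(x, y)  # uses N - 1 (sample form) by default
```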


## Correlation Coefficient

Introduced by Pearson

Measures the strength of the correlation between two variables by standardising the covariance to produce an r value.

The output lies between −1 and +1, with +1 meaning a perfect positive correlation and −1 a perfect negative correlation.

It is not possible to claim a causal relationship just because of a correlation, as a third factor may also be affecting the results, and it is not known whether A predicts B or vice versa.
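
The standardisation can be sketched with hypothetical data, assuming `numpy` and `scipy`:

```python
import numpy as np
from scipy import stats

# Hypothetical, nearly linear data (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# r is the covariance standardised by the two standard deviations
r, p_value = stats.pearsonr(x, y)
r_manual = np.cov(x, y, ddof=1)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
```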


## Spearman's Rho

A non-parametric version of the correlation coefficient, used when data are not normally distributed.

Doesn't require the same strict assumptions as Pearson's coefficient (normal distribution, independence of observations).

Is often used with ordinal data.

Data are ranked, and correlations are then calculated on the ranks.
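
A sketch with hypothetical ordinal-style data (e.g. two judges' rankings), assuming `scipy`:

```python
from scipy import stats

# Hypothetical rankings with a monotonic but imperfect relationship
judge_1 = [1, 2, 3, 4, 5, 6]
judge_2 = [2, 1, 4, 3, 6, 5]

# spearmanr ranks the data internally, then correlates the ranks
rho, p_value = stats.spearmanr(judge_1, judge_2)
```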


## Simple Linear Regression

Tests a linear model to predict values of an outcome variable (DV) from the values of one or more predictor variables (IVs).

Simple linear regression involves one predictor/IV

Yi = (B0 + B1Xi) + Ei

B0 = intercept
B1 = gradient (slope)
Xi = predictor variable
Ei = error term

Aims to show how much variance in the outcome can be explained by the predictors, and to test whether the model is significantly better than the mean at predicting the outcome.

Produces a line of best fit
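
A sketch with hypothetical data roughly following y = 2x + 1, assuming `scipy`:

```python
from scipy import stats

# Hypothetical data: roughly 2x + 1 plus a little noise
x = [1, 2, 3, 4, 5]
y = [3.1, 4.9, 7.2, 9.0, 11.1]

result = stats.linregress(x, y)
# result.intercept estimates B0, result.slope estimates B1;
# rvalue squared is the proportion of variance explained
r_squared = result.rvalue ** 2
```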


## Multiple Linear Regression

Used when there are multiple predictive variables (IVs) rather than just one as in the simple linear regression.

Produces a 'plane of best fit' to cover the relationship between all variables.
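
A sketch of fitting that plane with ordinary least squares, assuming `numpy` and hypothetical data built from y = 1 + 2·x1 + 3·x2:

```python
import numpy as np

# Hypothetical data: the outcome is an exact function of two predictors
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y = 1.0 + 2.0 * x1 + 3.0 * x2

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x1), x1, x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # [intercept, b1, b2]
```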


## Cohen's D

A measure of effect size, indicating the standardised difference between two means.

An effect size of 1 means that the two means differ by one (pooled) standard deviation.
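
A minimal sketch, computing d by hand from the pooled standard deviation (hypothetical scores, assuming `numpy`):

```python
import numpy as np

def cohens_d(a, b):
    """Standardised difference between two means, using the pooled SD."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical groups whose means differ by roughly two pooled SDs
d = cohens_d([15, 14, 16, 15, 14, 16], [13, 12, 14, 13, 12, 14])
```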


## Statistical Power

Statistical Power is the likelihood of finding an effect in a population assuming one actually exists.

Repeated-measures designs have the highest statistical power, as they remove individual participant differences.

Calculated as 1 − β.

β (beta) is the probability of failing to find an effect that actually exists (a Type II error) - conventionally 0.2. If there is less than an 80% chance of finding the effect, the study is typically considered underpowered or flawed.

Affected by:

• effect size
• number of participants
• alpha level
• variability, design, test choice, and number of tails

Effect size, participant number, and alpha level must be known to calculate the power.
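
A rough sketch of a power calculation from those three inputs, using a normal approximation for a two-sample, two-tailed test (assumed values: d = 0.5, n = 64 per group, alpha = 0.05; an exact t-based calculation would differ slightly):

```python
from scipy.stats import norm

d, n, alpha = 0.5, 64, 0.05  # assumed effect size, per-group n, and alpha level

z_crit = norm.ppf(1 - alpha / 2)         # critical z for a two-tailed test
noncentrality = d * (n / 2) ** 0.5       # expected z under the alternative
beta = norm.cdf(z_crit - noncentrality)  # probability of missing the effect
power = 1 - beta                         # comes out near the conventional 0.80
```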


## ANOVA Overview

Meaning 'Analysis Of Variance'

It is an extension of the t-test, used when there are three or more conditions (with two, you would use the independent t-test).

Measures the amount of variance caused by the experimental manipulation and compares this to variance which is unexplained by the experiment.

Produces the test statistic F. A higher value of F means a bigger difference between the means relative to the unexplained variance, so a significant difference between groups is more likely.

Assumptions:

• Participants have been selected randomly
• Groups are independent of each other
• Roughly equal numbers of participants in each group (power is maximised when they are equal)
• Each group has roughly equal variance within the group

Three main types - one-way, two-way, and multivariate (MANOVA).


## One-Way ANOVA

Used to compare the mean of one dependent variable across the levels of one independent variable (unrelated groups) - for example, mean weight loss depending on tea type (one IV with multiple levels: the types of tea).

The null hypothesis expects that the means are equal between groups, and the alternative says they're not.

Aims to compare the amount of explained variance with the unexplained variance - where the explained variance is the variance between groups, and the unexplained variance is the variance within the group.

F = explained variance (between groups) / unexplained variance (noise)

ANOVAs output two degrees of freedom: the between-conditions df (number of IV levels − 1) and the residual df (number of participants − number of IV levels). Both should be reported.

It can show whether there is a significant difference between at least two of the means, but it will not identify which means differ. A post-hoc test is therefore necessary to locate the difference.
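
A sketch of the tea-type example with hypothetical weight-loss scores, assuming `scipy`:

```python
from scipy import stats

# Hypothetical weight-loss scores for three types of tea (illustrative values)
green = [3.1, 2.8, 3.5, 3.0, 2.9]
black = [2.0, 1.8, 2.3, 2.1, 1.9]
herbal = [1.0, 1.2, 0.8, 1.1, 0.9]

# F = between-groups (explained) variance / within-groups (unexplained) variance
f_stat, p_value = stats.f_oneway(green, black, herbal)
```

A significant F here says only that at least two means differ; a post-hoc test (e.g. Tukey's HSD) is still needed to locate which.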


## Two-Way ANOVA

Used when there is one dependent variable and two independent variables. The IVs are usually nominal explanatory variables (e.g. gender or income), which are both manipulated, and a single DV is measured (e.g. anxiety in an interview).

Independent variables are called factors, i.e. they are two separate potential influences on the outcome (DV). Factors can be split further into levels - such as high, middle, and low income, or male, female, and other genders.

Analysis of a two-way ANOVA produces main effects and an interaction effect. For a main effect, each factor's effect is considered separately - i.e. the role of income on anxiety and the role of gender on anxiety (just like a one-way ANOVA). For the interaction effect, the factors are considered together - i.e. which combination of levels of the factors has the biggest impact on the DV (anxiety). For example, it may be that high-income men have the lowest anxiety whilst low-income participants of other genders have the highest.


## Multivariate ANOVA

Also known as the MANOVA (multivariate analysis of variance).

Used when there are multiple dependent variables - two or more - in a study, all of which are being analysed. Often these are then examined individually or pair-wise afterwards to establish where the most significant impact was found.

For example, if researchers are measuring both anxiety score and reaction time for a particular independent variable.

