Bad science

Non sequiturs

Logical fallacy

Does not follow

The conclusions you draw do not follow from the research you actually did

Sharpshooter fallacy

Logical fallacy

Hypotheses should always come first, before we see the data; the name comes from a marksman who shoots at a barn wall and then paints the target around the tightest cluster of bullet holes

If we go looking for patterns after the fact, it's easy to see whatever we want

Post hoc ergo propter hoc

Logical fallacy

- After this, therefore because of this

- Because y happened after x, we assume x must have caused y

- This confuses correlation with causation: correlation does not imply causation

- We tend to see cause and effect where there isn't any

- A third variable can cause both x and y, making them correlated even though neither causes the other

Confirmation bias

Logical fallacy

Giving more weight to evidence that supports our pre-existing beliefs

QRPs

Questionable research practices

p-hacking, HARKing, etc

Different from logical fallacies: QRPs are not errors of judgement but deliberate, conscious tampering with the methods or results of scientific studies in order to achieve certain results

NHST

- Null hypothesis significance testing

- Can we reject the null hypothesis, or do we fail to reject it?

- What criteria do we use to say whether a result is significant? (see the sketch below)
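
A minimal sketch of what an NHST decision looks like in Python, assuming two made-up reaction-time samples and the conventional α = 0.05 criterion (scipy's independent-samples t-test; the numbers are hypothetical, not from any real study):

```python
# Minimal NHST sketch: compare two hypothetical reaction-time samples
# with an independent-samples t-test at the conventional alpha = 0.05.
from scipy import stats

alcohol     = [412, 398, 455, 430, 467, 421, 440, 395, 428, 450]  # ms (hypothetical)
non_alcohol = [350, 372, 341, 365, 358, 380, 345, 362, 370, 355]  # ms (hypothetical)

t_stat, p_value = stats.ttest_ind(alcohol, non_alcohol)

alpha = 0.05  # significance criterion, chosen before seeing the data
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```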

p-value

The probability of the observed data under the null hypothesis

If alcohol doesn't affect reaction times, what's the probability that people who drank alcoholic beer would be 100 ms slower on average than those who drank non-alcoholic beer?

The probability that you would see this result if the null hypothesis were true

Not proof, just a measure of probability
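
To make the definition concrete, here is a small Monte Carlo sketch (all numbers hypothetical: two groups of 20, reaction times roughly normal with a 150 ms spread). It simulates a world where the null hypothesis is true, so alcohol has no effect, and counts how often a 100 ms group difference appears by chance alone:

```python
# Monte Carlo sketch of a p-value: in a null world where alcohol has no
# effect, how often is the alcohol group >= 100 ms slower just by chance?
import numpy as np

rng = np.random.default_rng(0)
n_per_group = 20   # hypothetical sample size per group
sd = 150.0         # hypothetical between-person SD in ms
n_sims = 100_000

# Both groups are drawn from the SAME distribution: the null is true.
a = rng.normal(400, sd, size=(n_sims, n_per_group))  # "alcohol" group
b = rng.normal(400, sd, size=(n_sims, n_per_group))  # "non-alcohol" group
diff = a.mean(axis=1) - b.mean(axis=1)

# One-sided: the proportion of null worlds with a >= 100 ms difference.
print(f"P(>= 100 ms difference | null true) ~ {np.mean(diff >= 100):.4f}")
```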

Familywise error

The more tests you run, the more likely you are to find a significant result

A single test has a false-positive rate of 5%, but the error rate stacks across tests
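
Assuming the tests are independent and each uses α = 0.05, the compounding can be written down directly (a textbook formula, not specific to any one study):

\[
\text{FWER} = 1 - (1 - \alpha)^{m}
\]

With α = 0.05 and m = 10 tests, FWER = 1 − 0.95^10 ≈ 0.40: a roughly 40% chance of at least one false positive.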

p-hacking

Trying to get your p-values below the 0.05 cut-off by methods such as the following (see the simulation sketch after this list):

- Failing to report all of a study's dependent measures

- Failing to report all of a study's conditions

- "Rounding off" a p-value

- Selectively reporting studies that "worked"

- Deciding whether to exclude data after looking at the impact of doing so on the results
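
A small simulation sketch of why selective reporting inflates false positives (hypothetical setup: every null hypothesis is true, but each "study" measures five independent dependent variables and reports only the smallest p):

```python
# Sketch of p-hacking by selective reporting: every null is true, yet
# keeping only the best of five dependent measures pushes the
# false-positive rate far above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_dvs, n_per_group = 5_000, 5, 30

false_positives = 0
for _ in range(n_studies):
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_dvs)
    )
    if best_p < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_studies:.3f}")  # roughly 0.23, not 0.05
```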

The file drawer problem

When only studies that "worked" get published, drawers fill up with studies that didn't work and that we never get to read, while we are exposed only to the one study that did work

HARKing

Hypothesizing After the Results are Known

Reporting an unexpected finding as having been predicted from the start 

Publication bias

Whether a study is published depends on what its findings were and how significant the results were

This leads to the file drawer problem

Mitigated by Registered Reports, which are reviewed and accepted for publication before the results are known
