A type I error is the mistake of thinking something is true when it is not (also known as a “false positive”). A type II error is thinking something is not true when in fact it is (a “false negative”). When testing a specific hypothesis, scientists run statistical checks to work out how likely it would be for data which seem to support the idea to have come about simply by chance. If the likelihood of such a false-positive conclusion is less than 5%, they deem the evidence that the hypothesis is true “statistically significant”. They are thus accepting that one result in 20 will be falsely positive—but one in 20 seems a satisfactorily low rate.

Here is another breakdown that I put together and taped to the wall until it was committed to memory. Again. And again.

**Type I error**: Rejecting the null hypothesis when it is actually true (false positive).

The **significance level** of a test is the probability of a **Type I error**.

The **p-value** is the smallest significance level at which the null hypothesis can be rejected.

**Type II error**: Failing to reject the null hypothesis when it is actually false (false negative).

The **power** of a test is one minus the probability of a **Type II error**.
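The breakdown above can be checked by simulation. Here is a minimal sketch (my own illustration, not from the briefing) using a one-sample z-test for a coin: when the null hypothesis is true, the rejection rate should hover near the significance level of 0.05, and when the coin is actually biased, the rejection rate is the power. The sample sizes, bias of 0.55, and trial counts are all arbitrary choices for the demo.

```python
import math
import random

random.seed(42)
n, trials = 1000, 2000
z_crit = 1.96  # two-sided critical value for alpha = 0.05

def rejects(p_true):
    """Flip a coin with P(heads) = p_true n times; test H0: p = 0.5."""
    heads = sum(random.random() < p_true for _ in range(n))
    # z statistic under the null hypothesis p = 0.5
    z = (heads - 0.5 * n) / math.sqrt(n * 0.25)
    return abs(z) > z_crit

# Type I error rate: rejecting H0 when H0 is true (the coin really is fair)
type1 = sum(rejects(0.50) for _ in range(trials)) / trials

# Power: rejecting H0 when H0 is false (here the coin is biased, p = 0.55)
power = sum(rejects(0.55) for _ in range(trials)) / trials
type2 = 1 - power  # Type II error rate

print(f"Type I error rate ~ {type1:.3f} (near the 0.05 significance level)")
print(f"Power ~ {power:.3f}, so Type II error rate ~ {type2:.3f}")
```

Note that the Type I rate is fixed by the significance level you choose, while the power depends on how far the truth is from the null and on the sample size.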

The thing to remember is that you *never accept* a hypothesis. You only *reject the null hypothesis* or *fail to reject the null hypothesis*. The best way to remember this concept is to remember that your geeky sciency friends are so annoying because they *never accept anything*!

So... Go out tonight and reject a scientist. Give them a dose of their own medicine.