# Tutorial: Introduction to Biostatistics

### Statistical Hypothesis

After identifying and defining the variables to be investigated, the researcher has to develop the study hypotheses if conducting an experimental study. Classically, such studies will have two hypotheses. One is the null hypothesis, a statement of no effect or no association, while the alternative hypothesis is a statement that reflects the researcher’s interest or scientific belief.

To illustrate, suppose a researcher wants to test whether a new form of chemotherapy for treating small cell lung cancer is more effective than the standard therapy. The researcher can formulate the null and alternative hypotheses as follows:

Null Hypothesis:

There is no difference in efficacy between the standard therapy and the new therapy.

Alternative Hypothesis:

The new therapy is superior to the standard therapy.
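One way to put such a pair of hypotheses to the test is a one-sided permutation test: under the null hypothesis the treatment labels are exchangeable, so reshuffling them many times shows how often a difference as large as the observed one arises by chance. The sketch below uses entirely hypothetical response scores for the two therapy arms.

```python
import random
import statistics

# Hypothetical response scores for each arm (made-up numbers for illustration)
standard = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6, 4.2, 4.0]
new_therapy = [4.9, 5.3, 4.7, 5.1, 4.5, 5.6, 4.8, 5.2]

observed = statistics.mean(new_therapy) - statistics.mean(standard)

# Under H0 the labels are exchangeable: reshuffle them many times and count
# how often a mean difference at least as large as the observed one occurs.
random.seed(42)
pooled = standard + new_therapy
n_new = len(new_therapy)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_new]) - statistics.mean(pooled[n_new:])
    if diff >= observed:
        count += 1

p_value = count / n_perm
print(f"observed difference = {observed:.3f}, one-sided p = {p_value:.4f}")
```

A small p-value means the observed superiority of the new therapy would rarely arise if the null hypothesis were true, supporting rejection of the null in favour of the alternative.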

Two types of error can occur when drawing conclusions about the null hypothesis: Type I error and Type II error. A Type I error is rejecting the null hypothesis when it is actually true (a false positive). A Type II error is failing to reject the null hypothesis when it is actually false (a false negative).

### Level of Significance and Power of the Test

The probability of making a Type I error is called the level of significance (α). Researchers normally aim to keep this probability small, and most set it at 0.05.
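The meaning of α can be checked by simulation: if we run many trials in which the null hypothesis is true and test each at α = 0.05, roughly 5% of them should reject the null anyway. The numbers below (sample size, standard deviation) are arbitrary assumptions for the sketch; the test is a simple two-sided z-test with known σ.

```python
import random

# Simulate many studies in which H0 is true (true mean difference is zero)
# and record how often a two-sided z-test rejects at alpha = 0.05.
random.seed(1)
Z_CRIT = 1.96          # two-sided critical value for alpha = 0.05
n, sigma = 30, 1.0     # assumed sample size and known SD (illustrative)
n_trials = 20_000
rejections = 0
for _ in range(n_trials):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    z = (sum(sample) / n) / (sigma / n ** 0.5)
    if abs(z) > Z_CRIT:
        rejections += 1   # a Type I error: H0 is true but was rejected

print(f"empirical Type I error rate: {rejections / n_trials:.3f}")
```

The empirical rejection rate comes out close to 0.05, illustrating that α is exactly the long-run false-positive rate of the test.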

The probability of making a Type II error is denoted β. The power of the study, 1 − β, is the probability of detecting a real difference when the null hypothesis is false.

These parameters have to be set by the researcher before the study begins, together with a sample-size calculation, to avert the risk of erroneously failing to reject the null hypothesis (even though it is really false) because the sample size is too small to detect a true difference.
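A common sample-size sketch for comparing two means inverts the normal-approximation power formula: n per group = 2((z₁₋α/₂ + z₁₋β)σ/Δ)². The inputs below (Δ = 0.5, σ = 1, α = 0.05, β = 0.20, i.e. 80% power) are conventional illustrative choices, not values from the source.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, beta=0.20):
    """Normal-approximation sample size per group for a two-sided two-sample test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(1 - beta)       # about 0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Illustrative: detect a difference of half a standard deviation
print(n_per_group(delta=0.5, sigma=1.0), "participants needed in each arm")
```

Performing this calculation before recruitment is exactly the safeguard the paragraph above describes: it ensures the study is large enough that a real effect is unlikely to be missed.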