This section collects a few distributions that appear over and over across experimental design.
Chi-Square Distribution. This distribution arises as the sum of squares of standard normals. That is, if \(z_k \sim \mathcal{N}\left(0, 1\right)\) independently, then \(\sum_{k = 1}^{K} z_{k}^2 \sim \chi^2_{K}\), a chi-square distribution with \(K\) degrees of freedom (d.f.).
This distribution's claim to fame is that if \(y_i \sim \mathcal{N}\left(\mu, \sigma^2\right)\) independently, then
\[ \frac{1}{\sigma^2}\sum_{i = 1}^{n} \left(y_i- \bar{y}\right)^2 \sim \chi^2_{n -1} \] which is a nontrivial but very useful fact, since the sum of squared deviations is exactly \(\left(n - 1\right) S^2\), where \(S^2\) is the usual sample variance. We'll make use of this connection when we construct some common hypothesis tests.
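If you want to convince yourself of this numerically, here is a minimal simulation sketch (the sample size, \(\mu\), \(\sigma\), and number of replications below are arbitrary choices): draw many samples, compute the scaled sum of squared deviations for each, and compare its empirical quantiles to the theoretical \(\chi^2_{n-1}\) quantiles.

```python
# Minimal sketch: check that (1 / sigma^2) * sum((y_i - ybar)^2) behaves like
# a chi-square with n - 1 d.f. All parameter values here are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, mu, sigma, reps = 10, 5.0, 2.0, 100_000

# Each row is one sample of size n.
y = rng.normal(mu, sigma, size=(reps, n))
ss = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma**2

# Quantiles give a rough check on the whole shape, not just the mean.
probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(ss, probs))           # empirical quantiles
print(stats.chi2.ppf(probs, df=n - 1))  # theoretical chi-square quantiles
```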
t-Distribution. This distribution arises as the ratio of a standard normal to the square root of an independent chi-square divided by its degrees of freedom:
\[ \frac{\mathcal{N}\left(0, 1\right)}{\sqrt{\frac{\chi^2_{K}}{K}}} \sim t_{K} \]
This seems like an esoteric fact, but notice that the usual way of standardizing the mean (when the true variance is unknown) has this form,
\[ \frac{\sqrt{n}\left(\bar{y} - \mu\right)}{S} \]
Dividing the numerator and denominator by \(\sigma\), the numerator becomes a standard normal, while \(S/\sigma = \sqrt{\chi^2_{n - 1}/\left(n - 1\right)}\) by the chi-square fact above, so this statistic follows a \(t_{n - 1}\) distribution.
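The same kind of sanity check works for the \(t\) statistic; again, the parameter choices below are arbitrary.

```python
# Minimal sketch: sqrt(n) * (ybar - mu) / S should match a t distribution
# with n - 1 d.f. Parameter values are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, mu, sigma, reps = 8, 0.0, 3.0, 100_000

y = rng.normal(mu, sigma, size=(reps, n))
t_stat = np.sqrt(n) * (y.mean(axis=1) - mu) / y.std(axis=1, ddof=1)

probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(t_stat, probs))    # empirical quantiles
print(stats.t.ppf(probs, df=n - 1))  # theoretical t quantiles
```

Note the heavier tails: at small \(n\), the outer quantiles sit noticeably further out than the corresponding standard normal quantiles.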
F-Distribution. This distribution arises as the ratio of two independent chi-squares, each scaled by its degrees of freedom:
\[ F_{u, v} = \frac{\frac{1}{u}\chi^2_u}{\frac{1}{v}\chi^2_v} \]
Since chi-squares arise whenever we have sums of squares, this distribution will come in handy whenever we need to compare two different sums of squares.
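Finally, the definition translates directly into a simulation sketch, with arbitrary degrees of freedom \(u\) and \(v\):

```python
# Minimal sketch: the ratio of two independent chi-squares, each divided by
# its d.f., should match an F(u, v) distribution. u and v are arbitrary here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
u, v, reps = 4, 12, 100_000

num = rng.chisquare(u, size=reps) / u  # chi-square_u / u
den = rng.chisquare(v, size=reps) / v  # chi-square_v / v
f_ratio = num / den

probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
print(np.quantile(f_ratio, probs))  # empirical quantiles
print(stats.f.ppf(probs, u, v))     # theoretical F(u, v) quantiles
```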