Tests About a Population Mean

Posted by Beetle B. on Tue 18 July 2017

\(\newcommand{\Cov}{\mathrm{Cov}}\) \(\newcommand{\Corr}{\mathrm{Corr}}\) \(\newcommand{\Sample}{X_{1},\dots,X_{n}}\)

Steps to Carry Out The Experiment

  1. Identify the parameter of interest.
  2. Determine the null value and state the null hypothesis.
  3. State the appropriate alternative hypothesis.
  4. Give the formula for the test statistic. Use the null value, but nothing from the sample!
  5. State the rejection region for your choice of \(\alpha\).
  6. Compute sample quantities, and calculate the value.
  7. Reject \(H_{0}\) or not.

Very important: steps 2 and 3 above should be done prior to examining any data.

Case I: A Normal Population With Known \(\sigma\)

Rarely do we know \(\sigma\). But assume we do.

The Null Hypothesis: \(H_{0}:\mu=\mu_{0}\)

The test statistic: \(Z=\frac{\bar{X}-\mu_{0}}{\sigma/\sqrt{n}}\)

  • If \(H_{a}:\mu>\mu_{0}\), the criterion is \(z\ge z_{\alpha}\) (you pick \(\alpha\)).
  • If \(H_{a}:\mu<\mu_{0}\), the criterion is \(z\le-z_{\alpha}\)
  • If \(H_{a}:\mu\ne\mu_{0}\), the criterion is \(z\ge z_{\alpha/2}\) or \(z\le-z_{\alpha/2}\)
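The Case I procedure translates directly into code. A minimal sketch, using only the standard library; the example numbers (\(\bar{x}=72.3\), \(\mu_{0}=70\), \(\sigma=9\), \(n=36\)) are invented for illustration:

```python
import math
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n, alpha=0.05, alternative="two-sided"):
    """One-sample z-test for a normal population with known sigma.

    Returns the computed z statistic and whether H0: mu = mu0 is rejected.
    """
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    if alternative == "greater":               # H_a: mu > mu0
        reject = z >= NormalDist().inv_cdf(1 - alpha)
    elif alternative == "less":                # H_a: mu < mu0
        reject = z <= -NormalDist().inv_cdf(1 - alpha)
    else:                                      # H_a: mu != mu0
        reject = abs(z) >= NormalDist().inv_cdf(1 - alpha / 2)
    return z, reject

# Upper-tailed test at alpha = 0.05:
z, reject = z_test(72.3, 70, 9, 36, alpha=0.05, alternative="greater")
# z ≈ 1.533, below z_{0.05} ≈ 1.645, so H0 is not rejected
```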

\(\beta\) and Sample Size Determination

We usually don’t have a simple formula for \(\beta\), but we do in this case. Because of that, we can fix \(\alpha\) and \(\beta\) and calculate the needed \(n\).

For \(H_{a}:\mu>\mu_{0}\) and the actual \(\mu=\mu'\ne\mu_{0}\):

\begin{equation*} \beta(\mu')=\Phi\left(z_{\alpha}+\frac{\mu_{0}-\mu'}{\sigma/\sqrt{n}}\right) \end{equation*}

For \(H_{a}:\mu<\mu_{0}\) and \(\mu=\mu'\):

\begin{equation*} \beta(\mu')=1-\Phi\left(-z_{\alpha}+\frac{\mu_{0}-\mu'}{\sigma/\sqrt{n}}\right) \end{equation*}

For \(H_{a}:\mu\ne\mu_{0}\) and \(\mu=\mu'\):

\begin{equation*} \beta(\mu')=\Phi\left(z_{\alpha/2}+\frac{\mu_{0}-\mu'}{\sigma/\sqrt{n}}\right)-\Phi\left(-z_{\alpha/2}+\frac{\mu_{0}-\mu'}{\sigma/\sqrt{n}}\right) \end{equation*}
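The three expressions for \(\beta(\mu')\) can be computed directly. A sketch with the standard library; the example numbers (\(\mu_{0}=70\), \(\mu'=72\), \(\sigma=9\), \(n=36\)) are invented:

```python
import math
from statistics import NormalDist

def beta(mu_prime, mu0, sigma, n, alpha=0.05, alternative="greater"):
    """Type II error probability at the true mean mu_prime (known-sigma z-test)."""
    Phi = NormalDist().cdf
    shift = (mu0 - mu_prime) / (sigma / math.sqrt(n))
    if alternative == "greater":               # H_a: mu > mu0
        return Phi(NormalDist().inv_cdf(1 - alpha) + shift)
    if alternative == "less":                  # H_a: mu < mu0
        return 1 - Phi(-NormalDist().inv_cdf(1 - alpha) + shift)
    z_a2 = NormalDist().inv_cdf(1 - alpha / 2) # H_a: mu != mu0
    return Phi(z_a2 + shift) - Phi(-z_a2 + shift)

# Upper-tailed test, alpha = 0.05:
b = beta(72, 70, 9, 36, alpha=0.05, alternative="greater")
# β ≈ 0.62, so the power at mu' = 72 is only about 0.38
```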

Given \(\alpha\) and \(\beta(\mu')=\beta\), \(n\) for one-tailed is:

\begin{equation*} n=\left[\frac{\sigma(z_{\alpha}+z_{\beta})}{\mu_{0}-\mu'}\right]^{2} \end{equation*}

And for two-tailed:

\begin{equation*} n=\left[\frac{\sigma(z_{\alpha/2}+z_{\beta})}{\mu_{0}-\mu'}\right]^{2} \end{equation*}
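A sketch of the sample-size calculation, rounding up since \(n\) must be a whole number (the example values are invented):

```python
import math
from statistics import NormalDist

def sample_size(mu0, mu_prime, sigma, alpha, beta, two_tailed=False):
    """Smallest n achieving type I error alpha and type II error beta at mu'."""
    inv = NormalDist().inv_cdf
    z_a = inv(1 - (alpha / 2 if two_tailed else alpha))
    z_b = inv(1 - beta)
    n = (sigma * (z_a + z_b) / (mu0 - mu_prime)) ** 2
    return math.ceil(n)  # round up to the next whole observation

# mu0 = 70, mu' = 72, sigma = 9, alpha = 0.05, beta = 0.10, one-tailed:
n = sample_size(70, 72, 9, 0.05, 0.10)
# n = 174
```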

These follow by writing \(-z_{\beta}\) for the \(z\) critical value that captures lower-tail area \(\beta\). For the one-tailed case,

\begin{equation*} -z_{\beta}=z_{\alpha}+\frac{\mu_{0}-\mu'}{\sigma/\sqrt{n}} \end{equation*}

Solving this equation for \(n\) gives the formula above.

For a fixed value \(\mu'\), \(\beta(\mu')\rightarrow0\) as \(n\rightarrow\infty\) for both one- and two-tailed tests on a normal population with known \(\sigma\).
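A quick numerical check of this, reusing the upper-tailed \(\beta\) formula with invented values \(\mu_{0}=70\), \(\mu'=72\), \(\sigma=9\), \(\alpha=0.05\):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf
z_a = NormalDist().inv_cdf(0.95)  # alpha = 0.05, upper-tailed

def beta_upper(n):
    """β(mu') for H_a: mu > mu0 with mu0 = 70, mu' = 72, sigma = 9."""
    return Phi(z_a + (70 - 72) / (9 / math.sqrt(n)))

values = [beta_upper(n) for n in (25, 100, 400, 1600)]
# β shrinks toward 0 as n grows
```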

Case II: Large Sample Tests

Let \(n>40\), and assume we know neither the distribution nor \(\sigma\). For a sample this large, \(s\) is very likely close to \(\sigma\).

So \(Z=\frac{\bar{X}-\mu_{0}}{s/\sqrt{n}}\) is approximately standard normal when \(H_{0}\) is true.

Use the rules for Case I.

Case III: A Normal Population Distribution

What if \(n\) is small? We proceed by assuming the population is normal (which may be a bad assumption). The main difference is that we now use the t-distribution.


\begin{equation*} t=\frac{\bar{X}-\mu_{0}}{s/\sqrt{n}} \end{equation*}
  • \(H_{a}:\mu>\mu_{0}\implies t\ge t_{\alpha,n-1}\)
  • \(H_{a}:\mu<\mu_{0}\implies t\le-t_{\alpha,n-1}\)
  • \(H_{a}:\mu\ne\mu_{0}\implies t\ge t_{\alpha/2,n-1}\) or \(t\le-t_{\alpha/2,n-1}\)
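A worked sketch of the one-sample t-test on a small made-up data set, comparing against the tabled critical value \(t_{0.05,4}=2.132\):

```python
import math
from statistics import mean, stdev

# Hypothetical data: n = 5 measurements, testing H0: mu = 20 vs H_a: mu > 20.
data = [20.8, 21.2, 19.9, 21.5, 20.6]
mu0 = 20.0
n = len(data)

# t statistic: (xbar - mu0) / (s / sqrt(n)), with s the sample std deviation
t = (mean(data) - mu0) / (stdev(data) / math.sqrt(n))

# Compare against t_{0.05, n-1} = t_{0.05, 4} = 2.132 from a t table.
reject = t >= 2.132
# t ≈ 2.92 > 2.132, so H0 is rejected at alpha = 0.05
```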

\(\beta\) and Sample Size Determination

For a t-distribution, to determine \(n\) given \(\beta\), consult the relevant plots or tables.


If the population is normal and \(n\) is large, then \(S\) is approximately normal with \(E(S)\approx\sigma\) and \(V(S)\approx\frac{\sigma^{2}}{2n}\). Moreover, \(\bar{X}\) and \(S\) are independent random variables.