Theory 1 - Significance testing

Significance test

Ingredients of a significance test (unary hypothesis test):

  • Null hypothesis event $H_0$
    • Identify a Claim
    • Then: $H_0$ is the background assumption (supposing the Claim isn't known)
    • Goal is to invalidate $H_0$ in favor of the Claim
  • Rejection Region (decision rule): an event $R$
    • $R$ is unlikely assuming $H_0$
    • Directionality: $R$ is more likely if the Claim holds
    • Write $R$ in terms of a decision statistic $X$ and a significance level $\alpha$
  • Ability to compute $P(R \mid H_0)$
    • Usually: inferred from $P_{X \mid H_0}$ or $f_{X \mid H_0}$
    • Adjust $R$ to achieve $P(R \mid H_0) \le \alpha$ (a worked sketch follows this list)
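
To make the ingredients concrete, here is a minimal sketch in Python, assuming a made-up coin-bias scenario (the Claim is "the coin favors heads"; $H_0$ is "the coin is fair"):

```python
# Minimal sketch of a significance test; all numbers are illustrative.
# Claim: the coin favors heads.  H0: the coin is fair, p = 0.5.
# Decision statistic X: number of heads in n flips.
# Rejection region R = {X >= c}, with c adjusted so P(R | H0) <= alpha.
from scipy.stats import binom

n, alpha, p0 = 100, 0.05, 0.5

# For integer-valued X, sf(c - 1) = P(X > c - 1) = P(X >= c | H0).
c = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
print(f"reject H0 when X >= {c}")                    # c = 59
print(f"P(R | H0) = {binom.sf(c - 1, n, p0):.4f}")   # about 0.0443
```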

Significance level

Suppose we are given a null hypothesis $H_0$ and a rejection region $R$.

The significance level of $R$ is:

$$\alpha = P(R \mid H_0)$$

Sometimes the condition is dropped and we write $\alpha = P(R)$, e.g. when no background model without the assumption of $H_0$ is known.

Null hypothesis implies a distribution

Frequently $H_0$ will not take the form of an event in a sample space, $H_0 \subseteq S$.

Usually the sample space is unspecified, yet $H_0$ determines a known distribution.

At a minimum, the assumption of $H_0$ must determine the numbers $P_{X \mid H_0}(x)$ (or a density $f_{X \mid H_0}(x)$).

Beyond that, we do not need these details:

  • Background sample space $S$
  • Non-conditional distribution (full model): $P_X$ or $f_X$
  • Complement conditionals: $P_{X \mid H_0^c}$ or $f_{X \mid H_0^c}$

In basic statistical inference theory, there are two kinds of error.

  • A Type I error is rejecting $H_0$ when $H_0$ is true.
  • A Type II error is maintaining $H_0$ when $H_0$ is false.

Type I error is usually the bigger problem: think "innocent until proven guilty."

                            $H_0$ is true               $H_0$ is false
Maintain null hypothesis    Made right call             Wrong acceptance (Type II)
Reject null hypothesis      Wrong rejection (Type I)    Made right call

To design a significance test at significance level $\alpha$, we must identify $H_0$, and specify $R$ with the property that $P(R \mid H_0) \le \alpha$.

When $R$ is written using a decision statistic $X$, we must choose between (thresholds are computed in the sketch below):

  • One-tail rejection region: $R = \{X \ge c\}$ with $P(X \ge c \mid H_0) \le \alpha$, or $R = \{X \le c\}$ with $P(X \le c \mid H_0) \le \alpha$
  • Two-tail rejection region: $R = \{X \le c_-\} \cup \{X \ge c_+\}$ with $P(R \mid H_0) \le \alpha$, usually with $\alpha/2$ per tail
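
As a sketch, assuming a standard normal decision statistic under $H_0$ (an illustrative model, not implied by the theory above), the two kinds of threshold at $\alpha = 0.05$ can be computed like this:

```python
# Sketch: thresholds for a decision statistic X ~ N(0, 1) under H0
# (the normal model is an assumption made for illustration).
from scipy.stats import norm

alpha = 0.05

# One-tail: R = {X >= c} with P(X >= c | H0) = alpha.
c_one = norm.isf(alpha)        # about 1.645

# Two-tail: R = {|X| >= c} with alpha/2 in each tail.
c_two = norm.isf(alpha / 2)    # about 1.960

print(c_one, c_two)
```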

Theory 2 - Binary testing, MAP and ML

Binary hypothesis test

Ingredients of a binary hypothesis test:

  • Complementary hypotheses $H_0$ and $H_1$
    • Maybe also know the prior probabilities $P(H_0)$ and $P(H_1)$
    • Goal: determine which case we are in, $H_0$ or $H_1$
  • Decision rule made of complementary events $A_0$ and $A_1$
    • $A_0$ is likely given $H_0$, while $A_1$ is likely given $H_1$
    • Decision rule: outcome in $A_0$, accept $H_0$; outcome in $A_1$, accept $H_1$
    • Usually: written in terms of a decision statistic $X$ using a design
    • We cover three designs:
      • MAP and ML (minimize "error probability")
      • MC (minimizes "error cost")
    • Designs use $P_{X \mid H_0}$ and $P_{X \mid H_1}$ (or $f_{X \mid H_0}$, $f_{X \mid H_1}$) to construct $A_0$ and $A_1$

MAP design

Suppose we know:

  • Both prior probabilities $P(H_0)$ and $P(H_1)$
  • Both conditional distributions $P_{X \mid H_0}$ and $P_{X \mid H_1}$ (or $f_{X \mid H_0}$ and $f_{X \mid H_1}$)

The maximum a posteriori probability (MAP) design for a decision statistic $X$:

Discrete case:

$$A_1 = \left\{ x : \frac{P_{X \mid H_1}(x)}{P_{X \mid H_0}(x)} \ge \frac{P(H_0)}{P(H_1)} \right\}$$

Continuous case:

$$A_1 = \left\{ x : \frac{f_{X \mid H_1}(x)}{f_{X \mid H_0}(x)} \ge \frac{P(H_0)}{P(H_1)} \right\}$$

Then $A_0 = A_1^c$.

The MAP design minimizes the total probability of error.
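
A minimal sketch of the discrete MAP rule, using made-up PMFs and priors (the numbers are illustrative, not from any real model):

```python
# MAP design for a discrete decision statistic X taking values 0..4.
# Priors and PMFs below are made-up illustrative numbers.
P_H0, P_H1 = 0.7, 0.3
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]   # P_{X|H0}(x)
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]   # P_{X|H1}(x)

# Cross-multiplied MAP condition: x in A1 when
# P_{X|H1}(x) P(H1) >= P_{X|H0}(x) P(H0).
A1 = {x for x in range(5) if pmf_H1[x] * P_H1 >= pmf_H0[x] * P_H0}
print("A1 =", sorted(A1))   # {3, 4}
```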

ML design

Suppose we know only:

  • Both conditional distributions $P_{X \mid H_0}$ and $P_{X \mid H_1}$ (or $f_{X \mid H_0}$ and $f_{X \mid H_1}$)

The maximum likelihood (ML) design for $X$:

Discrete case:

$$A_1 = \{ x : P_{X \mid H_1}(x) \ge P_{X \mid H_0}(x) \}$$

(continuous case: the same with $f$ in place of $P$)

ML is a simplified version of MAP. (Set $P(H_0)$ and $P(H_1)$ to $1/2$.)
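
On the same made-up PMFs as the MAP sketch above, the ML rule drops the priors; note that it admits the tie at $x = 2$, which MAP with the unequal priors did not:

```python
# ML design on the same made-up PMFs as the MAP sketch above.
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]

A1 = {x for x in range(5) if pmf_H1[x] >= pmf_H0[x]}
print("A1 =", sorted(A1))   # {2, 3, 4}; x = 2 is a tie, sent to A1 here
```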


The probability of a false alarm, a Type I error, is called $P_{FA} = P(A_1 \mid H_0)$.

The probability of a miss, a Type II error, is called $P_{MISS} = P(A_0 \mid H_1)$.

Total probability of error:

$$P_{ERR} = P_{FA}\,P(H_0) + P_{MISS}\,P(H_1)$$
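
Continuing the made-up discrete example from the MAP sketch ($A_1 = \{3, 4\}$), the three error quantities come out as:

```python
# Error probabilities for the MAP sketch above (made-up numbers).
P_H0, P_H1 = 0.7, 0.3
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]
A1 = {3, 4}   # MAP region computed earlier

P_FA   = sum(pmf_H0[x] for x in A1)                       # P(A1 | H0) = 0.15
P_MISS = sum(pmf_H1[x] for x in range(5) if x not in A1)  # P(A0 | H1) = 0.30
P_ERR  = P_FA * P_H0 + P_MISS * P_H1                      # 0.195
print(P_FA, P_MISS, P_ERR)
```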

False alarm ≠ false alarm

Suppose $A_1$ sets off a smoke alarm, and $H_0$ is 'no fire' and $H_1$ is 'yes fire'.

Then $P(A_1 \mid H_0)$ is the probability that we get an alarm assuming there is no fire.

This is not the probability of experiencing a false alarm (no context). That would be $P(A_1 \cap H_0)$.

This is not the probability of a given alarm being a false one. That would be $P(H_0 \mid A_1)$.
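
A short sketch with made-up smoke-alarm numbers separates the three quantities:

```python
# Made-up smoke-alarm numbers, purely for illustration.
p_no_fire = 0.99              # P(H0)
p_fire    = 0.01              # P(H1)
p_alarm_given_no_fire = 0.05  # P(A1 | H0), the P_FA sense
p_alarm_given_fire    = 0.95  # P(A1 | H1)

p_alarm_and_no_fire = p_alarm_given_no_fire * p_no_fire    # P(A1 ∩ H0)
p_alarm = p_alarm_and_no_fire + p_alarm_given_fire * p_fire
p_no_fire_given_alarm = p_alarm_and_no_fire / p_alarm      # P(H0 | A1), Bayes

print(p_alarm_and_no_fire)      # 0.0495: experiencing a false alarm
print(p_no_fire_given_alarm)    # about 0.839: a given alarm is false
```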

Theory 3 - MAP criterion proof

Explanation of MAP criterion - discrete case

First, we show that the MAP design selects $x$ for $A_1$ exactly for those $x$ which render $H_1$ more likely than $H_0$, i.e. those with $P(H_1 \mid X = x) \ge P(H_0 \mid X = x)$.

Observe this Calculation (Bayes' theorem):

$$P(H_i \mid X = x) = \frac{P_{X \mid H_i}(x)\,P(H_i)}{P_X(x)}$$

Now, take the condition for $x \in A_1$, and cross-multiply:

$$P_{X \mid H_1}(x)\,P(H_1) \ge P_{X \mid H_0}(x)\,P(H_0)$$

Divide both sides by $P_X(x)$ and apply the above Calculation in reverse:

$$P(H_1 \mid X = x) \ge P(H_0 \mid X = x)$$

This is what we sought to prove.


Next, we verify that the MAP design minimizes the total probability of error.

The total probability of error is:

$$P_{ERR} = P(A_1 \mid H_0)\,P(H_0) + P(A_0 \mid H_1)\,P(H_1)$$

Expand this with summation notation (assuming the discrete case):

$$P_{ERR} = \sum_{x \in A_1} P_{X \mid H_0}(x)\,P(H_0) + \sum_{x \in A_0} P_{X \mid H_1}(x)\,P(H_1)$$

Now, how do we choose the set $A_1$ (and thus $A_0 = A_1^c$) in such a way that this sum is minimized?

Since all terms are nonnegative, and any $x$ may be placed in $A_1$ or in $A_0$ freely and independently of all other choices, the total sum is minimized by minimizing the contribution of each $x$ separately. (Placing $x$ in $A_1$ contributes $P_{X \mid H_0}(x)\,P(H_0)$; placing it in $A_0$ contributes $P_{X \mid H_1}(x)\,P(H_1)$.)

So, for each $x$, we place it in $A_1$ if:

$$P_{X \mid H_1}(x)\,P(H_1) \ge P_{X \mid H_0}(x)\,P(H_0)$$

That is equivalent to the MAP condition.
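
As a sanity check on this argument, a brute-force search over all $2^5$ candidate regions in the made-up example above confirms that the MAP region attains the minimum:

```python
# Brute-force check that the MAP region minimizes P_ERR (made-up numbers
# from the MAP sketch above).
from itertools import chain, combinations

P_H0, P_H1 = 0.7, 0.3
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]

def p_err(A1):
    return (sum(pmf_H0[x] for x in A1) * P_H0
            + sum(pmf_H1[x] for x in range(5) if x not in A1) * P_H1)

all_regions = chain.from_iterable(combinations(range(5), r) for r in range(6))
best = min(all_regions, key=p_err)
print(sorted(best), p_err(best))   # [3, 4] 0.195, matching the MAP rule
```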

Theory 4 - MC design

  • Write $C_{10}$ for the cost of a false alarm, i.e. the cost when $H_0$ is true but we decided $H_1$.
    • The probability of incurring cost $C_{10}$ is $P(A_1 \cap H_0)$.
  • Write $C_{01}$ for the cost of a miss, i.e. the cost when $H_1$ is true but we decided $H_0$.
    • The probability of incurring cost $C_{01}$ is $P(A_0 \cap H_1)$.

Expected value of cost incurred:

$$E[C] = C_{10}\,P(A_1 \cap H_0) + C_{01}\,P(A_0 \cap H_1)$$

MC design

Suppose we know:

  • Both prior probabilities $P(H_0)$ and $P(H_1)$
  • Both conditional distributions $P_{X \mid H_0}$ and $P_{X \mid H_1}$ (or $f_{X \mid H_0}$ and $f_{X \mid H_1}$)

The minimum cost (MC) design for a decision statistic $X$:

Discrete case:

$$A_1 = \left\{ x : \frac{P_{X \mid H_1}(x)}{P_{X \mid H_0}(x)} \ge \frac{P(H_0)\,C_{10}}{P(H_1)\,C_{01}} \right\}$$

Continuous case:

$$A_1 = \left\{ x : \frac{f_{X \mid H_1}(x)}{f_{X \mid H_0}(x)} \ge \frac{P(H_0)\,C_{10}}{P(H_1)\,C_{01}} \right\}$$

Then $A_0 = A_1^c$.

The MC design minimizes the expected value of the cost of error.
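
A sketch of the MC rule on the same made-up model as before, assuming a miss costs ten times a false alarm (the costs are illustrative):

```python
# MC design on the made-up model from the MAP sketch, with illustrative costs.
P_H0, P_H1 = 0.7, 0.3
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]
C10, C01 = 1.0, 10.0   # cost of false alarm, cost of miss

# Cross-multiplied MC condition: x in A1 when
# P_{X|H1}(x) P(H1) C01 >= P_{X|H0}(x) P(H0) C10.
A1 = {x for x in range(5)
      if pmf_H1[x] * P_H1 * C01 >= pmf_H0[x] * P_H0 * C10}
print("A1 =", sorted(A1))   # {1, 2, 3, 4}
```

Raising the cost of a miss pushes the threshold down, so more outcomes trigger the $H_1$ decision than under MAP, which chose $A_1 = \{3, 4\}$ on the same numbers.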

MC minimizes expected cost

Inside the argument that MAP minimizes the total probability of error, we have this summation:

$$P_{ERR} = \sum_{x \in A_1} P_{X \mid H_0}(x)\,P(H_0) + \sum_{x \in A_0} P_{X \mid H_1}(x)\,P(H_1)$$

The expected value of the cost has a similar summation:

$$E[C] = \sum_{x \in A_1} C_{10}\,P_{X \mid H_0}(x)\,P(H_0) + \sum_{x \in A_0} C_{01}\,P_{X \mid H_1}(x)\,P(H_1)$$

Following the same reasoning, we see that the cost is minimized if each $x$ is placed into $A_1$ precisely when the MC design condition is satisfied, and otherwise placed into $A_0$.
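
Mirroring the MAP check, a brute-force search confirms that the MC region minimizes the expected cost in the made-up example:

```python
# Brute-force check that the MC region minimizes E[C] (made-up numbers).
from itertools import chain, combinations

P_H0, P_H1 = 0.7, 0.3
pmf_H0 = [0.40, 0.30, 0.15, 0.10, 0.05]
pmf_H1 = [0.05, 0.10, 0.15, 0.30, 0.40]
C10, C01 = 1.0, 10.0

def expected_cost(A1):
    return (C10 * sum(pmf_H0[x] for x in A1) * P_H0
            + C01 * sum(pmf_H1[x] for x in range(5) if x not in A1) * P_H1)

all_regions = chain.from_iterable(combinations(range(5), r) for r in range(6))
best = min(all_regions, key=expected_cost)
print(sorted(best), expected_cost(best))   # [1, 2, 3, 4] 0.57, matching MC
```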