Characteristics of Estimators

From Training Material
Revision as of 17:13, 25 November 2014 by Cesar Chew (talk | contribs)
Title: Estimation
Author: Yolande Tra

Learning Objectives

  1. Define bias
  2. Define sampling variability
  3. Define expected value
  4. Define relative efficiency

Characteristics of Estimators

This section discusses two important characteristics of statistics used as point estimates of parameters: bias and sampling variability. Bias refers to whether an estimator tends to either over or underestimate the parameter. Sampling variability refers to how much the estimate varies from sample to sample.

Have you ever noticed that some bathroom scales give you very different weights each time you weigh yourself? With this in mind, let's compare two scales. Scale 1 is a very high-tech digital scale and gives essentially the same weight each time you weigh yourself; it varies by at most 0.02 pounds from weighing to weighing. Although this scale has the potential to be very accurate, it is calibrated incorrectly and, on average, overstates your weight by one pound. Scale 2 is a cheap scale and gives very different results from weighing to weighing. However, it is just as likely to underestimate as to overestimate your weight. Sometimes it vastly overestimates it and sometimes it vastly underestimates it. However, the average of a large number of measurements would be your actual weight. Scale 1 is biased since, on average, its measurements are one pound higher than your actual weight. Scale 2, by contrast, gives unbiased estimates of your weight. However, Scale 2 is highly variable and its measurements are often very far from your true weight. Scale 1, in spite of being biased, is fairly accurate. Its measurements are never more than 1.02 pounds from your actual weight.
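The two scales can be sketched in a short simulation. This is an illustrative sketch, not part of the original text: the true weight, the size of Scale 2's errors, and the number of weighings are all made-up values chosen to mirror the description above.

```python
import random

random.seed(42)
TRUE_WEIGHT = 150.0  # hypothetical true weight in pounds (illustrative value)

# Scale 1: precise but miscalibrated -- reads about 1 lb high,
# varying by at most 0.02 lb from weighing to weighing.
scale1 = [TRUE_WEIGHT + 1.0 + random.uniform(-0.02, 0.02) for _ in range(10_000)]

# Scale 2: unbiased but highly variable -- errors center on zero
# (standard deviation of 5 lb is an assumed illustrative value).
scale2 = [TRUE_WEIGHT + random.gauss(0, 5.0) for _ in range(10_000)]

mean1 = sum(scale1) / len(scale1)
mean2 = sum(scale2) / len(scale2)

print(f"Scale 1 long-run average: {mean1:.2f} lb (about 1 lb too high: biased)")
print(f"Scale 2 long-run average: {mean2:.2f} lb (close to the truth: unbiased)")
print(f"Scale 1 worst single error: {max(abs(w - TRUE_WEIGHT) for w in scale1):.2f} lb")
print(f"Scale 2 worst single error: {max(abs(w - TRUE_WEIGHT) for w in scale2):.2f} lb")
```

Running this shows the trade-off in numbers: Scale 1's average is off by a pound but no single reading is off by more than 1.02 pounds, while Scale 2's average lands near the true weight even though individual readings can miss by many pounds.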

We now turn to more formal definitions of variability and precision. However, the basic ideas are the same as in the bathroom scale example.


Bias

A statistic is biased if the long-term average value of the statistic is not the parameter it is estimating. More formally, a statistic is biased if the mean of the sampling distribution of the statistic is not equal to the parameter. The mean of the sampling distribution of a statistic is sometimes referred to as the expected value of the statistic. As we saw in the section on the sampling distribution of the mean, the mean of the sampling distribution of the (sample) mean is the population mean (μ). Therefore the sample mean is an unbiased estimate of μ. Any given sample mean may underestimate or overestimate μ, but there is no systematic tendency for sample means to either under- or overestimate μ. In the section on variability, we saw that the formula for the variance in a population is

σ² = Σ(X − μ)² / N

whereas the formula to estimate the variance from a sample is

s² = Σ(X − M)² / (N − 1)

Notice that the denominators of the formulas are different: N for the population and N − 1 for the sample (where M is the sample mean). We saw in the "Estimating Variance Simulation" that if N is used in the formula for s², then the estimates tend to be too low and therefore biased. The formula with N − 1 in the denominator gives an unbiased estimate of the population variance. Note that N − 1 is the degrees of freedom.
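The bias from dividing by N can be checked with a quick simulation. This is a sketch, not the original "Estimating Variance Simulation"; the population parameters, sample size, and repetition count are assumed illustrative values.

```python
import random

random.seed(1)

# Build an illustrative population and compute its true variance.
population = [random.gauss(50, 10) for _ in range(100_000)]
mu = sum(population) / len(population)
pop_var = sum((x - mu) ** 2 for x in population) / len(population)

N = 5          # small samples make the bias easy to see
reps = 20_000  # number of repeated samples

biased_estimates = []    # sum of squares divided by N
unbiased_estimates = []  # sum of squares divided by N - 1
for _ in range(reps):
    sample = random.sample(population, N)
    m = sum(sample) / N
    ss = sum((x - m) ** 2 for x in sample)
    biased_estimates.append(ss / N)
    unbiased_estimates.append(ss / (N - 1))

mean_biased = sum(biased_estimates) / reps
mean_unbiased = sum(unbiased_estimates) / reps

print(f"population variance:        {pop_var:.1f}")
print(f"average of N   estimates:   {mean_biased:.1f}  (systematically too low)")
print(f"average of N-1 estimates:   {mean_unbiased:.1f}  (close to the parameter)")
```

Averaged over many samples, the divide-by-N estimates come in noticeably below the population variance (by a factor of roughly (N − 1)/N), while the divide-by-(N − 1) estimates center on it.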


Sampling Variability

The sampling variability of a statistic refers to how much the statistic varies from sample to sample and is usually measured by its standard error; the smaller the standard error, the less the sampling variability. For example, the standard error of the mean is a measure of the sampling variability of the mean. Recall that the formula for the standard error of the mean is

σ_M = σ / √N

The larger the sample size (N), the smaller the standard error of the mean and therefore the lower the sampling variability. Statistics differ in their sampling variability even with the same sample size. For example, for normal distributions, the standard error of the median is larger than the standard error of the mean. The smaller the standard error of a statistic, the more efficient the statistic. The relative efficiency of two statistics is typically defined as the ratio of their standard errors. However, it is sometimes defined as the ratio of their squared standard errors.
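Both claims above can be checked empirically: that the standard error of the mean is about σ/√N, and that for a normal distribution the median has a larger standard error than the mean. The following sketch simulates many samples from a standard normal distribution; the sample size and repetition count are assumed illustrative values.

```python
import math
import random
import statistics

random.seed(7)

N = 25         # sample size (illustrative)
reps = 20_000  # number of repeated samples

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(N)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Empirical standard errors: the SD of each statistic across samples.
se_mean = statistics.stdev(means)
se_median = statistics.stdev(medians)

print(f"SE of mean:   {se_mean:.3f}  (theory: σ/√N = {1 / math.sqrt(N):.3f})")
print(f"SE of median: {se_median:.3f}  (larger, as expected for normal data)")
print(f"relative efficiency (ratio of SEs): {se_mean / se_median:.2f}")
```

The ratio of standard errors comes out below 1, showing that the mean is the more efficient estimator of the center of a normal distribution.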


Questions

1 You are playing "Pin the Tail on the Donkey" at your friend's birthday party. While blindfolded, you have three tries to pin the tail in the correct location. All three times you pin it about a foot too low, and it lands on the donkey's back hooves. Select all of the terms that describe your location estimation.

unbiased
biased
variable
not variable

Answer >>

You tried to estimate where the donkey's tail should have gone. Your estimates were biased because you did not pin them on the correct spot; they were uniformly too low. However, your estimates did not vary much because they were all close to each other.


2 In the population, a parameter has a value of 15. Based on the means and standard errors of their sampling distributions, which of these estimators shows the most bias?

Mean = 14, SE = 2
Mean = 8, SE = 2
Mean = 15, SE = 6
Mean = 20, SE = 1

Answer >>

Bias refers to whether an estimator tends to either over or underestimate the parameter. In this case, the estimator with the sampling distribution with a mean of 8 is the most biased because it tends to be the most different from the population parameter.


3 In the population, a parameter has a value of 10. Based on the means and standard errors of their sampling distributions, which of these statistics estimates this parameter with the least sampling variability?

Mean = 10, SE = 5
Mean = 9, SE = 4
Mean = 11, SE = 2
Mean = 13, SE = 3

Answer >>

A statistic's sampling variability is usually measured by its standard error; the smaller the standard error, the less the sampling variability. In this case, the sampling distribution with a standard error of 2 has the least sampling variability.


4 In a population, a parameter called "mobent" has a value of 9. If the statistic estimating mobent is unbiased, what is its expected value?

Answer >>

Although this is a fictional parameter, the same principle applies. A statistic is unbiased if the mean of the sampling distribution of the statistic, also known as the expected value, is equal to the parameter. Therefore, an unbiased statistic would have an expected value of 9.