
P-Value: Interpretation and Pitfalls

 Z2ty6osc12zs6c 2018-05-26


 

P-value: the probability of observing a result at least as extreme as the one actually seen, assuming H0 is true

 

The lower the probability of the observed result, the more significant it is. When the p-value falls below the chosen significance level, we take this as sufficient evidence for H1 (significant).

This works like proof by contradiction: first assume H0, i.e., that A and B are unrelated.

If the observed result is then extremely improbable under H0 (nearly impossible), reject the null hypothesis H0,

and conclude H1 (a significant relationship exists).

 

Significance cannot prove that anything is true; it can only reject the claim that there is no relationship (i.e., support that the relationship is significant)

Significance cannot quantify the size of a difference

Significance cannot show that a difference is practically meaningful

Significance cannot explain why a difference exists

 

Interpreting p < 0.05 (The Interpretation of the p-Value): if H0 is true, the probability of finding a result this extreme is less than 5%. This does not simply mean that H0 is false or that H1 is true.
A value of p < 0.05 for the null hypothesis has to be interpreted as follows: If
the null hypothesis is true, the chance to find a test statistic as extreme as or more
extreme than the one observed is less than 5%. This is not the same as saying that
the null hypothesis is false, and even less so, that an alternative hypothesis is true!
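This interpretation can be sketched numerically. Assuming the test statistic is standard normal under H0 (an assumption for illustration; the value z = 1.96 is likewise hypothetical), the two-sided p-value follows from the normal CDF:

```python
from statistics import NormalDist

# Hypothetical test statistic; under H0 it is assumed standard normal
z_observed = 1.96

# Two-sided p-value: chance of a statistic at least this extreme under H0
p_value = 2 * (1 - NormalDist().cdf(abs(z_observed)))
print(round(p_value, 3))  # ≈ 0.05
```

A p-value of 0.05 therefore describes how surprising the data would be *if H0 were true*; it says nothing directly about whether H0 is true.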

http://www.360doc.com/content/15/0704/22/22175932_482657194.shtml
Pitfalls of the p-Value
 

Pitfalls in the Interpretation of p-Values
In other words, p-values measure evidence for a hypothesis. Unfortunately, they are
often incorrectly viewed as an error probability for rejection of the hypothesis, or,
even worse, as the posterior probability (i.e., after the data have been collected) that
the hypothesis is true. As an example, take the case where the alternative hypothesis
is that the mean is just a fraction of one standard deviation larger than the mean
under the null hypothesis: in that case, a sample that produces a p-value of 0.05 may
just as likely be produced if the alternative hypothesis is true as if the null hypothesis
is true!
P-values measure evidence for a hypothesis. Unfortunately, they are often misinterpreted: for example, as the probability of making an error when rejecting H0, or as the probability that H0 is true.
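The quoted pitfall can be made concrete with a small likelihood comparison. This is a sketch under stated assumptions: a standard normal test statistic under H0, and a hypothetical alternative whose mean is shifted by half a standard deviation (both numbers chosen only for illustration). A statistic with p ≈ 0.05 turns out to be only modestly more likely under that alternative than under the null:

```python
from statistics import NormalDist

z = 1.96     # a test statistic giving a two-sided p-value of about 0.05
shift = 0.5  # hypothetical alternative: mean shifted by half a standard deviation

density_h0 = NormalDist(0, 1).pdf(z)       # how likely z is under H0
density_h1 = NormalDist(shift, 1).pdf(z)   # how likely z is under H1
print(round(density_h1 / density_h0, 2))   # ≈ 2.35: modest, not decisive, evidence
```

A likelihood ratio near 2 is weak evidence; this is why a single p = 0.05 result cannot be read as "H1 is true" when the alternative is close to the null.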

When is a number so much bigger or smaller than another that it should raise some eyebrows? Tests of significance help make such determinations. This lesson explains the p-value in significance tests, how to calculate them, and how to evaluate the results.

Tests of Significance

Imagine that you want to be the new point guard of your basketball team, but before you try out for the position, you want to make sure you have, pun intended, a real shot at achieving your goal. You shoot 20 free throws and make 12 of them; that's a 60% accuracy rate. You want to know if your accuracy rate, or the observation, is about the same or different than the team's accuracy rate, or the population statistic; enough to replace the old point guard.

You can do a test of significance to ascertain if your accuracy rate is significantly different from that of the team. A significance test measures whether some observed value is similar to the population statistic, or if the difference between them is large enough that it isn't likely to be by coincidence. When the difference between what is observed and what is expected surpasses some critical value, we say there is statistical significance.


P-Value Defined

A standard normal distribution curve represents all of the observations of a single random variable: the highest point of the curve sits over the values closest to the mean, where observations are most likely, while the least likely values lie in the tails, where the area under the curve is smallest.

The p-value is the probability of finding an observed value or a data point relative to all other possible results for the same variable. If the observed value is a value most likely to be found among all possible results, then there is not a statistically significant difference. If, on the other hand, the observed value is a value among unlikely values to be found, then there is a statistically significant difference. The smaller the probability associated with the observed value, the more likely the result is to be significant.

Finding The P-Value

To find the p-value, or the probability associated with a specific observation, you must first calculate the z score, also known as the test statistic.

The formula for finding the test statistic depends on whether the data includes means or proportions. The formulas we'll discuss assume a:

  1. Single sample significance test
  2. Normal distribution
  3. Large sample size.

When dealing with means, the z score is a function of the observed value (x-bar), population mean (mu), standard deviation (s), and the number of the observations (n).
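With hypothetical numbers (the text gives none), the mean-based z score is a one-liner:

```python
import math

# Hypothetical numbers chosen for illustration
x_bar = 105.0   # sample mean (x-bar)
mu = 100.0      # population mean under H0 (mu)
s = 15.0        # standard deviation (s)
n = 36          # number of observations (n)

z = (x_bar - mu) / (s / math.sqrt(n))
print(z)  # 2.0
```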

When dealing with proportions, the z score is a function of the observed proportion (p-hat), the hypothesized population proportion (p), the probability of failure (q = 1 - p), and the number of trials (n).

After calculating the z score, you must look up the probability associated with that score on a Standard Normal Probabilities Table. This probability is the p-value or the probability of finding the observed value compared to all possible results. The p-value is then compared to the critical value to determine statistical significance.
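Putting the proportion formula and the table lookup together for the lesson's free-throw example (12 of 20 made, so p-hat = 0.6): the team's accuracy is never stated, so p = 0.37 below is a hypothetical value, picked so the result lands near the p ≈ 0.03 used later in the lesson; the normal CDF stands in for the printed table:

```python
import math
from statistics import NormalDist

p_hat = 12 / 20      # observed accuracy: 12 of 20 free throws
p = 0.37             # hypothetical team accuracy (not stated in the lesson)
q = 1 - p            # probability of a miss
n = 20               # number of trials

z = (p_hat - p) / math.sqrt(p * q / n)            # test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided "table lookup"
print(round(z, 2), round(p_value, 3))             # z ≈ 2.13, p ≈ 0.033
```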

The Critical Value

The significance level, denoted by the Greek letter alpha, is established as part of the study design and determines the critical value of the test. If we choose alpha = 0.05, we require an observed data point to be so different from what is expected that it would not be observed more than 5% of the time by chance. An alpha of 0.01 is stricter still: a test statistic beyond the corresponding critical value has less than a 1 in 100 probability of occurring by chance.
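The link between alpha and its critical z value can be sketched with the inverse normal CDF (two-sided case, standard normal assumed):

```python
from statistics import NormalDist

# Two-sided critical z values for two common significance levels
for alpha in (0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # cutoff beyond which p < alpha
    print(f"alpha={alpha}: |z| must exceed {z_crit:.2f}")
```

This recovers the familiar cutoffs of about 1.96 for alpha = 0.05 and about 2.58 for alpha = 0.01.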

The last step in a significance test is to compare the p-value to alpha to determine statistical significance. If the p-value is smaller than alpha, then we can reject the idea that the observed value arose by chance.

What Significance Tells Us

So, let's say your free throw accuracy of 0.6 turns out to have a z score associated with a probability of 0.03 and your alpha is set at 0.05, so p < alpha; then there is a statistically significant difference. We can reject the idea that there is no difference between your accuracy and the team's, and accept the alternative: your shooting accuracy is significantly different from that of the team.

If, on the other hand, the alpha is set at 0.01, then p > alpha and the result is not statistically significant. In this case, your coaches can say, 'Um, sorry. There simply isn't enough evidence to conclude you are way, way better.'

Significance does not:

  • Prove anything is true; it can only disprove that there is no difference
  • Quantify the difference between your accuracy and the team's
  • Magnify how meaningful the difference is between your accuracy and the team's
  • Explain why there was any difference found between your accuracy and the team's
  • Ensure you will be made the new point guard

Lesson Summary

A significance test measures whether some observed value is similar to the population statistic or if the difference between the observed value and the population statistic is large enough that it isn't likely to be a coincidence.

The p-value is the probability of finding an observed value or data point relative to all other possible results for the same variable. To find the p-value, you must first calculate the z score, also known as the test statistic. After calculating the z score, look up the probability associated with that score on a Standard Normal Probabilities Table. The last step in a significance test is to compare the p-value to an established critical value, called alpha, to determine statistical significance.

If the observed value is a value most likely to be found among all possible results, then there is not a statistically significant difference. If, on the other hand, the observed value is among unlikely values, then there is a statistically significant difference.
