Financial Statistics
Posted: Sun May 09, 2004 4:11 am
I am continually amazed when I look at the statistics associated with economics. There seems to be a total indifference to the power (or sensitivity) of statistical tests.
It is very easy, sometimes trivial, to apply a test that fails to detect a statistical difference. It is quite another matter to show that any difference that exists must be very small, or else it would have been detected (with a high level of confidence).
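The distinction is easy to see in a simulation. The sketch below (purely illustrative; the effect size, sample sizes, and trial count are my own assumptions, not anything from the data discussed here) shows that the very same real difference is almost never detected with a small sample but is detected nearly every time with a large one. A "not significant" result from the small sample says almost nothing.

```python
# Illustration of statistical power: a real difference can exist, yet a
# low-power test will usually fail to detect it. Stdlib only.
import math
import random
import statistics

random.seed(42)

def z_test_rejects(sample_a, sample_b):
    """Two-sample z-test (normal approximation).
    Returns True if the null hypothesis of equal means is rejected at 5%."""
    n_a, n_b = len(sample_a), len(sample_b)
    se = math.sqrt(statistics.variance(sample_a) / n_a +
                   statistics.variance(sample_b) / n_b)
    z = (statistics.mean(sample_a) - statistics.mean(sample_b)) / se
    return abs(z) > 1.96  # two-sided 5% critical value

def power(n, true_diff, trials=2000):
    """Fraction of simulated experiments that detect a true mean difference
    of `true_diff` (in units of one standard deviation), with n per group."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(true_diff, 1.0) for _ in range(n)]
        b = [random.gauss(0.0, 1.0) for _ in range(n)]
        if z_test_rejects(a, b):
            hits += 1
    return hits / trials

# The same real effect (0.3 standard deviations), very different detection rates.
print(f"power with n=20 per group:  {power(20, 0.3):.2f}")   # low
print(f"power with n=500 per group: {power(500, 0.3):.2f}")  # high
```

With 20 observations per group the test detects this effect only a small fraction of the time; with 500 it detects it almost always. Reporting "no significant difference" without reporting the power of the test is exactly the practice complained about above.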
I attribute this to an overreaction to the exaggerated claims of some, especially salesmen, and the intimidating atmosphere faced by academics. For example, I doubt seriously that either Professor Shiller or Professor Campbell really believes that their findings (about P/E10) need as many qualifiers as they provide. It is just that they are conditioned to expect a hostile response whenever they report finding something of value.
A more balanced view would be to acknowledge the existence of routine errors while allowing a greater tolerance when findings are solidly supported by common sense. It is one thing to find obscure relationships associated with great returns in the past and to project their applicability into the future. It is very much another thing to look at something such as P/E10 and conclude that valuations matter.
Whenever we look at data, we are aware of what we cannot do as a practical matter. This much is not limited to economic statistics. But with economic results, we are forced to accept history as it is. We cannot run controlled experiments (with rare exceptions). Very often, we do not know what to look for until we have looked. That puts us in a difficult position since it is best to ask all questions before examining the data. Not having asked the question beforehand, we could easily prevent ourselves from reporting the obvious. There are ways of handling such situations, none of them ideal, but which are sufficiently objective so as to allow us to reach reasonable conclusions. We should not always restrict ourselves to extremely defensive arguments.
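One standard, objective way of handling questions asked only after looking at the data is to penalize the number of comparisons examined. A minimal sketch of the Bonferroni correction (the p-values below are made up for illustration; this is one of the "ways of handling such situations," not the only one):

```python
# Bonferroni correction: when many questions are asked of the same data,
# require a stricter per-question threshold so that chance findings
# are not over-reported as discoveries.
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each hypothesis, whether it survives the adjusted threshold
    alpha / (number of hypotheses examined)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Ten relationships examined after the fact; only the strongest survives,
# because the threshold tightens from 0.05 to 0.05 / 10 = 0.005.
p_values = [0.004, 0.03, 0.04, 0.2, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
print(bonferroni_significant(p_values))
```

The correction is deliberately conservative, which is the trade-off: it protects against data mining at the cost of some power, and, as noted, none of these remedies is ideal.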
I have recently reread The Superinvestors of Graham-and-Doddsville, an edited transcript of Warren Buffett's 1984 talk at Columbia University. Buffett presented overwhelming evidence that there really are people skilled in selecting stocks, that their high returns are not simply a side effect of randomness, and that prices really do matter. His evidence is sufficient to satisfy even the most demanding honest statistician, and it is audited. Yet I continue to read assertions that skill does not exist because some insensitive statistical test fails to show significance. Such concepts as efficient markets and random walks are helpful when they are used properly: as first approximations. To insist that they are relevant at all times is folly.
Have fun.
John R.