ataloss wrote: anyway, I think we all agree that nfs is right and jwr is wrong
That's just silly. I try to make well-grounded arguments, but it's hardly rare for them to be cut down.
The problem I have with jwr in particular is that he thinks publishing numbers out of a calculator is evidence of insight. The screed he put up on his website the other day re linearity runs 2500+ words, better than 3/4 of which is a recitation of linear regression coefficients. It is useless to anyone reading it.
If you know any statistics at all, you know that the whole thing could have been presented as a table on half a page. You'd also expect that table to carry some evidence of statistical significance - standard errors, t-statistics, or p-values at minimum.
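To make that concrete, here's a toy sketch - made-up data, not jwr's numbers - of how little room a complete regression summary needs. Python's statsmodels prints the fitted coefficients along with standard errors, t-statistics, and p-values in a single table:

import numpy as np
import statsmodels.api as sm

# Hypothetical linear data, invented purely for illustration.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

X = sm.add_constant(x)        # add an intercept column
model = sm.OLS(y, X).fit()    # ordinary least squares fit
print(model.summary())        # coefficients *and* significance, half a page

Everything a reader needs to judge the fit is in that one printout. Reciting the coefficients for 2000 words adds nothing to it.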
The risk isn't to people who know some statistics. The risk is to people who don't, who presume that pages upon pages of such stuff must be both valid and important simply because somebody took the time to write it all. That's not true.
There is no wisdom in rows and columns of numbers. If having hundreds or thousands of numbers presented to you makes your eyes glaze over - it does for almost everyone, including me - then you can be lulled into not thinking about how the numbers were arrived at.
I have periodic online arguments with a fellow in Vancouver who wrote and markets a terrific little piece of software for modelling retirement finances. (No use to anyone here - it's Canada-specific wrt government pension entitlements and taxes.) Online, he's prone to making categorical pronouncements that Process A will work better than Process B, based on the output from his software. The software models tax rules, past and present, dead accurately, and spits out likely after-tax retirement incomes to the penny. The results are of the form: if you do A, you can spend $35,181.72 a year; if you do B, you can spend $34,831.11. These numbers are buried in multi-page reports with long rows and columns of numbers showing money coming in, money going out, portfolio balances, etc.
Look familiar? It's exactly the sort of thing that jwr does. I have repeatedly pointed out that there are gross assumptions buried in the operation of the software, assumptions good to no more than one or two significant digits of accuracy. The usual rule when chaining calculations is that the output can be no more precise than the least precise input. Apply that rule and it is clear that doing A lets you spend about $35k while doing B lets you spend about $35k; i.e. one is not superior to the other.
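The rule takes about ten lines of Python to demonstrate (the dollar figures are the invented ones above, not output from anybody's actual software):

from math import floor, log10

def round_sig(x, sig=2):
    """Round x to sig significant digits."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

plan_a = 35_181.72   # "to the penny" output for Process A
plan_b = 34_831.11   # "to the penny" output for Process B

# Propagate the input precision through to the output: both plans are ~$35k.
print(round_sig(plan_a))   # 35000.0
print(round_sig(plan_b))   # 35000.0

Round both outputs to the two significant digits the inputs actually support and the $350-a-year "difference" vanishes.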
Steve has a real problem understanding that. The blizzard of numbers in the software's output, and the fact that he has verified every step from inputs to outputs, blind him to the weakness of the assumptions behind those inputs: they aren't nearly as good as he thinks they are.
Why can I see this when almost no one else can? Because I've done it myself. In the late 60s, when I was still in high school, I built astronomical simulations using donated time on mainframes. Put in the card deck, get out 100 pages of numbers to pore over until next week's time slot came up. I got a trip to the national science fair out of those simulations.
Three years later, after a year of sophomore celestial mechanics, I realized it was all bunk. Not only was the method flawed - the crude step-by-step integration I'd coded accumulates error at every iteration, and I should have been using something like Runge-Kutta - but the assumptions were crap too. Blinded by hundreds of pages of output, I never sat back and thought, "Well, elementary conservation of momentum and energy rules this sort of behavior out a priori, so there must be something wrong."
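For anyone curious what that failure looks like, here is a small sketch - an assumed stand-in for that old card-deck program, not the original - integrating a two-body orbit with naive Euler steps and with classical fourth-order Runge-Kutta. Real orbits conserve total energy exactly; watch what each method does to it:

import numpy as np

def accel(r):
    # Gravitational acceleration toward the origin, in units where GM = 1.
    return -r / np.linalg.norm(r) ** 3

def energy(r, v):
    # Specific orbital energy: kinetic plus potential.
    return 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)

def euler_step(r, v, h):
    # Naive first-order step: error compounds at every iteration.
    return r + h * v, v + h * accel(r)

def rk4_step(r, v, h):
    # Classical fourth-order Runge-Kutta on the coupled (r, v) system.
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * h * k1v, accel(r + 0.5 * h * k1r)
    k3r, k3v = v + 0.5 * h * k2v, accel(r + 0.5 * h * k2r)
    k4r, k4v = v + h * k3v, accel(r + h * k3r)
    return (r + h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

for name, step in [("Euler", euler_step), ("RK4", rk4_step)]:
    r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # circular orbit, radius 1
    e0 = energy(r, v)
    h = 0.01
    for _ in range(10_000):  # roughly 16 orbits
        r, v = step(r, v, h)
    drift = abs(energy(r, v) - e0) / abs(e0)
    print(f"{name}: relative energy drift = {drift:.2e}")

Run it and Euler's energy error comes out orders of magnitude worse than RK4's at the same step size - the orbit slowly spirals outward, which is exactly the behavior the conservation laws forbid.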
JWR hasn't gotten there yet.
Great minds think alike. Fools seldom differ.