Hi raddr,
Does this clear things up at all?
A little, but my questions are multiplying.
Let me take this step-by-step:
For most runs I calculate the historical avg. return for the portfolio and then the 1 yr., 10 yr., and 40 yr. SD's. As I note in the post, the longer time periods show lower SD's than would be predicted from the 1 yr. SD.
Ok, so you take a proposed portfolio, apply it to historical data, and from those results you measure the mean and 1-, 10- and 40-year SDs of the
whole portfolio -- not of the constituent asset classes. Is this right?
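Just to pin down the calculation, here is a short Python sketch of what I think this measurement step looks like. I'm assuming the 10- and 40-yr. SDs are SDs of annualized returns over rolling 10- and 40-year windows; if you compute them some other way (non-overlapping blocks, say), please correct me. All the numbers below are placeholders, not real data:

import numpy as np

def horizon_sd(returns, horizon):
    # SD of annualized returns over rolling windows 'horizon' years long.
    annualized = [np.prod(1.0 + returns[i:i + horizon]) ** (1.0 / horizon) - 1.0
                  for i in range(len(returns) - horizon + 1)]
    return np.std(annualized, ddof=1)

# Placeholder series; the real input would be the historical annual returns
# of the whole blended portfolio, one number per year.
rng = np.random.default_rng(0)
portfolio_returns = rng.normal(0.07, 0.12, 75)

mean_return = portfolio_returns.mean()      # historical avg. return
sd_1 = portfolio_returns.std(ddof=1)        # 1 yr. SD
sd_10 = horizon_sd(portfolio_returns, 10)   # 10 yr. SD
sd_40 = horizon_sd(portfolio_returns, 40)   # 40 yr. SD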
I then generate a Gaussian distribution of random numbers (usually 75 years' worth) based on the historical 1 yr. SD and the yearly avg. return - just like a conventional MC simulator.
Ok, so here you are generating sequences that represent the returns of the whole portfolio -- again, not of the individual asset classes, right?
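In code terms I'm picturing something like this (again with placeholder numbers):

import numpy as np

rng = np.random.default_rng(1)
mean_return, sd_1 = 0.07, 0.12   # historical mean and 1 yr. SD (placeholders)

# One candidate 75-year history: independent draws from N(mean_return, sd_1),
# which is exactly what a conventional MC simulator produces.
sequence = rng.normal(mean_return, sd_1, 75)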
Now, at this stage, before you look at the 10- and 40-year SDs and the 75-year mean, do you also make cuts based on the 1-year SD? (This is what I took you to be saying in my previous post, but I'm not sure confusion didn't creep in somewhere.)
10 and 40 yr. SD's are calculated and those sequences that fall outside the tolerance threshold are rejected.
What kind of thresholds are these? Are they absolute values, or are they relative to the 1-year SD? I.e., is the cut of the following form:
IF ((SD_40 > Threshold_40) OR (SD_10 > Threshold_10)) THEN Reject
or is it of the following form:
IF ((SD_40/SD_1 > Threshold_40) OR (SD_10/SD_1 > Threshold_10)) THEN Reject
or is it something else?
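To make the alternatives concrete, here is how I'd code the two forms; the threshold values are pure inventions on my part, and horizon_sd is the rolling-window SD from my first sketch:

import numpy as np

def horizon_sd(returns, horizon):
    # As in the earlier sketch: SD of annualized rolling-window returns.
    annualized = [np.prod(1.0 + returns[i:i + horizon]) ** (1.0 / horizon) - 1.0
                  for i in range(len(returns) - horizon + 1)]
    return np.std(annualized, ddof=1)

rng = np.random.default_rng(2)
sequence = rng.normal(0.07, 0.12, 75)   # a candidate sequence, as above
sd_1 = sequence.std(ddof=1)
sd_10 = horizon_sd(sequence, 10)
sd_40 = horizon_sd(sequence, 40)

# Form 1: absolute thresholds (the 0.06 / 0.03 values are invented).
reject_absolute = (sd_40 > 0.03) or (sd_10 > 0.06)

# Form 2: thresholds on the ratio of the long-horizon SD to the 1 yr. SD
# (the 0.25 / 0.50 values are equally invented).
reject_relative = (sd_40 / sd_1 > 0.25) or (sd_10 / sd_1 > 0.50)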
If the simulated portfolio passes this test then there is one final test: if the 75 yr. avg. return is outside the tolerance limit I set then the sequence is rejected.
How (and why) is this tolerance limit chosen?
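For instance, is it a two-sided band of the following sort (numbers mine, not yours)?

import numpy as np

rng = np.random.default_rng(3)
sequence = rng.normal(0.07, 0.12, 75)   # a candidate sequence, as above
target_mean, tolerance = 0.07, 0.01     # both numbers invented by me

# Reject if the 75 yr. avg. return falls outside the band around the target.
reject_mean = abs(sequence.mean() - target_mean) > tolerance

And does the target come straight from the historical mean, or is it set by hand?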
Cheers,
Bpp