Make Money, Not Predictions

Yoda revealed a fundamental truth in Star Wars, “Difficult to see. Always in motion is the future.”

After many years of trying to teach people how to trade, I have come to the conclusion that trading is best left to computers, because machines are not subject to the human failings so eloquently summed up by Nobel Laureate Daniel Kahneman.

Even though I am not that old, I have done battle as a professional in the capital markets for more than half my life. Now that I manage millions of dollars, there is a certain “arc of my career” feeling when I think about trading. But I’m not the only one to look back this way. My favorite example is William Eckhardt.

Mr. Eckhardt is a standout in an industry filled with blowhards and snake oil salesmen. Not many traders (no, market makers and flash traders don’t count) extract so much from the market that they end up donating $20 million to their alma mater, so we had better listen up:

Anyone with average intelligence can learn to trade. This is not rocket science. However, it’s much easier to learn what you should do in trading than to do it. Good systems tend to violate normal human tendencies. Of the people who can learn the basics, only a small percentage will be successful traders.

If a betting game among a certain number of participants is played long enough, eventually one player will have all the money. If there is any skill involved, it will accelerate the process of concentrating all the stakes in a few hands. Something like this happens in the market. There is a persistent overall tendency for equity to flow from the many to the few. In the long run, the majority loses. The implication for the trader is that to win you have to act like the minority. If you bring normal human habits and tendencies to trading, you’ll gravitate toward the majority and inevitably lose.

The New Market Wizards: Conversations with America’s Top Traders
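Eckhardt’s betting-game observation is the classic gambler’s-ruin result, and it is easy to see in a simulation. The sketch below is my own illustration, not anything from the book: the player count, stakes, and bet size are arbitrary assumptions. Every bet is fair and zero-sum, so no one has any edge at all, yet capital still concentrates in fewer hands.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gambler's ruin: N players start with equal stakes and repeatedly make
# fair, zero-sum bets against randomly chosen opponents. No skill anywhere,
# but wealth concentrates anyway. Parameters are arbitrary for illustration.
N, stake, rounds = 10, 10.0, 50_000
wealth = np.full(N, stake)

for _ in range(rounds):
    alive = np.flatnonzero(wealth > 0)
    if len(alive) == 1:          # one player holds everything
        break
    a, b = rng.choice(alive, 2, replace=False)
    bet = min(wealth[a], wealth[b], 1.0)   # can't bet more than you have
    if rng.random() < 0.5:
        wealth[a] += bet
        wealth[b] -= bet
    else:
        wealth[a] -= bet
        wealth[b] += bet

print("players still solvent:", int((wealth > 0).sum()), "of", N)
```

Total wealth is conserved throughout; only its distribution changes. Add any skill at all, as Eckhardt notes, and the concentration only happens faster.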

In 1997, Eckhardt and his former trading partner, the legendary Richard Dennis, gave an interview to Barclay Hedge. The two provided insight into three very important topics.

1. On backtesting:

Q: Many CTAs are continually testing and searching for new ways to improve their trading or for new trading approaches. Much of this research is based on back testing. Is back testing overrated? How much data are necessary in order to have confidence in the results?

Dennis: Back testing is essential. The key question is what time periods should be tested. You can take the point of view that you should use all available data. How would you know that 19th century wheat prices are less relevant than today’s wheat prices?

I believe that was the right answer in 1983. Today, I find it hard to be agnostic on the questions of whether markets have changed and how they may have changed. The trends of the 1970s occurred in the absence of computer-generated, trend-following algorithms. The markets of the last ten years are distorted by the onslaught of the technical trader.

As a result, I back test only the last ten years. In the unlikely event markets are as good and undistorted as they were in the good old days, I’ll be happy to make less than I might if I had used that early data in an optimization. The trade-off is that if markets continue their perversity, I’m way more likely to have captured a sound way to handle these more difficult markets because I’m fitted only to them.

Eckhardt: I know of no way to validate conjectures concerning technical trading without back testing; however, this procedure is fraught with peril–we all know horror stories. Having adequate amounts of data for reliable inferences is only one of many problems facing the technical analyst, but it is as crucial as any. Statisticians tend to consider that more than about 30 instances constitutes a large sample statistic. For futures price research this is a recipe for disaster. The underlying probability distributions in this subject are so exotic and pathologic that those subtle techniques that statisticians use to squeeze significance out of sparse data are all decidedly out of place.

To make even moderately reliable judgments about a kind of trade, you need something like 300 instances. This is a minimum figure. I don’t feel comfortable acting on research results unless I have several thousand instances.
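Eckhardt’s numbers can be illustrated with a quick Monte Carlo experiment. The sketch below is my own, not from the interview; treating each trade’s P&L as a draw from a heavy-tailed Student-t distribution is an assumption, as are the trial counts. It measures how much the estimated “edge” (the sample mean) bounces around at 30, 300, and 3,000 trades.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_estimate_spread(n_trades, n_trials=2000, df=3):
    """Monte Carlo spread of the estimated edge (mean per-trade P&L)
    when the true distribution is heavy-tailed (Student-t, df=3)."""
    samples = rng.standard_t(df, size=(n_trials, n_trades))
    return samples.mean(axis=1).std()

for n in (30, 300, 3000):
    print(f"n={n:5d}  spread of estimated edge: {mean_estimate_spread(n):.3f}")
```

The spread shrinks only as the square root of the sample size, so going from 30 trades to 3,000 buys just a tenfold improvement, and the fat tails make the small-sample estimates even less trustworthy than the numbers alone suggest.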

2. On strategy optimization:

Q: When you back test particular trading strategies, do you attempt to optimize? If so, how many parameters can you comfortably optimize before falling prey to curve-fitting?

Dennis: There is no escaping a priori decision making in research. There may be no absolutes, but some ideas come close. For example, it would be very hard to justify favoring long positions over short. And no amount of data will validate trading certain markets larger than others (liquidity considerations aside). Research that starts with concepts is much more likely to avoid curve-fitting than blind number-crunching.

Eckhardt: I prefer the term “over fitting”. This makes clear that you can under fit to data. Those CTAs who boast that they never optimize are doing precisely that–they are grossly under fitting. The topic of fitting raises profound theoretical and practical questions, but it boils down to this: you want to fit to reproducible features and not to accidental ones.

To derive an estimate of how much overall good versus bad fitting your optimization labors have produced, the technique statisticians call cross-validation is quite helpful. Of course, this will not tell you where good or bad features originate or how to alter the mixture favorably. For this, it is crucial to assess the quality of degrees of freedom, not only their sheer number. A degree of freedom that has uniformly graduated significance over a manifold of possibilities is better than one that is quirky or that vacillates in meaning for slightly dissimilar cases. It is also important how selective the influence of a degree of freedom is.

A preset profit objective, for instance, is a much more suspect degree of freedom than, say, a look back. The latter presumably impinges on every trade, whereas the influence of the former tends to be concentrated on a few highly profitable scenarios.

The philosophy of science teaches that all observation is theory-laden; there is simply no way to analyze data in a theoretically neutral manner. In fitting to historical data, theoretically unsound procedures can lead to radically invalid conclusions. This is probably why back testing has developed such a bad reputation.
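A minimal version of the cross-validation Eckhardt recommends looks like the sketch below. It is my illustration, not his procedure: the trading rule, the synthetic price series, and the parameter grid are all assumptions. The point is the discipline, not the rule. Optimize the look back on a training window, then score that one frozen choice on data it never saw; a large gap between in-sample and out-of-sample P&L is the signature of over fitting.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.02, 1.0, 2000))  # synthetic drifting series

def strategy_pnl(prices, lookback):
    """Daily P&L of a toy rule: long when price is above its trailing mean,
    short otherwise. One free parameter: the look back."""
    pnl = []
    for t in range(lookback, len(prices) - 1):
        signal = 1.0 if prices[t] > prices[t - lookback:t].mean() else -1.0
        pnl.append(signal * (prices[t + 1] - prices[t]))
    return np.array(pnl)

# Split once: choose the lookback on the first half only,
# then evaluate that single frozen choice on the unseen second half.
train, test = prices[:1000], prices[1000:]
lookbacks = range(10, 200, 10)
best = max(lookbacks, key=lambda lb: strategy_pnl(train, lb).sum())

print("chosen lookback:", best)
print("in-sample P&L:   ", round(strategy_pnl(train, best).sum(), 1))
print("out-of-sample P&L:", round(strategy_pnl(test, best).sum(), 1))
```

A proper walk-forward test would repeat this over many rolling windows; the single split above is just the simplest honest version. Note that the look back is exactly the kind of degree of freedom Eckhardt calls well-behaved: it influences every trade, unlike a profit target whose effect concentrates in a few lucky scenarios.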

3. On the impact of so many people using technical trading rules:

Q: How will the rapidly increasing rate of technological advancement and computerization impact a CTA’s ability to capture speculative profits in the futures market?

Dennis: I am less pessimistic than Bill on this question, because I am very pessimistic about the effect of politics on the economy and the implications for price volatility.

Consider one very recent example: After a meteoric rise in U.S. stock prices, it has been seriously suggested that the funding of Social Security be supplemented by permitting, for the first time, investment in the stock market. I promise you that this will be the “solution” to the problem because it is politically painless. It is also based on the ridiculous notion that stocks “always” outperform bonds if you wait long enough.

There will be millions of homeless and starving elderly if we commit to this madness. The pressure to inflate away debt and politicize monetary policy will be overwhelming. This regrettable chaos means volatility and trends.

Eckhardt: When I first began trading solely on the basis of price and was much more concerned than I should have been about the academic orthodoxy that futures market price change was pure white noise–a random walk–I made the following notebook entry: “How can the aggregate of traders and users arbitrage out a potentially unlimited number of nonlinear relationships?” The implication was that they could not. Twenty-five years later, I am less confident about the continuing correctness of this answer. What I failed to take into consideration was the staggering explosion in information processing. This will only continue. Eventually artificial intelligence devices, superior to any human researcher, will effectively uncover all exploitable nonlinear relationships of price to price. Such relationships will be mined until technical analysis is no longer profitable. There is an irony in that dogmatic “random walk” theorists, dead wrong for a century, will turn out to have been prescient–futures markets will have been driven to randomness. The process has already begun.

I feel these developments are nearly assured (assuming no disruption of civilization). What is less clear is whether this will happen as rapidly as I predict–in 10 to 20 years. In the meantime, profitable trading will only get harder as increasingly more astute traders pursue progressively weaker statistical regularities. This is why it is necessary for a CTA continually to improve just to hold his or her own. The only consolation I can offer is that there are profits to be made participating in this process of randomization.

If you have no idea what any of this stuff means, you must learn it or expect to lose money.