A Prime Inconsistency

originally published 4-8-2008

After the onslaught of Reaganism in the early 1980s faded into Bush-One-ism in the late 1980s, the US economy was in a funk. Then the computer-generated trading collapse of October 1987 gave us a preview of just what could happen if we let machines run the markets. At the time I was helping upgrade a huge computer system in Cincinnati, Ohio. Some of us had worked with predictive systems; others had worked in banking. We wandered through the skywalk at lunch discussing everything. One day a co-worker and I were discussing a technical search problem. I had built some serious VSAM predictive file caching schemes that made slow file-referencing programs incredibly fast. He got to telling me about similar schemes finance programmers used to build trading programs. I had done enough research to know this was a recipe for financial disaster. There are problems with most guess-the-future systems.

The basic premise is that similar events happen in globs, clumps, or in relative proximity to each other. The way this works is to add events to the cache as new if they are not already there, and remove the oldest events as their slots are needed. Most of these schemes are very similar, with tweaks here and there. You can add a counter to age the cached records. You can subtract from the counter after a certain age is passed, age being the number of times a particular record is not used. This tends to work pretty well as long as things go along in a similar fashion. If you can store a large cache and search it with a binary search, you can have a very high hit-to-miss ratio. That works until the primary assumptions or "fundamentals" change. Once those change, however, the only thing to do is dump the cache and start the process again.
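The scheme is easy to sketch. Here is a toy Python version — nothing like the real VSAM code, and the names and events are invented for illustration — showing the sorted keys (so every lookup is a binary search), the aging counter, slot reclamation, and the flush-when-fundamentals-change escape hatch:

```python
from bisect import bisect_left, insort

class PredictiveCache:
    """Toy version of the aging-cache scheme described above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []   # kept sorted, so every lookup is a binary search
        self.age = {}    # lookups since each record was last used
        self.hits = 0
        self.misses = 0

    def flush(self):
        # "dump the cache" when the fundamentals change
        self.keys, self.age = [], {}

    def lookup(self, key):
        i = bisect_left(self.keys, key)
        hit = i < len(self.keys) and self.keys[i] == key
        for k in self.keys:          # age every cached record...
            self.age[k] += 1
        if hit:
            self.hits += 1
            self.age[key] = 0        # ...except the one just used
            return True
        self.misses += 1
        if len(self.keys) >= self.capacity:
            oldest = max(self.keys, key=self.age.get)   # reclaim a slot
            self.keys.remove(oldest)
            del self.age[oldest]
        insort(self.keys, key)       # cache the new event
        self.age[key] = 0
        return False

# events arriving in clumps give a high hit-to-miss ratio
cache = PredictiveCache(capacity=4)
for event in ["a", "a", "b", "a", "b", "c", "b", "a"]:
    cache.lookup(event)
print(cache.hits, cache.misses)   # 5 3
```

As long as the event stream stays clumpy, hits dominate; when the stream changes character, the hit ratio collapses and all you can do is call `flush()` and rebuild.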

Sometime in early 1987 we noted a correction in the markets. A few weeks later there was another correction. The cycle started innocently enough, I suppose. By late summer we were asking ourselves if our financial programmer friends had outdone themselves and introduced a "feedback" loop. When you are using predictive schemes you have to remember to filter out your own impact on things, otherwise you fall into your own trap. We computer geeks sometimes refer to this sort of thing as being "hosed up". Apparently our trader-programmer friends had missed this one, because a few weeks before the October 1987 crash I asserted that predictive schemes were being used, that a feedback loop had developed, and that no one understood what was happening well enough to stop it. I told my friend who had worked at the bank that the process would become "recursive" and the market could drop a thousand points. The assertion proved generally correct, since that is pretty much what happened. The probability is that if the system had been able to handle the number of transactions, it might well have ended at a flat-line zero. Part of what saved the system was in fact its inability to cope.

Technical people in the finance industries tend to be paid far less than well, so there has always been plenty of incentive for their technical staff to be highly migratory. Some firms even encourage this with silly personnel policies. I have heard of a few who lay off technology employees during "down time". Some do "contract shopping" for contractors, even after signing contracts. All this means the technology staff at your average finance company is always looking for their next position. Suppose one of these guys gets a temp spot in an engineering outfit for a few months, happens to look at one of my programs to cache VSAM file reads, and, not being stupid, says: hey, I can make a couple of bucks by turning this idea into a market predictor. If you think this sort of thing didn't happen in 1986-87, check out the classified ads in any techie magazine from that time. Just count the number of "stock market" manipulating computer programs. Suppose this guy gets some prototype of a system together, and the next time he needs to find a spot he goes shopping for a better deal with a finance company that needs someone into trading software. Suppose he actually gets the job. He folds what he has learned of predictive systems into this wonderful new trading system. The code required to implement the algorithm in a large system is small and inconspicuous. Once the system is built and installed, the would-be trading programmer is off to find his next spot. The users and support staff are clueless about what this algorithm does or how it works. Or maybe this guy only uses the technique to do file caching, and it is a support person who adapts his technique to make predictive trades. In any case the system eventually acquires the ability to make cycle-dependent predictive trades.

There were clearly other factors which perhaps started the cyclic events these programs thrive on. Clearly there were other factors which affected the final outcome, and clearly various people were using program trading strategies. What I am saying is that over a period of time a set of predictive algorithms which were never intended to process "meta data" perhaps became commonly used for just this purpose, and this set the "drumbeat" which ended with a 508-point drop in the market.

The way this would work: a program might have a file of market results, perhaps at multiple granularities. Each day the program runs looking for "trends", but particularly for cycles. The cycles would be measured, and buy and sell suggestions made from the info. This would be pretty uninteresting as long as humans were using the machine results as advice in trading decisions. We all know how lazy and greedy humans are, so you know it didn't take long for someone to clamor for computer trading, and the minute they got it they constructed software "robots" to do the trades for them. These "soft bots" were perhaps using the predictive schemes mentioned earlier. Sometime before September 1987 a feedback loop had obviously developed, as the unexplained "corrections" started to come closer together. Understand this was most probably not a single program, but many programs using a similar algorithm. Once these programs detect a cycle, they make trading decisions based on what is known about the cycle. Suppose for no particular reason there is a down day on March 10, then again on April 9, then on May 8. The predictive algorithms analyze this and determine a cycle exists. Sell orders are generated for June 7, which becomes a down day, adding to the cycle. The next down day will be July 6. Somewhere in all this humans get involved and generate their own down day on July 20. The machine traders buy up until August 4, then sell on the 5th to miss the known down day on the 6th. The human-generated down day is then incorporated into the soft bots' known cycle, producing another sell-off on August 19. The process continues along until mid-October. There are no days left to buy, so the machine, instead of turning itself off, dumping its cache, and starting over, sells everything. The algorithm became recursive and presto: the stock market crash of October 1987.
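The self-confirming part of the loop can be sketched in a few lines of Python. The cycle finder and the day numbers here are invented for illustration; real program-trading systems of the era would have been far more elaborate, but the shape of the trap is the same:

```python
def predict_next(down_days):
    """Naive cycle detector: if the last three down days are evenly
    spaced, predict the next one; otherwise report no cycle."""
    if len(down_days) < 3:
        return None
    a, b, c = down_days[-3:]
    return c + (c - b) if (b - a) == (c - b) else None

# three down days "for no particular reason", 30 days apart
down_days = [10, 40, 70]
for _ in range(4):
    predicted = predict_next(down_days)
    if predicted is None:
        break
    # the bots sell on the predicted day, which *makes* it a down day,
    # which confirms and extends the very cycle they detected
    down_days.append(predicted)

print(down_days)   # [10, 40, 70, 100, 130, 160, 190]
```

A human-generated down day dropped into the series just becomes a second set of end points for the detector to latch onto, and the loop keeps feeding itself until there are no buy days left.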

While working on predictive file caching systems I coined the term "prime inconsistency" to describe what happens when the process becomes useless, taking more time than it saves. If you have a miss in a file cache program you just read the real reference file: slower, but no harm done. Predictive-trend financial systems are different in a major way. First, there is no really good way to remove your own results from your input data. Second, there is no way to reset and go back to yesterday when things get totally hosed up.
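For a file cache the break-even point is easy to state: every reference pays the lookup overhead, only misses also pay the full file read, so the cache earns its keep only while the hit ratio stays above the ratio of lookup cost to file-read cost. A small sketch, with made-up timing numbers:

```python
def cache_pays_off(hit_ratio, t_lookup, t_file):
    # average time per reference with the cache vs. reading cold;
    # below the break-even hit ratio (t_lookup / t_file) the cache
    # is a net loss -- the prime inconsistency
    with_cache = t_lookup + (1 - hit_ratio) * t_file
    return with_cache < t_file

# hypothetical timings: a cache probe costs 1 unit, a real read costs 20
print(cache_pays_off(0.90, 1, 20))   # True  -> the cache is winning
print(cache_pays_off(0.04, 1, 20))   # False -> prime inconsistency
```

The point is that in the file-cache world crossing that threshold merely wastes time; in a trading system there is no equivalent harmless fallback.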

By definition, if your soft bot trader is doing bank shares, it has to see the share results of banks. Even if you exclude your soft bot's trades from its input, those trades become part of the aggregate. At some point the technology reaches an installation critical mass such that its results can no longer be filtered out, and the feedback loop process is set to begin. All that is needed to trigger the process is something the cycle identifier recognizes as a set of cycle end points.
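A toy model makes the point; the numbers and the herd rule are invented for illustration. Each day the herd reacts to the total printed move, and the bot's trades are inside that total, so even an input series with the bot's own trades subtracted already carries the bot's influence:

```python
# Toy model: each day the herd's move is a fraction of yesterday's
# *aggregate* move, and the bot piles half again on top of what it sees.
herd = [-10.0]                    # day 0: an ordinary down move
for day in range(5):
    bot_trade = 0.5 * herd[-1]    # the bot amplifies the observed move
    total = herd[-1] + bot_trade  # the aggregate the market prints
    herd.append(0.8 * total)      # tomorrow's herd reacts to the total

# the bot can subtract its own trades from its input, but herd[1:] was
# already shaped by its earlier trades -- the feedback is baked in
print(herd)
```

Without the bot this herd rule decays toward zero at 0.8 per day; with it, the "clean" series the bot sees grows 20% worse each day, even though every individual input looks like someone else's trade.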

I am not saying the form of soft bot trading described here was the only factor in the Black Monday (October 19, 1987) crash. I doubt the term soft bot was even used at the time; the first time I heard it was around 1990, from a bunch of people who were more into engineering than finance. However, I do know the technology for building a soft bot existed well before 1986, especially on mainframe computers.

Someone at the Fed has written a small dissertation which you could call the "Federal Reserve" view of what happened; it is easily found with a search engine. I have in fact read it. There is a lot of "meta data" analysis, which the Fed is very good at, but the six-word translation remains "things got really well hosed up". Sure, there is a lot of Fed-speak, but the truth is no one can absolutely prove exactly what happened; there were too many parallel events which cannot be completely isolated.

