Greed and Fear - Daily trading signals based on mathematics and software, no opinion, no emotion, no ego.

Early visitors of this blog have seen my daily posts for years with the so-called Greed and Fear indicator and the direction it was pointing. This was my initial attempt to expose the indicator to the public. Along the way, its performance was measured, of course, to see if it was any good and useful in trading. The most honest way to measure that performance was by counting the index points it called right minus the index points it called wrong, as I've explained here in more detail.

But during all those years, I never explained in much detail what the Greed and Fear indicator really is, even though it is probably one of the most fascinating subjects today: a neural network! The more widely known terminology would be machine learning, artificial intelligence, etc. There are subtle differences, but it all comes down to 'intelligent software'.

Towards the end of last year, I decided to suspend the daily indicator postings and instead take the indicator to the next level in measuring its performance. There's still some work to be done there, but it's coming.

One day, out of pure curiosity, I decided to build a framework for multiple types of neural networks. Since I'm a software engineer by education and profession, the subject had already had my attention for a long time. The best ideas come from just trying something: sometimes they work out, sometimes they don't. Of course, all this time I had in the back of my mind the intention to incorporate the neural network into my trading activities.

What then is a neural network? It's software that tries to mimic the human brain in a very, very basic way, with one very important aspect in particular: the ability to learn! There's an overwhelming amount of literature about this subject. To get an initial idea, have a look at the excellent Wikipedia page.
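To make this a little more concrete, here is a minimal sketch of such a network in Python: layers of 'neurons', each computing a weighted sum of its inputs and passing the result through an activation function. This is purely illustrative; it is not the Greed and Fear network itself, and all names and sizes are made up.

```python
# A minimal sketch of a feedforward neural network (illustrative only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNetwork:
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        # Weights start as small random numbers; learning means adjusting them.
        self.w1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))

    def predict(self, x):
        hidden = sigmoid(x @ self.w1)     # hidden layer of "neurons"
        return sigmoid(hidden @ self.w2)  # output layer

net = TinyNetwork(n_inputs=3, n_hidden=5, n_outputs=1)
print(net.predict(np.array([0.2, -0.1, 0.4])))
```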

Traditionally, if a mathematician wanted to develop a model to describe (and predict) the behavior of a certain system, it would take a lot of research to determine which factors play a key role in that system's behavior. There may be a huge amount of data available, but if that data contains a lot of noise that has nothing to do with the behavior, this can be a tedious or even impossible job.

Neural networks, on the other hand, are more or less able to find out for themselves which part of all the data contains the real information that describes the system. This process of looking for information in data is called training or learning; in other words, trying to find a signal in a lot of noise. During this process the neural network changes its own parameters so that it describes the behavior of the system more and more accurately as it works through all the data.
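As an illustration of what 'changing its own parameters' means, here is a small training loop on made-up data: at every step the parameters are nudged in the direction that reduces the error between the model's output and the known output. The data and numbers are purely illustrative, not the indicator's actual training.

```python
# A minimal sketch of training: repeatedly adjust the parameters so the
# output gets closer to the known output for each example (illustrative data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                    # 200 examples, 3 inputs each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # the "signal" hidden in the data

w = np.zeros(3)                                  # parameters, start at zero
for step in range(1000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))        # current outputs
    error = pred - y                             # how far off we are
    gradient = X.T @ error / len(y)              # direction that reduces the error
    w -= 0.5 * gradient                          # nudge the parameters

print("learned weights:", w)
```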

Once the neural network has finished training, there's still no guarantee it has actually learned something useful. This may sound contradictory, but there's always the risk of 'curve fitting'. This is a risk that could come up in any model, neural network or otherwise. Curve fitting in relation to neural networks means that the network has learned mostly noise and hardly any real signal from the data it was provided with.

To verify the accuracy of what the neural network has learned, we should always apply it to a new set of data it has not seen before. This way we can measure how well the neural network describes/predicts the new unseen data. 
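Here is a small sketch of what that verification looks like, again with made-up data. The labels below are pure noise, yet a flexible enough model still fits its training data almost perfectly; only the data it has never seen reveals that nothing real was learned.

```python
# A minimal sketch of detecting 'curve fitting': train on one slice of the data,
# then measure accuracy on a slice the model has never seen (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 40))                   # 60 examples, 40 inputs each
y = rng.integers(0, 2, size=60).astype(float)   # labels are pure random noise

train_X, test_X = X[:40], X[40:]
train_y, test_y = y[:40], y[40:]

w = np.zeros(40)
for step in range(5000):                        # fit the training set as well as possible
    pred = 1.0 / (1.0 + np.exp(-(train_X @ w)))
    w -= 0.5 * (train_X.T @ (pred - train_y)) / len(train_y)

def accuracy(X, y, w):
    return np.mean(((X @ w) > 0).astype(float) == y)

print("accuracy on training data:", accuracy(train_X, train_y, w))  # close to 1.0
print("accuracy on unseen data  :", accuracy(test_X, test_y, w))    # around 0.5
```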

When we want to train a neural network to predict future behavior of the S&P 500, we're going to look at this as a kind of input-output relationship, something very common in training a neural network. The input is 'past behavior' up until now, and the output is how the S&P 500 will behave tomorrow, next week, etc. What then is 'past behavior'? Let's just say: a lot of data, not just past behavior of the S&P 500 itself but a lot more than that. This can be repeated for, let's say, 10 years' worth of trading days. Every trading day forms an input-output pair that the neural network has to learn. To verify whether the network has actually learned something, we take another set of trading days it has not seen before, replay history, as it were, on that set, and measure performance.
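A sketch of how such input-output pairs could be built from a price series. The price history here is simulated and only past returns are used as input; as said above, the real indicator looks at a lot more data than just the S&P 500 itself.

```python
# A minimal sketch of turning a price history into input-output pairs:
# the input is a window of past daily returns, the output is whether the
# next day closed higher (simulated prices, illustrative only).
import numpy as np

rng = np.random.default_rng(3)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, size=2520))  # ~10 years of trading days
returns = np.diff(prices) / prices[:-1]

window = 20                                          # 20 days of "past behavior"
inputs, outputs = [], []
for t in range(window, len(returns)):
    inputs.append(returns[t - window:t])             # past behavior up until now
    outputs.append(1.0 if returns[t] > 0 else 0.0)   # what happened the next day

X, y = np.array(inputs), np.array(outputs)

# Hold back the most recent days as the unseen set used to verify learning.
split = int(0.8 * len(X))
train_X, train_y = X[:split], y[:split]
test_X, test_y = X[split:], y[split:]
print(train_X.shape, test_X.shape)
```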

This is how the Greed and Fear indicator works. One major issue with a neural network is that we don't really know exactly how it comes up with an output value. The network's internals may look like complete chaos, and there's no real way of telling how an input leads to an output. Can we still trust the neural network as time goes by? That may be a bit uncomfortable. Let's conclude with a beautiful quote from one of my textbooks: 'Neural networks have their own little secrets'.