
Gradient Boosting (Part 2)

This time around let's start from scratch. We have some training data. Each row of the data has various features with real-number values and a score attached to it. We want to build a system that produces those scores from the training data. If we forget for a second "how" and think about it generically, we can say that there is some magical box that gives us a result. The result isn't perfect, but it's better than a random guess.

The results we get back from this black box are a good first step, but what we really need is to improve on them. To do that we take the training data and, instead of using the scores we were given, assign new scores to the rows. Each new score is how far off the black box's prediction was, that is, the original score minus the prediction.
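To make that concrete, here's a tiny sketch of the re-scoring step in C#. The arrays and their values are made up for illustration; the only point is that the new score for each row is the original score minus the black box's prediction.

using System;

class ResidualSketch
{
    static void Main()
    {
        double[] scores = { 3.0, 1.5, 4.0 };          // original targets
        double[] firstPrediction = { 2.5, 2.0, 3.0 }; // what the black box guessed

        // The new "score" for each row is how much the black box was off by.
        double[] newScores = new double[scores.Length];
        for (int i = 0; i < scores.Length; i++)
            newScores[i] = scores[i] - firstPrediction[i];

        Console.WriteLine(string.Join(", ", newScores)); // 0.5, -0.5, 1
    }
}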

Now we send this data into a new black box and get back a result. This result isn't perfect either, but it's still better than a random guess. The two results combined give us a better answer than the first black box call alone. We repeat this process over and over, making a series of black box calls whose summed results slowly converge on the original scores. If we then send a single test row (whose score we don't know) into each of these black boxes trained on the training data, their results can be added up to produce the test row's score.
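Here's a rough sketch of that whole loop. The black box is faked with something trivial (it just predicts the mean of whatever targets it is given) so the example runs on its own; a real implementation would train a depth-limited decision tree there instead, as described below.

using System;
using System.Collections.Generic;
using System.Linq;

class BoostingLoopSketch
{
    static void Main()
    {
        // Made-up training data; the stand-in model below ignores the features.
        double[][] rows = { new[] { 1.0 }, new[] { 2.0 }, new[] { 3.0 } };
        double[] scores = { 2.0, 4.0, 9.0 };

        var blackBoxes = new List<Func<double[], double>>();
        double[] remaining = (double[])scores.Clone(); // what is left to explain

        for (int pass = 0; pass < 5; pass++)
        {
            // "Train" a black box on the current remaining scores.
            double mean = remaining.Average();
            Func<double[], double> box = row => mean;
            blackBoxes.Add(box);

            // Subtract its predictions so the next pass only sees what is left over.
            for (int i = 0; i < rows.Length; i++)
                remaining[i] -= box(rows[i]);
        }

        // Scoring a test row: add up every black box's result.
        double[] testRow = { 2.5 };
        double prediction = blackBoxes.Sum(b => b(testRow));
        Console.WriteLine(prediction);
    }
}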

In math this looks something like this:

f(x) = g(x) + g'(x) + g''(x) + g'''(x) + g''''(x) .... (etc)

Where f(x) is our score and g(x) is our first black box call. g'(x) is the 2nd black box call, which was trained with the adjusted scores; g''(x) is the 3rd black box call, with the scores adjusted still further; etc.

A few questions should arise. First, how many subsequent calls do we do? And second, what exactly is the black box?

I'll answer the second question first. The black box (at least in my case) is the lowly decision tree. Specifically, it is a tree that has been limited so that it terminates before it gets to a singular answer. In fact, it is generally stopped while large fractions of the training data are still grouped together. Each terminal node just gives the average of the scores of the group that ends up there. It is important that you limit the tree because you want to build an answer that is good in lots of cases.
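Here is roughly what that looks like as a sketch. The split rule is deliberately dumbed down (it just cuts the first feature at its median) since the point here is only the depth limit and the averaging at the terminal nodes; none of this is the exact code behind the post.

using System;
using System.Linq;

class TreeNodeSketch
{
    bool isLeaf;
    double leafValue;      // average score of the group at a terminal node
    int featureIndex;
    double threshold;
    TreeNodeSketch left, right;

    public static TreeNodeSketch Build(double[][] rows, double[] scores, int depth, int maxDepth)
    {
        // Stop early: a large chunk of the data stays grouped together and the
        // node simply answers with the group's average score.
        if (depth >= maxDepth || rows.Length < 2)
            return new TreeNodeSketch { isLeaf = true, leafValue = scores.Average() };

        // Simplified split: first feature at its median (a real tree would
        // search every feature for the best split).
        double split = rows.Select(r => r[0]).OrderBy(v => v).ElementAt(rows.Length / 2);
        int[] leftIdx = Enumerable.Range(0, rows.Length).Where(i => rows[i][0] < split).ToArray();
        int[] rightIdx = Enumerable.Range(0, rows.Length).Where(i => rows[i][0] >= split).ToArray();
        if (leftIdx.Length == 0 || rightIdx.Length == 0)
            return new TreeNodeSketch { isLeaf = true, leafValue = scores.Average() };

        return new TreeNodeSketch
        {
            featureIndex = 0,
            threshold = split,
            left = Build(leftIdx.Select(i => rows[i]).ToArray(), leftIdx.Select(i => scores[i]).ToArray(), depth + 1, maxDepth),
            right = Build(rightIdx.Select(i => rows[i]).ToArray(), rightIdx.Select(i => scores[i]).ToArray(), depth + 1, maxDepth)
        };
    }

    public double Predict(double[] row) =>
        isLeaf ? leafValue : (row[featureIndex] < threshold ? left.Predict(row) : right.Predict(row));
}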

Why? Because if you build specific answers and an answer is wrong, correcting the result is nearly impossible. Was it wrong because of a lack of data? Was it wrong because you split your decision tree on the wrong feature 3 nodes down? Was it wrong because this row is an outlier? Any one of these things could be true, so to eliminate them all as possibilities you stop building your tree relatively quickly and get an answer that is never exactly right but at least puts the results into a group. If you go too far down you start introducing noise into the results, noise that creeps in because your answers are too specific. It adds static to the output.

How far down should you go? It depends on how much data you have, how varied the answers are, and how the data is distributed. In general I use a depth of ((Math.Log(trainRows.Length) / Math.Log(2)) / 2.0), that is, half of log2 of the row count, but it varies from data set to data set. This is just where I start, and I adjust it if need be. In short, I go half way down to a specific answer.
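As a worked example of that rule of thumb (the 10,000-row figure below is made up purely for illustration):

using System;

class DepthSketch
{
    static void Main()
    {
        var trainRows = new double[10000][]; // pretend we have 10,000 training rows

        // Half of log2(row count), the starting point for tree depth mentioned above.
        double depth = (Math.Log(trainRows.Length) / Math.Log(2)) / 2.0;
        Console.WriteLine(depth); // ~6.64, so a depth of about 6 or 7
    }
}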

Now, if you limit the depth uniformly (or nearly uniformly), the terminal nodes will produce similar kinds of output. That is, the results will fall between the minimum and maximum score, just like any prediction you might make, but probably somewhere in the middle (that is important). Each result is also derived from the same number of decisions, so the answers' information content should on average be about the same. Because of this, when the next iteration gets its newly calculated target values, the range of those scores is in the worst case identical to the previous range. In any other case the range shrinks, since the low and high ends get pulled toward average values. So each pass probably shrinks the range of scores and at worst leaves it the same.

Also, the score input for the next black box call will still have most of the information in it, since all we have done is adjust scores based on one result, and we did it in the same way to a large number of rows, rows that share certain traits because of the previous tree's decisions. Doing this allows us to slowly tease out qualities of similar rows of data in each new tree. But since we start over with each new tree, the groupings end up different each time. In this way subtle shared qualities of rows can be expressed together once their remaining scores (original score minus all the black box calls) line up.

This brings us to the first question: how many calls should I do? To answer that accurately, it's important to know that the result that comes back is usually not added to the row's running score unmodified. Usually it is decreased by some fixed percentage. Why? This further reduces the influence any one decision tree has on the final prediction. In fact, I've seen people reduce the value to 1/100th of what the tree returned. Taking these tiny baby steps can help, but sometimes it just adds noise to the system as well, since each generation of a tree may have bias in it or might too strongly express a feature. In any case, it depends on your decision trees and your data.
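As a sketch of that shrinkage idea (the 0.1 below is just an illustrative learning rate, not a recommendation from the post, and the black box is again faked with a mean predictor so it runs on its own): each result is scaled before being added to the running prediction, and the next round's targets come from what is still unexplained.

using System;
using System.Linq;

class ShrinkageSketch
{
    static void Main()
    {
        double[] scores = { 2.0, 4.0, 9.0 };
        double[] running = new double[scores.Length]; // sum of scaled results so far
        double learningRate = 0.1;                    // fixed fraction kept from each result

        for (int pass = 0; pass < 50; pass++)
        {
            // What each row still needs to reach its original score.
            double[] remaining = scores.Select((s, i) => s - running[i]).ToArray();

            // Stand-in black box: predicts the mean of its targets.
            double boxOutput = remaining.Average();

            // Only a fraction of the result is added in.
            for (int i = 0; i < running.Length; i++)
                running[i] += learningRate * boxOutput;
        }

        Console.WriteLine(string.Join(", ", running)); // each creeps toward 5 (the overall mean)
    }
}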

This goes for how many iterations to do as well. At some point you have gotten as accurate as you can get, and doing further iterations overfits the training data and makes your results worse. Or, worse yet, it just adds noise as the program attempts to fit the noise of the data to the scores. In my case I test the predictions after each tree is generated to see if it is still adding accuracy. If I get 2 trees in a row that fail to produce an improvement, I quit (throwing away the final tree). Most programs just have hard iteration cut-off points. That works pretty well too, but it leads to a guessing game based on the parameters you have set up.
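Here is a sketch of that stopping rule. The tree training and the accuracy test are faked (a random number stands in for the measured error) purely so it runs; the shape of the loop is the point: keep going until two trees in a row fail to improve the check, throwing away any tree that didn't help (one reasonable reading of the rule above).

using System;
using System.Collections.Generic;

class EarlyStopSketch
{
    static void Main()
    {
        var trees = new List<Func<double[], double>>();
        double bestError = double.MaxValue;
        int failuresInARow = 0;
        var rng = new Random(42);

        while (failuresInARow < 2)
        {
            // Stand-in for training the next tree on the current remaining scores.
            Func<double[], double> tree = row => 0.0;
            trees.Add(tree);

            // Stand-in for re-testing the predictions with the new tree included.
            double error = rng.NextDouble();

            if (error < bestError)
            {
                bestError = error;
                failuresInARow = 0;
            }
            else
            {
                failuresInARow++;
                trees.RemoveAt(trees.Count - 1); // throw away the tree that didn't help
            }
        }

        Console.WriteLine($"Kept {trees.Count} trees, best error {bestError:F3}");
    }
}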