Dim Red Glow

A blog about data mining, games, stocks and adventures.

A contest in a week (part 4)

Wouldn't you know it: the one time I try to do a contest in a week (instead of over months), I actually get a big breakthrough with about 17 hours left to go. And now I don't have time to hone the result. Hehehe, oh well. Better that than no result. But I'm getting ahead of myself, let me catch you up.

Here’s a rundown of how my weekend went. Friday evening I went out and played cards (Magic: The Gathering, another hobby of mine). Then Friday night I worked on the contest till the wee hours of the morning. Usually this means I coded some, then watched TV while things ran. Wash, rinse, repeat till I finally call it a night. You can read the last post for details on how that went.

Then Saturday I skipped Mardi Gras (St. Louis has a rather large celebration, something I rather enjoy, and great weather for it this year) to work more on the program. My day involved programming, then playing Civilization 4 (any ole video game will do, I just happen to like that one :) ) or watching TV while things ran. I spent all day honing the Platt scaling and trying a few variations in my trees. I also added a few features, namely month, day, quarter, year, and day of week. Usually these are super strong indicators for sales and retail, but I wasn’t sure they would be useful here. They get used, but they don't change my score much.
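In case you want to picture it, those calendar features are just pulled straight off a date column, something like this (pandas here as a stand-in for my own feature code, and the "date" column name is made up):

```python
import pandas as pd

# Hypothetical frame with a "date" column; the contest's actual column
# names differ, this just shows the five features mentioned above.
df = pd.DataFrame({"date": pd.to_datetime(["2014-03-01", "2014-06-15", "2014-11-30"])})

df["year"] = df["date"].dt.year
df["quarter"] = df["date"].dt.quarter
df["month"] = df["date"].dt.month
df["day"] = df["date"].dt.day
df["day_of_week"] = df["date"].dt.dayofweek  # Monday=0 ... Sunday=6
```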

In the end I came to the conclusion that my best version of Platt scaling was only compensating for the noise in the results. Which is to say it was useless; I was just overfitting my local results. Saturday night I ran my program for about 8 hours doing 1380 trees (3 times my normal run of 460) and submitted it to the leaderboard. I moved up .00001, yep, the smallest amount you can register. How disappointing.
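If you've never seen Platt scaling, the idea is small: fit a logistic curve to your raw scores so they come out as calibrated probabilities. A minimal sketch (sklearn's logistic regression standing in for my own fit, so this is the shape of the technique, not my contest code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(scores_fit, y_fit, scores_new):
    """Fit a sigmoid mapping raw scores to probabilities, then apply it.
    Fitting on held-out (e.g. out-of-fold) scores is what keeps you from
    just memorizing your own noise (exactly the trap I fell into)."""
    lr = LogisticRegression()
    lr.fit(np.asarray(scores_fit).reshape(-1, 1), y_fit)
    return lr.predict_proba(np.asarray(scores_new).reshape(-1, 1))[:, 1]

# Toy usage with synthetic scores.
rng = np.random.default_rng(0)
raw = rng.normal(size=200)
y = (raw + rng.normal(scale=0.5, size=200) > 0).astype(int)
calibrated = platt_scale(raw[:150], y[:150], raw[150:])
```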

That just means my forest had really given all it could give. So Sunday was pretty much the same as Saturday, except for one major thing. Having spent all day Saturday cleaning/checking data and trying to hone my results, I was out of things to do! I couldn't even submit my program with a huge set of trees and expect a better result; I'd already done that. So that was it, I was out of things to do.... Well, everything except one thing, the elephant in the room. I could still try to work on my GBM model. For the uninitiated (are there any of those out there still?), GBM stands for gradient boosting machine.

It's been a "thing" for me for a long time. That is, I've tried time and time again to implement it, only to get poor results back. I could give you my best guesses as to why... I suppose part of it is that I'm always trying to do too much at once (improve while implementing). I don't have a test case to build against, to see it work right at first. I've probably implemented parts of it wrong. Perhaps sometimes I don't stay true to the core idea. Or maybe my trees don't work like conventional trees (they don't sometimes, in the mathematical sense). But you know, those are really guesses on my part. What matters is I decided to try yet again, because it was all I had left to do!

I put together a simple framework (no frills) that calls my standard tree, which uses logistic splits, and set about trying to get the settings right. It didn't take long before I got some that actually showed an improvement in my local tests! And I mean a sizable improvement. The rubber will meet the road when I actually make a submission, of course, but right now things look really good. Granted, it didn't improve accuracy, but accuracy isn't what matters for AUC-style scoring. Correlation coefficient and Gini are all I really care about (well, I'd just use AUC if I ever bothered to fix my calculation of it *grin*). Incidentally, I can get accuracy way up there if I use Platt scaling, but if I do, the other metrics go to crap. That just goes to show accuracy isn't what you care about this time.
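The core loop really is simple enough to sketch. This version uses sklearn's regression trees in place of my logistic-split trees, so treat it as the shape of the thing rather than my actual code:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbm_fit(X, y, n_rounds=15, eta=0.75, depth=7, feature_frac=0.68, seed=0):
    """Plain gradient boosting for a binary target with log loss: each round
    fits a tree to the negative gradient (y - p) on a random subset of
    features, and the raw score takes a step of size eta toward it."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    F = np.zeros(len(y))  # raw additive score, starts at zero
    trees, feats = [], []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-F))   # current probability estimate
        residual = y - p               # negative gradient of log loss
        cols = rng.choice(n_features, max(1, int(feature_frac * n_features)),
                          replace=False)
        tree = DecisionTreeRegressor(max_depth=depth).fit(X[:, cols], residual)
        F += eta * tree.predict(X[:, cols])
        trees.append(tree)
        feats.append(cols)
    return trees, feats

def gbm_predict(trees, feats, X, eta=0.75):
    F = np.zeros(len(X))
    for tree, cols in zip(trees, feats):
        F += eta * tree.predict(X[:, cols])
    return 1.0 / (1.0 + np.exp(-F))
```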

previous best (46 trees in a random forest - 3 fold CV)
gini-score: 0.91988301504356 - 0.96333

current best with GBM (15 trees in a GBM forest - don't judge me, I love forests - 3 fold CV)
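(Quick aside on that gini-score: the normalized Gini most contests use is just rescaled AUC, Gini = 2·AUC − 1. My own Gini comes out of my own code, so the numbers above won't necessarily match this exactly, but if you trust your AUC you get Gini for free:)

```python
from sklearn.metrics import roc_auc_score

def gini_score(y_true, y_score):
    # Normalized Gini is a linear rescaling of AUC.
    return 2.0 * roc_auc_score(y_true, y_score) - 1.0
```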

You are probably curious about the specifics of my GBM settings. Oddly, or perhaps not, much of it matches what people have already shared on the forums. I'm doing depth-7 trees and taking only 68% of the features for each GBM iteration. I tried using more and less depth, and I tried sub-selecting rows for each iteration, but it all made the score worse. I also tried knocking the feature selection down to like 33% just to see, but that hurt the results as well. Normally I would hone that too, but I'm out of time.

The other settings I'm using don't translate directly to the GBMs other people are using. For instance, my "eta" is .75, way more than some of the settings I saw. I tested it; that's the sweet spot. My nrounds is like 15 in the result above. Doing 1800 or whatever wouldn't be feasible with the way my tree works; it would take weeks to run, and the improvements would diminish so fast as to make no sense. Okay, well, maybe it would make sense if you had your eta at .01, but again... it would take days to run. Also, the difference between 10 nrounds and 15 is pretty small, so the benefit of ramping that up is clearly diminishing quickly.
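If you wanted to try the nearest off-the-shelf equivalent of those settings, they map roughly onto xgboost's knobs like this. Rough mapping only; my trees are homegrown, so don't expect identical behavior (the toy data is obviously made up too):

```python
import numpy as np
import xgboost as xgb

# Toy data just so this runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "max_depth": 7,            # my tree depth
    "colsample_bytree": 0.68,  # 68% of features per boosting round
    "eta": 0.75,               # way above the usual 0.01-0.1, but it's my sweet spot
    "eval_metric": "auc",
}
# nrounds ~ 15 in my runs; num_boost_round is the Python-API name for it.
booster = xgb.train(params, dtrain, num_boost_round=15)
```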

So that's been my weekend. (Throw in some cold pizza, 2 pots of coffee, a bowl of oatmeal, and 2 trips to Taco Bell and you have the full experience.) I'll be wrapping it up here in the next half hour or so, as I have to work in the morning; just as soon as I get a few more results back so I know what I can scale up for an overnight run to maximize my submission results. Once I've got that I'll hit the hay, and tomorrow.... oh tomorrow, tomorrow we see if it's all for naught. :)