Dim Red Glow

A blog about data mining, games, stocks and adventures.

Improving t-sne (part 2)

I ended up last time saying "This brings me to the big failing of t-sne. Sometimes (most of the time) the data is scored in a way that doesn't group on its own." I intend to try tackling that in today's post. I will tell you what I know, what I think might work and the initial results I've gotten so far.

Let me start out by saying I've been casually active in two kaggle contests as of late: https://www.kaggle.com/c/santander-customer-satisfaction and https://www.kaggle.com/c/bnp-paribas-cardif-claims-management . Both contests use a binary classifier; one scores using log-loss (which penalizes you very harshly for predicting wrong) and the other uses AUC (ROC if you prefer), though their AUC is predicted probability vs observed probability, which is a little weird for AUC, but whatever.

Both of these contests are kind of ideal for t-sne graphing in that there are only 2 colors to represent. Also, these contests represent different extremes. One has a very weak signal and the other is more of a 50/50. That is, in the weak-signal case the "No"s far outnumber the "Yes"s. It's easy to have the signal lost in the noise when the data is like that. If we can leverage t-sne to reduce our dimensions down to 2 or 3 and see via a picture that our groupings for positives and negatives are where they need to be, we probably really will have something.

T-sne gives us 1 very easy way to augment results and 1 not-so-easy way to augment them. The very easy way is to change how it calculates distances. It's important to remember that at its core it is trying to assign a probability to a distance for every point vs every other point and then rebalance all the point-to-point probabilities in a lower dimension. If we augment how it evaluates distance, by say expanding or collapsing the distance between points in a uniform way, we can change our results as certain dimensions will have more or less importance. The not-so-easy way I'll talk about later in the article.
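To make that concrete, here is a minimal sketch (C#, with a hypothetical per-dimension weights array) of the piece being augmented: turning a distance into a pairwise probability with a Gaussian kernel. Stretch or shrink a dimension before this step and you change every distance that dimension takes part in, and therefore every probability t-sne tries to preserve.

using System;

static class PairwiseAffinity
{
    // Squared distance with a per-dimension multiplier ("weights" is hypothetical --
    // a uniform stretch/shrink applied to each feature before distances are taken).
    public static double WeightedSquaredDistance(double[] a, double[] b, double[] weights)
    {
        double sum = 0.0;
        for (int d = 0; d < a.Length; d++)
        {
            double diff = (a[d] - b[d]) * weights[d];
            sum += diff * diff;
        }
        return sum;
    }

    // Unnormalized probability that point j is a neighbor of point i (Gaussian kernel).
    // In a full implementation sigma comes from the perplexity search and the values are
    // normalized over all j; this only shows where distance feeds into probability.
    public static double Affinity(double[] xi, double[] xj, double[] weights, double sigma)
        => Math.Exp(-WeightedSquaredDistance(xi, xj, weights) / (2.0 * sigma * sigma));
}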

I tried at first to improve the solutions using various mathematical transforms. Some were me taking dimensional values to powers (or roots). I tried principal component analysis. And I even tried periodically rebuilding the probabilities based on some ad hoc analysis of where the training data was laid out. All of this was a mess. Not only was I never really successful, but in the end I was trying to build a data mining tool inside my data mining tool!

Skipping a few days ahead, I started looking at ways to import results from a data mining tool to augment the dimensions. I tried using data from gradient boosting. I tracked which features got used and how often. Then when I used that data, I shrunk the dimensions by the proportion they were used. So if a feature/dimension was used 10% of the time (which is a TON of usage) that dimension got reduced to 10% of its original size. If it was used 0.1% of the time it got reduced to 1/1000th of its size. This produced... better results, but how much better wasn't clear, and I definitely wasn't producing results that made me go "OMG!" I was still missing something.
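A rough sketch of that rescaling, assuming you already have a usage fraction per feature from the GBM run (the array name is made up):

using System;

static class FeatureScaling
{
    // Shrink each dimension by the fraction of GBM splits that used it.
    // usageFraction[d] is assumed to be, e.g., 0.10 if feature d showed up in 10% of splits.
    public static double[][] ScaleByUsage(double[][] rows, double[] usageFraction)
    {
        var scaled = new double[rows.Length][];
        for (int r = 0; r < rows.Length; r++)
        {
            scaled[r] = new double[rows[r].Length];
            for (int d = 0; d < rows[r].Length; d++)
                scaled[r][d] = rows[r][d] * usageFraction[d];  // 10% usage -> 10% of original size
        }
        return scaled;
    }
}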

Now we come to the hard way to augment the results. We can build better data for the t-sne program to use. This is what I did: I abandoned the usage information entirely and tried using entirely new features generated by GBM. The first step was to build a number based on how the row flowed down the tree. Flowing left was a 0, flowing right was a 1. I actually recorded this backwards though, since at the final node I want the 0 or 1 to be the most significant digit, so like values ended up on the same end of the number line. Since GBM is run iteratively I could take each loop and use it as a new feature.
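Here is roughly what that path-to-number encoding looks like. The node structure below is a stand-in for whatever the real trees use; the point is only that left = 0, right = 1, and the split taken at the deepest node ends up as the most significant bit.

using System;
using System.Collections.Generic;

static class PathEncoding
{
    // Hypothetical tree node: a feature index, a threshold, and children (null at a leaf).
    public class Node
    {
        public int Feature;
        public double Threshold;
        public Node Left, Right;
    }

    // Record 0 for a left branch, 1 for a right branch, then pack the bits so the
    // last (deepest) decision becomes the most significant digit of the feature value.
    public static int EncodePath(Node root, double[] row)
    {
        var bits = new List<int>();
        for (Node n = root; n != null && n.Left != null; n = row[n.Feature] < n.Threshold ? n.Left : n.Right)
            bits.Add(row[n.Feature] < n.Threshold ? 0 : 1);

        int code = 0;
        for (int i = 0; i < bits.Count; i++)
            code |= bits[i] << i;   // root decision lands in the low bit, deepest decision in the high bit
        return code;
    }
}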

This didn't really work. I think because a lot of the splits are really noisy, and even with me making the last split the most significant digit, it's still way too noisy to be a useful way to look at the data. This brings me to my most recent idea, which finally bears some fruit.

I thought "Well if I just use the final answers instead of the flow of data, not only do i get a method for generating features that can produce either reals or categorical answers (depending on the data you are analyzing) but you also can use any method you like to generate the features. you arent stuck with fixed depth trees."  I stuck with GBM but turned off the fixed depth aspect of the tree and just built some results. I ran 5 iterations with 9 cross validation and kept the training set's predictions from the tree building exercise. The results are really promising. See below (note: I will continue to use my psedo-t-sne which runs in linear time due to time constraints. if you want "perfect results" you will have to go else where for now)

BNP

You can see the different folds being separated, since in this implementation they don't cover all the data (it zero-filled the rows it was missing). I have since fixed this. But yes, there were 9 folds.

Santander

Santander is the one I'm most impressed with; it's really hard to isolate those few positives into a group. BNP... is, well, less impressive. It was an easier data set to separate, so the results are meh at best. The rubber will really meet the road when I put the test data in there and do some secondary analysis on the 2-d version of the data using GBM or random forest.

It's important to know that I'm "cheating": I build my boosted trees from the same training data that I'm then graphing. But due to GBM's baby-step approach to solving the problem, and the fact that I use bagging at each step of the way to get accuracy, the results SHOULD translate into something that works for the testing data set. I'll only know for sure once I take those results you see above, put the test data in as well, and then send the whole 2-d result into another GBM and see what it can do with it. I'm hopeful (though prior experience tells me I shouldn't be).

*Note: Since I initially put this post together, I sat on it because I wanted to try it before I shared it with the world. I have finished using the initial results on the BNP data and they are biased... I think for obvious reasons: training on data, then using the results to train further... I'll talk more about it in t-sne (part 3). If you want to read more about it right now, check out this thread https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/forums/t/19940/t-sne-neatness/114259#post114259 *

For fun, here are the animations of those two images being made. (They are really big files, 27 and 14 meg... also Firefox doesn't like them much; try IE or Chrome.)

http://dimredglow.com/images/Santander.gif

http://dimredglow.com/images/bnp.gif

 

 

Improving T-SNE (part 1)

Well, that's what I've been doing (improving t-sne). What is t-sne? Here's a link and here is another link. The tldr version is that t-sne is intended for dimensional reduction of higher-order data sets. This helps us make pretty pictures when you reduce the data set down to 2-d (or sometimes 3-d). All t-sne is doing is trying to preserve the statistics of the features and re-represent the data in a smaller set of features (dimensions). This allows us to more easily see how the data points are organized. However, all too often your labeling and the reality of how the data is organized have very little in common. But when they do, the results are pretty impressive.

This isn't my first foray into messing with t-sne, as you can see here and in other places on kaggle if you do some digging. I should probably explain what I've already done and then I can explain what I've been doing. When I first started looking at the algorithm it was in the context of this contest/post. I did some investigation and found an implementation I could translate to C#. Then I started "messing" with it. It has a few limitations out of the box. First, the data needs to be real numbers, not categorical. Second, the run time does not scale well with large sets of data.

The second problem has apparently been addressed using the Barnes-Hut n-body simulation technique, which (as I understand it) windows areas that have little to no influence on other areas. But you can read about it there. This drops the runtime from N squared to n log n, which is a HUGE improvement. I tried to go one better and make it linear. The results I came up with are "meh". They work in linear time but are fuzzy results at best. Not the perfect ones you see from the Barnes-Hut simulations.

Before I say what I did, let me explain what the algorithm does. It creates a statistical model of the different features of the data, getting a probability picture of any given value. Then it randomly scatters the data points in X dimensions (this will eventually be the output and is where the dimensional reduction comes from). Then it starts moving those points step by step in a process that attempts to find a happy medium where the points are balanced statistically. That is, the points pull and push on each other based on how out of place they are with respect to all other points. They self-organize via the algorithm. The movement/momentum each row/point gains in each direction is set by the influence those other points should have, based on how far away they are and how far away they should be given their particular likelihood. I've toyed with making an animation out of each step. I'll probably do that some day so people (like me) can see first hand what is going on.
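For readers who want to see the shape of that loop, here is a stripped-down sketch of one gradient step (plain pairwise kernels, no Barnes-Hut, no momentum or early exaggeration); it is illustrative only, not the exact implementation described above. P is the precomputed high-dimensional probability matrix and Y holds the low-dimensional points.

using System;

static class TsneStep
{
    // One gradient-descent step on the low-dimensional points Y.
    public static void Step(double[][] Y, double[][] P, double learningRate)
    {
        int n = Y.Length, dims = Y[0].Length;

        // Student-t similarities in the low-dimensional space.
        var num = new double[n, n];
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j)
                {
                    double d2 = 0.0;
                    for (int d = 0; d < dims; d++) { double diff = Y[i][d] - Y[j][d]; d2 += diff * diff; }
                    num[i, j] = 1.0 / (1.0 + d2);
                    sum += num[i, j];
                }

        // Points push and pull on each other according to how far off their pairwise
        // probabilities are (p_ij vs q_ij), the "self-organizing" motion described above.
        for (int i = 0; i < n; i++)
        {
            var grad = new double[dims];
            for (int j = 0; j < n; j++)
            {
                if (i == j) continue;
                double q = num[i, j] / sum;
                double mult = (P[i][j] - q) * num[i, j];
                for (int d = 0; d < dims; d++) grad[d] += 4.0 * mult * (Y[i][d] - Y[j][d]);
            }
            for (int d = 0; d < dims; d++) Y[i][d] -= learningRate * grad[d];
        }
    }
}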

So to improve it, what I did is split the data into a whole bunch of smaller sections (windows), each of the same size. Each window, though, shares a set of points with one master section. These master points are placed in the same seed positions in all windows and have normal influence in all windows. They get moved and jostled in each window, but when it comes time to display results, we have some calculations to do. The zero window is left unaltered, but every other window has its non-shared points moved to positions relative to the closest shared point. So if, say, row 4000 isn't shared across windows and its closest shared row is row 7421, we see what its dimensional offset from row 7421 is and place it according to that.

The idea is that statistically speaking the influence of those points is the same as any other points'. And the influence the points in the main window have on those points is approximately the same as well. The net effect is that all the points move and are cajoled into groups as if they were one big window. This of course is why my results are fuzzy: only a tiny amount of data is truly drawing the picture. Normally all the interactions between all the points would be happening. But because I wrote it the way I did, my results scale linearly.
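A sketch of that final placement step, under the assumption that each window has already been run: every non-shared point is expressed as an offset from its nearest shared (master) point within its own window, then dropped into the zero window at that master point's position plus the same offset.

using System;

static class WindowPlacement
{
    // Place a non-shared point from some window into the master (zero-window) coordinate system.
    // sharedInWindow / sharedInMaster: positions of the same master points in the two spaces.
    public static double[] PlaceRelative(double[] point, double[][] sharedInWindow, double[][] sharedInMaster)
    {
        // Find the closest shared point inside this window.
        int best = 0; double bestDist = double.MaxValue;
        for (int s = 0; s < sharedInWindow.Length; s++)
        {
            double d2 = 0.0;
            for (int d = 0; d < point.Length; d++) { double diff = point[d] - sharedInWindow[s][d]; d2 += diff * diff; }
            if (d2 < bestDist) { bestDist = d2; best = s; }
        }

        // Keep the same offset from that shared point, but measured in the master window.
        var placed = new double[point.Length];
        for (int d = 0; d < point.Length; d++)
            placed[d] = sharedInMaster[best][d] + (point[d] - sharedInWindow[best][d]);
        return placed;
    }
}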

Here's an example from the original Otto Group data. First, a rendering of 2048 randomly selected rows (of the 61876) with 1500 steps using the traditional t-sne. It took about 10 minutes. A 4096-row run should have taken something like 30 minutes (probably 5 of the 10 minutes of processing was more or less linear)... I'm ballparking; it definitely would have taken a long while.

Now here's my result for all 61876 rows using the windowing technique, where I windowed with a size of 1024 (note: if I did 2048, it would have taken longer than the above photo since each window would take that long). 1500 steps... run time was like 50 minutes.

You can see what I mean by it not being as good at making clear separations. The first image is MUCH better about keeping one group out of another... but the rendering time is SOOOOO much faster in the 2nd when you think about how much it did. And it is parallelizable (which I'm taking advantage of) since the various windows can run concurrently. Larger windows make the results better but slow everything down. I believe there are 9 different groups in there (the coloring is kind of bad; I really need a good system for making the colors as far away from each other as possible).

Someday (not likely soon) I will probably combine the n-body solution and my linear rendering to get the best of both worlds. So each window will render as fast as possible (n log n time in the window size) and the whole thing will scale out linearly and be parallelizable.

This brings me to the big failing of t-sne. Sometimes (most of the time) the data is scored in a way that doesn't group on its own. See, the groupings are based on the features you send in; the algorithm doesn't actually look at the score or category you have on the data. The only thing the score is used for is coloring the picture after it is rendered. What I need is a way to teach it what is really important, not just statistically significant. I'll talk about that next time!

Gradient Boosting (Part 2)

This time around let’s start from scratch. We have some training data. Each row of the data has various features with real number values and a score attached to it. We want to build a system that produces those scores from the training data. If we forget for a second "how" and think about it generically, we can say that there is some magical box that gives us a result. The result isn't perfect, but it's better than a random guess.

The results that we get back from this black box are a good first step, but what we really need is to further improve the results. To do that we take the training data and, instead of using the scores we were given, we assign new scores to the rows. This new score is how much the black box prediction was off by.

Now we send this data in to a new black box and get back a result. This result isn't perfect either, but it's still better than a random guess. The two results combined then give us a better answer than just the first black box call. We repeat this process over and over, making a series of black box calls that slowly gets the sum of all the results to match the original scores. If we then send a single test row (that we don't know the score to) into each of these black boxes that have been trained on the training data, the results can be added up to produce the score the test data has.

In math this looks something like this

f(x) = g(x) + g'(x) + g''(x) + g'''(x) + g''''(x) .... (etc)

Where f(x) is our score and g(x) is our black box call. g'(x) is the 2nd black box call, which was trained with the adjusted scores, g''(x) is the 3rd black box call with the scores still further adjusted, etc...
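In code the whole idea fits in a few lines. trainShallowTree here is a stand-in for the depth-limited black box described below; everything else is just the residual bookkeeping.

using System;
using System.Collections.Generic;

static class ResidualBoosting
{
    // f(x) = g(x) + g'(x) + g''(x) + ...  Each tree is trained on what is left of the score.
    public static List<Func<double[], double>> Fit(double[][] rows, double[] scores, int iterations,
        Func<double[][], double[], Func<double[], double>> trainShallowTree)
    {
        var models = new List<Func<double[], double>>();
        var residual = (double[])scores.Clone();

        for (int k = 0; k < iterations; k++)
        {
            var g = trainShallowTree(rows, residual);    // black box call on the current residuals
            models.Add(g);
            for (int i = 0; i < rows.Length; i++)
                residual[i] -= g(rows[i]);               // new target = how much we are still off by
        }
        return models;
    }

    // The prediction for a new row is just the sum of every black box call.
    public static double Predict(List<Func<double[], double>> models, double[] row)
    {
        double total = 0.0;
        foreach (var g in models) total += g(row);
        return total;
    }
}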

A few questions should arise. First, how many subsequent calls do we do? And second, what exactly is the black box?

I'll answer the second question first. The black box (at least in my case) is the lowly decision tree. Specifically, it is a tree that has been limited so that it terminates before it gets to a singular answer. In fact it is generally stopped while large fractions of the training data are still grouped together. The terminal nodes just give averages of the scores of the group at that node. It is important that you limit the tree because you want to build an answer that is good in lots of cases.

Why? Because if you build specific answers and the answer is wrong, correcting the result is nearly impossible. Was it wrong because of a lack of data? Was it wrong because you split your decision tree on the wrong feature 3 nodes down? Was it wrong because this is an outlier? Any one of these things could be true, so to eliminate them all as possibilities you stop relatively quickly in building your tree and get an answer that is never exactly right but at least puts the results into a group. If you go too far down you start introducing noise in the results. Noise that creeps in because your answers are too specific. It adds static to the output.

How far down should you go? It depends on how much data you have, how varied the answers are and how the data is distributed. In general I do a depth of ((Math.Log(trainRows.Length) / Math.Log(2)) / 2.0), but it varies from data set to data set. (For 65,536 training rows that's log2(65536) = 16, so a depth of 8.) This is just where I start and I adjust it if need be. In short, I go halfway down to a specific answer.

Now if you limit the depth uniformly (or nearly uniformly), the results from each node will have similar output. That is, the results will fall between the minimum and maximum score just like any prediction you might make, but probably somewhere in the middle (that is important). Each result will also have the same number of decisions used to derive it, so the answers' information content should on average be about the same. Because of this, the next iteration will have newly calculated target values whose range in the worst case is identical to the previous range. In any other case the range is decreased, since the low and high ends got smothered into average values. So it is probably shrinking the range of scores and at worst leaving the range the same.

Also, the score input for the next black box call will still have most of the information in it, since all we have done is adjust scores based on 1 result and we did it in the same way to a large number of results. Results that, based on the previous tree's decisions, share certain traits. Doing this allows us to slowly tease out qualities of similar rows of data in each new tree. But since we are starting over with each new tree, the groupings end up different each time. In this way subtle shared qualities of rows can be expressed together once their remaining score (original score minus all the black box calls) lines up.

This brings us to the first question: how many calls should I do? To answer that accurately it's important to know that usually the result that is returned is not added to the row's final score unmodified. Usually you decrease the value by some fixed percentage. Why? This further reduces the influence that decision tree has on the final prediction. In fact I've seen times where people reduce the number to 1/100th of its returned value. Doing these tiny baby steps can help, but sometimes it just adds noise to the system as well, since each generation of a tree may have bias in it or might too strongly express a feature. In any case it depends on your decision trees and your data.

This goes for how many iterations to do as well. At some point you have gotten as accurate as you can get, and doing further iterations overfits to the training data and makes your results worse. Or worse yet it just adds noise, as the program attempts to fit the noise of the data to the scores. In my case I test the predictions after each tree is generated to see if it is still adding accuracy. If I get 2 trees in a row that fail to produce an improvement, I quit (throwing away the final tree). Most programs just have hard iteration cutoff points. This works pretty well, but leads to a guessing game based on the parameters you have set up.
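The two knobs from the last two paragraphs, the shrinkage factor and the stop-after-two-bad-trees rule, slot into that same loop roughly like this (again only a sketch; trainShallowTree and validationError are hypothetical stand-ins for the real pieces):

using System;
using System.Collections.Generic;

static class BoostingWithStopping
{
    public static List<Func<double[], double>> Fit(double[][] rows, double[] scores,
        Func<double[][], double[], Func<double[], double>> trainShallowTree,
        Func<List<Func<double[], double>>, double> validationError,
        double eta = 0.1, int maxIterations = 1000)
    {
        var models = new List<Func<double[], double>>();
        var residual = (double[])scores.Clone();
        double bestError = double.MaxValue;
        int bestCount = 0, badInARow = 0;

        for (int k = 0; k < maxIterations; k++)
        {
            var g = trainShallowTree(rows, residual);
            models.Add(g);
            for (int i = 0; i < rows.Length; i++)
                residual[i] -= eta * g(rows[i]);   // only a fraction of each tree's answer is kept
                                                   // (the same eta scales each tree at prediction time)
            double err = validationError(models);
            if (err < bestError) { bestError = err; bestCount = models.Count; badInARow = 0; }
            else if (++badInARow >= 2)             // two trees in a row without improvement: quit
            {
                models.RemoveRange(bestCount, models.Count - bestCount);  // roll back to the best set
                break;
            }
        }
        return models;
    }
}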

 

 

 

Gradient Boosting (Part 1)

Okay! So you want to learn about gradient boosting. Well, first let me point you to the obvious source https://en.wikipedia.org/wiki/Gradient_boosting I'll wait for you to go read it and come back.

Back? Think you understand it? Good! Then you don't need to read further... probably. I should warn you now: in the strictest sense, this post is entirely backstory on getting to the point of implementing gradient boosting. You might want to skip to part 2 if you want more explanation of what gradient boosting is doing.

When I first tackled Gradient boosting, I tried it and it didn't work. What I mean to say is I got worse results than Random Forest https://en.wikipedia.org/wiki/Random_forest. Perhaps I'm getting ahead of myself. Let me back up a little more and explain my perspective.

Most people at https://www.kaggle.com/ use tool kits or languages with libraries and built in implementations of all the core functionality they need. That is, the tool kits that they use have everything written for them. They make API calls that perform the bulk of the work. They just assemble the pieces and configure the settings for the calls. 

I write my own code for pretty much everything when it comes to data mining. While I don't reinvent things I have no plans on improving, there aren't too many things like that. I didn't write my own version of PCA https://en.wikipedia.org/wiki/Principal_component_analysis ; I use the one that was out there in a library on the rare occasion I want to use it. And while I've got my own version of TSNE https://lvdmaaten.github.io/tsne/, it was a rewrite of a javascript implementation that someone else had written. Granted, I've tweaked the code a lot for speed and to do some interesting things with it, but I didn't sit down with a blank class file and just write it. But everything else I've written all by myself.

So why does that make a difference (toolkit vs handwritten)? Well, I try stuff and have to figure things out. And because of that my version of the technology might work in a fundamentally different way. Or perhaps what I settle on isn't as good (though it probably seems better at the time). Then when I try and leverage that code for the implementation of gradient boosting, it doesn't work like it should.

The core of gradient boosting and random forests is the decision tree https://en.wikipedia.org/wiki/Decision_tree_learning. When it comes to random forests I have been very pleased with the tree algorithm I've designed. However, they just didn't seem to work well stubbed out for gradient boosting. I can only think of three explanations for this.

  1. My stubbed out trees tended to be biased a certain way.
  2. They have a lot of noise in the output.
  3. My gradient boosting algorithm got mucked up due to poor implementation.

That at least is the best I can figure.

Recently I made a giant leap forward in my decision tree technology. I had an 'AH HA!' moment and changed my tree implementation to use my new method. When I got it all right my scores went up like 10% (ball-parking here), and all of a sudden when I tried gradient boosting it worked as well. The results I got with that were fully another 15-20% better still! All at once I felt like a contender. The rest of the world had long since moved on to XGBoost https://github.com/dmlc/xgboost which was leaving me in the dust. Not so much anymore, but I still haven't won any contests or, like, made a million on the stock market :) .

What changed in my trees that made all the difference? I started using the sigmoid https://en.wikipedia.org/wiki/Logistic_function to make my splitters. I had tried that as well many years ago, but the specific implementation was key. The ah-ha moment was when I realized how to directly translate my data into a perfect splitter without any messing around "honing in" on the best split. I think this technology not only gives me a great tree based on a better statistical model than using accuracy alone (accuracy is how my old tree worked), but the results are also more "smooth", so noise is less of an issue.
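The post doesn't spell the splitter out, so the snippet below is only the logistic function itself plus the generic idea of a soft split weight; the threshold and steepness parameters are illustrative, not the author's "perfect splitter" construction.

using System;

static class LogisticSplit
{
    // The standard logistic (sigmoid) function.
    public static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // Generic soft split: instead of a hard left/right decision at a threshold, a row gets
    // a weight between 0 and 1 for the "right" branch. Hypothetical parameters for illustration.
    public static double RightWeight(double featureValue, double threshold, double steepness)
        => Sigmoid(steepness * (featureValue - threshold));
}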

 

 

Gradient boosting series preamble

Hello folks! I'm going to do another series of blogs, this time on gradient boosting. This series will be on-going with no definite time line. I expect it will only take 1, maybe 2 posts to cover the basics of gradient boosting. After that I want to spend some time exploring possible ways to improve it. I may find none; we'll see when we get there. Allow me to catch you up from last week and explain some of my motivation in doing this series.

Last time I wrote, I had finished 1 kaggle contest and was moving back to another I had previously started. That contest is now over, having ended some 4 hours ago. If you go and look at the leader board you'll actually see I fell some 300 spots after results got posted. It seems I forgot to move my selected submission to the more recent submission I made *grin*. Well, it didn't matter much anyways. I didn't get much of a chance to work on the contest this week, and while the gradient boosting version of my code worked much better, nothing was better than my initial submission with it. And that submission is still a seriously far cry from the top of the leader board.

The time I did spend on my code for the contest was spent half trying to improve my code and half trying to find optimal values for the contest. Neither of which was much of a success. The time I spent away from the computer just thinking about boosting in general has led me back here. I find that the way that Gradient boosting has been explained to me over and over again was possibly part of the reason I was complacent about getting it working. I want to present it in another way that perhaps is new to some people, maybe most people. A change of perspective if you will. I'll get on about that when I do my first post in the series.

I'm hoping this all leads to something better. I'm optimistic, but as with all things there is a functional best version. You don't see people really improving hash sort or bucket sort. And quicksort is really about as good as it gets for a comparison sort. Compression only goes so far too. I remember back in the 80s and 90s when there always seemed to be a new compression algorithm for better space savings. At some point that stopped, at least when it comes to lossless compression. (I'm still waiting on someone to figure out lossless fractal compression.) It seems possible, but unlikely, that GBM is the end of the road for general data mining.

 

A contest in a week (part 5)

It's over! Time for the epilogue. The winner scored 0.97024; I came in at 0.96477 on the private leader board, with 1056 people between us. :) My GBM submission this morning moved me up a ton. I also did a follow-up submission later which moved me up a little. Oh, if only I had another week. :) But that's okay! I'm sure I would have improved my score, but getting past 0.97024, well, that would have been something.

I did find there were some more tweaks to be made that could bring some real improvement, but I ran out of time. Specifically, the feature selection I did on each round; I hadn't honed that very well. Also, while making a forest of the results does improve results in general, adding rounds to the GBM was my path to most success. If I had more time, I think I would have at least figured out the breaking point where GBM stops making gains. That is, unlike random forest, when you make too many trees the noise starts dominating.

My next challenge is one I was actually already working on, https://www.kaggle.com/c/prudential-life-insurance-assessment . I'll take what I've created here and go apply it there. I'm a long way from the top there as well, but, well, I didn't have GBM. :) 'Sides, Prudential pays more... ;)

 

A contest in a week (part 4)

Wouldn't it be the case that the one time I try to do a contest in a week (instead of doing it over months), I actually get a big breakthrough with about 17 hours left to go. And now I don't have time to hone the result. Hehehe, oh well. Better that than no result. I'm getting ahead of myself; let me catch you up.

Here's a rundown of how my weekend went. Friday evening I went out and played cards (Magic: The Gathering, another hobby of mine). Then Friday night I worked on the contest till the wee hours of the morning. Usually this means I coded some, then watched TV while things ran. Wash, rinse, repeat till I finally call it a night. You can read the last post for details on how that went.

Then Saturday I skipped Mardi Gras (St. Louis has a rather large celebration, something I rather enjoy, and great weather for it this year) only to work more on the program. My day involved programming and then playing Civilization 4 (any ole video game will do, I just happen to like that one :) ) or watching TV while things ran. I spent all day honing the Platt scaling and trying a few variations in my trees. I also added a few features, namely month, day, quarter, year and day of week. Usually these are super strong indicators for sales and retail; I wasn't sure they would be useful here. They get used, but they don't change my score much.

In the end I came to the conclusion that my best version of Platt scaling was only compensating for the noise in the results. Which is to say it was useless; I was just overfitting my local results. Saturday night I ran my program for about 8 hours doing 1380 trees (3 times my normal run of 460) and submitted it to the leaderboard. I moved up 0.00001, yep, the smallest amount you can register. How disappointing.

That just means my forest had really given all it could give. So Sunday was pretty much the same as Saturday except for one major thing. Having spent all day Saturday cleaning/checking data and trying to hone my results, I was out of things to do! I couldn't even submit my program with a huge set of trees and expect a better result. I already did that. So that was it, I was out of things to do.... Well, everything except 1 thing, the elephant in the room. I still could try and work on my GBM model. For the uninitiated (are there any of those out there still?) GBM is a gradient boosting machine.

It's been a "thing" for me for a long time. That is I've tried time and time again to implement it only to get poor results back. I could give you my best guesses as to why... I suppose part of it is I'm always trying to do too much at once (improve while implementing). I don’t have a test case to build to, to see it work right at first. I probably implemented parts of it wrong. Perhaps sometimes I don’t stay true to the core idea. Or maybe my trees don't work like conventional trees (they don't sometimes, in the mathematical sense). But you know, those are really guesses on my part. What matters is I decided to try yet again because it’s all I had left to do!

I put together a simple framework (no frills) that calls my standard tree, which uses logistic splits, and set about trying to get some settings right. It didn't take long before I got some that actually showed an improvement in my local tests! And I mean a sizable improvement. The rubber will meet the road when I actually make a submission of course, but right now things look really good. Granted, it didn't improve accuracy, but that's not important for AUC. Correlation coefficient and Gini are all I really care about (well, I'd just use AUC if I ever bothered to fix the calculation *grin*). Incidentally, I can actually get accuracy way up there if I use Platt scaling. If I do, though, the other metrics go to crap. That just goes to show accuracy is not what you care about this time.

previous best (46 trees in a random forest - 3 fold CV)
gini-score:0.91988301504356 - 0.96333
CorCoefRR:0.635294182255424
Accuracy:0.887165645525564
LogLoss:NaN
AUC:1
RMSE:1.18455684615341
MAE:0.529971023064858
Rmsle:0.295349461579467
Calibration:0.0801169849564404

current best with GBM (15 trees in a GBM forest - don't judge me, I love forests - 3 fold CV)
gini-score:0.925589843599255
CorCoefRR:0.64803931053647
Accuracy:0.86189710899676
LogLoss:NaN
AUC:1
RMSE:1.15794907138431
MAE:0.588354750026592
Rmsle:0.303147411676309
Calibration:0.074410156400745

You are probably curious about the specifics of my GBM settings. Oddly, or perhaps not, much of it people have already shared on the forums. I'm doing 7-depth trees and I'm taking only 68% of the features for each GBM iteration. I tried using more or less depth and I tried sub-selecting rows for each iteration, but it all made the score worse. I also tried knocking down the feature selection to like 33% just to see, but it hurt the results as well. Normally I would hone that too, but I'm out of time.

The other settings I'm using don't translate to the GBMs other people are using. For instance my "eta" is 0.75, way more than some of the settings I saw. I tested it; that's the sweet spot. My nrounds is like 15 in that result above. Doing 1800 or whatever wouldn't be feasible with the way my tree works. It would take weeks to run and the improvements would diminish so fast as to not make any sense in doing it. Okay, well, maybe it would make sense if you had your eta at 0.01, but again... it would take days to run. Also, the difference between 10 nrounds and 15 is pretty small, so clearly the benefit of ramping that up is diminishing quickly.

So that's been my weekend. (Throw in some cold pizza, 2 pots of coffee, a bowl of oatmeal and 2 trips to Taco Bell and you now have the full experience.) I'll be wrapping it up here in the next half hour or so as I have to work in the morning. Just as soon as I get a few more results back so I know what I can scale up for a run overnight to maximize my submission results. Once I've got that I'll hit the hay, and tomorrow.... oh tomorrow, tomorrow we see if it's all for naught. :)

 

A contest in a week (part 3)

A few more days have passed and I've moved my score up only a tiny bit. I now sit at 0.96333. I think at this point I might actually be moving down the leader board as people pass me. I've spent quite a bit of time mulling over possible changes in the bagging process and trying a few ideas. I don't think there is anything for me to do there right now. In short, what I have now works just fine and there are no obvious improvements.

The Platt scaling I mentioned before might still produce some beneficial results. I tried using the scaling I had in place for a different contest but it doesn't seem to translate. It gave me higher accuracy but destroyed the order, which is all AUC cares about. I'm going to give it another crack by saving off cross validation results and seeing if I can improve them by running the sigmoid function over them using different coefficients. I store the total weight (the accuracy from each tree from bagging, used to weight the voting of each tree to make the final results), so I actually have 2 inputs and can do something more interesting than a straight translation.

When I do the work on that Platt scaling I will likely want to graph the formula I create. Years ago I found a really nice online graphing tool and thought I'd share. You can check it out here https://rechneronline.de/function-graphs/

I wrote a little pivot SQL to help me look at the data. Normally I wouldn't share this kind of thing as it tends to be specific to your data storage. However, I will share it as it has applications anywhere you have a key-value/name-value pair table you want to pivot where your column names go 0,1,2,3,4,5, etc. (I have an attribute table with the labels.) So for any Google searchers that are looking for this sort of thing, here you go. I tried making it generic enough to understand and be reusable.

 

-- Pivot a key-value table whose column names are simply 0,1,2,3,... into one wide row per RowNumber.
declare @sql as nvarchar(max)
declare @nullSql as nvarchar(max)
declare @columnCount as int
declare @n as int

select @columnCount = 1000   -- number of pivoted columns to generate
select @n = 1
select @sql = '[0]'
select @nullSql = 'isnull([0],0) as [0]'

-- Build the column lists: [0],[1],[2],... and isnull([0],0) as [0], isnull([1],0) as [1], ...
while (@n < @columnCount)
begin
	select @sql = @sql + ',['+cast(@n as varchar(20))+']'
	select @nullSql = @nullSql + ',isnull(['+cast(@n as varchar(20))+'],0) as ['+cast(@n as varchar(20))+']'
	select @n = @n + 1
end

select @sql = N'SELECT RowNumber,' + @nullSql + ' FROM (
    SELECT 
        RowNumber,ColumnNumber, value
    FROM KeyValueTable
) as sel
PIVOT
(
		SUM(value)
    FOR ColumnNumber IN (' + @sql + ')
) AS pvt 
'

EXECUTE sp_executesql @sql

Oh, one thing I noticed when I went and used it: I had imported my date field wrong! There was a bug in my code and it turns out my date field was pretty much garbage. I fixed it, but this just goes to show you should always double check your inputs to make sure they are good. This isn't the first time this sort of thing has happened; I've wasted weeks and weeks before on bad data. I did spot-check my data, I just missed this. That particular column was special, being a date and all. The date format was yyyy-mm-dd; my loader had only ever dealt with dates in yyyy/mm/dd format, and the difference is what made it not work right.

I took a look at the forums to see if there were any obvious insights people have shared that I needed to implement. I already mentioned I don't spend a lot of time trying to learn the data and hand-massage it to be exactly what I need. To really excel at any data mining competition you should do that. In the business world you never touch the algorithms. That's what R&D and PhDs do (and me, apparently). You just buy the tools and use them. Which of course means the stronger my tool kit gets, the more I do that sort of thing, 'cause that's where the gains are.

I like to think I get some deep understanding of the nature of data interactions by taking the long road, but I'm probably fooling myself. :) There was at least one obvious thing I found in the forums (there may be more, I need to go back and look). I needed to create a column tallying how many pieces of missing data there are in each row. That's exactly the sort of thing my algorithm has a hard time determining on its own. It is what gave me my tiny bump in score. It's also the sort of thing a genetic algorithm might find on its own... but that's something I'll save for another series. (I have such delights to share with you all!)

I also tried some TSNE transforms on the data. There is a nice thread about this on the forums https://www.kaggle.com/c/homesite-quote-conversion/forums/t/18554/visualization-of-observations . One guy in particular managed to get the output to look really nice. Unfortunately, the few tests I did produced the stringy-looking results you can see in that thread as well. He mentioned he did a replacement of the categorical values with the average score for that category. This makes a lot of sense, as raw whole numbers representing a category are meaningless. Also it is probably a far better technique than the one-hot encoding method when it comes to that process. TSNE wants related data to be in 1 feature so it can figure out the connection to other features. Separating it out messes this up, as it doesn't know two features are actually one. As I only have a week and we are down to 2+ days, I won't be revisiting my TSNE work anymore for this contest. However, it's definitely something to remember: feeding the TSNE results into your model could very well produce a winning score.

Incidentally, I do the same category-to-real-number transformation when calculating correlation coefficients on categorical values when I'm figuring splits down the tree. I don't always do one-hot encoding. In fact I only do it afterwards when looking for improvements.

So my next steps are looking at making a transformation using Platt scaling on my results and looking at the data for obvious things I might try to improve the score. More reviewing of the forums to see if there are other "you need to do this" posts. And just general noodling on what might work.

 

 

A contest in a week (part 2)

Here we are a few days later. If you look at the leader board https://www.kaggle.com/c/homesite-quote-conversion/leaderboard the current best is 0.97062. I am still way below that at 0.96305. That doesn't sound like much, but evidently this is a pretty straightforward set of data and the value is in eking out that last little bit of score. My score IS higher than last time I posted; it was improved 2 different times.

It first went up when I realized the data I had imported had features that had been flagged categorical. My importer does the best it can to figure that sort of thing out, but it's very, very far from perfect. Just because values are whole numbers doesn't mean the feature is necessarily categorical, and even if it DOES have whole real numbers it doesn't mean it should be treated as a linear progression. In short, I changed all the features to be considered real numbers and this gave me a large gain.

I should mention I handle categorical features differently than real number features. Basically I make bags that represent some of the feature values and try to make the bags the same size. In cases where I need to evaluate a feature as a number and it's not been changed to a yes or a no, I look at the training data and find an average value for that category. It's not perfect, but in some cases it's way faster than splitting out 10000 categories, especially when you want to see if a particular feature has some sort of correlation with the scores.
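That category-to-average-score replacement is essentially mean-target encoding; here is a minimal sketch (column and variable names are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;

static class TargetEncoding
{
    // Replace each categorical value with the average training score for that category.
    public static Dictionary<string, double> BuildMap(string[] categories, double[] scores)
    {
        return categories
            .Select((cat, i) => (cat, score: scores[i]))
            .GroupBy(p => p.cat)
            .ToDictionary(g => g.Key, g => g.Average(p => p.score));
    }

    public static double Encode(Dictionary<string, double> map, string category, double fallback)
        => map.TryGetValue(category, out var v) ? v : fallback;   // unseen categories fall back to a default
}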

The 2nd, much smaller gain came from flagging any features with 10 or fewer values and telling the system to treat them as categorical fields (chosen mainly as a cutoff to keep the total feature count down to a minimum). Then I went ahead and did a "one-hot encoding" style split on those features to get them into their component parts. That is, you take all the possible values for a feature and give each its own feature: 5 different values means 5 different features. Each new feature then has either a 0 or 1 indicating if that value is present. I flag all those new features as real number features and not categorical features, and I turn off the original feature. This gave me a small gain.
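And the one-hot style split described above, sketched for a single low-cardinality feature (each distinct value becomes its own 0/1 column):

using System;
using System.Collections.Generic;
using System.Linq;

static class OneHot
{
    // Turn one categorical column into one 0/1 column per distinct value.
    public static double[][] Encode(string[] column)
    {
        var values = column.Distinct().OrderBy(v => v).ToArray();
        return column
            .Select(v => values.Select(u => u == v ? 1.0 : 0.0).ToArray())
            .ToArray();
    }
}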

My current internal testing result looks like this:


rfx3 - 460 trees - new cat setup - 0.96305
gini-score:0.919898981627662
CorCoefRR:0.634927047984772
Accuracy:0.886946585874578
LogLoss:NaN
AUC:1
RMSE:1.18522649708472
MAE:0.530871232940142
Rmsle:0.295421422778189
Calibration:0.0801010183723376

I mentioned before I should probably fix AUC, as that is really what ROC is. I did take a glance at it and didn't see anything obviously wrong, but before this is over I'll almost definitely have to get in there and fix it. I've continued using Gini 'cause it seems pretty close.

My next steps are to figure out if there is a way I can reduce noise and/or if there is a way I can increase the weight of correct values. In a different contest I wrote a mechanism that attempts to do a version of Platt scaling https://en.wikipedia.org/wiki/Platt_scaling (which, really, is all my sigmoid splitters are) on the result to better pick whole number answers to an exact value. I didn't do a standard implementation, which is very me. It worked really well. This contest does not use whole values though, so I'll have to go take a look at the code to see if I can make it work for that. To be clear, I intend on getting the appropriate weight for any given result from the Platt scaling.
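For reference, Platt scaling itself is just a two-parameter sigmoid fitted to held-out scores. The sketch below writes it in standard logistic-regression form and fits the parameters with plain gradient descent on log-loss; that is the textbook idea, not the non-standard mechanism described above.

using System;

static class PlattScaling
{
    // Calibrated probability from a raw score: p = 1 / (1 + exp(-(a*score + b))).
    // (Platt's paper writes the exponent as A*f + B; same thing up to the sign of the coefficients.)
    public static double Calibrate(double rawScore, double a, double b)
        => 1.0 / (1.0 + Math.Exp(-(a * rawScore + b)));

    // Rough fit of a and b by gradient descent on log-loss over held-out (score, label) pairs.
    // A production implementation would use Platt's Newton-style procedure; this is only illustrative.
    public static (double a, double b) Fit(double[] scores, int[] labels, double lr = 0.01, int steps = 5000)
    {
        double a = 1.0, b = 0.0;
        for (int s = 0; s < steps; s++)
        {
            double gradA = 0.0, gradB = 0.0;
            for (int i = 0; i < scores.Length; i++)
            {
                double err = Calibrate(scores[i], a, b) - labels[i];   // standard logistic-regression gradient
                gradA += err * scores[i];
                gradB += err;
            }
            a -= lr * gradA / scores.Length;
            b -= lr * gradB / scores.Length;
        }
        return (a, b);
    }
}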

This doesn't directly handle another problem I'd like to fix: noise in the data. Any given sample is fuzzy in nature. The values may be exact and correct, but the underlying truth of what they mean is bell-shaped. Ideally we would get at that truth and have empirical flags (or real values) come out of each feature that always gave the right answer when used correctly. I don't have a magic way to do that... yet. :) In the mean time the best thing we can do is find the features that do more harm than good. Unfortunately, the only way I have to do that at this point is to brute-force test removing each feature and see if there are gains or not. Terrible, I know. It's been running for about 24 hours and has found only 1 feature it thinks is worth turning off. And even that improvement was right on the line of what could be considered variance in my results. This is why you want higher cross validation, so you can say with much more certainty whether you should make a change or not. I might stop it and pick up the search later (I can always resume where I left off). It's a good way to fill time productively when you are working or just not interested in programming.

The only other way I can think of offhand to improve my overall score is to increase the data set size I train on. Currently I'm doing my training using a bagged training set. This means I hold out about 1/3 (1.0/e) of the data and use it to score the other 2/3rds (1.0 - 1.0/e) that I build the tree on. This then becomes my measure of accuracy for that tree. If I can find a good way to increase that, I should in principle have a better, more accurate model. Ideally the ensemble nature of the random forest takes care of that; combined with the Platt scaling I'm looking at adding, there may be no real gains. I would think, though, that if you can minimize the statistics and maximize the use of the data you would get to a precise answer faster. We will see what I come up with.

 

 

A contest in a week (part 1)

I thought I'd give some examples of how things generally go for me. I imported the data from the https://www.kaggle.com/c/homesite-quote-conversion contest. I had to do this 2 times as I messed it up the first time, which is par for the course. I have some stock data importers I wrote that handle most spreadsheet-style data with very little tweaking.

Next I ran a few out-of-the-box tests. I get my results back in a standard form. I believe this scoreboard is being calculated using ROC (which is essentially AUC, and to a lesser extent the Gini score). You may notice I have an AUC of 1 and a LogLoss of NaN in all these tests. Well, my AUC is probably being calculated wrong (like maybe it has some rounding hardcoded in it or expects values 0-1 or some such). I just haven't gone and looked yet. My LogLoss is NaN 'cause that does specifically want values 0-1.

Generally speaking, I use whichever test I have that is an exact match or is the closest. They all approach a perfect score, and spending the time to implement yet another scoring mechanism really is not very high on my list of things to do. Basically, I don't usually worry about it too much unless it's radically different from what I already have.

As for getting the actual scores, I use cross validation like pretty much the rest of the world. I generally only do 3-fold cross validation because significant gains are very apparent even at 3. Increasing it to 4, 5, 6... 9, 10... etc. just ends up making the tests take longer for little extra information. Though if you are looking for accuracy, go with 10; that seems to be the sweet spot. I've toyed with going back to 5 or even 9 'cause all too often I do get into the weeds with this stuff, and finding little gains becomes important if you want to win.

Legend

rfx1 = random forest experiment 1 ... basic normal random forest with my custom splitters

rfx2 = random forest experiment 2 ... I tested this but the results were really bad

rfx3 = random forest experiment 3 ... basic normal random forest with my logistic/sigmoid splitters. The number behind the logistic represents how many features are used in each split. I should note I do not randomly select features with this mechanism; I use those that correlate the most with the final score.

rfx4 = random forest experiment 4 ... another logistic splitter; this one uses an additional mechanism that figures parent node scores and accuracies in to the final score for the tree's leaves. It didn't improve results but I thought I'd show it here just to give an example of the kind of things I try.

Results
rfx1 - 46 trees
gini-score:0.796266356063114
CorCoefRR:0.378157688020071
Accuracy:0.7456163354227
LogLoss:NaN
AUC:1
RMSE:1.5631840915599
MAE:1.07671401193999
Rmsle:0.405897189976455
Calibration:0.2543836645773

rfx3 - 46 trees (logistic 1) Leaderboard score with 46 trees 0.90187
gini-score:0.795058723660037
CorCoefRR:0.409256922331928
Accuracy:0.780350470167233
LogLoss:NaN
AUC:1
RMSE:1.52284323987427
MAE:0.979367384563889
Rmsle:0.389736465098101
Calibration:0.219649529832767

rfx3 - 46 trees (logistic 2)
gini-score:0.900834289021121
CorCoefRR:0.592472621731979
Accuracy:0.854328645381246
LogLoss:NaN
AUC:1
RMSE:1.25910166514491
MAE:0.658794207857713
Rmsle:0.316163290273993
Calibration:0.145671354618754

rfx3 - 46 trees (logistic 3) Leaderboard score with 460 trees 0.95695
gini-score:0.905913368972696
CorCoefRR:0.600941784768692
Accuracy:0.859935155804304
LogLoss:NaN
AUC:1
RMSE:1.24723196082576
MAE:0.640500763913664
Rmsle:0.311818233193152
Calibration:0.140064844195696

rfx3 - 46 trees (logistic 4)
gini-score:0.90578470857155
CorCoefRR:0.599962582265556
Accuracy:0.857539596865357
LogLoss:NaN
AUC:1
RMSE:1.25525929882265
MAE:0.657799102194174
Rmsle:0.312383492790915
Calibration:0.142460403134643

rfx4 - 46 trees (logistic 3)
gini-score:0.905049421077021
CorCoefRR:0.599288488034863
Accuracy:0.83152333785067
LogLoss:NaN
AUC:1
RMSE:1.24604271464082
MAE:0.714368432524074
Rmsle:0.326817108831641
Calibration:0.16847666214933

 

Calibration is something I change ad hoc. I have iterative processes that can test things while I'm not at the computer, and I use the calibration field to decide what is working and what is not. It's currently set to be 1 - accuracy.

I should talk about accuracy. How do you determine how accurate something is? It's easy when the values are 0 to 100 and scores are evenly distributed. That's rarely the case. Here's how I do it: I build a normal distribution of the expected results, then I see where the final result fell compared to where the expected result fell on the normal curve. Then I find the percentile for the two results, and 1 minus the difference is my accuracy.

An example might go something like this: the test value is -1 and the train value is 2. The average value is 0 and let's say the standard deviation is 1. Well, 50% sits left and right of the mean, so -1 standard deviation is at 15.9% and +2 standard deviations is at 97.7%, so my accuracy is 1 - (0.977 - 0.159), or 18.2% (which is terrible!), but you see how it works. I do this for all accuracies I need, whether it is bagging, final scores or splitting.
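Written out, that accuracy measure is one minus the gap between the two values' positions on the normal CDF. A sketch, using an error-function approximation since C# has no built-in normal CDF:

using System;

static class NormalAccuracy
{
    // Accuracy = 1 - |CDF(predicted) - CDF(actual)| on a normal curve fitted to the training scores.
    public static double Accuracy(double predicted, double actual, double mean, double stdDev)
        => 1.0 - Math.Abs(NormalCdf(predicted, mean, stdDev) - NormalCdf(actual, mean, stdDev));

    static double NormalCdf(double x, double mean, double stdDev)
        => 0.5 * (1.0 + Erf((x - mean) / (stdDev * Math.Sqrt(2.0))));

    // Abramowitz & Stegun style approximation of erf (accurate to roughly 1e-7).
    static double Erf(double x)
    {
        double sign = x < 0 ? -1.0 : 1.0;
        x = Math.Abs(x);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        double y = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t * Math.Exp(-x * x);
        return sign * y;
    }
}

Plugging in the example above (predicted -1, actual 2, mean 0, standard deviation 1) gives roughly 0.18, matching the 18.2% figure.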

That's enough for today, I’ll follow up in a few days with how things are going.