Dim Red Glow

A blog about data mining, games, stocks and adventures.

Improving t-sne (part 2)

I ended up last time saying, "This brings me to the big failing of t-sne. Sometimes (most of the time) the data is scored in a way that doesn't group on its own." I intend to try tackling that in today's post. I will tell you what I know, what I think might work, and the initial results I've gotten so far.

Let me start out by saying I've been casually active in two kaggle contests as of late: https://www.kaggle.com/c/santander-customer-satisfaction and https://www.kaggle.com/c/bnp-paribas-cardif-claims-management . Both contests are binary classification problems; one is scored using log-loss (which penalizes you very harshly for predicting wrong) and the other uses AUC (ROC if you prefer), though their AUC is predicted probability vs observed probability, which is a little weird for AUC, but whatever.
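To make "penalizes you very harshly" concrete, here is a minimal sketch of log-loss in Python; the numbers are made up, not contest data. One confident wrong prediction dominates the average:

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    # clip so log(0) never happens
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 1, 1, 1])
print(log_loss(y, np.array([0.9, 0.9, 0.9, 0.5])))   # ~0.25, one hedged miss
print(log_loss(y, np.array([0.9, 0.9, 0.9, 0.01])))  # ~1.23, one confident miss
```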

Both of these contests are kind of ideal for t-sne graphing in that there are only two classes to color. They also represent different extremes: one has a very weak signal and the other is more of a 50/50. That is, in the weak-signal case the "no"s far outnumber the "yes"s, and it's easy to have the signal lost in the noise when the data is like that. If we can leverage t-sne to reduce our dimensions down to 2 or 3 and see via a picture that our groupings of positives and negatives are where they need to be, we probably really will have something.

T-sne gives us one very easy way to augment results and one not-so-easy way to augment them. The very easy way is to change how it calculates distances. It's important to remember that at its core it is trying to assign a probability to the distance between every point and every other point, and then rebalance all the point-to-point probabilities in a lower dimension. If we augment how it evaluates distance, say by expanding or collapsing the distance along certain dimensions in a uniform way, we can change our results, since those dimensions will have more or less importance. I'll talk about the not-so-easy way later in the article.
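As a minimal sketch of the easy way (the weights here are made-up values, not anything from my runs), stretching or shrinking each dimension before the pairwise distances are computed is all it takes to change the probabilities t-sne works from:

```python
import numpy as np

def weighted_sq_distances(X, w):
    # scale each column by its weight, then take ordinary squared Euclidean distances
    Xw = X * w
    sq = np.sum(Xw ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * Xw @ Xw.T

X = np.random.rand(100, 3)          # toy data
w = np.array([1.0, 0.25, 3.0])      # hypothetical per-dimension weights
D = weighted_sq_distances(X, w)
# D is what the usual p(j|i) = exp(-D_ij / 2*sigma_i^2) step of t-sne would consume
```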

I tried at first to improve the solutions using various mathematical transforms. Some were me taking dimensional values to powers (or roots). I tried principal component analysis. And I even tried periodically rebuilding the probabilities based on some ad hoc analysis of where the training data was laid out. All of this was a mess. Not only was I never really successful, but in the end I was trying to build a data mining tool inside my data mining tool!

Skipping a few days ahead, I started looking at ways to import results from a data mining tool to augment the dimensions. I tried using data from gradient boosting. I tracked which features got used and how often. Then when I used that data, I shrunk the dimensions by the proportion they were used. So if a feature/dimension was used 10% of the time (which is a TON of usage), that dimension got reduced to 10% of its original size. If it was used 0.1% of the time, it got reduced to 1/1000th of its size. This produced... better results, but how much better wasn't clear, and I definitely wasn't producing results that made me go "OMG!" I was still missing something.
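Here is a rough sketch of that scaling idea using scikit-learn, where feature_importances_ stands in for my raw usage counts and the toy data stands in for the contest data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.manifold import TSNE

# toy data standing in for the contest data
X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=100)
gbm.fit(X_train, y_train)

usage = gbm.feature_importances_      # fractions that sum to 1
X_scaled = X_train * usage            # a 10%-used feature keeps 10% of its size
embedding = TSNE(n_components=2).fit_transform(X_scaled)
```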

Now we come to the hard way to augment the results: we can build better data for the t-sne program to use. This is what I did. I abandoned the usage information entirely and tried using entirely new features generated by GBM. The first step was to build a number based on how the row flowed down the tree: flowing left was a 0, flowing right was a 1. I actually recorded this backwards, though, since at the final node I want the 0 or 1 to be the most significant digit, so that like values ended up on the same end of the number line. Since GBM is run iteratively, I could take each loop and use it as a new feature.
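A minimal sketch of that path encoding for a single fitted scikit-learn tree (my own implementation differs, and the helper name is made up); note the bit order puts the final split in the most significant position:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def flow_feature(tree, X):
    # encode each row's path: left child = 0, right child = 1,
    # with the last split as the most significant bit
    t = tree.tree_
    out = np.zeros(X.shape[0])
    for i, x in enumerate(X):
        node, bits = 0, []
        while t.children_left[node] != -1:            # -1 marks a leaf
            go_right = x[t.feature[node]] > t.threshold[node]
            bits.append(int(go_right))
            node = t.children_right[node] if go_right else t.children_left[node]
        out[i] = sum(b << k for k, b in enumerate(bits))   # later bits weigh more
    return out

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
path_feature = flow_feature(tree, X)      # one new column per tree/iteration
```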

This didn't really work, I think because a lot of the splits are really noisy, and even with me making the last split the most significant digit, it's still way too noisy to be a useful way to look at the data. This brings me to my most recent idea, which finally bears some fruit.

I thought, "Well, if I just use the final answers instead of the flow of data, not only do I get a method for generating features that can produce either reals or categorical answers (depending on the data you are analyzing), but you also can use any method you like to generate the features. You aren't stuck with fixed-depth trees." I stuck with GBM but turned off the fixed-depth aspect of the tree and just built some results. I ran 5 iterations with 9-fold cross-validation and kept the training set's predictions from the tree-building exercise. The results are really promising. See below (note: I will continue to use my pseudo-t-sne, which runs in linear time, due to time constraints. If you want "perfect results" you will have to go elsewhere for now).
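Here is a rough sketch of one way to read that recipe with scikit-learn: for each fold, keep the held-out predictions after every boosting iteration as new columns and feed those to t-sne. The parameter values and the staged_predict_proba bookkeeping are my illustration, not my exact code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

n_iterations, n_folds = 5, 9
X, y = make_classification(n_samples=900, n_features=20, random_state=0)
new_features = np.zeros((X.shape[0], n_iterations))   # rows not yet predicted stay zero-filled

for train_idx, hold_idx in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
    gbm = GradientBoostingClassifier(n_estimators=n_iterations)
    gbm.fit(X[train_idx], y[train_idx])
    # staged_predict_proba yields the model's predictions after each boosting stage
    for it, proba in enumerate(gbm.staged_predict_proba(X[hold_idx])):
        new_features[hold_idx, it] = proba[:, 1]

# new_features (5 columns here) is what would get handed to t-sne in place of the raw data
```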

BNP

You can see the different folds being separated, since in this implementation they don't cover all the data (it zero-fills the rows it was missing). I have since fixed this. But yes, there were 9 folds.

Santander

Santander is the one I'm most impressed with; it's really hard to isolate those few positives into a group. BNP... is, well, less impressive. It was an easier data set to separate, so the results are meh at best. The rubber will really meet the road when I put the test data in there and do some secondary analysis on the 2-d version of the data using GBM or random forest.

It's important to know that I'm "cheating": I build my boosted trees from the same training data that I'm then graphing. But due to GBM's baby-step approach to solving the problem, and the fact that I use bagging at each step of the way for accuracy, the results SHOULD translate into something that works for the testing data set. I'll only know for sure once I take those results you see above, put the test data in as well, and then send the whole 2-d result into another GBM and see what it can do with it. I'm hopeful (though prior experience tells me I shouldn't be).

*Note: Since I initially put this post together, I sat on it because I wanted to try it before I shared it with the world. I have finished using the initial results on the BNP data and they are biased... I think for obvious reasons: training on data, then using the results to train further. I'll talk more about it in t-sne (part 3). If you want to read more about it right now, check out this thread: https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/forums/t/19940/t-sne-neatness/114259#post114259 *

For fun, here are the animations of those two images being made. (They are really big files, 27 and 14 meg... also Firefox doesn't like them much; try IE or Chrome.)

http://dimredglow.com/images/Santander.gif

http://dimredglow.com/images/bnp.gif