Dim Red Glow

A blog about data mining, games, stocks and adventures.

Improving T-SNE (part 1)

Well, that's what I've been doing (improving t-SNE). What is t-SNE? Here's a link and here is another link. The TL;DR version: t-SNE is intended as dimensionality reduction for higher-dimensional data sets. This helps us make pretty pictures when you reduce the data set down to 2-D (or sometimes 3-D). All t-SNE is doing is trying to preserve the statistics of the features and re-represent the data in a smaller set of features (dimensions). This lets us more easily see how the data points are organized. However, all too often your labeling and the reality of how the data is organized have very little in common. But when they do, the results are pretty impressive.

This isn't my first foray into messing with t-SNE, as you can see here and in other places on Kaggle if you do some digging. I should probably explain what I've already done, and then I can explain what I've been doing. When I first started looking at the algorithm it was in the context of this contest/post. I did some investigation and found an implementation I could translate to C#. Then I started "messing" with it. It has a few limitations out of the box. First, the data needs to be real numbers, not categorical. Second, the run time doesn't scale to large sets of data.

The second problem has apparently been addressed using the Barnes-Hut n-body simulation technique, which (as I understand it) windows off areas that have little to no influence on other areas. But you can read about it there. This drops the runtime from N squared to N log N, which is a HUGE improvement. I tried to go one better and make it linear. The results I came up with are "meh". They work in linear time, but the results are fuzzy at best, not the crisp ones you see from the Barnes-Hut simulations.
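Just as a back-of-the-envelope comparison, using the 61,876-row Otto set from my example below, the rough count of pairwise interactions per step works out to something like:

```latex
N^2 = 61876^2 \approx 3.8 \times 10^9
\qquad \text{vs} \qquad
N \log_2 N \approx 61876 \times 15.9 \approx 9.9 \times 10^5
```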

Before I say what I did, let me explain what the algorithm does. It creates a statistical model of the different features of the data, getting a probability picture of any given value. Then it randomly scatters the data points in X dimensions (this will eventually be the output and is where the dimensional reduction comes from). Then it starts moving those points step by step in a process that attempts to find a happy medium where the points are balanced statistically. That is, the points pull and push on each other based on how out of place they are with respect to all the other points. They self-organize via the algorithm. The movement/momentum each row/point gains in each direction is set by how much influence the other points should have, based on how far away they are versus how far away their particular likelihood says they should be. I've toyed with making an animation out of each step. I'll probably do that some day so people (like me) can see firsthand what is going on.
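To make that a little more concrete, here's a rough sketch (in C#, since that's what I translated the algorithm into) of what one update step looks like. This is the textbook version, not my exact code: the P matrix (the high-dimensional statistics) is assumed to be precomputed, and the perplexity search that builds it is left out.

```csharp
// A minimal sketch of one plain t-SNE update step (not the Barnes-Hut variant).
// P is the precomputed high-dimensional affinity matrix; Y is the low-dimensional
// layout being learned; velocity carries the momentum from previous steps.
static void TsneStep(double[][] P, double[][] Y, double[][] velocity,
                     double learningRate = 200.0, double momentum = 0.8)
{
    int n = Y.Length, dims = Y[0].Length;

    // Low-dimensional affinities use a Student-t kernel: q_ij ~ 1 / (1 + |y_i - y_j|^2).
    var num = new double[n][];
    double qSum = 0.0;
    for (int i = 0; i < n; i++)
    {
        num[i] = new double[n];
        for (int j = 0; j < n; j++)
        {
            if (i == j) continue;
            double d2 = 0.0;
            for (int d = 0; d < dims; d++) { double diff = Y[i][d] - Y[j][d]; d2 += diff * diff; }
            num[i][j] = 1.0 / (1.0 + d2);
            qSum += num[i][j];
        }
    }

    // Each point is pushed or pulled by every other point according to how far
    // its low-dim neighborhood (Q) is from the high-dim statistics (P).
    for (int i = 0; i < n; i++)
    {
        var grad = new double[dims];
        for (int j = 0; j < n; j++)
        {
            if (i == j) continue;
            double q = num[i][j] / qSum;
            double mult = 4.0 * (P[i][j] - q) * num[i][j];
            for (int d = 0; d < dims; d++) grad[d] += mult * (Y[i][d] - Y[j][d]);
        }
        // Momentum keeps a point drifting in the direction it has been moving.
        for (int d = 0; d < dims; d++)
        {
            velocity[i][d] = momentum * velocity[i][d] - learningRate * grad[d];
            Y[i][d] += velocity[i][d];
        }
    }
}
```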

So to improve it, what I did is split the data into a whole bunch of smaller sections (windows), each of the same size. Each window, though, shares a set of points with one master section. These master points are placed in the same seed positions in all windows and have normal influence in all windows. They get moved and jostled in each window, but when it comes time to display results, we have some calculations to do. The zero window is left unaltered, but every other window has its non-shared points moved to positions relative to the closest shared point. So if, say, row 4000 isn't shared across windows and its closest shared row is row 7421, we see what its dimensional offset from row 7421 is and place it according to that.
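In code, the merge step looks roughly like this (again C#, simplified; the names are placeholders rather than my actual variables). It assumes each window has already finished its own t-SNE run, that sharedIds are the master rows seeded identically in every window, and that window zero defines the final frame.

```csharp
using System;
using System.Collections.Generic;

// Rough sketch of the merge step. windowRowIds[w][k] is the original row id of the
// k-th point in window w, and windowEmbeddings[w][k] is that point's low-dim position
// after the window finished its own run. sharedIds are the master rows shared by all windows.
static double[][] MergeWindows(List<int[]> windowRowIds, List<double[][]> windowEmbeddings,
                               int[] sharedIds, int totalRows, int dims)
{
    var final = new double[totalRows][];

    // Window 0 is left unaltered; remember where its shared (master) rows landed.
    var shared0 = new Dictionary<int, double[]>();
    for (int k = 0; k < windowRowIds[0].Length; k++)
    {
        int row = windowRowIds[0][k];
        final[row] = windowEmbeddings[0][k];
        if (Array.IndexOf(sharedIds, row) >= 0) shared0[row] = windowEmbeddings[0][k];
    }

    for (int w = 1; w < windowEmbeddings.Count; w++)
    {
        for (int k = 0; k < windowRowIds[w].Length; k++)
        {
            int row = windowRowIds[w][k];
            if (shared0.ContainsKey(row)) continue; // master rows are already placed

            // Find the closest shared row inside this window's own embedding...
            int best = -1; double bestDist = double.MaxValue;
            foreach (int s in sharedIds)
            {
                int si = Array.IndexOf(windowRowIds[w], s);
                double d2 = 0.0;
                for (int d = 0; d < dims; d++)
                {
                    double diff = windowEmbeddings[w][k][d] - windowEmbeddings[w][si][d];
                    d2 += diff * diff;
                }
                if (d2 < bestDist) { bestDist = d2; best = si; }
            }

            // ...then carry the offset from that shared row over into window 0's frame.
            final[row] = new double[dims];
            for (int d = 0; d < dims; d++)
            {
                double offset = windowEmbeddings[w][k][d] - windowEmbeddings[w][best][d];
                final[row][d] = shared0[windowRowIds[w][best]][d] + offset;
            }
        }
    }
    return final;
}
```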

The idea is that, statistically speaking, the influence of those points is the same as any other points'. And the influence the points in the main window have on those points is approximately the same as well. The net effect is that all the points move and are cajoled into groups as if they were one big window. This, of course, is why my results are fuzzy: only a tiny amount of data is truly drawing the picture, when normally all the interactions between all the points would be happening. But because I wrote it the way I did, my results scale linearly.
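Roughly speaking, that's where the linear scaling comes from. With N rows, a window size of w, and m shared master points (my notation, nothing formal), the per-step work is about:

```latex
\underbrace{\tfrac{N}{w}}_{\text{windows}}
\times
\underbrace{O\big((w+m)^2\big)}_{\text{pairwise work per window}}
= O\!\left(\tfrac{(w+m)^2}{w}\,N\right)
= O(N) \quad \text{for fixed } w \text{ and } m
```

versus O(N²) for the plain algorithm: every window is a fixed-size problem, and the number of windows grows linearly with N.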

Here's an example from the original Otto Group data. First, a rendering of 2048 rows randomly selected (of the 61,876) with 1500 steps using the traditional t-SNE. It took about 10 minutes. A 4096-row run should have taken something like 30 minutes (probably 5 of the 10 minutes of processing was more or less linear)... I'm ballparking, but it definitely would have taken a long while.

Now here's my result for all 61,876 rows using the windowing technique with a window size of 1024 (note: if I had used 2048, it would have taken longer than the image above, since each window would take that long). 1500 steps... the run time was around 50 minutes.

You can see what I mean about it not being as good at making clear separations. The first image is MUCH better about keeping one group out of another... but the rendering time is SOOOOO much faster in the second when you think about how much it did. And it's parallelizable (which I'm taking advantage of), since the various windows can run concurrently. Larger windows make the results better but slow everything down. I believe there are 9 different groups in there (the coloring is kind of bad; I really need a good system for making the colors as far apart from each other as possible).
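The parallelization is basically free, because the windows don't talk to each other until the merge. Something along these lines, where RunTsneOnWindow is just a stand-in for whatever runs the 1500 steps on one window:

```csharp
using System.Threading.Tasks;

// Each window is an independent t-SNE problem until the merge, so they can run
// concurrently. RunTsneOnWindow is a placeholder for the per-window solver.
Parallel.For(0, windowCount, w =>
{
    windowEmbeddings[w] = RunTsneOnWindow(windowRowIds[w], sharedSeedPositions, steps: 1500);
});
```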

Someday (not likely soon) I will probably combine the n-body solution and my linear rendering to get the best of both worlds. Each window will render as fast as possible (n log n in the window size), and the whole thing will scale out linearly and be parallelizable.

This brings me to the big failing of t-SNE. Sometimes (most of the time) the data is scored in a way that doesn't group on its own. See, the groupings are based on the features you send in; the algorithm doesn't actually look at the score or category you have for the data. The only thing the score is used for is coloring the picture after it is rendered. What I need is a way to teach it what is really important, not just what is statistically significant. I'll talk about that next time!