Dim Red Glow

A blog about data mining, games, stocks and adventures.

Improving T-SNE (part 1)

Well, that's what I've been doing (improving t-sne). What is t-sne? Here's a link and here is another link. The tldr version is that t-sne is intended as dimensional reduction of higher-dimensional data sets. This helps us make pretty pictures when you reduce the data set down to 2-d (or sometimes 3-d). All t-sne is doing is trying to preserve the statistics of the features and re-represent the data in a smaller set of features (dimensions). This allows us to more easily see how the data points are organized. However, all too often your labeling and the reality of how the data is organized have very little in common. But when they do, the results are pretty impressive.

This isn't my first foray into messing with t-sne, as you can see here and in other places on Kaggle if you do some digging. I should probably explain what I've already done, and then I can explain what I've been doing. When I first started looking at the algorithm it was in the context of this contest/post. I did some investigation and found an implementation I could translate to C#. Then I started "messing" with it. It has a few limitations out of the box. First, the data needs to be real numbers, not categorical. Second, the run time does not scale with large sets of data.

The second problem has apparently been addressed using the Barnes-Hut n-body simulation technique, which (as I understand it) windows areas that have little to no influence on other areas. But you can read about it there. This drops the runtime from N squared to N log N, which is a HUGE improvement. I tried to go one better and make it linear. The results I came up with are "meh". They work in linear time but are fuzzy results at best, not the perfect ones you see from the Barnes-Hut simulations.

Before I say what I did, let me explain what the algorithm does. It creates a statistical model of the different features of the data, getting a probability picture of any given value. Then it randomly scatters the data points in X dimensions (this will eventually be the output and is where the dimensional reduction comes from). Then it starts moving those points step by step in a process that attempts to find a happy medium where the points are balanced statistically. That is, the points pull and push on each other based on how out of place they are with respect to all the other points. They self-organize via the algorithm. The movement/momentum each row/point gains in each direction is set by the influence those other points should have, based on their distance and how far away they should be given their particular likelihood. I've toyed with making an animation out of each step. I'll probably do that some day so people (like me) can see first hand what is going on.
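To make that concrete, here's a rough C# sketch of what one of those steps looks like. This is my own simplification (not the implementation I translated, and not the exact t-sne math): every point gets nudged by every other point based on the gap between how close they currently are in the low-dimensional map and how close the original statistics say they should be.

class TsneStepSketch
{
    // targetAffinity[i,j]: how "close" rows i and j are according to the original features
    // embedding[i]: the current 2-d (or 3-d) position of row i
    public static void Step(double[,] targetAffinity, double[][] embedding, double learningRate)
    {
        int n = embedding.Length, dims = embedding[0].Length;
        for (int i = 0; i < n; i++)
        {
            var move = new double[dims];
            for (int j = 0; j < n; j++)
            {
                if (i == j) continue;
                double distSq = 0;
                for (int d = 0; d < dims; d++)
                {
                    double diff = embedding[i][d] - embedding[j][d];
                    distSq += diff * diff;
                }
                // nearby points have high current affinity in the map
                double currentAffinity = 1.0 / (1.0 + distSq);
                // positive -> pull the points together, negative -> push them apart
                double force = targetAffinity[i, j] - currentAffinity;
                for (int d = 0; d < dims; d++)
                    move[d] += force * (embedding[i][d] - embedding[j][d]);
            }
            for (int d = 0; d < dims; d++)
                embedding[i][d] -= learningRate * move[d];
        }
    }
}

That double loop over every pair of points is also exactly why the plain version scales so badly with big data sets.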

So to improve it, what I did is split the data into a whole bunch of smaller sections (windows), each of the same size. Each window, though, shares a set of points with one master section. These master points are placed in the same seed positions in all windows and have normal influence in all windows. They get moved and jostled in each window, but when it comes time to display results, we have some calculations to do. The zero window is left unaltered, but every other window has its non-shared points moved to positions relative to the closest shared point. So if, say, row 4000 isn't shared across windows and its closest shared row is row 7421, we see what row 4000's dimensional offset from row 7421 is and place it according to that.
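Here's a small sketch of that display-time placement, with hypothetical names (the real code differs, but the offset math is the part that matters):

// Window 0 is left where it is; in every other window a non-shared point is
// re-placed using its nearest shared ("master") point: that shared point's final
// position in window 0, plus the local offset from inside the point's own window.
static double[] PlaceRelativeToSharedPoint(
    double[] pointInWindow,          // e.g. row 4000's position inside its own window
    double[] nearestSharedInWindow,  // e.g. row 7421's position inside that same window
    double[] nearestSharedInWindow0) // row 7421's position in the zero (master) window
{
    var placed = new double[pointInWindow.Length];
    for (int d = 0; d < pointInWindow.Length; d++)
    {
        double offset = pointInWindow[d] - nearestSharedInWindow[d]; // local offset
        placed[d] = nearestSharedInWindow0[d] + offset;              // carried over to window 0
    }
    return placed;
}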

The idea is that, statistically speaking, the influence of those points is the same as any other points. And the influence the points in the main window have on those points is approximately the same as well. The net effect is that all the points move and are cajoled into groups as if they were one big window. This, of course, is why my results are fuzzy: only a tiny amount of data is truly drawing the picture, whereas normally all the interactions between all the points would be happening. But because I wrote it the way I did, my results scale linearly.

Here's an example from the original Otto group data. First, a rendering of 2048 rows randomly selected (of the 61876) with 1500 steps using the traditional t-sne. It took about 10 minutes. A 4096-row run should have taken something like 30 minutes (probably 5 minutes of the 10 minutes of processing was more or less linear)... I'm ballparking, but it definitely would have taken a long while.

Now here's my result on all 61876 rows using the windowing technique, where I windowed with a size of 1024. (Note that if I had used 2048, each window would have taken as long as the image above.) 1500 steps... run time was around 50 minutes.

You can see what I mean about it not being as good at making clear separations. The first image is MUCH better about keeping one group out of another... but the rendering time is SOOOOO much faster in the 2nd when you consider how much data it processed. And it is parallelizable (which I'm taking advantage of), since the various windows can run concurrently. Larger windows make the results better but slow everything down. I believe there are 9 different groups in there (the coloring is kind of bad; I really need a good system for making the colors as far away from each other as possible).

Someday (not likely soon) I will probably combine the n-body solution and my linear rendering to get the best of both worlds. Each window would render as fast as possible (n log n in the window size) and the whole thing would scale out linearly and be parallelizable.

This brings me to the big failing of t-sne. Sometimes (most of the time) the data is scored in a way that doesn't group on its own. See, the groupings are based on the features you send in; the algorithm doesn't actually look at the score or category you have on the data. The only thing the score is used for is coloring the picture after it is rendered. What I need is a way to teach it what is really important, not just what is statistically significant. I'll talk about that next time!

A Standard eldrazi deck and a PPTQ

A PPTQ is a Preliminary Pro Tour Qualifier (for Magic: The Gathering). I've not won one before, but I came as close as I ever have this last weekend. I did it playing this deck http://tappedout.net/mtg-decks/24-03-16-4-color-eldrazi/ . It was a 33-person tournament (6 rounds), and I ended up losing the semifinals to a rally deck. Rally the Ancestors has pretty much dominated the tournaments in standard, at least locally, though based on mtgtop8.com it seems to be a thing in other places too.

It uses either Zulaport Cutthroat triggers or an unblocked Nantuko Husk to win. Usually the former, in my experience. I tried to prepare for it using Hallowed Moonlight (which is also good against other things) and a single Cranial Archive, which would force them to reshuffle their graveyard (making the rally worthless), but it wasn't enough. In the end I neglected to always leave my 2 mana open to trigger Cranial Archive and lost to a top-decked rally. Game one they won, as they tend to do without my sideboard.

This leads me to possibly a better solution. I think I'll be trying Tainted Remedy next time. For the rally matchup it should work wonders, as they don't run enchantment removal. Most of the time when they start trying to win with the Cutthroat they have less life than I do, and in short it would kill them. I'll have it as a 2-of in the sideboard somewhere. It's also worth mentioning that it would help against any decks running normal life-gain creatures like Seeker of the Way or Soulfire Grand Master. Not that we see a lot of those these days.

On a final note concerning the two planeswalkers I ran: the unsung hero of my deck was Sorin, Solemn Visitor. He pulled his weight more than you would expect; I got paired against two prowess decks and he got me through both matches. As far as Gideon, Ally of Zendikar goes, he's been underwhelming as of late. I think that has more to do with the prevalence of dragons and rally in the format. Both decks try to win in a way that makes him more or less a non-issue.

Gradient Boosting (Part 2)

This time around let's start from scratch. We have some training data. Each row of the data has various features with real-number values and a score attached to it. We want to build a system that produces those scores from the training data. If we forget for a second "how" and think about it generically, we can say that there is some magical box that gives us a result. The result isn't perfect, but it's better than a random guess.

The results that we get back from this black box are a good first step, but what we really need is to further improve them. To do that, we take the training data and, instead of using the scores we were given, assign new scores to the rows. The new score for each row is how far off the black box's prediction was.

Now we send this data into a new black box and get back a result. This result isn't perfect either, but it's still better than a random guess. The two results combined give us a better answer than just the first black box call. We repeat this process over and over, making a series of black box calls that slowly gets the sum of all the results to match the original scores. If we then send a single test row (that we don't know the score for) into each of these black boxes that have been trained on the training data, the results can be added up to produce the score the test row should have.
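As a rough C# sketch (the black box is kept literally as a black box here, passed in as a function; none of this is my exact code, just the shape of the loop):

using System;
using System.Collections.Generic;
using System.Linq;

static class BoostingSketch
{
    public static Func<double[], double> Train(
        double[][] features, double[] scores, int rounds,
        Func<double[][], double[], Func<double[], double>> fitBlackBox)
    {
        var models = new List<Func<double[], double>>();
        var residual = (double[])scores.Clone(); // the first round targets the real scores

        for (int r = 0; r < rounds; r++)
        {
            var model = fitBlackBox(features, residual);
            models.Add(model);
            // the new score for each row = how far off the running total still is
            for (int i = 0; i < residual.Length; i++)
                residual[i] -= model(features[i]);
        }

        // the final prediction for a row is the sum of every black box's answer
        return row => models.Sum(m => m(row));
    }
}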

In math this looks something like this

f(x) = g(x) + g'(x) + g''(x) + g'''(x) + g''''(x) .... (etc)

Where f(x) is our score and g(x) is our black box call. g'(x) is the 2nd black box call, which was trained with the adjusted scores; g''(x) is the 3rd black box call with the scores adjusted still further; etc...

A few questions should arise. First, how many subsequent calls do we do? And second, what exactly is the black box?

I'll answer the second question first. The black box (at least in my case) is the lowly decision tree. Specifically, it is a tree that has been limited so that it terminates before it gets to a singular answer. In fact it is generally stopped while large fractions of the training data are still grouped together. The terminal nodes just give averages of the scores for the group at that node. It is important that you limit the tree because you want to build an answer that is good in lots of cases.

Why? Because if you build specific answers and the answer is wrong, correcting the result is nearly impossible. Was it wrong because of a lack of data? Was it wrong because you split your decision tree on the wrong feature 3 nodes down? Was it wrong because this row is an outlier? Any one of these things could be true, so to eliminate them all as possibilities you stop relatively quickly in building your tree and get an answer that is never exactly right but at least puts the result into a group. If you go too far down you start introducing noise into the results. Noise that creeps in because your answers are too specific. It adds static to the output.

How far down should you go? It depends on how much data you have, how varied the answers are, and how the data is distributed. In general I use a depth of ((Math.Log(trainRows.Length) / Math.Log(2)) / 2.0), but it varies from data set to data set. This is just where I start, and I adjust it if need be. In short, I go half way down to a specific answer.
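As a tiny helper, with a worked example: on the 61876-row Otto set from the t-sne post, log base 2 of 61876 is about 15.9, so this rule starts the tree at a depth of 8.

static int StartingDepth(int trainRowCount)
{
    // half of log2(row count), rounded; just a starting point to adjust from
    return (int)Math.Round((Math.Log(trainRowCount) / Math.Log(2)) / 2.0);
}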

Now if you limit the depth uniformly (or nearly uniformly), the results from each node will have similar output. That is, the results will fall between the minimum and maximum score just like any prediction you might make, but probably somewhere in the middle (that is important). Each answer will also have the same number of decisions used to derive it, so the answers' information content should on average be about the same. Because of this, the next iteration will have newly calculated target values whose range is, in the worst case, identical to the previous range. In any other case the range is decreased, since the low and high ends get smothered into average values. So it is probably shrinking the range of scores, and at worst leaving the range the same.

Also, the score input for the next black box call will still have most of the information in it, since all we have done is adjust scores based on one result, and we did it in the same way to a large number of rows; rows that, thanks to the previous tree's decisions, share certain traits. Doing this allows us to slowly tease out qualities of similar rows of data in each new tree. But since we are starting over with each new tree, the groupings end up different each time. In this way subtle shared qualities of rows can be expressed together once their remaining score (original score minus all the black box calls) lines up.

This brings us to the first question: how many calls should I do? To answer that accurately, it's important to know that usually the result that is returned is not added to the row's final score unmodified. Usually it is decreased by some fixed percentage. Why? This further reduces the influence any one decision tree has on the final prediction. In fact I've seen times where people reduce the number to 1/100th of its returned value. Doing these tiny baby steps can help, but sometimes it just adds noise to the system as well, since each generation of a tree may have bias in it or might too strongly express a feature. In any case it depends on your decision trees and your data.
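Continuing the earlier sketch, the shrinkage is the one change of scaling each returned result by a fixed fraction before adding it in (0.1 here is just an illustrative value; 0.01 would be the 1/100th extreme mentioned above):

public static Func<double[], double> TrainShrunk(
    double[][] features, double[] scores, int rounds, double learningRate,
    Func<double[][], double[], Func<double[], double>> fitBlackBox)
{
    var models = new List<Func<double[], double>>();
    var residual = (double[])scores.Clone();

    for (int r = 0; r < rounds; r++)
    {
        var model = fitBlackBox(features, residual);
        models.Add(model);
        // only a fraction of each tree's answer actually counts toward the total
        for (int i = 0; i < residual.Length; i++)
            residual[i] -= learningRate * model(features[i]);
    }
    return row => models.Sum(m => learningRate * m(row));
}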

This goes for how many iterations to do as well. At some point you have gotten as accurate as you can get, and doing further iterations overfits to the training data and makes your results worse. Or worse yet, it just adds noise as the program attempts to fit the noise of the data to the scores. In my case I test the predictions after each tree is generated to see if it is still adding accuracy. If I get 2 trees in a row that fail to produce an improvement, I quit (throwing away the final tree). Most programs just have hard iteration cut-off points. That works pretty well, but leads to a guessing game based on the parameters you have set up.
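And the stopping rule, continuing the same sketch (ValidationError is a hypothetical check against rows held back from training, not a real library call):

var models = new List<Func<double[], double>>();
var residual = (double[])scores.Clone();
int failuresInARow = 0;
double bestError = double.MaxValue;

while (failuresInARow < 2)
{
    var model = fitBlackBox(features, residual);
    models.Add(model);
    for (int i = 0; i < residual.Length; i++)
        residual[i] -= learningRate * model(features[i]);

    double error = ValidationError(models);      // hypothetical held-out accuracy check
    if (error < bestError) { bestError = error; failuresInARow = 0; }
    else failuresInARow++;
}
models.RemoveAt(models.Count - 1); // the last tree didn't help, so throw it away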


Detroit Magic the Gathering Grand Prix

I went to the Magic: The Gathering Grand Prix event in Detroit. It was interesting but maybe less fun than past events; this is mainly due to me having done it too many times before. The novelty has finally worn off.

I made day two playing an Eldrazi tron deck I put together. My record was 7-2 going into day 2 (I had 2 byes). I think this had less to do with my magical brewing skills and more to do with Eldrazi being so easy to play and overly powerful. I fully expect Eye of Ugin to be banned, and possibly Eldrazi Mimic. Though to be honest, temple and mimic would work as well. Why mimic? Because it's a 2-mana creature that enables the super fast wins.

If you curve perfectly with nothing more than Eye of Ugin then waste, waste, waste... it's a 3/2 on turn 2, a 4/4 on turn 3, a 5/5 on turn 4. Granted it might die along the way, but essentially you are paying 2 mana for a second copy of whatever your biggest creature is. And if it dies, the real creature lives. It gets worse if you get up to Ulamog (which my tron deck of course ran). It is also abusive in other ways, since any colorless creature you put into play will trigger its ability. Just be happy Phyrexian Dreadnought isn't available in modern. :)

Don't get me wrong, without the rest of the Eldrazi it's a 'meh' card. If they leave it, it'll be because they want the Eldrazi deck to remain a 'thing' in modern. It still could be, with just temple and mimic, though I would think it'll still be too strong/consistent even then. The consistency is what really makes it a good deck. Most games I didn't have an Eye of Ugin opening hand. Losing eye would slow up my deck in maybe half the games and make the end game more difficult. But regardless, Eldrazi tron would probably remain a viable deck.

I should add I didn't play against much Eldrazi. I think this was just luck and part of what got me to day 2. I did play about everything else under the sun, though. Day one went something like this: I played infect (lost 1-2), white-blue planeswalkers (really surprised to play this! won 2-0), mardu tokens/goodstuff (won 2-0), black-white tokens (won 2-1), red-green Eldrazi (lost 1-2), an abzan Chord of Calling deck (won 2-1) and affinity (won 2-1). Day two went: storm (lost 1-2), merfolk (won 2-1), Eldrazi tron (mirror match! won 2-0), living end (lost 1-2, drop).

My losses could have gone either way most of the time, but that's the luck of the draw. My deck could have used more testing. There were cards I cut almost every game, but that just shows you how good Eldrazi is, that it could carry itself with maybe 4 so-so cards being run with it.

I'm not going to abandon Eldrazi post-ban. In fact I've long since put together my legacy Eldrazi deck and have been tweaking it. I don't see it going anywhere anytime soon :) it's really competitive.