Dim Red Glow

A blog about data mining, games, stocks and adventures.

Minecraft and data mining (no relation) and stocks (some relation)

It's the 2nd day of 2017 and soon to be 2018, or so it always seems when you look back at years. I've dawdled for too long on some old work I wanted to do, and the year turning over has put that in sharp relief.

First, I started a YouTube channel https://www.youtube.com/channel/UCoTP8WbdsCW_6FSLz0pUSsA (Hardcore in a Hurry) for my video game exploits. I don't play a lot of video games these days, but I like playing games on the hardest setting possible. I'm not a fan of cheat modes or easy walkthrough settings (or AIs that cheat, for that matter. I mean seriously, couldn't you write a better AI?). That being said, Minecraft is the only thing on there now. I don't have plans to add any other types of videos right now, but things change.

Next, let me say I've spent some more time on my layered gradient boosting stuff. I learned a little bit about what t-SNE can do for me. Feeding t-SNE in initially can help if the groups do self-organize out of the original data, but normally I think it is something you want to do after a few iterations have passed; it seems the further down the gradient you are, the better the effect. The effects aren't remarkable, but they aren't terrible either. The unfortunate takeaway is that it is very time intensive. Running t-SNE on the current state of the gradient descent and the source data just takes... well, a while. Maybe I can cherry-pick features to use and speed it up, but as it stands right now, unless the data set is pretty small it's not a good option for eking out more performance.
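To make the idea concrete, here's a rough Python sketch of "run a few iterations, then feed t-SNE in" (scikit-learn standing in for my own GBM and linear t-SNE; the names and round counts are just for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.ensemble import GradientBoostingRegressor

def boost_with_tsne_features(X, y, warmup_rounds=50, final_rounds=200):
    # stage 1: a short warm-up model to get part-way down the gradient
    warmup = GradientBoostingRegressor(n_estimators=warmup_rounds)
    warmup.fit(X, y)
    residuals = y - warmup.predict(X)

    # stage 2: embed the source data plus the current residual state in 2-D
    embedding = TSNE(n_components=2, init="pca").fit_transform(
        np.column_stack([X, residuals]))

    # stage 3: keep boosting with the embedding appended as two extra features
    X_aug = np.column_stack([X, embedding])
    final = GradientBoostingRegressor(n_estimators=final_rounds)
    final.fit(X_aug, y)
    return warmup, final
```

The t-SNE step in the middle is exactly the part that eats all the time on anything but a small data set.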

For a couple of years now I've wanted to implement a stock analysis program so I can do personal investment. I did this once a long time ago with a friend using far less sophisticated methods; that's a story for another time. I've never been happy enough with my data mining software to start the code/project for investing. I think I'm happy enough now. The real work will be in getting the data into a form that is easily updated daily and makes measurable, testable predictions. My recent work on https://www.kaggle.com/c/santander-product-recommendation got me to rewrite parts of my code to handle time series in a different way, which is key for the training and evaluation. (A contest I never got anywhere with. I've never implemented the map@<x> evaluation, so it makes it hard to train for such a thing.)
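For reference, the map@<x> the contest scores on is mean average precision at a cutoff; here's a small sketch of it as I understand the metric (not code from the contest):

```python
def apk(actual, predicted, k=7):
    """Average precision at k for one row: actual is the set of true items,
    predicted is a ranked list of guesses."""
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1.0)
    return score / min(len(actual), k) if actual else 0.0

def mapk(actual_lists, predicted_lists, k=7):
    """Mean of the per-row average precisions."""
    return sum(apk(a, p, k) for a, p in zip(actual_lists, predicted_lists)) / len(actual_lists)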

 

Adjusting the layers to be t-SNE driven

Hello again. I've slowly turned over the ideas in my head that I wrote about last time, and I think there are 2 big flaws in my idea. First, fitting a GBM to data you already have in the training and test data adds nothing (that's huge). This is with regards to me selecting a few features to make something that has a correlation coefficient equal to some fixed amount of the final result. And second, making groups in any fashion that is not some form of transform will likely never give me any new information to work with.

I have a fix to both (of course :) ). I mentioned using t-SNE as well; I think that's where I need to get my groups from. The features I send into t-SNE are how I get various new groups. Everything else I've said, though, is still relevant. So it is no longer layers of GBM as much as layers of t-SNE. Once I have relevant results from that, I send it all into GBM and let it do its work.

The groups I make out of t-SNE (2-D, by the way) will utilize the linear t-SNE I've already written (as it's super fast and does what I need it to do pretty well). And while I will send the results from the t-SNE right into the GBM, I'll do more than that. I will also isolate out each region in the results and build groups from each of those.

To do that, I make a gravity-well map for every point on the map... like they are all little planets. The resulting map is evenly subdivided (like it's a giant square map of X by X... I've been using 50 for each side) and has a 0 or larger value in every square (inverse squared distance: 1/(1 + (sourceX - x0)^2 + (sourceY - y0)^2)). I've been limiting the influence to 5 squares from each point just to keep the runtime speedy (it's only a 50x50 map anyway, so that's pretty good reach, and at 6 away the influence is 1/(1+36+36), which is only 0.013, so that's a decent cutoff anyways). Once I have the map, I look for any positive points that have lower or equal points in all directions. Those points are then used to make groups. So there may only be 1... but likely there will be bunches.
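Here's roughly what that looks like as a Python sketch (the 50x50 grid and the reach of 5 are the numbers above; everything else is illustrative):

```python
import numpy as np

def gravity_map(points, grid=50, reach=5):
    """points: the 2-D t-SNE output, rescaled to [0, grid) in both axes.
    Each point adds 1/(1 + dx^2 + dy^2) to nearby cells (cut off at `reach`)."""
    m = np.zeros((grid, grid))
    for sx, sy in points:
        cx, cy = int(sx), int(sy)
        for x in range(max(0, cx - reach), min(grid, cx + reach + 1)):
            for y in range(max(0, cy - reach), min(grid, cy + reach + 1)):
                m[x, y] += 1.0 / (1.0 + (sx - x) ** 2 + (sy - y) ** 2)
    return m

def find_centers(m):
    """Cells that are positive and >= all of their neighbours become group centers."""
    centers = []
    for x in range(m.shape[0]):
        for y in range(m.shape[1]):
            if m[x, y] <= 0:
                continue
            neighbourhood = m[max(0, x - 1):x + 2, max(0, y - 1):y + 2]
            if m[x, y] >= neighbourhood.max():
                centers.append((x, y))
    return centers
```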

The groups are based off of distance from the various centers of the local wells in the gravity-well map; if you were to take the differential of the map, these would be 0 points. So we just find the distance of every point from those centers. This has the added effect of possibly making multiple groups in one pass (which is great), not to mention the data is made via a transformation that works completely statistically and has no direct 1-to-1 correlation with the data you feed it. So you are actually adding something for the GBM mechanism to work with.
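Turning those centers into group features is then just a distance calculation per center (another quick sketch):

```python
import numpy as np

def center_distance_features(points, centers):
    """One new feature per center: each row's Euclidean distance
    from that center in the 2-D t-SNE map."""
    pts = np.asarray(points, dtype=float)
    feats = [np.sqrt((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2)
             for cx, cy in centers]
    return np.column_stack(feats)
```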

At that point the resulting groups should probably be filtered to see if they add any real value, using the correlation mechanism I already mentioned in the last post. Ideally GBM would just ignore them if they are noisy.

Thoughts on the AI layers

So, I've given my layered GBM a little thought. I'll explain what I want to do by how I came up with it. My goals seem pretty straightforward; I think I can describe them with these 2 ideas/rules:

1. I want each layer that is created to contribute to the whole/final result in a unique way, so there is as little redundancy (i.e. wasted potential) as possible.

2. The number of layers should be arbitrary. If the data drives us to a one-layer system, that's all you create. That is, if the data just points directly to a final result, you just figure out the final result. If the data drives us to a 100-layer system... well, there is that then too.

To do the first one, each layer should have access to previous layers to make its results, so it will have more options to make different groups in that layer, with the training data available to everything. Each layer will also know what previous layers produced, to keep from making "similar" things. There will likely be some controlling variable for how different a group needs to be.

To do the second one, we need a way to move towards our goal of predicting the final result. We could just make groups until we happen to make one that fits our results really well, but that sort of brute-force approach might take forever (depending on the algorithm). If however we have a deterministic way of measuring the predicted groups' similarity to each other and to the final result, we can use that to throw out any groups that are too similar to previous groups, and we will have a way of slowly building a target result that improves on the previous prediction.

Each layer then will probably (nothing is written) produce a guess for the final answer as well as additional useful groups, if it can find any. If at any point the guess fails to be an improvement on the previous result, we stop and take the best guess. We could continue making layers if for some reason we think we might eventually make a better guess. In this way consecutive failures might be our stopping spot, or perhaps we will make layers until no new groups can be found, or finally at least make sure a minimum number of layers is created if new groups are available but the answers aren't improving in general.

So how do we go about picking these things/groups? That at least isn't too hard, using one of my favorite statistical measurements for data mining: the correlation coefficient. We look at the data and measure each feature's and/or group of features' movement against the final results and against any other possible groups we've made. The features for now will be selected randomly, though I'll probably find a good mathematically grounded way to limit their selection.

There are two types of groups we will make: the type that contributes and the type that is a possible answer. A contributing group wants to have a unique correlation coefficient, that is, a number we haven't seen that is at least X away from any other group. The possible answer group is always as close to 1 as possible.
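One way to code that acceptance rule (this is just my reading of "at least X away", and the threshold name is made up):

```python
import numpy as np

def accept_group(candidate, target, existing_groups, min_gap=0.05):
    """Keep a candidate 'contributing' group only if its correlation with the
    target is at least `min_gap` away from that of every group already kept.
    `min_gap` plays the role of the X in the text."""
    r = abs(np.corrcoef(candidate, target)[0, 1])
    return all(abs(r - abs(np.corrcoef(g, target)[0, 1])) >= min_gap
               for g in existing_groups)
```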

Since I don't really have a way to make multiple groups at once, what I will end up doing is making any old group I can... and again checking against previous groups, then generating a result and seeing if said result is actually still valid. If it is, it goes into the results. I will probably either alternate between this and an actual prediction, or make a fixed number of groups and then make a prediction (and wash, rinse, repeat until I'm done in whatever capacity).

In truth I doubt a created group will ever be very comparable to a result we are looking for. If data mining were that easy, we wouldn't spend much time on it. What I expect is for a bunch of rather generic groups to be made and for them to act like created features to be used in subsequent feature generation... etc., till a really useful one is generated that the final answer uses.

The only thing I would add is that I would probably throw in a TSNE feature pair as well at each level. This actually would act like a 2nd and 3rd group; ideally, any groups the TSNE features find will likely be a possible grouping for the final result, since if those two features are paired together and used to build a new feature to predict on, then we have something that is a statistically relevant group.

What do I mean by grouping multiple features? Basically, use the distance equation: figure each row's distance from the center (average) values of the two features. Said another way, f(x,y) = sqrt((x - avg(x))^2 + (y - avg(y))^2).
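Or as a quick Python sketch:

```python
import numpy as np

def pair_group_feature(x, y):
    """f(x, y) = sqrt((x - mean(x))^2 + (y - mean(y))^2):
    each row's distance from the pair's centroid."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt((x - x.mean()) ** 2 + (y - y.mean()) ** 2)
```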


Another long overdue blog

Hi again, faithful readers (you know who you are). So the marathon is back off the plate. Basically, I can't afford to spend the extra cash right now on a trip. Or rather, I'm not going to borrow the money for a trip like that. I overspent when I thought the trip was off, and upon reflection, when it came time to book the trip, it just didn't make sense. So... maybe next year.

So for the last month I've been toying with things in my data mining code base. I removed tons of old code that wasn't being used/tested; I honed it down to just GBM. Then I spent some time seeing if I could make a version that boosts accuracy over log loss (normal GBM produces a very balanced approach). I was successful. I did it by taking the output from one GBM and sending it into another, in essence over-fitting.

Why would I want to do that? Well, there is a contest ( https://www.kaggle.com/c/santander-product-recommendation ) I tried to work on a little, but the positives are few and far between. The predictions generally put any given positive at less than 1% chance of being right, so I tried bumping the number, since right now it returned all negatives. The results are actually running right now. I still expect all negatives, but the potential positives should have higher percentages, and I can pick a cutoff point to round to a positive that is a little higher than the .0005% I would have had to use before. All this, so I can send in a result other than all negatives (which scores you at the bottom of the leaderboard). Will this give me a better score than all negatives... no idea :)

I tried making a variation on the GBM tree I was using that worked like some stuff I did years ago. It wasn't bad, but still not as good as the current GBM implementation. I also modified the tree to be able to handle time series data, in that it can lock one row to another and put the columns in time-sensitive order. This allows me to process multi-month data really well. It also gave me a place to feed in fake data: if I want to stack one GBM on top of another, I can send in the previous GBM's results as new features to train on (along with the normal training data).
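That stacking looks roughly like this (sketched with scikit-learn standing in for my own GBM; using the in-sample predictions is deliberate here, which is the over-fitting I mentioned above):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def stacked_gbm(X, y):
    """Train one GBM, then train a second on the original features plus the
    first model's predicted probabilities. The second model leans toward
    over-fitting, which pushes the predicted probabilities away from the
    timid values a single balanced model gives."""
    first = GradientBoostingClassifier().fit(X, y)
    p = first.predict_proba(X)[:, 1]
    second = GradientBoostingClassifier().fit(np.column_stack([X, p]), y)
    return first, second
```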

This leads me to where I think the next evolution of this will be. I'm slowly building a multi-layer GBM... which essentially is a form of neural net. The thing I need to work out is how best to subdivide the things each layer should predict. That is, I could make it so the GBM makes 2 or 1000 different groups, predicts row results for each, and feeds those in for the next prediction... etc., till we get to the final prediction. The division of the groups is something that can probably be done using a form of multi-variable analysis that makes groups out of variables that change together. Figuring out how to divide it into multiple layers is a different problem altogether.

Do you want an AI? 'Cause this is how you get AIs! Heh, seriously, that's what it turns into. Once you have a program that takes in data, builds a great answer in layers by solving little problems, and assembles them into a final answer that is super great... well, you pretty much have an AI.

Incidentally, TSNE might also help here, as I might just feed in the t-SNE results for that layer's training data (fake data included if we are a level or two down) to give the system a better picture of how things group statistically.

In other news, I started using Blue Apron. This is my first time trying a service like this, and so far I'm really enjoying it. I'm pretty bad about going to the grocery store. And going every week... well, that ain't gonna happen. This is my way to do that, without doing that :) . I'm sure most people have similar thoughts when they sign up, even if the selling point is supposed to be the dishes you are making. Honestly, I've just been eating too much takeout. I don't mind cooking, and the dishes they send you to prepare are for the most part really good.


Stopping before I start then starting again

I feel like I've started many things and then abruptly stopped before I really got into the thick of it. I'd like to share some of the highlights. Doing so will give anyone reading a good picture of where everything is with the stuff I normally (it's been over two months) blog about.

 

Data Mining: Almost 6 months ago I was working on a GBM-in-a-GBM model, and there it has sat since then. It's not that I think it's meritless; it's just that I doubt it'll produce amazing results. So, I'm not really inspired to finish it. Also, I don't have a Kaggle contest to work on right now, and that helps drive my interest in the algorithm. Truth be told, I'm beginning to think data mining is getting near the end of its "big gains" period. The human-analysis part may very well be improvable, but that's never interested me much.

 

Running: I started running, then I stopped, then I started again. To expand on this: it was fine, then I overdid it, then I got motivated and eased back in. I've got a marathon in mind I want to go to. It is still 11 weeks away, so I'm training up for that. I'll talk more about it when I commit to it fully (basically in about 4 weeks).

 

Math: I spent a lot of time trying to solve the 3 cubes problem. My most recent attempt sent me down a rabbit hole of general factoring. I actually thought I had a method for doing that, only to realize that my solution to the problem was such a tiny corner case as to be unusable. The net-net is I got nothing I'm pursuing here right now.

 

Magic: Things continue to wind down. There is a Grand Prix tourney in Dallas in 2 weeks, but I don't think I'm gonna go. I still have fun playing most weeks, but I'm definitely not feeling the drive to brew decks like I did. And, let's be honest, I've never felt the competitive spirit this time around. That is to say, I want to win, sure, but deck building always took precedence, as it is so much more interesting than playing a known decent deck, which just makes it hard to be competitive.

 

EM Drive: Have you seen this thing? It's about 2 years old from an "oh hey, something new!" perspective, but the last few days I've been reading up on the science and watching YouTube videos about it. I have to admit I'd like to better understand how it works. I totally get what virtual particles are, but I don't get how they can get the transfer of momentum to them (they are incredibly hard to interact with). Just something I've been messing with and thought I'd include as a bullet point. I doubt I'll build anything in my garage, but crazier things have happened.


Slow going on the GBM in GBM idea

I've been thinking about this for weeks. I'm normally a jump-in-there-and-do-it kind of guy, but with no contests around to motivate me, I've been stewing on it more than writing it. I did implement one version of the GBM-in-GBM mechanism, but it underperforms compared to my current normal tree. There are probably more things to do to it to hone it and make it better, but this is where I stopped and started stewing.

I've been thinking I'm approaching this the wrong way. I think that trees are great for emulating the process of making decisions based on a current known group of data, but don't we know more? Or rather, can't we do better? We actually have the answers, so isn't there a way we can tease out the exact splits we want or get the precise score we want? I think there might be.

I've been looking at building a tree from the bottom up. I'm still thinking about the details, but the short version is you start out with all the terminal nodes. You then take them in pairs of 2 and construct the tree. Any odd node sits out and gets put in next go. The "take them in pairs of 2" is the part I've been really thinking hard about. Conventionally, going down a tree, your splits are done through some system of finding features that cut the data into pieces you are happy with. I'm going to be doing the same thing but in reverse. I want the 2 data pieces paired together to be as close to each other as possible, from a Euclidean distance perspective at least, with regards to whatever features I use. But (and this is one of the things I debate on) I also want the scores to be far apart.

When you think about what I'm trying to accomplish, putting two items with really far-apart scores together makes sense. You want to figure out shared qualities in otherwise disjointed items. Similar items could be joined as well; if we approached it that way, the idea would be that you are building a tree that hones in on the answer really quickly and exactly. This however wouldn't do a good job of producing scores we can further boost... we wouldn't be finding a gradient. Instead we would be finding 1 point, and after 1 or 2 iterations we'd be done.

By taking extreme values, the separation would ideally be the difference between the maximum value and the minimum value. If we did that, though, it would only work for 2 of our data points; the rest would have to be closer to each other (unless they all shared those 2 extremes). I think it would be best to match items in the center of the distribution with items on the far extremes, giving all pairs a similar separation of (max-min)/2, and likely a value that actually is 1/2 the max or the min, since it would average to the middle.
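Here's a greedy sketch of that pairing rule (one possible reading of it, in Python; nothing here is the final implementation):

```python
import numpy as np

def pair_leaves(X, scores):
    """Take rows from the score extremes first and pair each with its nearest
    (Euclidean) unused neighbour drawn from the middle of the score
    distribution. Returns (i, j) index pairs; an odd row sits out."""
    X = np.asarray(X, dtype=float)
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    order = np.argsort(-np.abs(scores - np.median(scores)))  # extremes first
    central = set(order[n // 2:].tolist())                   # the middle half
    used, pairs = set(), []
    for i in order:
        if i in used:
            continue
        candidates = [j for j in central if j not in used and j != i]
        if not candidates:  # fall back to anything still unused
            candidates = [j for j in range(n) if j not in used and j != i]
        if not candidates:
            break  # odd row left over, it sits out this pass
        dists = [np.linalg.norm(X[i] - X[j]) for j in candidates]
        j = candidates[int(np.argmin(dists))]
        used.update((i, j))
        pairs.append((i, j))
    return pairs
```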

In this way we merge up items till we get to a fixed depth from the top (top being a root node). We could keep merging till then and try to climb back down the tree; I might try that at some point, but since the splitters you would make won't work well, I think the better way is to then introduce the test data at the closest terminal node (much like how nodes were merged together) and follow it up the tree till you get to a stopping spot. The average answer there is the score you return.

Again, I still haven't implemented it; I've been stewing on it. The final piece of the puzzle is exactly how I want to do feature selection for the merging. There has to be some system for maximizing score separation and minimizing distance so it isn't all ad hoc.


Small GBM in GBM and a newish idea

Things are going okay with my GBM in GBM. I've got the boosting in a tree now, but the implementation I've got in there is not quite what I want. It's a little bit too much all-or-nothing. That is, since each tree level is a binary splitter and my method for evaluating splits is also binary, boosting the binary splitter doesn't work real well. There is no gradient to boost, per se. Everything is a yes or a no.

You can change the mechanism to turn it into a logic gate, and essentially each iteration tries to make the final result all zeros (or ones), but this turns out to be so-so at best. You are missing the real advantage of easing into a solution and picking up various feature-to-feature information that is weakly expressed. Don't get me wrong, it DOES work. It's just no better than a normal classifying tree.

When I do try to ease into it by turning the shrinkage way up (the shrinkage being the amount you multiply the result by before adding it into your constructed result), it ends up producing essentially the same result over and over again till you finally cross the threshold and the side the result wants to go to changes (for right or wrong). The point is, the function in my series of functions that add up to a final answer is producing nearly the same result each time it is called.

I can change this by making it randomly select features each call, but it turns out not to make much difference in my particular implementation. What I really need is a way to weight the various responses so the influence varies based on probabilities. I thought it might be good to quantize these and make it more tiered, as opposed to trying to build a standard accuracy model of sorts. So instead of 0-100%, maybe 10%, 20%, 30%, etc., or some other measured amount. The idea is that like-valued rows will group together, though I don't know if that's really any better than a smooth accuracy distribution.
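The quantizing part itself is trivial; something like this (the step size is just an example):

```python
import numpy as np

def quantize_probs(p, step=0.10):
    """Snap predicted probabilities to fixed tiers (10%, 20%, 30%, ...)
    so like-valued rows fall into the same bucket."""
    p = np.asarray(p, dtype=float)
    return np.clip(np.round(p / step) * step, 0.0, 1.0)
```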

Update on the updates

First let me share this 'cause you know, ridiculous.

That's pretty much what I'm working on right now in my data mining. It's about half done.

Also I haven't had any marathon training updates. The short version is I've decided to hold off and only give updates when there is something meaningful to share. I never wanted this blog to be work, not to mention I want it to be enjoyable to read. Doing a daily or weekly update for the sake of the update doesn't in my mind fit that. I'm still running, still putting in around 16 miles a week. The weight hasn't really come off. I'm hovering around 197. I figure in a few more weeks if something doesn't give I'll make a more concerted effort to change my diet.

Magic: The Gathering continues to be a fixture in my life, legacy play especially. I've been contemplating a couple of decks in legacy. One I called franken-mud, which was a toolkit Mud deck that used Birthing Pod and Eldrazi instead of the usual suspects (like Metalworker and Forgemaster). It's been inconsistent. Sometimes it's really, really good, but last week I went 0 for 4... yeah, talk about disappointing. I'm also looking at a black-white Cloudpost deck I want to try that just runs lots of my favorite things (Stoneforge, Wurmcoil, Deathrite, Bitterblossom, Ugin, etc.); not sure if it'll work or not. I'll try it this Friday. I've also got a white Painter's deck in the wings I'm gonna try in a few weeks as well. I think going the all-red or red-white or red-blue route on Painter-Grindstone is fine, but I want to try an all-white version.

In standard I've been trying to find something interesting to play, but eh... it all seems like rehash. I'm probably going to play green-white little creatures/good-stuff next time I play. And as for modern, well, I have my Abzan Eldrazi deck still together. I think that's all you get till I get a wild hair and build... I dunno, a new version of affinity. I'd kind of like to try souping up affinity, making it more midrange/stable. But we'll see.


Gradient Boosting (Part 3)

I've been trying to find my next logical step in data mining development. I think I've settled on it. I'm going to make a tree that uses boosting in its fundamental splitting nature. It would be nice if the final results from the tree produced better results in general (than my current trees), but if it can remove all trace of bias and move towards a good uniformly evaluated answer, it will be a better tree to be used in subsequent ensembles.

Currently my tree building mechanism tries to build trees that get the most exact answer possible by grouping like-scoring training rows. The algorithm looks for features to use as splitters that would move rows down the left-hand side of the tree when they score closer to negative infinity and down the right side of the tree when they score closer to positive infinity. This allows me to do feature selection using the correlation coefficient instead of random selection.

This turns out to be a great way to select features. Not only does the selected feature work towards getting a good separation, but it also has the next step's split built into the feature separation. That is, it correlates with the final score as a whole. Subdividing the training set so part goes left and part goes right might be just as accurate using a different feature that doesn't correlate as well, but the mismatches should be closer to correct in the version with a higher correlation coefficient.

I've been making my trees using some form of this method for years. I do lose the ability to discern good secondary features unless they show up due to bagging. This is something I would like to try and address in the new version. This is also the bias I was talking about in the opening paragraph: my bias is toward the highly correlated features.

I also much more recently moved to the "logistic function" (sigmoid) style splitter. So I build a curve for each feature that represents the feature's distribution. It then gets combined with the 2nd, 3rd, etc. features I choose to use as well (depending on the number of features or type of data) to determine which path down the tree the data is set to go. This works wonderfully; it gave me my last big boost in random forest before I moved on to gradient boosting, which also makes good use of those stub trees.

The new tree I'm going to be making will endeavor to take the best of all worlds, as it were. I'll be using every feature and building a splitter from the net total sum of where the data says it should go. I'll be weighting each opinion by its overall correlation to the final score (this might change, but you should have some system of setting importance, otherwise it's noise), and finally I'll be doing the process iteratively, where like in GBM each pass will attempt to adjust the result of the previous to hone in on the actual correct score.
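The splitter I have in mind looks roughly like this (a simplified Python sketch, not the real thing: the per-feature curve here is just a logistic squashing of the z-score, and the iterative part is left out):

```python
import numpy as np

def build_splitter(X, y):
    """For each feature, fit a logistic curve over its distribution (here a
    sigmoid of the z-score) and weight its vote by the feature's correlation
    with the target. Rows with a positive weighted sum go right."""
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    w = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    w = np.nan_to_num(w)  # constant features contribute nothing

    def split(rows):
        z = (rows - mu) / sigma
        votes = 1.0 / (1.0 + np.exp(-z)) - 0.5   # per-feature opinion in [-0.5, 0.5]
        return (votes @ w) > 0                   # True -> right branch
    return split
```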

Once a tree is finished, I'll return it out to a normal GBM tree-ensemble mechanism, which will make a new tree and repeat the process. In the case of a binary classifier as your final score, the first tree will more likely than not be a single-depth tree, because it will have a binary classifier to deal with. But the trees after that will start to get some depth to them as it finds more and more resolution. I'll need to set a maximum allowed depth, much like in a normal tree building exercise.

Give me a few days and we'll see how it goes.

Improving t-sne (part 3)

This post will be the last for the immediate future on t-SNE and really will just cover what I've done and how it all ended up. I have a new direction to head. And while I may use t-SNE, I don't think it will be the main focus (though I guess it could be! Surprises do happen).

So without further ado, let me summarize what I've written about so far. T-SNE is an algorithm that reduces the dimensions of a data set while preserving the statistical probability dictated by the distance between rows of data and the features therein. It does this by randomly distributing the points in whatever N-dimensional space you choose, then slowly moving the points iteratively in such a way as to best reflect the same probability distribution that existed in the higher-order space. You could, I suppose, increase dimensions in this way too if you wanted to.

I covered how I would speed this up at the end of part 1. I use a series of smaller "mini" t-SNEs and combine them together. All of those t-SNEs share a large number of the same data points, which were seeded into the same positions. I use those points to stitch together all the various frames, using the closest point as an anchor to find each point's final relative position in the one key frame I use to place all the points. In that way all points from all the frames are mapped to one of the frames. This seems to work really well; the results aren't exact, but the run times are linear. (I'm calling this a win.)
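For the curious, the stitching idea looks something like this as a crude Python sketch (scikit-learn's t-SNE stands in for my linear t-SNE, and the alignment here is just a nearest-anchor translation rather than the shared-seed-position trick described above):

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.spatial import cKDTree

def stitched_tsne(X, n_frames=4, n_anchors=200, seed=0):
    """Run several small t-SNEs that all contain the same anchor rows, then
    map every frame onto frame 0 by shifting each point by the offset of its
    nearest anchor (frame-0 anchor position minus this frame's position)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    anchors = rng.choice(n, n_anchors, replace=False)
    others = np.setdiff1d(np.arange(n), anchors)
    chunks = np.array_split(rng.permutation(others), n_frames)

    out = np.zeros((n, 2))
    anchor_key = None
    for f, chunk in enumerate(chunks):
        idx = np.concatenate([anchors, chunk])
        emb = TSNE(n_components=2, init="pca").fit_transform(X[idx])
        a_emb, c_emb = emb[:n_anchors], emb[n_anchors:]
        if f == 0:
            anchor_key = a_emb          # frame 0 is the key frame
            out[anchors] = a_emb
        tree = cKDTree(a_emb)
        _, nearest = tree.query(c_emb)  # nearest anchor for each point
        out[chunk] = c_emb + (anchor_key[nearest] - a_emb[nearest])
    return out
```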

In the 2nd part I covered how one can augment the results to get a clear separation of data points based on a particular score. In short, it's not magical: you build features from a data mining tool and provide the results as new features, then use only those as the t-SNE inputs. In my case my GBM does some sub-selecting and bagging internally; because of this, my outputs are never exact. This makes for a very good set of inputs for seeing the separation of the scores.

Unfortunately, the results you get don't provide a way of improving the score. I had hoped they would, but truly it's just another way of looking at the final output. If you train on a subsection of the data, feed the t-SNE results back into the GBM machine, and re-train using the other part, no overall improvements will be found. (I'm calling this a loss, or at best a draw. Yes, I did get wonderful separation, which was my starting goal, but it's nothing more than a visualization tool and didn't lead to something that actually "improves" anything from a data mining perspective.)

It's worth noting that if you don't remove the bias and go ahead and retrain from the t-SNE results using the same data you fed it originally (to get the scores that built the t-SNE results), it will slowly get better and better, but this is just a serious case of over-fitting. GBM does such a good job of finding connections between data points that it will emphasize any aspects represented in the new t-SNE features that allow it to get closer to the right score.

I tried this out and found I could turn a .47 result (log-loss) into a .43 result by adding in the t-SNE data. Then I could further reduce it all the way to .388 by letting the algorithm pick and choose the features to use (well, if brute-force trial and error is pick and choose). The point is, it's not useful for improving the results on the test set (which actually scores .48 when the cross-validation set scores .388).

I'll leave this right there for now.