Dim Red Glow

A blog about data mining, games, stocks and adventures.

adjusting the layers to be tsne driven

Hello again. I've slowly turned over the ideas I wrote about last time, and I think there are two big flaws in my idea. First, fitting a GBM to data you already have in the training and test data adds nothing (that's huge). This is with regard to me selecting a few features to make something that has a correlation coefficient equal to some fixed amount of the final result. And second, making groups in any fashion that is not some form of transform will likely never give me any new information to work with.

I have a fix to both (of course :) ). I mentioned using t-SNE as well; I think that's where I need to get my groups from. The features I send into t-SNE are how I get various new groups. Everything else I've said, though, is still relevant. So it is no longer layers of GBM as much as layers of t-SNE. Once I have relevant results from that, I send it all into GBM and let it do its work.

The groups I make out of t-SNE (2-D, btw) will use the linear t-SNE I've already written (as it's super fast and does what I need pretty well). And while I will send the results from the t-SNE right into the GBM, I'll do more than that. I will also isolate each region in the results and build groups from each of those.

To do that, I make a gravity-well map for every point on the map... like they are all little planets. The resulting map is evenly subdivided (like it's a giant square map of X by X... I've been using 50 for each side) and has a value of 0 or larger in every square, using inverse squared distance: 1 / (1 + (sourceX - x0)^2 + (sourceY - y0)^2). I've been limiting the influence to 5 squares from each point just to keep the runtime speedy. It's only a 50x50 map anyway, so that's pretty good reach, and at 6 away the influence is 1/(1+36+36), which is only 0.013, so that's a decent cutoff anyways. Once I have the map, I look for any positive points that have lower or equal points in all directions. Those points are then used to make groups. There may be only 1... but likely there will be bunches.
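
Here's a rough sketch of that step (Python with numpy standing in for my actual code; the 50-square grid and the reach-of-5 cutoff are just the numbers from above):

```python
import numpy as np

def gravity_well_map(points, grid=50, reach=5):
    """points: (n, 2) array of 2-D t-SNE coordinates scaled to [0, grid)."""
    gmap = np.zeros((grid, grid))
    for sx, sy in points:
        cx, cy = int(sx), int(sy)
        # only squares within `reach` of the source point get any influence
        for x in range(max(0, cx - reach), min(grid, cx + reach + 1)):
            for y in range(max(0, cy - reach), min(grid, cy + reach + 1)):
                gmap[x, y] += 1.0 / (1.0 + (sx - x) ** 2 + (sy - y) ** 2)
    return gmap

def find_well_centers(gmap):
    """Positive squares whose value is >= every neighbor in all directions."""
    grid = gmap.shape[0]
    centers = []
    for x in range(grid):
        for y in range(grid):
            v = gmap[x, y]
            neigh = gmap[max(0, x - 1):x + 2, max(0, y - 1):y + 2]
            if v > 0 and v >= neigh.max():
                centers.append((x, y))
    return centers
```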

The groups are based on distance from the various well centers found in the gravity-well map (if you were to take the derivative of the map, these would be the zero points). So we just find the distance of every point from those centers. This has the added effect of possibly making multiple groups in one pass (which is great), not to mention the data is made via a transformation that works completely statistically and has no direct 1-to-1 correlation with the data you feed it. So you are actually adding something new for the GBM mechanism to work with.
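
And the group features themselves are just per-row distances to each of those centers, something like:

```python
import numpy as np

def group_features(points, centers):
    """One new column per well center: each row's distance (in t-SNE space)
    from that center. These columns get fed to the GBM as extra features."""
    points = np.asarray(points, dtype=float)
    cols = [np.sqrt((points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2)
            for cx, cy in centers]
    return np.column_stack(cols)   # shape: (n_rows, n_centers)
```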

At that point the resulting groups should probably be filtered to see if they add any real value, using the correlation mechanism I already mentioned in the last post. Ideally GBM would just ignore them if they are noisy.

Thoughts on the AI layers

So, I've given my layered GBM a little thought. I'll explain what I want to do by how I came up with it. My goals seem pretty straightforward; I think I can describe them with these 2 ideas/rules:

1. I want each layer that is created to contribute to the whole/final result in a unique way, so there is as little redundancy (i.e., wasted potential) as possible.

2. The number of layers should be arbitrary. If the data drives us to a one-layer system, that's all you create. That is, if the data just points directly to a final result, you just figure out the final result. If the data drives us to a 100-layer system... well, there is that then too.

To do the first one, each layer should have access to previous layers' output when making its results, so it has more options for making different groups in that layer, with the training data available to everything. Each layer will also know what previous layers produced, to keep from making "similar" things. There will likely be some controlling variable for how different a group needs to be.

To do the second one, we need a way to move towards our goal of predicting the final result. We could just make groups until we happen to make one that fits our results really well, but that sort of brute-force approach might take forever (depending on the algorithm). If, however, we have a deterministic way of measuring the predicted groups' similarity to each other and to the final result, we can use that to throw out any groups that are too similar to previous groups, and we will have a way of slowly building a target result that improves on the previous prediction.

Each layer, then, will probably (nothing is written) produce a guess for the final answer as well as additional useful groups if it can find any. If at any point the guess fails to be an improvement on the previous result, we stop and take the best guess. We could continue making layers if for some reason we think we might eventually make a better guess. In this way consecutive failures might be our stopping spot, or perhaps we will make layers until no new groups can be found, or at least make sure a minimum number of layers is created if new groups are available but the answers aren't improving in general.
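
To make the stopping rule concrete, here's a bare-bones sketch of just that skeleton. It leaves the group-making out entirely and uses scikit-learn's GradientBoostingRegressor plus a plain holdout score as stand-ins for my own GBM and scoring, so treat it as the shape of the loop, not the real thing:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def build_layers(X, y, max_bad_layers=2):
    """Keep adding layers, each trained on the original features plus every
    earlier layer's guess, until the held-out score stops improving for
    `max_bad_layers` layers in a row."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
    feats_tr, feats_va = X_tr, X_va
    best_guess, best_score, bad_streak = None, -np.inf, 0
    while bad_streak < max_bad_layers:
        model = GradientBoostingRegressor().fit(feats_tr, y_tr)
        guess_va = model.predict(feats_va)
        score = -np.mean((guess_va - y_va) ** 2)      # higher is better
        if score > best_score:
            best_guess, best_score, bad_streak = guess_va, score, 0
        else:
            bad_streak += 1
        # this layer's guess becomes an extra feature for the next layer
        feats_tr = np.column_stack([feats_tr, model.predict(feats_tr)])
        feats_va = np.column_stack([feats_va, guess_va])
    return best_guess
```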

So how do we go about picking these things/groups? That at least isn't too hard, using one of my favorite statistical measurements for data mining: the correlation coefficient. We look at the data and measure each feature's and/or group of features' movement against the final results and against any other possible groups we've made. The features for now will be selected randomly, though I'll probably find a mathematically grounded way to limit their selection.

There are two types of groups we will make: the type that contributes and the type that is a possible answer. A contributing group wants to have a unique correlation coefficient, that is, a number we haven't seen that is at least X away from any other group's. The possible-answer group always wants a correlation as close to 1 as possible.
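
A tiny sketch of that filter, assuming a plain Pearson correlation and a made-up min_gap value for the "at least X away" threshold:

```python
import numpy as np

def keep_group(candidate, target, accepted, min_gap=0.05):
    """Keep a contributing group only if its correlation with the target is at
    least `min_gap` away from every group already accepted. `min_gap` is the
    'X' knob; 0.05 is just a placeholder value."""
    c = np.corrcoef(candidate, target)[0, 1]
    for g in accepted:
        if abs(c - np.corrcoef(g, target)[0, 1]) < min_gap:
            return False   # too close to something we already have
    return True
```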

Since I don't really have a way to make multiple groups at once, what I will end up doing is making any old group I can... and again checking it against previous groups, then generating a result and seeing if said result is actually still valid. If it is, it goes into the results. I will probably either alternate between this and an actual prediction, or make a fixed number of groups and then make a prediction (and wash, rinse, repeat until I'm done in whatever capacity).

In truth I doubt a created group will ever be very comparable to the result we are looking for. If data mining were that easy we wouldn't spend much time on it. What I expect is for a bunch of rather generic groups to be made and for them to act like created features to be used in subsequent feature generation... etc., till a really useful one is generated that the final answer uses.

The only thing I would add is that I would probably throw in a t-SNE feature pair as well at each level. This actually would act like a 2nd and 3rd group; ideally any groups the t-SNE features find will likely be a possible grouping for the final result, since if those two features are paired together and used to build a new feature to predict on, then we have something that is a statistically relevant group.

What do I mean by grouping multiple features? Basically, use the distance equation... figuring the distance of each row's position from the center (average) values of the two features. Said another way: f(x, y) = sqrt( (x - avg(x))^2 + (y - avg(y))^2 )
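
As a quick sketch in code (just the formula above, applied per row):

```python
import numpy as np

def pair_distance_feature(x, y):
    """f(x, y) = sqrt((x - avg(x))^2 + (y - avg(y))^2), computed row by row."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt((x - x.mean()) ** 2 + (y - y.mean()) ** 2)
```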

 

 

another long overdue blog

Hi again, faithful readers (you know who you are). So the marathon is back off the plate. Basically, I can't afford to spend the extra cash right now on a trip. Or rather, I'm not going to borrow the money for a trip like that. I overspent when I thought the trip was off, and upon reflection, when it came time to book the trip, it just didn't make sense. So... maybe next year.

So for the last month I've been toying with my data mining code base. I removed tons of old code that wasn't being used/tested and honed it down to just GBM. Then I spent some time seeing if I could make a version that boosts accuracy over log loss (normal GBM produces a very balanced approach), and I was successful. I did it by taking the output from one GBM and sending it into another, in essence overfitting.

Why would I want to do that? Well, there is a contest ( https://www.kaggle.com/c/santander-product-recommendation ). I tried to work on it a little, but the positives are few and far between. The predictions generally put any given positive at less than a 1% chance of being right, so I tried bumping the numbers, since right now it returned all negatives. The results are actually running right now. I still expect all negatives, but the potential positives should have higher percentages, and I can pick a cutoff point for rounding to a positive that is a little higher than the .0005% I would have had to use before. All this so I can send in a result other than all negatives (which scores you at the bottom of the leaderboard). Will this give me a better score than all negatives... no idea :)

I tried making a variation on the GBM tree I was using that worked like some stuff I did years ago. It wasn't bad, but still not as good as the current GBM implementation. I also modified the tree to be able to handle time-series data, in that it can lock one row to another and put the columns in time-sensitive order. This allows me to process multi-month data really well. It also gave me a place to feed in fake data: if I want to stack one GBM on top of another, I can send in the previous GBM's results as new features to train on (along with the normal training data).
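
Roughly, the stacking looks like this if you sketch it with scikit-learn's GBM standing in for mine; the leak of in-sample predictions into the second model is the point here, not a bug:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def stacked_gbm(X, y, X_test):
    """Train one GBM, append its in-sample predicted probability as a new
    feature, then train a second GBM on top. Using in-sample predictions is
    the deliberate 'in essence overfitting' part: it pushes the second model's
    probabilities harder toward 0/1 than a single, balanced GBM would."""
    first = GradientBoostingClassifier().fit(X, y)
    X2 = np.column_stack([X, first.predict_proba(X)[:, 1]])
    X2_test = np.column_stack([X_test, first.predict_proba(X_test)[:, 1]])
    second = GradientBoostingClassifier().fit(X2, y)
    return second.predict_proba(X2_test)[:, 1]
```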

This leads me to where I think the next evolution of this will be. I'm slowly building a multi-layer GBM... which essentially is a form of neural net. The thing I need to work out is how best to subdivide the things each layer should predict. That is, I could make it so the GBM makes 2 or 1000 different groups, predicts row results for each, and feeds those in for the next prediction... etc., till we get to the final prediction. The division of the groups is something that can probably be done using a form of multivariable analysis that makes groups out of variables that change together. Figuring out how to divide it into multiple layers is a different problem altogether.

Do you want an AI? 'Cause this is how you get AIs! Heh, seriously, that's what it turns into. Once you have a program that takes in data, builds a great answer in layers by solving little problems, and assembles them into a final answer that is super great... well, you pretty much have an AI.

Incidentally, t-SNE might also help here: I might just feed in the t-SNE results for that layer's training data (fake data included if we are a level or two down) to give the system a better picture of how things group statistically.
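
Something like this, with scikit-learn's TSNE standing in for my linear t-SNE:

```python
import numpy as np
from sklearn.manifold import TSNE

def add_tsne_features(layer_X):
    """Append the 2-D t-SNE embedding of this layer's (possibly augmented)
    feature matrix as two extra columns for that layer's GBM."""
    emb = TSNE(n_components=2, init="random", random_state=0).fit_transform(layer_X)
    return np.column_stack([layer_X, emb])
```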

In other news, I started using Blue Apron. This is my first time trying a service like this, and so far I'm really enjoying it. I'm pretty bad about going to the grocery store. And going every week... well, that ain't gonna happen. This is my way to do that, without doing that :) . I'm sure most people have similar thoughts when they sign up, even if the selling point is supposed to be the dishes you are making. Honestly, I've just been eating too much takeout. I don't mind cooking, and the dishes they send you to prepare are for the most part really good.