Dim Red Glow

A blog about data mining, games, stocks and adventures.

Slow going on the GBM in GBM idea

I've been thinking about this for weeks. I'm normally a jump-in-there-and-do-it kind of guy, but with no contests around to motivate me I've been stewing on it more than writing it. I did implement one version of the GBM in GBM mechanism, but it underperforms compared to my current normal tree. There are probably more things to do to hone it and make it better, but this is where I stopped and started stewing.

I've been thinking I'm approaching this the wrong way. I think trees are great for emulating the process of making decisions based on a current known group of data, but don't we know more? Or rather, can't we do better? We actually have the answers, so isn't there a way we can tease out the exact splits we want, or get the precise score we want? I think there might be.

I've been looking at building a tree from the bottom up. I'm still thinking about the details, but the short version is you start out with all the terminal nodes. You then take them in pairs and construct the tree. Any odd node sits out and gets put in on the next go. The "take them in pairs" part is what I've been really thinking hard about. Conventionally, going down a tree your splits are done through some system of finding features that cut the data into pieces you are happy with. I'm going to be doing the same thing, but in reverse. I want the two data pieces paired together to be as close to each other as possible from a Euclidean-distance perspective, at least with regard to whatever features I use. But (and this is one of the things I debate on) I also want the scores to be far apart.
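To make that concrete, here's a rough sketch of what one round of pairing might look like. The greedy matching and the names are just placeholders of mine (nothing here is implemented yet), and it only covers the "close in feature space" half of the criterion; the score side comes next.

```python
import numpy as np

def pair_closest(X):
    """Greedily pair the current nodes (rows of X) so each pair is as close
    as possible in Euclidean distance over whatever features are in play.
    Any odd node is left over and waits for the next round of merging."""
    unused = list(range(len(X)))
    pairs = []
    while len(unused) > 1:
        i = unused.pop(0)
        # pick the remaining node nearest to node i
        dists = [np.linalg.norm(X[i] - X[j]) for j in unused]
        j = unused.pop(int(np.argmin(dists)))
        pairs.append((i, j))
    return pairs, unused  # unused holds the odd node out, if any
```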

When you think about what I'm trying to accomplish, putting two items with really far-apart scores together makes sense. You want to figure out shared qualities in otherwise disjointed items. Similar items could be joined as well; if we approached it that way, the idea would be that you're building a tree that homes in on the answer really quickly and exactly. That, however, wouldn't do a good job of producing scores we can further boost... we wouldn't be finding a gradient. Instead we would be finding one point, and after one or two iterations we'd be done.

By taking extreme values, the separation would ideally be the difference between the maximum value and the minimum value. If we did that, though, it would only work for two of our data points; the rest would have to be closer to each other (unless they all shared those two extremes). I think it would be best to match items in the center of the distribution with items at the far extremes. That gives every pair a similar score separation of roughly (max - min)/2, and the pair averages end up sitting near the middle of the distribution.
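Here's the score side of that matching as I picture it, just a sketch (sorting by score and pairing across the halves is my own reading, not a finished mechanism):

```python
import numpy as np

def pair_by_score_gap(scores):
    """Sort by score and pair each item with the one half the list away.
    For a reasonably even spread of scores, every pair then spans about
    half the score range, so the gaps are all roughly (max - min) / 2 and
    the extremes get matched with items near the center of the
    distribution.  Feature distance is ignored here."""
    order = np.argsort(scores)   # indices from lowest to highest score
    half = len(order) // 2
    return [(order[i], order[i + half]) for i in range(half)]
    # with an odd count, one item sits out until the next round
```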

In this way we merge items up until we get to a fixed depth from the top (the top being a root node). We could merge all the way to that point and then try to climb back down the tree with the test data; I might try that at some point, but since the splitters you would make won't work well, I think the better way is to introduce the test data at the closest terminal node (much like how the nodes were merged together) and follow it up the tree till you get to a stopping spot. The average answer there is the score you return.
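Roughly, I picture the prediction step like this. The Node structure, the climb, and the optional level cap are all stand-ins of mine for the "stopping spot"; nothing here is the actual implementation:

```python
import numpy as np

class Node:
    """One node in the bottom-up tree; leaves hold a single training row."""
    def __init__(self, x=None, score=None, children=()):
        self.children = list(children)
        self.x = x  # feature vector (leaves only)
        # a leaf keeps its own score; an internal node pools its children's
        self.scores = [score] if score is not None else [s for c in children for s in c.scores]
        self.parent = None
        for c in self.children:
            c.parent = self

def predict(leaves, x_new, max_levels=None):
    """Drop a test row in at the closest terminal node (same Euclidean
    measure used for merging) and follow it up the tree to a stopping
    spot -- here the top of its merged subtree, or an optional level cap.
    The average of the training scores under that node is the answer."""
    node = min(leaves, key=lambda leaf: np.linalg.norm(leaf.x - x_new))
    climbed = 0
    while node.parent is not None and (max_levels is None or climbed < max_levels):
        node = node.parent
        climbed += 1
    return float(np.mean(node.scores))
```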

Again, I still haven't implemented it; I've been stewing on it. The final piece of the puzzle is exactly how I want to do feature selection for the merging. There has to be some system for maximizing the score separation and minimizing the distance so it isn't all ad hoc.
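One way to make it less ad hoc might be a single pair-quality number that trades the two off. The sketch below is just a toy; the lam weighting is made up and would itself need tuning, and the same quantity could be what drives which features get used in the distance:

```python
import numpy as np

def pair_quality(x_i, x_j, y_i, y_j, lam=1.0):
    """Toy trade-off: reward a big score gap, penalize feature distance.
    lam (arbitrary) controls how much closeness in feature space matters
    relative to score separation; pairs that maximize this get merged."""
    return abs(y_i - y_j) - lam * np.linalg.norm(x_i - x_j)
```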

Small GBM in GBM and a newish idea

Things are going okay with my GBM in GBM. I've got the boosting in a tree now, but the implementation I've got in there is not quite what I want. It's a little bit too much all or nothing. That is, since each tree level is a binary splitter and my method for evaluating splits is also binary, boosting the binary splitter doesn't work real well. There is no gradient to boost, per se; everything is a yes or a no.

You can change the mechanism to turn it into a logic gate, where essentially each iteration tries to make the final result all zeros (or ones), but this turns out to be so-so at best. You are missing the real advantage of easing into a solution and picking up various feature-to-feature information that is weakly expressed. Don't get me wrong, it DOES work. It's just no better than a normal classifying tree.

When I do try to ease into it by turning the shrinkage way up (the amount you multiply the result by before adding it in to your constructed result), it ends up producing essentially the same result over and over again till you finally cross the threshold and the side the result wants to go to changes (for right or wrong). The point is, the function in my series of functions that add up to a final answer is producing nearly the same result each time it is called.
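For reference, the accumulation I'm talking about is just the textbook boosting-with-shrinkage form (this is the general shape, not my exact code):

```python
def boosted_prediction(x, trees, shrinkage=0.1, initial=0.0):
    """Textbook gradient-boosting accumulation: each weak learner's output
    is scaled by the shrinkage factor before being added to the running
    result.  If every tree returns nearly the same value for x, the sum
    just creeps toward a threshold instead of refining the answer, which
    is exactly the behavior described above."""
    prediction = initial
    for tree in trees:
        prediction += shrinkage * tree.predict(x)
    return prediction
```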

I can change this by making it randomly select features on each call, but it turns out not to make much difference in my particular implementation. What I really need is a way to weight the various responses so the influence varies based on probabilities. I thought it might be good to quantize these and make it more tiered, as opposed to trying to build a standard accuracy model of sorts. So instead of 0-100%, maybe 10%, 20%, 30%, etc., or some other measured amount. The idea is that like-valued rows will group together, though I don't know if that's really any better than a smooth accuracy distribution.
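The quantizing itself would be trivial; something like this, with the tier size being whatever ends up working (the 10% step here is arbitrary):

```python
def quantize_weight(p, step=0.1):
    """Snap a smooth 0-1 weight to the nearest tier (0.1, 0.2, 0.3, ...)
    so rows with similar weights land in the same bucket instead of
    spreading across a continuous accuracy distribution."""
    return round(p / step) * step
```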

Update on the updates

First let me share this, 'cause, you know, ridiculous.

That's pretty much what I'm working on right now in my data mining. It's about half done.

Also I haven't had any marathon training updates. The short version is I've decided to hold off and only give updates when there is something meaningful to share. I never wanted this blog to be work, not to mention I want it to be enjoyable to read. Doing a daily or weekly update for the sake of the update doesn't in my mind fit that. I'm still running, still putting in around 16 miles a week. The weight hasn't really come off. I'm hovering around 197. I figure in a few more weeks if something doesn't give I'll make a more concerted effort to change my diet.

Magic: the Gathering continues to be a fixture in my life, legacy play especially. I've been contemplating a couple of decks in legacy. One I called franken-mud, which was a toolkit MUD deck that used Birthing Pod and Eldrazi instead of the usual suspects (like Metalworker and Forgemaster). It's been inconsistent. Sometimes it's really, really good, but last week I went 0 for 4... yeah, talk about disappointing. I'm also looking at a black-white Cloudpost deck I want to try that just runs lots of my favorite things (Stoneforge, Wurmcoil, Deathrite, Bitterblossom, Ugin, etc.); not sure if it'll work or not. I'll try it this Friday. I've also got a white Painter's deck in the wings I'm gonna try in a few weeks as well. I think going the all-red or red-white or red-blue route on Painter/Grindstone is fine, but I want to try an all-white version.

In standard I've been trying to find something interesting to play, but eh... it all seems like rehash. I'm probably going to play green-white little creatures/good-stuff next time I play. And as for modern, well, I have my Abzan Eldrazi deck still together. I think that's all you get till I get a wild hair and build... I dunno, a new version of Affinity. I'd kind of like to try souping up Affinity, making it more midrange/stable. But we'll see.