Things are going okay with my GBM in GBM. I've got the boosting into a tree now, but the implementation isn't quite what I want. It's a little too all-or-nothing: since each tree level is a binary splitter, and my method for evaluating splits is also binary, boosting the binary splitter doesn't work very well. There is no gradient to boost, per se; everything is a yes or a no.
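To make the problem concrete, here's a minimal NumPy sketch (my own illustration, not the actual implementation): with a real-valued learner the residuals are graded values you can chip away at, but when the splitter only emits hard 0/1 answers, the "residual" collapses to -1, 0, or +1 and there's nothing gradual to boost.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=8).astype(float)  # binary targets

# A real-valued learner leaves graded residuals to ease along:
pred_soft = np.full_like(y, 0.5)
resid_soft = y - pred_soft   # values like +0.5 / -0.5

# A hard 0/1 splitter leaves all-or-nothing residuals:
pred_hard = np.round(pred_soft + rng.uniform(-0.1, 0.1, size=8))
resid_hard = y - pred_hard   # only -1, 0, or +1

print(sorted(set(resid_hard.tolist())))
```

Every residual in the hard case is a full unit (or zero), so each boosting round can only flip answers wholesale rather than nudge them.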
You can change the mechanism to turn it into a logic gate, where each iteration essentially tries to drive the final result to all zeros (or all ones), but this turns out to be so-so at best. You miss the real advantage of easing into a solution and picking up feature-to-feature information that is only weakly expressed. Don't get me wrong, it DOES work; it's just no better than a normal classifying tree.
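One way to read that logic-gate scheme, as a sketch: each round the next learner tries to predict the current error pattern, and XORing its correction into the running prediction drives the errors toward all zeros. The "weak learner" here is a made-up stand-in that fixes one wrong row per round, just to show the all-or-nothing flipping.

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
pred = np.zeros_like(y)

for _ in range(3):
    errors = y ^ pred                 # 1 wherever we're still wrong
    # Hypothetical "weak learner": corrects the first wrong row it sees.
    correction = np.zeros_like(y)
    wrong = np.flatnonzero(errors)
    if wrong.size:
        correction[wrong[0]] = 1
    pred = pred ^ correction          # flip that prediction outright

print((y ^ pred).sum())  # errors remaining after 3 rounds
```

Each round either flips a row or doesn't; there's no notion of a row being "mostly right," which is exactly the weakly-expressed information a graded booster would accumulate.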
When I do try to ease in by cranking the shrinkage way up (the factor you multiply each result by before adding it into your constructed result), it ends up producing essentially the same result over and over again until you finally cross the threshold and the side the result lands on changes (for right or wrong). The point is that each function in my series of functions that add up to a final answer produces nearly the same result every time it is called.
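That stall looks something like this (a toy sketch, with a made-up shrinkage of 0.25 and a tree that always emits the same value): the running score creeps up each round, but the thresholded decision is identical call after call until the one round where it suddenly flips.

```python
shrinkage = 0.25
tree_output = 1.0   # the tree keeps producing (nearly) the same value
score = 0.0

flips = []
for _ in range(4):
    score += shrinkage * tree_output  # ease the running result along
    flips.append(score > 0.5)         # which side of the decision threshold

print(flips)
```

Every round before the crossing is wasted from the decision's point of view, which is why the series of functions feels like it's repeating itself.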
I can change this by randomly selecting features on each call, but it turns out not to make much difference in my particular implementation. What I really need is a way to weight the various responses so their influence varies based on probabilities. I thought it might be good to quantize these and make the output more tiered, as opposed to trying to build a standard accuracy model of sorts: so instead of a smooth 0-100%, maybe 10%, 20%, 30%, etc., or some other measured steps. The idea is that like-valued rows would group together, though I don't know if that's really any better than a smooth accuracy distribution.
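The tiering idea could be as simple as snapping each probability to the nearest step. This is just a sketch of that quantization (the `tier` helper and 10% step size are my assumptions, not anything decided yet):

```python
import numpy as np

def tier(p, step=0.1):
    """Snap probabilities to the nearest tier (10%, 20%, ...). Hypothetical helper."""
    return np.round(np.asarray(p) / step) * step

probs = np.array([0.03, 0.12, 0.18, 0.47, 0.52, 0.91])
print(tier(probs))
```

Rows landing in the same tier (0.47 and 0.52 both snap to 0.5 here) would group together, which is the behavior described above; whether that grouping beats a smooth distribution is the open question.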