Dim Red Glow

A blog about data mining, games, stocks and adventures.

the "evolution" of my genetic algorithm

I thought about it a little and wanted to say what a genetic algorithm is, at least as I understand it, so everyone is on the same page: a schema-driven algorithm whose final result can be evaluated. The schema changes randomly or by merging good prior results, and if a more desirable result is found (by whatever measure), it is kept for future merging.
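To make that definition concrete, here is a minimal, generic sketch of such a loop (not the program described below); `random_schema`, `mutate`, `merge`, and `evaluate` are hypothetical placeholders, and a higher `evaluate()` score is assumed to be better.

```python
import random

def genetic_search(random_schema, mutate, merge, evaluate, generations=1000, pool_size=20):
    # Start with a pool of random schemas and score each one.
    scored = [(evaluate(s), s) for s in (random_schema() for _ in range(pool_size))]
    scored.sort(reverse=True, key=lambda t: t[0])

    for _ in range(generations):
        # Either mutate a good schema or merge two good ones.
        if random.random() < 0.5:
            child = mutate(random.choice(scored[:5])[1])
        else:
            a, b = random.sample(scored[:5], 2)
            child = merge(a[1], b[1])
        score = evaluate(child)
        # Keep the child only if it beats the worst member of the pool.
        if score > scored[-1][0]:
            scored[-1] = (score, child)
            scored.sort(reverse=True, key=lambda t: t[0])
    return scored[0]
```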

So it has been a few weeks and things have really moved forward. I think a full history via bullet points will explain everything:

  1. I got a wild hair to retry writing a GA (genetic algorithm) about a month ago. The first version was pretty amazing but produced nothing immediately useful; it produced features that had a high correlation coefficient.
  2. Turns out there were some bugs in scoring. I fixed them and things were nowhere near as good as I thought, but the features still scored higher than normal. Unfortunately the data mining tools didn't make particularly good use of them (they added nothing overall). I can't figure out if it is overfitting or if the results just aren't all that useful; probably both.
  3. I decided to make the GA 2+D (multi-dimensional). In things like t-SNE this is the way to go, as working in one dimension is far too limiting. The way this works is there are basically two (or more) sets of results that work separately to produce an X and a Y (they don't interact, though sometimes I flip them when loading, since they are interchangeable and that is exactly the sort of cross-breeding I want to see). Initially, distance from the origin point was used to score correlation.
  4. Scoring correlation by distance from the origin turned out to be poor, so I created a new mechanism that scores based on distance to a mismatched result in 2-D space. The mini-universe's X and Y points are scaled to between 0 and 1, and a distance of 1/(region size) is considered perfect separation. This works, but the results don't seem to translate into anything useful for data mining. Also, because every point interacts with every other point, I am forced to make the scoring work on a subset of the data, which is less than ideal.
  5. I decided to abandon that scoring scheme and instead make the (0,0) corner of the universe a 0 value and the (1,1) corner a 1 value. To find any point's actual value, sum the X and Y (and divide by 2 if you want it between 0 and 1). This means points on the same diagonal have the same value, but both channels/outputs of the GA can potentially dictate the result of any given point, though usually they need to work together (see the short sketch just after this list).
  6. I made many improvements to the actual internals of the GA. This is an ongoing process, but at this point I thought I'd call it out. I'm not to my current version yet, but I've been playing with things internally: things like how often a copy is merged with another copy without error, and the introduction of the last few math functions I'll use. I decided to add the sigmoid (logistic function) and logit (its inverse) to the list of gene functions that are available. Even though their functionality could be reproduced with 2 or 3 other functions, it is very useful from a mutation standpoint to make the feature available by just specifying it. This highlights the possibility for other such functions in the future.
  7. For reasons of speed I was using a subset of the data to train and test against, and the initial results looked amazing. The upshot of my new technique is that you don't have to mine on the output; you can translate it directly into a final result (x+y = binary classifier). I let it run for a few days and got a result of .31 (exceeding the contest's current best score of .29), only to realize that I had severely over-fit to the 10% of the training data I was using (which wasn't randomly selected either; it was the first 10% of the data). Two things came out of this: the idea of scaled accuracy, and a better way to train in general. It also turns out certain sections of the training data are very strongly organized (a competitor at Kaggle pointed this out); the data as a whole is much harder to work with.
  8. I figure that if my final score is good on a small sample and on the entire training data, then it should be good overall. The logic is that if you get a self-similar score at two scales, then no matter what scale you work at it will be the same (scaled accuracy). The more divisions you do, the less chance of error you have. Currently I'm only doing 2 (which is poor): one at 15% and one on the entire training set. You could probably compute an ideal split based on the amount of time you want to spend testing your result; I would think 2 or 3 runs at 15% (averaged), 1 at 32%, and the full training data would be far better. So far, I submitted a locally tested .244 result to the Kaggle website and got back a .241. I also did a .195 and got back a .190. So for now I'll leave it at 15%/100% and just expect a small drop when I submit.
  9. It turns out there is enough logic to export a mining tree into genes. I wrote an export feature into my data miner to see what the results are. This works because GBM (how I do data mining these days) is the addition of lots of tree results: each tree node can be turned into a series of logistic functions multiplied together, those are multiplied by the tree node's output and then by the tree weight, and that is added to the running total (which is seeded with an average). You can split the trees up across the dimensions since final scores are just the sum of the dimensions anyway. The resulting GA schema is huge (21,000+ genes) for a depth-6, 16-tree stack. The logic does seem to work and I can generate the output for it (see below; a rough sketch of the conversion also follows this list). I'm sure this can be optimized some with the current mechanisms in my GA, though it seems unlikely to be useful in this instance; that schema is too large to process quickly enough to optimize in the time remaining.
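As a tiny illustration of item 5 (my reading of it, not the actual code): once both GA outputs are scaled to the unit square, a point's predicted value is just the average of its two coordinates, so the (0,0) corner maps to 0, the (1,1) corner maps to 1, and points on the same diagonal x + y = c share a value.

```python
def corner_value(x, y):
    """Map a point in the unit square to a prediction: (0,0) -> 0, (1,1) -> 1."""
    return (x + y) / 2.0

# Either output channel can push a point toward 0 or 1 on its own,
# but points along the same diagonal get the same value.
print(corner_value(0.0, 0.0))  # 0.0
print(corner_value(1.0, 1.0))  # 1.0
print(corner_value(0.9, 0.1))  # 0.5
```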
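To make item 9 concrete, here is a rough sketch of the tree-to-gene idea as I read it (a reconstruction, not the actual exporter): each split `x[feature] < threshold` is approximated by a steep logistic, a leaf's indicator is the product of the logistics along its path, and each tree's contribution is its leaf values weighted by those indicators and the tree weight, added to a running total seeded with the average.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def leaf_indicator(x, path, k=50.0):
    """Approximate 'did x fall into this leaf?' as a product of logistics.

    path is a list of (feature_index, threshold, goes_left) split decisions;
    k controls how sharply each logistic approximates the hard split.
    """
    ind = 1.0
    for feature, threshold, goes_left in path:
        step = sigmoid(k * (threshold - x[feature]))  # ~1 when x[feature] < threshold
        ind *= step if goes_left else (1.0 - step)
    return ind

def gbm_predict(x, trees, base_score):
    """trees: list of (tree_weight, leaves); leaves: list of (path, leaf_value)."""
    total = base_score  # running total seeded with the average
    for tree_weight, leaves in trees:
        for path, leaf_value in leaves:
            total += tree_weight * leaf_value * leaf_indicator(x, path)
    return total
```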

As of this writing my genetic algorithm is running (and has been for a few days); it is currently up to .256 on the training data. If testing holds true I expect to see around a 2% drop in score when I submit, though the drop will get worse the better my prediction gets as the genetic code isolates scores. That means I can expect a .316 to turn into around a .31, which, by the way, is my goal. I doubt I will mess with the program much more in this contest (unless it runs for 3-4 hours without an improvement), since the gains have been very steady. Yesterday around this time it was at .244, so .012 in a day isn't bad. I'm sure it will slow down, but there are 2 weeks left in the contest. That puts me about 2 days out from tying my old score from normal data mining techniques (assuming linear progression, which I'm sure will slow down) and about 7 days from my target.

Here are some results. This is for the current contest (Kaggle's Porto Seguro). The scoring is normalized Gini, which is the same as 2 * AUC (area under the ROC curve) - 1. This works out to a range of -1 to 1: 1 is perfect, -1 is the inverse of perfect, and 0 is the worst. The red dots are 1s and the blue dots are 0s in the training data; the top-left corner is 0 and the bottom-right is 1.
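Since normalized Gini is just that rescaling of AUC, it can be computed directly; a quick sketch using scikit-learn's `roc_auc_score` (not the contest's own scorer):

```python
from sklearn.metrics import roc_auc_score

def normalized_gini(y_true, y_pred):
    # Normalized Gini is a rescaling of ROC AUC: 2 * AUC - 1.
    return 2.0 * roc_auc_score(y_true, y_pred) - 1.0

# A perfect ranking scores 1.0, a reversed ranking scores -1.0.
print(normalized_gini([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
print(normalized_gini([0, 0, 1, 1], [0.9, 0.8, 0.2, 0.1]))  # -1.0
```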

This scored a .241 on the test data and a .244 on the training data.

Here is one GBM tree mentioned above turned into a gene and graphed. I didn't score it against the test data, but on the training data it was around .256 (IIRC).

Behold my genetic algorithm

I recently started a new genetic algorithm and I've been really pleased with how well it turned out. So let me give you the rundown of how it went.

I was working on the Zillow contest (which I won't go on about; suffice it to say you were trying to improve on their predictions) and thought a genetic algorithm might do well. I didn't even have my old code, so I started from scratch (which is fine; it was time to do this again from scratch).

About 1.5 to 2 days later I had my first results, and I was so happy with how well it worked. I should say what it was doing: it was trying to predict the score of the Zillow data using the training features. All missing data was filled with average values and I only used real-number data (no categories; if I wanted them they would be encoded into new yes/no features for each category, i.e. one-hot encoded features).

The actual predictions were ranked by how well they correlated with the actual result. I chose this because I use the same mechanism in my prediction engine for data mining, and it is historically a really good and fast way to know how well your data moves the way the actual data moves. I found that it was really good at getting a semi-decent correlation coefficient pretty quickly (say 0.07, which sounds terrible, but against that data it was good). The problem was it was overfitting: I could take out half the data at random and the coefficient would drop to crap.
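A minimal sketch of that kind of fitness measure under my reading (correlation between a candidate's output and the actual target, here via NumPy's `corrcoef`; the data is made up):

```python
import numpy as np

def correlation_fitness(predicted, actual):
    """Score a candidate by how strongly its output correlates with the target."""
    return abs(np.corrcoef(predicted, actual)[0, 1])

# A weak but real linear relationship still scores measurably above noise.
rng = np.random.default_rng(0)
target = rng.normal(size=1000)
candidate = 0.07 * target + rng.normal(size=1000)
print(correlation_fitness(candidate, target))  # roughly 0.07
```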

Skipping a long bit ahead, I came to the conclusion that there were bugs in the scoring mechanism, though apparent overfitting was there too. I tried doing some folding of the scoring data to help eliminate this but found it didn't really help; fixing the bugs was the big thing. This happened around the end of the Zillow contest, and by then I was improving my core algorithm to deal with Zillow's data. The contest ended and I set everything aside.

Skipping ahead about 2 weeks, I had started on a new contest for Porto (still going on), and after running through the normal suite of things to try, I went back to the genetic algorithm to see what it could do.

I left everything alone and ran with it. Pretty quickly some things occurred to me. Part of the problem with the genetic program is that it is fairly limited in what it can do with one prediction. That is, if it kicks out one feature to use/compare against the actual score, that limits further analysis and makes evolution equally one-dimensional. So I introduced the idea of one cell (one genetic "thing") producing multiple answers, one for each dimension you care about. Essentially it has two chains of commands that make two different predictions which, in conjunction, give you a "score".

But how to evaluate said predictions? There may be some good form of correlation coefficient for two features, but whatever it is, I never found it. Instead I decided to do a nearest-neighbor comparison.

So basically each point is scored by where it is placed in dimensional space versus all the other points. Points are scored only against those they don't match (Porto is a binary contest, so this works well; I would probably split at a median value if I were using this on regular data, or the runtime gets ridiculous since it's O(n^2)).

In essence, training points with the same class never see each other; they only see the data points of the opposite class, which seriously cuts down on runtime. In those cases I want as much separation as possible. Also, I don't want scale to matter: artificially inflating the numbers while keeping everything the same relative distance apart should produce the same score. So I scale all dimensions to be between 0 and 1 before scoring. Then I just use 1/distance to get a value (you can use 1/distance^2 if you like).

This worked really well, but when I started looking at results I saw some odd things happening: things getting pushed to the four corners. I didn't want that specifically; in a lot of ways I just want separation, not a case of extremes. To combat this I added a qualifier to the scoring: if two points are over a certain distance apart, that counts as a 0 score (the best score). This way you get regional groupings without the genetics pushing things to the corners just to get the tiniest improvement.

Lately I've left the region setting at 3: if two points are more than 1/3 of the map apart, they don't interact. Also, I cap my sample size at 500 positives and 500 negatives. I could do more, but again, runtime gets bad quickly, and this is about getting an idea of how good the score for a cell is.
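Putting the last few paragraphs together, here is a minimal sketch of that scoring as I read it (my reconstruction, not the actual code): scale both outputs to [0,1], sample up to 500 points of each class, and penalize only opposite-class pairs that are closer than 1/region, using 1/distance. Lower total penalty is better.

```python
import numpy as np

def cell_fitness(xy, labels, region=3, cap=500, seed=0):
    """Penalty for a cell's 2-D output; lower is better.

    xy: (n, 2) NumPy array of the cell's X/Y outputs; labels: 0/1 NumPy array.
    Only opposite-class pairs closer than 1/region contribute, via 1/distance;
    anything farther apart counts as perfectly separated (zero penalty).
    """
    rng = np.random.default_rng(seed)
    # Scale each dimension to [0, 1] so absolute scale doesn't matter.
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-12)

    pos, neg = xy[labels == 1], xy[labels == 0]
    # Cap the sample to keep the O(n^2) comparison affordable.
    if len(pos) > cap:
        pos = pos[rng.choice(len(pos), cap, replace=False)]
    if len(neg) > cap:
        neg = neg[rng.choice(len(neg), cap, replace=False)]

    # Pairwise distances between opposite-class points only.
    d = np.linalg.norm(pos[:, None, :] - neg[None, :, :], axis=2)
    close = d < 1.0 / region
    return np.sum(1.0 / (d[close] + 1e-12))
```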

There are lots of details beyond that, but that's the gist of it. The whole thing has come together nicely and is neat to watch run. Things do grow into corners sometimes, since I only breed the strong cells (lately I've started throwing in a few weak ones to mix it up). I kind of think I should have extinction events to help keep evolution from becoming stagnant in the attempt to get the best score possible. I'll have a follow-up post with some pictures and more specifics.

 

been a long time since I rock and roll..

Wow, so I did it again: I stopped writing blog entries. I like to keep ideas going in blogs (a running story, if you will), but I guess things get busy, my focus changes, the next post doesn't seem relevant and thus maybe not something I want to write, and then the whole blog-writing idea gets out of my mind entirely... and then months have passed. Let's set the excuses aside and give some updates.

So first, I was going to start the One Punch Man thing. I started to get into it, then not so much. That is to say, every time I started to make headway into a committed exercise routine, I'd get injured or sick or busy for like 5 days in a row. So, long story short, it never happened. I ~am~ trying to train for another marathon right now, but I'm finding that old foot injuries are getting aggravated and then not healing as fast as I need them to, and I'm loath to run on injured feet. I'm working on the problem by doing less heel striking when I do run, but right now I'm not keeping pace with the training I should be doing because of the longer breaks between runs.

Losing some weight would help all this. I'm not happy with my weight right now, and I'm heavier than I have been in years and years (200-205; usually I'm more like 190-195 and I like being 180-185). The running hasn't really helped. It usually does some; this time not at all. I think it's because I program/sit all the time, so I guess I need a diet on top of it. The extra weight isn't doing my body any favors healing from running either, and it's harder on the body when I do run. Try running around with 20-25 pounds of topsoil... it's harder!

Other things that have happened: I went to Gen Con, probably for the last time. I mainly went this year because it was #50 and we humans do love celebrating multiples of 10. It was a bit of a meh experience; too much of it was stuff I either didn't care to do or that wasn't all that. I played no board games at all there (which is odd, it's a board game convention). I did play some Magic, but Magic there is not like it is at a Grand Prix, where it is the main focus, and it's a lackluster experience because of that. I did see They Might Be Giants (which was pretty great), play some True Dungeon (also great), and have dinner with a giant group of people at Fogo de Chão, a Brazilian steakhouse. So there were definitely some good times to be had.

I've more or less quit magic. I might do 1 or 2 more events but I've just not got the interest anymore. I do tend to have a little more fun when i play with the long breaks in between but I really think it is time for new hobbies.

A few things I'm doing right now for hobbies: an automatic trading card sorter (seems ironic to do it now, but I have more time, so... yeah), more data mining improvements/competitions, and maybe a Feathercoin payment program/plugin for eCommerce. I'll talk more about all that at a later time.

little exercise update

So after I published that last post I got sick... like the very next day. So 2 weeks later I started running again, and a week after that I started doing 6.2-mile runs instead of 4. That was a week ago. It's going well! ... but other than a few random sets of sit-ups or push-ups, I haven't started working the rest of the routine. It's not that it's hard to find time as much as getting started. So yeah, I run Saturday, and I think Sunday I'll try to do my first set of 51... I won't go all out at first. After a few days I'll try the full 102 (assuming I can, you know, still sit up).

One Punch Man exercise craziness

So lately I've been trying to lose weight, partly because I like the way I look when I look in the mirror and partly because it's just healthier for me. I had a goal to lose some weight by June 12th, but with the rate weight loss is working out to be, I don't think I'm really going to come close to my goal.

The main reasons are my age (I'm 41 and metabolism slows down) and my lack of movement most of the day. I run, but it's not enough. I generally do 4 miles, 3 times a week, but my constant sitting doesn't burn a lot of calories, so it's kind of a wash. Also, my muscles seem conditioned to do said 4-mile run quite well without much training. I tried cutting carbs and that helped, but that's not doing much without enough exercise to burn the fat.

So I'm gonna do something that will really, and I mean really, make me lose weight (in a good way): I'm gonna do Saitama's exercise routine (from One Punch Man). I wouldn't do it if it wasn't possible for me. The "hard" part is the running, and that's the one part I won't sweat. So what is it?

  • 100 Push-Ups
  • 100 Sit-Ups
  • 100 Squats
  • 10KM Running (that’s 6.2 miles)

every single day.

Okay, really? Every single day? No. That wouldn't make sense; even when maintaining yourself that seems like overkill unless there was some real need. So I'll probably do it 4 days a week. 6.2 miles a day is another 2.2 miles on my normal run, so it will be a little harder at first. The 100 squats, sit-ups, and push-ups won't be bad as long as I break them up into sets. I'll probably do 2 workouts a day with 3 sets each, so 51 push-ups, sit-ups, and squats (3 sets of 17) twice a day.

But that's CHEATING! Actually, he doesn't really say it was all in one go. In fact I probably should break up the run too, but honestly, I don't really want to, because it's going to eat up enough of my day as is (quite the time sink). And when I get in better shape I'll probably merge the 2 other workouts into one, so it'll be 3 sets of 34 each. But that won't be for a month or so.

Am I really that out of shape? This does seem drastic, but it's more just something to amuse me. If I absolutely can't find time or I hate it, I'll stop. But barring incident, I'll start tomorrow (though my first day I might do half and really go 100% Wednesday). I'm abandoning my dieting at least until I get going, because not having carbs while exercising is more brutal than I want to be. Right now I weigh around 195; I'd really like to be 180. 175 is like my perfect super-in-shape weight that I really haven't been at... er, ever (well, high school, but that was a not-yet-adult form of me). Even when I was in great shape around 28 (13 years ago, ugh) I was still like 178 (splitting hairs, I know).

Anyway, we'll see how it goes; I'll keep you all updated. I may or may not post before and after pictures; you'll just have to wait and see, hah! Really, I'm thinking from the front there won't be THAT much difference; my face will look a little thinner. The main thing is I won't have quite the thick core (torso seen from the side), which is really my goal.

another mathy thing ... p = 4 * k + 1

So I was looking for an old Numberphile video (one about a particular kind of prime) and re-saw this:

https://www.youtube.com/watch?v=yGsIw8LHXM8 (two square theorem)

Or, if you like, the original video that one comes from:

https://www.youtube.com/watch?v=SyJlRUBoVp0 (The Prime Problem with a One Sentence Proof)

Anyway, I found myself rewatching it and decided to see if I could make a simpler version of the proof. Not shorter, mind you; simpler as in more straightforward. I think I've done that here (it's probably not new; most things in math aren't), but regardless, I submit it for your entertainment/utility.
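For reference, the statement being proved is the classic two-square theorem; in the p = 4k + 1 notation of the title it reads:

```latex
% Two-square theorem, in the p = 4k + 1 notation of the title:
% every prime one more than a multiple of four is a sum of two squares.
\[
  p \text{ prime and } p = 4k + 1
  \quad\Longrightarrow\quad
  p = x^2 + y^2 \ \text{ for some integers } x, y.
\]
```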

 

Also, it's worth saying that you can then figure out exactly what form the X and Y (or K, or what have you) need to have by just unraveling all that. Here is that little bit of extra formula spelled out:

 

 

 

a need for better time warping

Well, weeks and weeks of working on the cancer contest have brought me back to where I started. I want to use dynamic time warping (DTW) to match images. Once the images are matched, I think maybe look at the difference between the original and the target images to see what is left; that is probably your best place to start looking for cancer.

So why don't I do this? Because the runtime is abhorrent. For a single comparison of two images, I think the naive big-O is something like N^4, where N is the number of pixels in the image. N^2 is one-dimensional (linear) DTW, but you can't just add a dimension and go up by one power; if I understand it right, you have to add two to properly do 2-D matching. So really it's, yeah, bad.

Maybe there is a way it can be done in N^3 and I'm missing something, but really it needs to be done in something like linear or at least N*log(N) time to really work. So that's where I'm leaving it.
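For context, this is the standard one-dimensional DTW recurrence whose O(N^2) cost is being referred to above (a generic textbook sketch, not anything from the contest code); the 2-D image-matching version blows up well beyond this.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # 0.0: the sequences warp onto each other
```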

There is a cervical cancer contest out there that is very similar, except the photos are from some sort of normal optical camera. While maybe it could be done the same way, it has the same problem. I think if we solve this problem, the world will have much, much better analysis systems (in general).

It's worth mentioning that I think most people do their analysis using deep neural networks. Quite honestly I'm not sure how they would do a good job processing 2-D image data, but apparently it does work. I've got 3 weeks before the contest is over. If I can come up with a good way to do the DTW I will; otherwise I'm throwing in the towel on this one :( .

Some animations from the cancer research

I ran the data through t-SNE (well, my layered version of t-SNE) and got some pretty good results, good at least visually. I'm not sure if these are going to pan out, but I'm hopeful. Let me say two things before I get to them.

First, the more I use the new log version of the image, the more I think that is fundamentally the best way to look at the image data. In fact, I think it might be the best way to deal with just about any real-world "thing" that can't be modeled in discrete values. It just does everything you want, and in a way that makes good sense. It is, essentially, a waveform of the whole object broken down into one number. It might actually be a great way to deal with sounds, 3-D models, electrical signals, pictures... you name it. I think there is room for improvement in the details, but the idea itself seems good.

The second thing I want to say is that the files are huge, so I'm going to share still images of the final result and then link to the .gif animations. I might upload them to YouTube, but I'm pretty sure GIF is the most efficient way to send them to anyone who wants to see them. They're lossless, which is great, and smaller than the corresponding mp4/mpeg/flv files. This is because normal video compressors don't handle thousands of moving pixels nearly as efficiently as a GIF file with its simple difference layers does.

In these images, a purple dot represents a CAT scan image from a patient with no cancer. The green dots represent CAT scan images from patients who had cancer, and in the final image, yellow is test data we don't have solutions for. The goal here is to identify which slices/images actually have cancer-telling info in them. So I would expect most of the images to fall into a mix of purple and green dots.

Okay, this one is a mess, and your initial reaction might be: how is that useful? Well, there is an important thing to know here. The fields I fed in included the indexed location of the slices after they were sorted. While this is fairly useful for data mining, it is all but useless for visualization (a linear set of index numbers doesn't have a meaningful standard deviation or localized average to use for grouping). I knew this going into the processing, but I wanted to see the result all the same.

So now we remove the indexing features and try again (remember, each pixel is actually a bunch of features created from multiple grids of images).

Okay, that looks a LOT better. In the bottom right you see a group of green pixels; that is some very nice auto-grouping. I went ahead and ran this once more, this time with the test data in there as well. It is in yellow.

If anything, that's even better. Adding more data does tend to help things. The top group is really good, and the ones on the left might be something too. It's hard to know, but you don't have to! That's what the data miner tools will figure out for you.

I want to do even more runs and see if I can build a better picture. A few of my features are based on index distances, and they should probably be based on my log number instead. Either way, it's fun to share! The rubber will really meet the road when I see if this actually gives me good results when I make a submission (probably a day away at least).

Here are the videos. They are 69, 47, and 77 MB each, so it'll probably take a while to download them.

http://dimredglow.com/images/animation1.gif

http://dimredglow.com/images/animation2.gif

http://dimredglow.com/images/animation3.gif

 

First steps to image processing improvement

Where to begin... first I started looking at just how big 32x32 is on the 512x512 images. It seemed too small to make good, clear identification of abnormalities. So I increased my grid segments to 128... this might be too big. I'll probably drop them down to 64 next run.

I found a post by a radiologist in Kaggle's forums that further reinforced this as a good size (solely based on his identification of a 33x32 lesion; given the arbitrary location, 64x64 would be perfect). So that's on the short list of changes to make.

Also, I have fixed a bug in my sorting (it's really bad this existed :( ) and added a log version of the number used for sorting (that's actually how I caught the sorting bug). I don't generally need both, but it is useful in some contexts to use one and in others to use the other. The log number alone lacks specificity, since a 512x512 image is awfully big in number form, so rounding/lack of precision is a problem.

Unfortunately, even after fixing my garbage inputs from before, my score didn't improve. This leads me to the real problem I need to solve.

I need to be able to identify which image actually has cancer. Patient-level identification is just not enough; that's why my score didn't improve when I fixed the inputs. But how?

I hope t-SNE can help here. I feed in all the images with their respective grid data and let it self-organize. That's running right now. Hopefully the output puts the images into groups based on abnormalities, ideally grouping images with cancer. Since I don't actually know which images have cancer, I'll be looking for a grouping of images all marked 1 (from a cancer patient) and hope that gets it. There will also, hopefully, be groups made of a mishmash of non-cancer images.

I'm building one of those animations from before for fun. I'll share it when it's finished.

If this works, I'll either feed the t-SNE data (x, y) in as new features or use k-means to group up the clusters and use that. We will see!
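As a rough sketch of that idea with off-the-shelf tools (the pipeline described here is custom, so this is only illustrative, and `grid_features` is hypothetical data): run t-SNE on the per-image grid features, then either append the 2-D embedding as new features or cluster it with k-means and use the cluster id.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# One row of grid-derived statistics per CAT scan image (made-up placeholder data).
grid_features = np.random.rand(500, 64)

# 2-D self-organized layout of the images.
embedding = TSNE(n_components=2, random_state=0).fit_transform(grid_features)

# Option 1: use the (x, y) coordinates directly as two new features.
augmented = np.hstack([grid_features, embedding])

# Option 2: cluster the embedding and use the cluster id as a categorical feature.
cluster_id = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embedding)
```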

cat scan analysis and first submissions

So things have moved forward and I've made a few submissions. Things have gone okay, but i'm looking for ... more.

It took the better part of a week to generate the data, though I didn't use suffix arrays. I transformed each CAT scan image into a series of averages: the first value was a single byte that was the average of the whole image, then the next 4 were the averages of each quadrant, and you repeat in this manner until you have the level of detail you want. The point is, once a level is sorted, that level is done. So the data might look something like this (a short sketch follows the example below):

Image#       1 2 3 4 5 6 7 8

----------------------------------Data below here

sort this --> 4 2 1 1 0 2 1 4   <--- most significant byte (average of the whole image)

then this  -> 4 1 1 2 3 4 5 6   <---- next most significant byte (top left quadrant average)

then this  -> 0 2 1 0 0 0 0 2    <---- etc

repeat...
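A minimal sketch of that multi-resolution key as I understand it (not the actual code): build the byte sequence whole-image average first, then the quadrant averages, then the sub-quadrant averages, and sort the image segments lexicographically by that sequence.

```python
import numpy as np

def average_key(img, levels=3):
    """Most-significant-first averages: whole image, then quadrants, then sub-quadrants, ..."""
    key, tiles = [], [img]
    for _ in range(levels):
        next_tiles = []
        for t in tiles:
            key.append(int(t.mean()))
            h, w = t.shape[0] // 2, t.shape[1] // 2
            # Split the tile into its four quadrants for the next, finer level.
            next_tiles += [t[:h, :w], t[:h, w:], t[h:, :w], t[h:, w:]]
        tiles = next_tiles
    return tuple(key)

# Sorting segments by their keys compares the whole-image average first,
# then the top-left quadrant average, and so on, just like the table above.
segments = [np.random.randint(0, 256, (32, 32)) for _ in range(8)]
order = sorted(range(len(segments)), key=lambda i: average_key(segments[i]))
```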

I broke each image into lots of little pieces, and each of those pieces got sorted into one gigantic index using the method above. Once I had that data, I built 3 different values for each grid element: index location, the average of the cancer content in nearby indexed cells (excluding grid segments from the same person), and finally how far away the nearest grid segment of the same position is (left and right in the indexed sort). That last one is supposed to give me an idea of how out of place the index is; you would think that similar images would cluster, and anomalies would be indicative of something special.

On the last method, I'm not certain the data sample size is large enough for that (or self-similar enough). It might be, but without a visual way to check it out (I haven't written one) it can be hard to be certain. It's the sort of thing I would expect to work really well with maps; I just don't know about body features. I do this to try to solve the simple problem: I know that there is cancer in the patient, I just don't know which slice/grid part it's in. I would argue that's the real challenge of this contest (more on that later).

Once I have these statistics, I build a few others relevant to the whole image and then take all the grid data plus those other statistics and try to make a cancer/no-cancer prediction for each image. Again, tons of false positives, since each image is one part of the whole and the cancer may only be on one CAT scan slice.

Once I have the results (using a new form of my GBM), I take the 9-fold cross-validation results on the training data and the results on the test data and send it all into another GBM. This one takes the whole body (the slices have been organized in order by a label) and produces a uniform 1024 slices broken out by percentage (I take the result from the nearest image; that becomes my feature for that percentage location). Then I build 512, 256, 128... down to 1; these features don't use the nearest image but the average of the 2, 4, 8, etc. elements that went into the 1024. I send all that data into a GBM, get predictions, and... Bob's your uncle.
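A minimal sketch of that second-stage resampling as I read it (the function name and exact level list are my own, not from the original pipeline): nearest-slice sampling at 1024 percentage positions, then block averages down to a single value.

```python
import numpy as np

def body_features(slice_preds, levels=(1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1)):
    """Resample one patient's per-slice predictions into fixed-length features.

    slice_preds: first-stage predictions for the patient's slices, in body order.
    The 1024-length level uses nearest-slice sampling; coarser levels average
    pairs, quadruples, ... of those 1024 values.
    """
    preds = np.asarray(slice_preds, dtype=float)
    n = len(preds)
    # Nearest-image sampling at 1024 evenly spaced percentage locations.
    idx = (np.linspace(0, 1, 1024) * (n - 1)).round().astype(int)
    base = preds[idx]

    features = []
    for size in levels:
        if size == 1024:
            features.append(base)
        else:
            # Average consecutive blocks of the 1024-length vector down to `size` values.
            features.append(base.reshape(size, 1024 // size).mean(axis=1))
    return np.concatenate(features)
```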

The accuracy... is okay. The problem I've had is that bias is super easy to introduce. Local testing puts the results at around .55 log loss, but my submissions were more in the .6 neighborhood, which makes me think that 1) I got nothing special and 2) all the false positives are screwing things up. By the way, getting to that point took over 3 weeks beginning to end, with many long hours of my computer doing things. I have since radically improved the speed of the whole process and can probably get from beginning to end in about 2-3 days now, so that's good. The biggest/most important improvement of late was threading the tree generation internal to the tree itself. I've had the trees themselves be threaded for years, but never bothered to thread out the node work. It is now, and it really helps make as much use of the CPU as I can. That change, if nothing else, will be great for the future.

So let's talk about false positives real quick, and about knowing exactly which CAT scan slice has the cancer in it. I think most people make a 3-D model and handle the data as a whole (getting rid of the problem); that may be my solution, and it is essentially what I did with the second level of the GBM. I was previously trying to solve the problem in an image-by-image way, but I think there is just too much noise unless I add some insight that is missing. With regard to that, there is maybe one possible way to add some insight. Consider this:

No Cancer                   Cancer

00100000100             00107000100             

00010100100             00900100100             

00100011100 

 

If each 0 or 1 is a whole image, the trick is to realize that the 7 and 9 are unique to the cancer side. But how do you get to that point? Right now I take all the numbers by themselves and say cancer or no cancer, so 0 has a percentage chance, etc. Even this is a simplification, because I don't have a clear picture of the "3" or "2"; it might actually just be a special pattern of 1s and 0s (that is to say, it all looks normal, and the particular oddness of the slices, in the order they appear, is what makes it identifiable as cancer).

So I'm noodling on this a little to see if I can find a good way to get that insight. If I can get the image analysis to indicate a 3 or 2 is present, I'm in there; but more likely I'll make data out of the whole and rethink how to make the predictions there, adding more data to each prediction while removing the false positives.