Dim Red Glow

A blog about data mining, games, stocks and adventures.

suffix arrays maybe

I've been hard at work in my free time on this cancer image processing. I got my image load and processing down so it takes 7 minutes to load the images (this includes the white-black color balancing that makes "black" consistent across all images). The problem I'm having now is finding the best way to match a 32x32 block from the 512x512 images against all the other images to find similar entities.

I tried a brute force approach that measured differences between blocks. That wasn't gonna work: the runtime would have been centuries and the results were meh at best. So just to speed it up (before I improve the meh results), I tried making an index out of the data from some averages and whatnot for each cell, storing them in 4 bytes as an index, and associating all the rows that tied to a given index in a hashmap. The problem with this is that there are too many similar cells. That is to say, the averaging functions weren't doing enough to distinguish some very similar-looking cells. So the runtime went to something like 7 years... with still-meh results.

I tried short-circuiting the search and just taking "something" after a few tries, but even this was proving to take a loooong time to run. Why is this taking so long? Well, there are 250,000 base images, I divide each one into 32x32 segments, and then I do a few versions of the grid where I offset by 16. I end up with 16*16 + 15*16 + 16*15 + 15*15 cells for each of the 250,000 images I want to process, since I really don't want to process every single possible cell because of runtime concerns. Or do I?

So, sitting here thinking about it, I remembered suffix arrays from a bioinformatics class I took years ago (a story for another time). I've implemented one on 3 separate occasions, but it's been a while. What are they? They are a clever way of using pointers into your original array, then sorting those pointers by what they point at. If you had the string "ABA", your initial pointers would be 0,1,2; after they are sorted they would be 0,2,1 (assuming you sort "end of line" after "B"). Why is this useful? Well, you can quickly see where similar parts of the string occur just by scooting one position to the left or right in the sorted array.
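
For reference, here's a minimal C# sketch of the idea (the textbook version, not my actual code): the pointers are just the suffix start positions, and a comparer that treats "end of line" as sorting after any byte reproduces the 0,2,1 ordering from the "ABA" example.

```csharp
using System;
using System.Linq;

class SuffixArrayDemo
{
    // Naive suffix array: sort the start positions by the suffix they point at.
    // Fine for a demo; far too slow for 250 million 1k-byte entries.
    static int[] BuildSuffixArray(byte[] data)
    {
        int[] pointers = Enumerable.Range(0, data.Length).ToArray();
        Array.Sort(pointers, (a, b) =>
        {
            while (a < data.Length && b < data.Length)
            {
                int cmp = data[a].CompareTo(data[b]);
                if (cmp != 0) return cmp;
                a++; b++;
            }
            if (a >= data.Length && b >= data.Length) return 0;
            return a >= data.Length ? 1 : -1; // "end of line" sorts after any byte
        });
        return pointers;
    }

    static void Main()
    {
        byte[] text = System.Text.Encoding.ASCII.GetBytes("ABA");
        Console.WriteLine(string.Join(",", BuildSuffixArray(text))); // prints 0,2,1
    }
}
```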

I could then put all the image grid segments into a giant array of bytes and sort them (not a fast process, but really not that slow either). I want to keep the little images as they are, since they represent the features I'll feed into the database; I could make smaller images, but 961 features seems like more than enough for now. The first real problem here is that the array of segments is about 250,000,000 entries long and each entry has 1k of bytes in it, so you need a lot of memory (I'm okay there; the indexing I was doing before had the same problem). The 2nd thing you need to think about is the image segment itself: it's really not laid out in a way that is useful for sorting.

When I worked on the indexing I thought about that some and tried making averages for the whole grid. That kinda worked, but really it dances around the issue, because I was only making 4 bytes of averages from a 1k image. I think to do this right, you need to translate the images into something more... JPEG-y. You want 1 byte that represents some average detail, then subsequent bytes that refine that average, so that as you move from the left to the right of the byte array you get more and more detailed information. That will put similar images next to each other once sorted. You can certainly translate an image into another form that does something like that; I'll have to.
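
Something like the following is what I have in mind. This is only a sketch of one way to do it (a quadtree-style average pyramid, which is my guess at "JPEG-y", not real wavelet or DCT code): the first byte is the average of the whole 32x32 tile, the next 4 bytes are the 16x16 quadrant averages, and so on down to the raw pixels, so a byte-wise sort compares coarse structure before fine detail.

```csharp
using System.Collections.Generic;

static class TileEncoder
{
    // Coarse-to-fine re-encoding of a 32x32 grayscale tile: whole-tile average first,
    // then quadrant averages, then 8x8, 4x4, 2x2 blocks and finally the raw pixels.
    // Output is 1 + 4 + 16 + 64 + 256 + 1024 = 1365 bytes, a bit more than the raw 1k.
    public static byte[] CoarseToFine(byte[,] tile)
    {
        var result = new List<byte>();
        for (int block = 32; block >= 1; block /= 2)
            for (int y = 0; y < 32; y += block)
                for (int x = 0; x < 32; x += block)
                {
                    int sum = 0;
                    for (int dy = 0; dy < block; dy++)
                        for (int dx = 0; dx < block; dx++)
                            sum += tile[y + dy, x + dx];
                    result.Add((byte)(sum / (block * block)));
                }
        return result.ToArray();
    }
}
```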

Once I've done that, and once I have the data loaded into a suffix array, it's a simple matter of looking at the nearest cells in the suffix array and seeing whether they are from a cancer patient or not (skipping cells that reference the original image for that grid, and cells in test data). Once you've done all that, you can get an average value for cancer/no-cancer and save it for that feature.
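
A rough sketch of that neighbor vote (the lookup names here — ownerImage, hasCancer, isTest — are placeholders of mine, not anything from my code): walk a small window on either side of the tile's position in the sorted order, skip entries from the same source image or from test data, and average the cancer labels of what's left.

```csharp
static class NeighborVote
{
    // sorted[i] = tile index at sorted position i; ownerImage/hasCancer/isTest are
    // per-tile lookups (hypothetical names). Returns the fraction of nearby tiles
    // that came from cancer patients; 0.5 is an arbitrary fallback when nothing qualifies.
    public static double CancerScore(int position, int[] sorted, int[] ownerImage,
                                     bool[] hasCancer, bool[] isTest, int window = 8)
    {
        int self = sorted[position], hits = 0, total = 0;
        for (int offset = -window; offset <= window; offset++)
        {
            int p = position + offset;
            if (offset == 0 || p < 0 || p >= sorted.Length) continue;
            int other = sorted[p];
            if (ownerImage[other] == ownerImage[self] || isTest[other]) continue;
            total++;
            if (hasCancer[other]) hits++;
        }
        return total == 0 ? 0.5 : (double)hits / total;
    }
}
```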

This method leaves the original images unstretched and untouched in any way (other than the color balance), which means a small person may not line up with a large person. There are also times when the image is zoomed in too much in the CAT scan, so there is clipping, etc. These problems can be resolved if there are enough images (so you can find someone similar to yourself), a bit like how sampling enough voices will eventually give you a person with the same accent as you. The question is: is there enough data?

Hopefully I can get all this done by Monday and have a crack at actually making some predictions with the results using a normal GBM.

oh image processing... how difficult you are

I've decided to compete in https://www.kaggle.com/c/data-science-bowl-2017 which is a contest where you process patient CAT scan images and try to identify people with stage 1 cancer. There are something like 1500 patients and each has 100+ CAT scan slices. The number of slices is not consistent and the order of the slices is seemingly random (they have GUIDs as names).

Until now I haven't done image processing contests. In fact, in most of my contests I don't bother with feature creation at all if I can avoid it; that's a different thing than what I enjoy working on. It can't be avoided with image processing, though. It is a whole thing unto itself. So why try one now? I had a close friend die from lung cancer. It is unlikely this technology would have helped him, since by the time he went in to get his cough checked out it was clear from the CAT scans that he had cancer. But it still seems like a good pursuit and it hits close to home. The prize money (which is huge) is also nice, but so few get it that it can't be a real draw.

So what will I be doing / what have I done so far? I've loaded the images using some trial software and a simple C# program (they are DICOM medical images). It took a long while to realize that was the way to go. I tried using some open source packages and such, but the images kept coming out a little off and I didn't know why.

I've normalized the images for clarity by removing gray backgrounds and re-balancing the image brightness with that in mind. This really makes the details clear and gives all the images the same light levels to compare with each other. I did not try maximizing the light levels; presumably they already are maxed, but I might need to implement that just in case.

I tried removing non-lung artifacts, things like clothing and the table they are lying on, but the results were sketchy. I didn't want to lose anything important in the image by accident, so after many attempts I undid the work and decided to come back to it later.

I set up 2 data mining databases for my data: 1 to load image results into and 1 to produce the actual final prediction. The image results will go into a normal data mining database. In it, each image slice will be a row with a predicted value of cancer or no-cancer (based on the person it was taken from, not on whether that particular image shows cancer). The images will be split into a grid of small cells plus a 2nd grid that is offset by half a cell so corner regions are not ignored. I probably should do 2 other grids as well (I have not yet) that are offset by half a width or half a height respectively (not just a 2nd grid offset by both). These tiles will be used to compare with all other people's images to see/find the closest match.
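
For concreteness, here's a sketch of how I picture the tiling (my reading of it, not production code). It generates all four grids: the un-offset one, the half-cell offset in x, in y, and in both.

```csharp
using System.Collections.Generic;

static class Tiling
{
    // Four 32x32 grids over a 512x512 slice: no offset, half a cell in x, in y, and in both
    // (the two grids in use now plus the two proposed above).
    public static IEnumerable<(int x, int y)> TileOrigins(int imageSize = 512, int tile = 32)
    {
        int half = tile / 2;
        int[] offsets = { 0, half };
        foreach (int oy in offsets)
            foreach (int ox in offsets)
                for (int y = oy; y + tile <= imageSize; y += tile)
                    for (int x = ox; x + tile <= imageSize; x += tile)
                        yield return (x, y);
        // For 512x512 images and 32x32 tiles this yields
        // 16*16 + 15*16 + 16*15 + 15*15 = 961 tile origins per slice.
    }
}
```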

Mapping the data from the image database to the real/result database is a little bit of a mystery. I have a solution to do it, but I'm not sure it's the best one. Normalizing the feature count can be done a number of different ways. We'll just have to see what works best.

That last part, "compare with all other people's images to see/find the closest match," is the hard part. That little statement right there is what humans do so well and computers do not. That is where I hope to really add something and stand out in this competition. Till I get everything working I'm just going to do a simple difference measure in light levels and take the closest matching tile as the winner... but later, once I get everything working well, I've got plans to really improve the matching algorithm: everything from doing a 2-D DTW (which has about the most abhorrent runtime ever), to building a data miner for the tiles, to just doing fuzzy matching of images, to looking at the best match for each pixel in the entire square, to ???
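
The stopgap matcher is about as simple as it sounds. Something like this (a sketch, assuming tiles are flattened 1024-byte arrays of light levels):

```csharp
using System;
using System.Collections.Generic;

static class TileMatch
{
    // Sum of absolute differences in light level between two flattened 32x32 tiles.
    public static int Difference(byte[] a, byte[] b)
    {
        int total = 0;
        for (int i = 0; i < a.Length; i++)
            total += Math.Abs(a[i] - b[i]);
        return total;
    }

    // The closest matching tile is simply the candidate with the lowest total difference.
    public static int ClosestTile(byte[] query, List<byte[]> candidates)
    {
        int best = -1, bestScore = int.MaxValue;
        for (int i = 0; i < candidates.Count; i++)
        {
            int score = Difference(query, candidates[i]);
            if (score < bestScore) { bestScore = score; best = i; }
        }
        return best;
    }
}
```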

Clearly I've got lots of ideas/things I want to try, but the first step is just to get the whole jalopy running.

 

The Legacy Grand Prix in Louisville, KY

I took a deck I've been working on to the Legacy Grand Prix this weekend. It was definitely still tier 2. I had a lot of near misses (losses in 3 games, 1 match win in 4 rounds) and ended up dropping the main event and testing it a lot in side events this weekend. I called it Aether Vise. I'm not done with the deck; I've gotten a lot of great information to improve it. The deck is as follows (copy-paste from http://tappedout.net )

So my notes go like this:

A little more tweaking and I think the deck will sing. Right now it feels like an 8-cylinder car with 1 dead plug. I think the amount of control is right where I want it; it was the win cons that gave me problems (with the occasional exception of too much or not enough mana, which I'm chalking up to normal game play).

So many near misses. No blowouts in either direction (except me vs. Elves... poor Elves guy. The 1 game I lost, I misplayed, choosing Winter Orb over Sphere of Resistance on turn 2. The games weren't much of a challenge. Oh! And me vs. Dead Guy Ale... that deck just seems too slow, or he had absolutely rotten luck both games).

I feel like my deck tries a little too hard to do certain things and not hard enough at others. I need it to be a smooth toolbox deck; almost everything needs to be a 1-of.

So, changes: I think every win con should be a 1-of (except Ghirapur Aether Grid, which should be a 2-of since its value is huge and it shuts off Winter Orb), which means I have to cut a Black Vise. I'm going to put Ajani Vengeant in its place. I was also pleased with adding a Chandra main; that is, I ran it with a Chandra in side events (instead of 1 of the 3 Sun Droplets). I think she is ideal as a 1-of. Sun Droplet as a 2-of is fine... 3 was good too, but 1 would leave me looking for it too often for Spiteful Visions or as another mechanism to slow the opponent's win.

Previously I tried too hard to make all the cards tutor-able, but that's not the best thinking. There is no reason I can't run a few win cons that aren't tutor-able; Ajani and Chandra are both good enough to just "show up". I considered Koth as well, but he doesn't do enough unless you are playing a Blood Moon-heavy deck.

The Ensnaring Bridges should probably be a 2-of main: instead of 3 Ghostly Prison and 1 Ensnaring Bridge, it should be 2 and 2. Ever since I migrated away from the 4 Howling Mines main (down to 1 at this point) and moved away from the Black Vise-heavy build, Ensnaring Bridge has gotten stronger and stronger.

I also think the Wastelands can go from 3 to a 1-of... leaving spots for Blinkmoth Well and Academy Ruins (only usable with a Mox). Why? It's rare that destroying a land sets me up for later. Unlike Blood Moon wins, Wasteland almost always has to combo off of the Crucible to be all that good, or it just gives me a little early-game time walk. Since I'm not running creatures to abuse the advantage, it's not all that valuable most of the time. Also, as for the combo, 1 Ghost Quarter does the same thing, and that's in there too. All in all, Sphere has the same net effect and is way better overall.

As for Blinkmoth Well, having a non-counterable way to tap Winter Orb (vs. Miracles) is good, and that gives me 4 ways to tap the 4 orbs, so hopefully that runs a little smoother (1 Relic Barrier, 2 Grids and 1 Well). I considered a man land too, but I don't think one is necessary with the addition of both planeswalkers, since Miracles has a tough time with 4-drops.

The Academy Ruins would be an experiment. More than once I wanted a way to get back an artifact, and it makes my Expedition Map better. It also gives me an out vs. slow mill.

Sideboard thoughts: The Porphyry Nodes in the sideboard is great; I might want another or a different creature hate card in place of the Bridge I'm moving to the main deck. The Nevermore is great, I just need to get a lot better at picking things to name (Wear/Tear is a good one!). I went to 1 from 2 earlier; I can see arguments for going back to 2. The Leylines of Sanctity are great at 3, and the same is true for Chalice at 3. 2 Wear/Tears was good; it seems like the right amount when I want them, though there might be better choices out there. The 2nd Rest in Peace was less impactful than I expected; I need to play more to see if it stays. The extra Pithing Needle in the side was fine, same with Blood Moon. I have no idea if Sun Droplet needs to stay in the sideboard as a 1-of. Two main seems like enough, except against burn, and there even 3 might not be enough anyway.

The current version of the build can be found here (I renamed it):

http://tappedout.net/mtg-decks/ghirapur-prison-deck-of-many-things/

 

 

Minecraft and data mining (no relation) and stocks (some relation)

It's the 2nd day of 2017 and soon to be 2018, or so it always seems when you look back at years. I've dawdled for too long on some old work I wanted to do, and the year turning over has put that in sharp relief.

First, I started a YouTube channel https://www.youtube.com/channel/UCoTP8WbdsCW_6FSLz0pUSsA (Hardcore in a Hurry) for my video game exploits. I don't play a lot of video games these days, but that being said, I like playing games on the hardest setting possible. I'm not a fan of cheat modes or easy walkthrough settings (or AIs that cheat, for that matter. I mean seriously, couldn't you write a better AI?). Minecraft is the only thing on there now. I don't have plans to add any other types of videos right now, but things change.

Next, let me say I've spent some more time on my layered gradient boosting stuff. I learned a little bit about what t-SNE can do for me. Feeding t-SNE in initially can help if the groups do self-organize out of the original data, but normally I think it is something you want to do after a few iterations have passed; it seems the further down the gradient you are, the better the effect. The effects aren't remarkable, but they aren't terrible either. The unfortunate takeaway is that it is very time intensive. Running t-SNE on the current state of the gradient descent and the source data just takes... well, a while. Maybe I can cherry-pick features to use and speed it up, but as it stands right now, unless the data set is pretty small it's not a good option for eking out more performance.

For a couple of years now I've wanted to implement a stock analysis program so I can do personal investment. I did this once a long time ago with a friend using far less sophisticated methods; that's a story for another time. I've never been happy enough with my data mining software to start the code/project for investing. I think I'm happy enough now. The real work will be in getting the data into a form that is easily updated daily and makes measurable, testable predictions. My recent work on https://www.kaggle.com/c/santander-product-recommendation got me to rewrite parts of my code to handle time series in a different way, which is key for the training and evaluation. (A contest I never got anywhere with. I've never implemented the MAP@<x> evaluation, so it's hard to train for such a thing.)

 

adjusting the layers to be t-SNE driven

Hello again. I've slowly turned over the ideas I wrote about last time, and I think there are 2 big flaws in my idea. First, fitting a GBM to data you already have in the training and test data adds nothing (that's huge). This is with regard to me selecting a few features to make something with a correlation coefficient equal to some fixed amount of the final result. And 2nd, making groups in any fashion that is not some form of transform will likely never give me any new information to work with.

I have a fix for both (of course :) ). I mentioned using t-SNE as well; I think that's where I need to get my groups from. The features I send into t-SNE are how I get various new groups. Everything else I've said, though, is still relevant. So it is no longer layers of GBM as much as layers of t-SNE. Once I have relevant results from that, I send it all into GBM and let it do its work.

The groups I make out of t-SNE (2-D, by the way) will utilize the linear t-SNE I've already written (as it's super fast and does what I need it to do pretty well). And while I will send the results from the t-SNE right into the GBM, I'll do more than that: I will also isolate out each region in the results and build groups from each of those.

To do that, I make a gravity-well map for every point on the map, like they are all little planets. The resulting map is evenly subdivided (like it's a giant square map of X by X; I've been using 50 for each side) and has a 0 or larger value in every square. Each point contributes an inverse-square influence: 1/(1 + (sourceX - x0)^2 + (sourceY - y0)^2). I've been limiting the influence to 5 squares from each point just to keep the runtime speedy (it's only a 50x50 map anyway, so that's pretty good reach, and at 6 away the influence is 1/(1+36+36), which is only about 0.013, so that's a decent cutoff anyway). Once I have the map, I look for any positive points that have lower or equal points in all directions; those points are then used to make groups. So there may only be 1... but likely there will be bunches.
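
In code, the map-building and peak-finding looks roughly like this (a sketch under the assumption that the 2-D t-SNE points are already scaled into the 50x50 grid; the names are mine, not from my code base):

```csharp
using System;
using System.Collections.Generic;

static class GravityWell
{
    // points: 2-D t-SNE coordinates, assumed already scaled into [0, gridSize).
    // Each point adds inverse-square influence to squares within 5 cells of it,
    // then any positive square with no strictly higher neighbor becomes a group center.
    public static List<(int x, int y)> FindCenters(List<(double x, double y)> points, int gridSize = 50)
    {
        var map = new double[gridSize, gridSize];
        foreach (var p in points)
        {
            int px = (int)p.x, py = (int)p.y;
            for (int y = Math.Max(0, py - 5); y <= Math.Min(gridSize - 1, py + 5); y++)
                for (int x = Math.Max(0, px - 5); x <= Math.Min(gridSize - 1, px + 5); x++)
                    map[x, y] += 1.0 / (1 + (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y));
        }

        var centers = new List<(int x, int y)>();
        for (int y = 0; y < gridSize; y++)
            for (int x = 0; x < gridSize; x++)
            {
                if (map[x, y] <= 0) continue;
                bool isPeak = true;
                for (int dy = -1; dy <= 1 && isPeak; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if ((dx != 0 || dy != 0) &&
                            x + dx >= 0 && y + dy >= 0 && x + dx < gridSize && y + dy < gridSize &&
                            map[x + dx, y + dy] > map[x, y])
                        { isPeak = false; break; }
                if (isPeak) centers.Add((x, y));
            }
        // Each group feature is then just the distance of a row's t-SNE point from one of these centers.
        return centers;
    }
}
```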

The groups are based off of distance from the various local extreme points in the gravity-well map; if you were to take the differential of the map, these would be 0 points. So we just find the distance of every point from those centers. This has the added effect of possibly making multiple groups in 1 pass (which is great), not to mention the data is made via a transformation that works completely statistically and has no direct 1-to-1 correlation with the data you feed it. So you are actually adding something for the GBM mechanism to work with.

At that point the resulting groups should probably be filtered to see if they add any real value, using the correlation mechanism I already mentioned in the last post. Ideally GBM would just ignore them if they are noisy.

Thoughts on the AI layers

So, I've given my layered GBM a little thought. I'll explain what I want to do by how I came up with it. My goals seem pretty straightforward; I think I can describe them with these 2 ideas/rules:

1. I want each layer that is created to contribute to the whole/final result in a unique way, so there is as little redundancy (i.e., wasted potential) as possible.

2. The number of layers should be arbitrary. If the data drives us to a one-layer system, that's all you create. That is, if the data just points directly to a final result, you just figure out the final result. If the data drives us to a 100-layer system... well, there is that then too.

To do the first one, each layer should have access to previous layers to make its results, so it will have more options for making different groups in that layer, with the training data available to everything. Each layer will also know what previous layers produced, to keep from making "similar" things. There will likely be some controlling variable for how different a group needs to be.

To do the second one, we need to have a way to move towards our goal of predicting the final result. We could just make groups until we happen to make one that fits our results really well, but that sort of brute force approach might take forever (depending on the algorithm). If, however, we have a deterministic way of measuring the predicted groups' similarity to each other and to the final result, we can use that to throw out any groups that are too similar to previous groups, and we will have a way of slowly building a target result that improves on the previous prediction.

Each layer then will probably (nothing is written) produce a guess for the final answer as well as additional useful groups if it can find any. If at any point the guess fails to be an improvement on the previous result, we stop and take the best guess. We could continue making layers if for some reason we think we might eventually make a better guess. In this way, consecutive failures might be our stopping spot, or perhaps we will make layers until no new groups can be found, or finally at least make sure a minimum number of layers is created if new groups are available but the answers aren't improving in general.

So how do we go about picking these things/groups? That at least isn't too hard, using one of my favorite statistical measurements for data mining: the correlation coefficient. We look at the data and measure each feature's and/or group of features' movement against the final results and against any other possible groups we've made. The features for now will be selected randomly, though I'll probably find a mathematically grounded way to limit their selection.
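
For anyone following along, the measurement is just the plain Pearson correlation coefficient. A minimal version:

```csharp
using System;
using System.Linq;

static class Stats
{
    // Pearson correlation between a candidate group's values and the target (or another group):
    // +/-1 means they move in lockstep, 0 means no linear relationship.
    public static double Correlation(double[] a, double[] b)
    {
        double meanA = a.Average(), meanB = b.Average();
        double cov = 0, varA = 0, varB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            double da = a[i] - meanA, db = b[i] - meanB;
            cov += da * db;
            varA += da * da;
            varB += db * db;
        }
        return cov / Math.Sqrt(varA * varB);
    }
}
```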

There are two types of groups we will make: the type that contributes and the type that is a possible answer. A contributing group wants to have a unique correlation coefficient, that is, a number we haven't seen that is at least X away from any other group's. The possible-answer group is always as close to 1 as possible.

Since I don't really have a way to make multiple groups at once, what I will end up doing is making any old group I can... and again checking against previous groups, then generating a result and seeing if said result is actually still valid. If it is, it goes into the results. I will probably either alternate between this and an actual prediction, or make a fixed number of groups and then make a prediction (and wash, rinse, repeat until I'm done in whatever capacity).

In truth, I doubt a created group will ever be very comparable to the result we are looking for; if data mining were that easy we wouldn't spend much time on it. What I expect is for a bunch of rather generic groups to be made and for them to act like created features to be used in subsequent feature generation, and so on, till a really useful one is generated that the final answer uses.

The only thing I would add is that I would probably throw in a t-SNE feature pair as well at each level. This actually would act like a 2nd and 3rd group; ideally any groups the t-SNE features find will likely be a possible grouping for the final result, since if those two features are paired together and used to build a new feature to predict on, then we have something that is a statistically relevant group.

What do I mean by grouping multiple features? Basically, use the distance equation: figure the distance of each row's position from the center (average) values of the two features. Said another way, f(x,y) = sqrt( (x - avg(x))^2 + (y - avg(y))^2 ).
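
As a sketch (assuming two feature columns x and y over all rows):

```csharp
using System;
using System.Linq;

static class Grouping
{
    // Distance of each row from the center (average) of a feature pair,
    // i.e. f(x,y) = sqrt((x - avg(x))^2 + (y - avg(y))^2) applied per row.
    public static double[] CenterDistance(double[] x, double[] y)
    {
        double cx = x.Average(), cy = y.Average();
        return x.Zip(y, (xi, yi) => Math.Sqrt((xi - cx) * (xi - cx) + (yi - cy) * (yi - cy)))
                .ToArray();
    }
}
```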

 

 

another long overdue blog

Hi again, faithful readers (you know who you are). So the marathon is back off the plate. Basically, I can't afford to spend the extra cash right now on a trip; or rather, I'm not going to borrow the money for a trip like that. I overspent when I thought the trip was off, and upon reflection, when it came time to book the trip it just didn't make sense. So... maybe next year.

So for the last month I've been toying with things in my data mining code base. I removed tons of old code that wasn't being used/tested and honed it down to just GBM. Then I spent some time seeing if I could make a version that boosts accuracy over log loss (normal GBM produces a very balanced approach), and I was successful. I did it by taking the output from one GBM and sending it into another, in essence overfitting.

Why would I want to do that? Well, there is a contest ( https://www.kaggle.com/c/santander-product-recommendation ) I tried to work on a little, but the positives are few and far between. The predictions generally put any given positive at less than a 1% chance of being right, so I tried bumping the number, since right now it returned all negatives. The results are actually running right now. I still expect all negatives, but the potential positives should have higher percentages, and I can pick a cutoff point for rounding to a positive that is a little higher than the 0.0005% I would have had to use before. All this so I can send in a result other than all negatives (which scores you at the bottom of the leaderboard). Will this give me a better score than all negatives... no idea :)

I tried making a variation on the GBM tree I was using that worked like some stuff I did years ago. It wasn't bad, but still not as good as the current GBM implementation. I also modified the tree to handle time series data, in that it can lock 1 row to another and put the columns in time-sensitive order. This allows me to process multi-month data really well. It also gave me a place to feed in fake data: if I want to stack 1 GBM on top of another, I can send in the previous GBM's results as new features to train on (along with the normal training data).

This leads me to where I think the next evolution of this will be. I'm slowly building a multi-layer GBM, which essentially is a form of neural net. The thing I need to work out is how best to subdivide the things each layer should predict. That is, I could make it so the GBM makes 2 or 1000 different groups, predicts row results for each, and feeds those into the next prediction, and so on, till we get to the final prediction. The division of the groups is something that can probably be done using a form of multi-variable analysis that makes groups out of variables that change together. Figuring out how to divide it into multiple layers is a different problem altogether.

Do you want an AI? Cause this is how you get AIs! Heh. Seriously, that's what it turns into. Once you have a program that takes in data, builds a great answer in layers by solving little problems, and assembles them into a final answer that is super great, well, you pretty much have an AI.

Incidentally, t-SNE might also help here: I might just feed in the t-SNE results for that layer's training data (fake data included if we are a level or two down) to give the system a better picture of how things group statistically.

In other news, I started using Blue Apron. This is my first time trying a service like this and so far I'm really enjoying it. I'm pretty bad about going to the grocery store, and going every week... well, that ain't gonna happen. This is my way to do that, without doing that :) . I'm sure most people have similar thoughts when they sign up, even if the selling point is supposed to be the dishes you are making. Honestly, I've just been eating too much takeout. I don't mind cooking, and the dishes they send you to prepare are for the most part really good.

 

 

Stopping before I start then starting again

I feel like I've started many things and then abruptly stopped before I really got into the thick of them. I'd like to share some of the highlights. Doing so will give anyone reading a good picture of where everything is with the stuff I normally (it's been over two months) blog about.

 

Data Mining: Almost 6 months ago I was working on a GBM-in-a-GBM model, and there it has sat since then. It's not that I think it's merit-less; it's just that I doubt it'll produce amazing results, so I'm not really inspired to finish it. Also, I don't have a Kaggle contest to work on right now, and that helps drive my interest in the algorithm. Truth be told, I'm beginning to think data mining is getting near the end of its "big gains" period. The human analysis part may very well be improvable, but that's never interested me much.

 

Running: I started running, then I stopped, then I started again. To expand on this: it was fine, then I overdid it, then I got motivated and eased back in. I've got a marathon in mind I want to go to. It is still 11 weeks away, so I'm training up for that. I'll talk more about it when I commit to it fully (basically in about 4 weeks).

 

Math: I spent a lot of time trying to solve the 3 cubes problem. My most recent attempt sent me down a rabbit hole of general factoring. I actually thought I had a method for doing that, only to realize that my solution covered such a tiny corner case as to be unusable. The net-net is I've got nothing I'm pursuing here right now.

 

Magic: Things continue to wind down. There is a Grand Prix tourney in Dallas in 2 weeks, but I don't think I'm gonna go. I still have fun playing most weeks, but I'm definitely not feeling the drive to brew decks like I did. And, let's be honest, I've never felt the competitive spirit this time around. That is to say, I want to win, sure, but deck building always took precedence, as it is so much more interesting than playing a known decent deck, which just makes it hard to be competitive.

 

EM Drive: Have you seen this thing? It's about 2 years old from an "oh hey, something new!" perspective, but the last few days I've been reading up on the science and watching YouTube videos about it. I have to admit I'd like to better understand how it is supposed to work. I totally get what virtual particles are, but I don't get how momentum could be transferred to them (they are incredibly hard to interact with). Just something I've been messing with and thought I'd include as a bullet point. I doubt I'll build anything in my garage, but crazier things have happened.

 

 

The Legend of Question Six [Solution] - Numberphile

If you are reading this, you might be searching for a solution, have seen my comment, or just be browsing the internet. So here's the thing: I saw a Numberphile video and was like "I gotta try that" https://www.youtube.com/watch?v=Y30VF3cSIYQ . I do recreational math, so you know... I like a challenge. Anyway, here's the solution I came up with (I circled the "magic step"). Basically the realization is that the left and right sides are in the same form, and since I made "I" up, I can choose it to be equal to B, which forces the other side to be A, which then of course dictates its value.

This was my 2nd attempt. I spent a few minutes going down another path to see if I could turn it into a quadratic and just get solutions... it was a mess, so I went back and looked for, well, what you see. Oh, and above, the J substitution isn't necessary; I was just doing it to see if anything made sense as a next step beyond the obvious. It didn't help, and you could remove it. I just didn't want to rewrite the work, and I did it in ink, so... there it stands.

Gen Con and the state of magic

I haven't written a blog in such a long time. The last thing I wrote about was the sum of 3 cubes, which, while I'm still working on it, is now in a pot with a bunch of other things I'm working on. This post, though, is all about Gen Con and Magic. Hopefully I'll get another up in the next day or so on some other stuff going on.

First, Magic. I went to Gen Con again this year and played a lot of Magic, among other things there. It was great in general, but the Magic part of it was pretty lackluster. Not much to say here other than my Magic days are winding down. It's not that I don't enjoy the game, but the discovery and problem solving aspect of it is almost played out for me. It may be counterintuitive, but when you get to the point where you can't sabotage your deck with random crazy ideas (cause you feel like you've tried them all), you start to do better, but it becomes a lot more boring. That's a long-winded way of saying winning with the expected decks is boring. And really, winning in general is kind of boring. The struggle in anything is what makes it exciting.

I don't really plan on doing any more MTG Grand Prix events, though I might squeeze in 1 or 2 next year if an event looks like it'll be a ton of fun. I expect this time next year I'll either be done with it or have like 1 major event.

Also, as a note, Gen Con next year is the 50th anniversary, so if you aren't planning on going, change your plans! It'll be one not to miss. Assuming I go (you never know for sure), I won't be playing nearly as much Magic, and I'll definitely be getting more sleep. Being tired the entire con makes it less fun and less memorable. Also, it goes by much quicker, which stinks!

Besides Magic, I played True Dungeon (the 1st dungeon was meh, the 2nd was pretty good). I did AEG Big Game Night (fun when you have others with you). I did the keg tapping on Wednesday for the 20-sided rye ale. I did the Orc Stomp 5k (probably won't do this next year, cause the 6:30am start time, i.e., a 5:45 wake-up, ruins a lot of the con because of the lack of sleep I get). And I got a chance to demo a friend's game for an hour and a half. https://www.kickstarter.com/projects/1410499285/dragonstone-mine-a-family-friendly-board-game

Back to Magic: what am I playing right now? In Standard, Bant Humans. In Modern, The Rock (or my version of it). In Legacy, Eldrazi-post (12 Post with Eldrazi) and Painter. I'm working on a Zur's Weirding/life gain deck in Modern, and I'm working on a black/white control/life gain deck in Standard as well.