Heh, I reread that last blog of mine. Boy, talk about poorly written! I went ahead and cleaned it up some, and it's still kinda bad. This tends to happen when you write via stream of consciousness and then do ad-hoc editing without fully rereading. I'm sure there are writers and editors out there who saw it and went "for the love of all that is sacred, re-read what you type!" Ah, good times. Good times. Anyway, sorry about that; I'll try to do better.
Anyways, it's been a busy-ish month. I still haven't found a job, but I've picked up the pace of applying for them. I wasn't really sure what kind of response I would get. It's felt fairly tame, so I've compensated by upping the number of applications I'm putting out. It certainly hasn't been like it was 23 years ago! I mention that specifically because I think that was when I got the most response with hardly any effort. Back in 2000 it was the tail end of the tech boom, I was still new to my career, and at my price point the demand was kinda crazy. I probably should have asked for more money, but alas, I was naive about my value. These days, each time I go looking for a job it's hit or miss how fast I find something. Something that works well for me, I mean; if you'll take any old job at any old price point, you can probably have work the very next day, making next to nothing and being really unhappy doing it. Don't let that happen to you! Regardless, I'm still looking.
Beyond job hunting, I've spent a couple weeks exploring more stuff with t-SNE. I more or less worked through what I was trying to do, which was to make t-SNE a tool for data mining, something beyond visualization and feature reduction. The thing is, sometimes the data is organized in a way that t-SNE will auto-group nicely for you, but only sometimes. I was trying to write code so that it would adjust itself toward the training data automatically. Needless to say, it overfit every time. I think the idea is flawed at a certain level. The problem with t-SNE (and UMAP, if that's your bag) is that they aren't exact. When you do normal data mining, you refine the approximation of the answer over and over using the training data and cross-validation. With t-SNE I can do the same, but the output has noise in it, and each iteration consumes that noisy output and introduces more error. It treats the noise like fact, and the noise ends up dominating any signal in the data. The training data ends up nice and neat, and the test data ends up random. In the end I set it aside and called it a failure.
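To make the noise problem concrete, here's a minimal sketch (not my actual mining code, just scikit-learn's t-SNE on its bundled digits dataset): run the embedding twice with different seeds and compare the pairwise-distance structure. If the output were exact, the two runs would agree up to a rotation or reflection and the correlation would sit at 1.0; in practice it lands below that, and that gap is exactly the noise my scheme kept treating as fact.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Two t-SNE runs on identical input, differing only in random seed.
X, _ = load_digits(return_X_y=True)
emb_a = TSNE(n_components=2, random_state=0).fit_transform(X)
emb_b = TSNE(n_components=2, random_state=1).fit_transform(X)

# Pairwise distances are invariant to rotation, reflection, and
# translation, so comparing them isolates genuine run-to-run noise.
corr = np.corrcoef(pdist(emb_a), pdist(emb_b))[0, 1]
print(f"distance-structure correlation between runs: {corr:.3f}")
```

It's also worth noting that scikit-learn's TSNE doesn't even offer a transform() for unseen points, only fit_transform(); that alone hints at why the test data ends up random.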
Don't get me wrong, it worked really, really well at overfitting, as if that were some sort of great thing. There just seemed to be no good way to deal with the noise I was introducing (folding and such wasn't cutting it). I still haven't given up on the idea of building a 2-D representation of the data and using that for data mining; see the sketch below for the shape of what I'm after. It's just that, at this point, I don't feel t-SNE is the way to go. At least not without an 'Aha!' moment where I realize how to stop it from being so biased toward the random elements.
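For contrast, here's what the pipeline looks like when the 2-D map is deterministic and can project unseen points. PCA stands in for the eventual mapping here; it's obviously too crude to be the real answer, so treat this as a sketch of the shape of the thing, not a working replacement:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# A deterministic 2-D map slots cleanly into ordinary cross-validation:
# the projection is refit on each training fold and then applied,
# noise-free, to the held-out fold.
X, y = load_digits(return_X_y=True)
pipe = make_pipeline(PCA(n_components=2), KNeighborsClassifier())
print(cross_val_score(pipe, X, y, cv=5).mean())
```

That refit-and-apply step is exactly what t-SNE can't give me cleanly, and it's the hole the 'Aha!' moment would need to fill.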
I've also been working on the Elliptic Curve Discrete Log Problem (or rather, work has continued). It's... gone 'okay'. About 6 months ago I had a small revelation that let me solve for an unknown scalar on a prime curve in about half the steps it took before, building on an idea I had about 11 months ago. Everything since then has been me trying to push it further (and not managing to). I've learned a lot about some areas of math I wasn't well versed in (if at all). I've also done more than a few things that are pretty neat to maybe a small niche crowd, but honestly, no real progress. I think it's safe to say I'm stuck and just about out of ideas at this point. I say 'just about' because it's hard to let go. I find myself spending a lot of time thinking and re-thinking certain problems I've come across. Maybe someday I'll write a blog all about them, but for now, suffice it to say it fills my hours and days when I'm not job hunting.
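For anyone who hasn't met the problem: given a point P on an elliptic curve and Q = kP, the discrete log problem is recovering the scalar k. As background only (this is the classic square-root method, not the half-step improvement above), here's a minimal baby-step giant-step solver on the little textbook curve y² = x³ + 2x + 2 over F₁₇, whose group has prime order 19:

```python
from math import isqrt

p, a, b = 17, 2, 2           # toy curve y^2 = x^3 + 2x + 2 over F_17
ORDER = 19                   # group order (prime, so every point generates)

def ec_add(P, Q):
    """Add two curve points; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None          # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def bsgs(P, Q):
    """Solve Q = k*P in O(sqrt(ORDER)) group operations."""
    m = isqrt(ORDER) + 1
    baby, R = {}, None
    for j in range(m):       # baby steps: store j*P -> j
        baby[R] = j
        R = ec_add(R, P)
    S = ec_mul(m, P)
    neg_mP = None if S is None else (S[0], (-S[1]) % p)
    G = Q
    for i in range(m):       # giant steps: test Q - i*m*P
        if G in baby:
            return i * m + baby[G]
        G = ec_add(G, neg_mP)
    return None

P = (5, 1)                   # generator from the textbook example
Q = ec_mul(13, P)            # "public key" hiding the scalar 13
print(bsgs(P, Q))            # recovers 13
```

Pollard's rho reaches the same square-root step count with constant memory, which is part of why these toy solvers top out fast, and why shaving even a constant factor off the number of steps counts as progress.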