new job, t-SNE, ECDLP and DLP

It's been 8 months! I would say time flies, but really it seems to go in spurts. The last 2 months have zipped by, but the 4 before that were painfully slow. I think it has something to do with going back to a day job. So, let's start there. I consulted briefly, taking a right-to-hire job at AB. Needless to say, I didn't stay. The consulting company, Kforce, was fine, and the client, Anheuser-Busch, was fine, but the environment lacked the focus and future I was looking for. Said another way, there were a lot of moving parts, and I was inheriting a lot of work that didn't fit the tech stack they used internally, which makes for an awkward experience all around. Being relegated to working on old technology because you know it might be okay for some people, but when the rest of the world is doing other things and the old-school way of doing things is at odds with the new school, it adds stress and general discomfort. In short, it wasn't the places or the people; it was the nature of the work in the environment they had built.

So what happened? Randomly, an old friend of mine texted me looking for programming staff for a startup he was involved in. The company was (and is) tiny, and, long story short, I was offered the job and took it. I wasn't looking, but it really was the best option. I wasn't very happy at AB, and I didn't like my prospects there. I'm MUCH happier where I am now. The company is called Keyrune. Its main product is melee.gg, a software-as-a-service platform where you can set up and run tournaments. At this point it is principally card tournaments (think Magic: The Gathering and Lorcana), though I could see e-sports being a good fit in the future. So, yay! Job! Oh, and in case it wasn't clear: from January to April I was at AB, and the last 2 months have been at Keyrune... hence, I think, why the last 2 months have felt like they've gone by much quicker.

What else is in the news? I decided that my t-SNE exploits were never really going to work as I envisioned them working. I mentioned before that the inherent noise in the system causes problems when using it for data mining. So... skip a bit. Two weeks ago, after watching a video about a guy using a genetic algorithm for some other random project, I was inspired to re-approach the problem, and this time to see if I could do things in a single pass. Part of the problem with injecting noise as you refine results is that the noise takes over eventually. Once the resolution of the signal drops to the noise's level... you have no signal. This new technique doesn't refine squat. It does the analysis and creates a result in a single pass. So yes, the t-SNE will still have some noise in it, but at least I'm not building on the noise over and over again.
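To make that concrete, here is a toy simulation (invented numbers, not my actual pipeline) of why refining with injected noise eventually buries the signal, while a single pass only pays the noise cost once:

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean 1-D "signal" stands in for whatever structure the embedding preserves.
signal = np.sin(np.linspace(0, 2 * np.pi, 100))

# Iterative refinement: a little noise injected on every pass compounds.
refined = signal.copy()
for _ in range(50):
    refined += rng.normal(scale=0.05, size=signal.shape)

# Single pass: you pay the noise cost exactly once.
single = signal + rng.normal(scale=0.05, size=signal.shape)

print("noise std after 50 refining passes:", (refined - signal).std())  # ~0.35
print("noise std after a single pass:     ", (single - signal).std())   # ~0.05
```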

The big "oh, that works!" moment was when I decided to use an old standby of neural networks as my mechanism for weighting features: the max function. The max function works like this: if the result is less than 0, return 0; otherwise return that value. Without getting into the weeds, this was the tool I needed to build an if statement into the results. It allowed me to weight a feature, add a bias, and either return a new value (weight*value + bias) or 0. The genetic algorithm worked "okay" for picking features to weight, but the weighting mechanism was the shining star.
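In neural-network terms this is just ReLU, max(0, z). A minimal sketch of the gating idea, with illustrative weights and values of my own choosing:

```python
import numpy as np

def max_fn(z):
    """The max function: if the result is less than 0, return 0; otherwise return it."""
    return np.maximum(z, 0.0)

def weighted_feature(value, weight, bias):
    # Either contributes weight*value + bias, or contributes nothing at all --
    # an if statement baked straight into the arithmetic.
    return max_fn(weight * value + bias)

# With weight=2.0 and bias=-1.0, this feature only "fires" when value > 0.5:
print(weighted_feature(0.3, 2.0, -1.0))  # 0.0 -> gated off
print(weighted_feature(0.9, 2.0, -1.0))  # 0.8 -> contributes
```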

Does it work? Yes, I think. Visually it does seem to do what I set out to do: group classifiers. But I'm selling it short if I only talk about the genetic algorithm. Genetic algorithms work when you have way too many options and way too much computational power for trial and error. If you can take an intelligent stab at the problem, you will likely do better. That is what I did: I swapped out the genetic algorithm for some least-squares-fitted results. Normally, in data mining, solving a series of equations with least squares gives you too good a fit, as in an overfit. However, here we are only doing one analysis and no subsequent refinement. Needless to say, it works/worked really well... like, really well.
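A rough sketch of that one-shot fit, on made-up stand-in data (NumPy's lstsq here, though any least-squares solver would do):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data (invented here): 200 rows, 10 features, a binary label.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)

# One shot of least squares: append a bias column and solve for the weights
# that minimize ||A w - y||^2. No generations, no mutation, no refinement.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

print("fitted weights (first 4):", w[:4].round(3))
```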

Is it an overfit? It's hard to say. Right now I'm just doing visualization; I'm not actually making data mining predictions, so I haven't evaluated a hold-out set to see whether it's garbage or not. It looks good! And there is all sorts of tuning you can do to which features you select for your visualization. I tried it against some datasets I know fairly well: one from the Biological Response contest at Kaggle (now 12 years old!!!!) and another from the one Kaggle Days I went to. The Kaggle Days one is a good litmus test. It was notoriously hard to do anything with, as it only had 250 rows of data and something like 400 features. In both cases the visualization was meaningful. That is to say, on Biological Response I saw good grouping in the various scenarios I tested, and on the "don't overfit" Kaggle set I saw some minor grouping. But that is good. If I saw strong, clearly defined grouping there, it would likely indicate that it is just overfitting. Regardless, until I actually measure the error on a hold-out set it is hard to say more. As I type this, I've been trying it on some MNIST visual data for handwritten characters. I didn't really design it to work with more than one classifier, so... the results are kinda bleh right now. But honestly, that's a good sign too. It means it is that much more likely not to be overfitting.
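For when I do get to it, the hold-out check itself is simple. A sketch on random stand-in data shaped like that 250-row set (everything here is invented for illustration); note how the train score can look perfect while the hold-out score tells the real story:

```python
import numpy as np

rng = np.random.default_rng(2)

# Same shape as that Kaggle Days set: 250 rows, ~400 features (data invented here).
X = rng.normal(size=(250, 400))
y = (X[:, 0] > 0).astype(float)

# Keep 20% of the rows back before fitting anything.
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
train, hold = idx[:cut], idx[cut:]

A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)

# If the fit only looks good on the rows it saw, it's an overfit.
print("train accuracy:   ", (((A[train] @ w) > 0.5) == y[train]).mean())
print("hold-out accuracy:", (((A[hold] @ w) > 0.5) == y[hold]).mean())
```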

Okay, so other than that, what have I been up to? I've been working on ECDLP and DLP a lot lately. Which is to say, for a long time I've been working on finding new ways of factoring large semi-prime numbers. Factoring long ago took on the form of solving the DLP (Discrete Logarithm Problem) and the ECDLP (Elliptic Curve Discrete Logarithm Problem). First, what is the DLP? In a nutshell: solve a^x ≡ b (mod n), where n is a prime, a is a primitive root of n, and b is known. This means you "just" have to solve for x. Since it is done with integers, it is not a trivial problem. The ECDLP version is simply q ≡ k*p (mod n); in this case q and p are points on an elliptic curve and k is a scaling factor, and n is generally a prime here as well. You need to solve for k. Also, generally p is just "1" (it is an X,Y position on the elliptic curve that we have decided represents 1), and q is the point you move to when you add p to itself k times. The mod n is there because there are only so many points to go through before it repeats. The semantics aren't too important; just know that on an elliptic curve you really don't have all the tools you have in normal math. You can compare points, add points, double points, and subtract points, and using doubling you can multiply or divide points by a known value, but not by an unknown value. That is, you can't take a random X,Y point on the curve and use it to divide another X,Y point. But you can move to a point that is, say, 41 times greater, where 41 is just a constant you know. So getting at that unknown value k in q ≡ k*p (mod n) is the problematic part.
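For a feel of the standard toolbox: the textbook baby-step giant-step attack (a known method from the literature, not mine) solves a toy DLP in roughly √n work instead of n:

```python
from math import isqrt

def bsgs(a, b, n):
    """Solve a**x ≡ b (mod n) for x, with n prime and a a primitive root mod n."""
    m = isqrt(n) + 1
    # Baby steps: tabulate a**j (mod n) for j = 0 .. m-1.
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = (e * a) % n
    # Giant steps: walk b * (a**-m)**i until we land in the table.
    step = pow(a, -m, n)          # modular inverse power (Python 3.8+)
    gamma = b % n
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * step) % n
    return None

# Tiny worked example: 3 is a primitive root mod 17, and 3**x ≡ 13 (mod 17).
x = bsgs(3, 13, 17)
print(x, pow(3, x, 17))  # x = 4; check: 3**4 = 81 ≡ 13 (mod 17)
```

The same meet-in-the-middle idea carries over to the ECDLP, with point additions standing in for the modular multiplications.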

In my case, I turn the DLP into the factoring problem by changing the modular value n from a prime into a semi-prime p*q (a prime times a prime). I then produce the value I'm trying to solve for, b; it can actually be calculated from n. The details are beyond the scope of this blog. Maybe I'll write a post about it specifically someday, but suffice it to say I try to solve that problem. As for the ECDLP, many of the solution mechanisms you use on one you can use on the other. For a while now (the last 4 years or so) I've tried many... many algorithms and bits of math to solve the ECDLP faster than known methods, and not without some forms of success. Much of my research over the last year and a half has had to do with recursive functions. There is a lot of promise there. I'll go on more about this another time, but again: I've made some headway on both DLP and ECDLP in recent months. I think it is time to go to a conference on cryptography (and probably one on machine learning) to see what others are up to and whether what I've figured out is meaningful/exciting to others in the field.
