The current core of the approximation algorithm works with pairs of elliptic curves, based on values of n where n equals
[4, 9], [49, 64], [256, 289], [8464, ...]
The pair (256, 289), for example, corresponds to the numbers of elements of the finite groups defined by the elliptic curve y^2 = x^3 + x + 1 (mod prime(n)).
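For small primes those group orders can be checked directly by brute force; a minimal sketch (my code, with the prime left as an input, since the post doesn't say which prime(n) pairs with which order):

```python
# Brute-force order count for y^2 = x^3 + x + 1 over F_p, point at infinity
# included, to check group orders like the (256, 289) pair above.
def curve_order(p):
    # tally how many y in F_p square to each residue
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        count += squares.get((x * x * x + x + 1) % p, 0)
    return count
```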
Given the following Python code: for some value of i, such that

(_d4**2/((int(((ceil(a1/(d4a/_d4**2))*(d4a/_d4**2))*_d4**2)/d4a))-(1+d(1/2))*(1-((d(1/4))-(d(1/9))+(d(1/49)*i)-(d(1/64)*d(1/i))+(d(1/256)*d(i))-(d(1/289)*d(1/i))+(d(1/d(92**2))*d(i)))))) ~= d4

where d4a/d4 = a = p
where, in standard notation, our public key would be n = pq, n being a semiprime and p and q being prime (this is a simplification, naturally, and leaves out discussion of the modulus).
and where the value being approximated is

abs((_d4**2/((int(((ceil(a1/(d4a/_d4**2))*(d4a/_d4**2))*_d4**2)/d4a))-(1+d(1/2))*(1-((d(1/4))-(d(1/9))+(d(1/49)*i)-(d(1/64)*d(1/i))+(d(1/256)*d(i))-(d(1/289)*d(1/i))+(d(1/d(92**2))*d(i))))))-d4)
Some of the values remain unexplained for now, but the gist of what's going on is there.
Testing across a few million random semiprimes, from a mere 32 bits all the way up to 4096 bits, found that the i value needed
to closely approximate d4, and thus find p and q, is never greater than 75 in all tested cases (and this is a relatively tight upper bound, based on the mean and variance of all semiprimes tested).
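For reference, a runnable transcription of the expression above and the bounded search over i is sketched below; d() is assumed here to be a high-precision Decimal constructor (the post doesn't define it), and d4, d4a, a1, and _d4 are taken as inputs since they remain unexplained:

```python
from decimal import Decimal, getcontext
from math import ceil

getcontext().prec = 60  # the post implies high-precision arithmetic

def d(x):
    # assumption: d() in the post is a Decimal constructor
    return Decimal(x)

def core_error(i, d4, d4a, a1, _d4):
    # transcription of the post's expression, with exact Decimal ratios
    # (d(1)/d(49) rather than d(1/49)) to avoid float rounding
    step = d4a / _d4**2
    series = (d(1)/d(4)) - (d(1)/d(9)) + (d(1)/d(49))*d(i) - (d(1)/d(64))*(d(1)/d(i)) \
           + (d(1)/d(256))*d(i) - (d(1)/d(289))*(d(1)/d(i)) + (d(1)/d(92**2))*d(i)
    denom = int(ceil(a1/step) * step * _d4**2 / d4a) - (d(1) + d(1)/d(2)) * (d(1) - series)
    return abs(_d4**2 / denom - d4)

def search_i(d4, d4a, a1, _d4, i_max=75):
    # the post reports i never exceeded 75 across all tested semiprimes
    return min(range(1, i_max + 1), key=lambda i: core_error(i, d4, d4a, a1, _d4))
```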
Finding p is equivalent to factoring, and thus breaking a public key.
This core method gets us to within 3% of any given key value.
Further elements of the algorithm get us to within 0.1 to 0.0001 of a key, indicating there is significant room for improvement.
It is now possible to get both the magnitude and the first 4-5 leading digits of a public key's factors, as well as the first 2-4 digits at the smallest integer magnitudes of a key's factors, without exhaustive search.
Ongoing research and current progress rates suggest that over the next six months this work should yield advancements in the algorithm that let us retrieve upwards of half the digits of a key's factors regardless of length, with eventual complete factoring in logarithmic time.
That is all for now.
FreeinTX 1 point Jul 11, 2024 07:25:24 (9 months ago) (+1/-0)
prototype [op] 1 point Jul 11, 2024 14:22:03 (9 months ago) (+1/-0)*
Research avenues so far:
I'm exploring something called the fset, a precise formula that yields d4 exactly if certain unknowns are known, but which, if you plug in mere estimates, still yields 4-8 digits of additional precision (digits of the key's factors).
It is also convergent if the target variables are known, though with estimates it only converges on some number > d4.
The fset is a series of k values such that k = |d4/(dn-d4)|
In practice we approximate k by iteration, but there are ways to estimate it as well.
We then do d(n+1) = ((dn/k)-dn) ~= d4.
However, the subsequent pass for a new approximation doesn't yield integers, but numbers with a floating-point component and an integer part of 0 or 1 (not useful).
d(n+1) = ((dn/k)+dn) avoids this local minimum, but converges more slowly.
Each subsequent run of this process generates a new k value, forming the fset.
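A minimal sketch of that recurrence, with both update variants; here d4 is passed in directly just to demonstrate the mechanics, whereas in practice k would be estimated:

```python
# Direct transcription of the fset recurrence above; d4 stands in for the
# normally-unknown target, so this only demonstrates the mechanics.
def fset(d0, d4, passes=10, slow_variant=True):
    dn, ks = d0, []
    for _ in range(passes):
        k = abs(d4 / (dn - d4))   # k = |d4/(dn - d4)|
        ks.append(k)              # each pass adds a new k to the fset
        dn = (dn / k) + dn if slow_variant else (dn / k) - dn
    return dn, ks
```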
With three to five days of uninterrupted work I could probably eliminate extraneous variables from the f function and refine the estimates of k, but responsibilities are responsibilities.
On the research front, I'm almost certain I can get a tighter estimator for the core algorithm by implementing an evolutionary approach or a stochastic search with multiple generations of ranking, branching and drop-out, and I might just do that.
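A generic sketch of what that ranking/branching/drop-out loop could look like, assuming candidates are real-valued parameters and score() is whatever error measure the estimator exposes (both are my placeholders):

```python
import random

# Generic ranking/branching/drop-out loop as described above.
def evolve(seed, score, generations=50, pop_size=100, keep=0.2, spread=0.05):
    pop = [seed + random.gauss(0, spread) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)                             # rank candidates
        survivors = pop[:max(1, int(pop_size * keep))]  # drop-out
        pop = [s + random.gauss(0, spread)              # branch survivors
               for s in survivors
               for _ in range(pop_size // len(survivors))]
    return min(pop, key=score)
```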
I've tried Lagrangian approximation of the fset, but I only just picked up calculus and polynomials, so it's too advanced for me to go beyond some basic usage.
Because the fset represents the zeroes of some function, I'm pretty sure I should be able to do polynomial regression to find them, but again, I'm very new to these techniques.
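A basic sketch of that idea, assuming sampled (n, k_n) pairs from the fset; the degree and the least-squares fit are placeholder choices, not the author's:

```python
import numpy as np

# Fit a polynomial through sampled fset values and read off its real roots.
def fset_poly_roots(ns, ks, degree=3):
    coeffs = np.polyfit(ns, ks, degree)   # least-squares polynomial fit
    roots = np.roots(coeffs)
    return [r.real for r in roots if abs(r.imag) < 1e-9]
```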
I also considered implementing an attractor and using a Fourier transform to look for the period, but I don't know enough about that to say whether it would have much advantage.
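For what it's worth, a generic period hunt via FFT would look something like this, with seq standing in for any real-valued sequence produced by the iteration:

```python
import numpy as np

# Estimate the dominant period of a real-valued sequence via the FFT.
def dominant_period(seq):
    seq = np.asarray(seq, dtype=float)
    spectrum = np.abs(np.fft.rfft(seq - seq.mean()))
    peak = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return len(seq) / peak
```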
There's a new technique I'm working on called folds, which treats semiprimes as geometric shapes in the complex plane and projects them onto a folded surface that represents all the potential moduli and mods. By 'poking a hole' through the surface, we can unwind a number into a string along a particular path winding across all folds of the surface from n to 0, and the points of that path potentially represent moduli for which m%(p||q) == 0. I took inspiration from tropical algebra as applied to B-spline surfaces.
There are also a few other random subprojects related to prior work that I threw out way too early in the research, such as modular stochastic estimators, essentially kernel functions. They were hit-or-miss at the time, but with more data, namely from the internal variables of the new algorithms, I may be able to find correlates that will let me easily select the correct kernel functions.
It's a lot for one person.
deleted 1 point Jul 11, 2024 06:08:21 (9 months ago) (+1/-0)
prototype [op] 0 points Jul 11, 2024 14:34:10 (9 months ago) (+0/-0)*
The current estimator is good enough that state militaries could save on their budgets by implementing it when they bruteforce foreign nations' encryption. Why search p||q when you can bruteforce (p||q)*0.03?
And more intense estimates get you the magnitude and upwards of 8 leading digits, but that's with 2.1 trillion operations on average. Still, that's a far cry from bruteforcing a 2048-bit key. Anyone doing any sort of decryption work would take that in a heartbeat over the standard approach. A couple trillion trial-and-error iterations to get an estimate of a public key's factors to within 0.0000000001%?
That's a decision even a fungus could make.
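To make the windowed search concrete, here's a toy-scale sketch, assuming an estimate p_est within 3% of the true factor p; the function name and interface are mine, not the author's:

```python
# Given n = p*q and an estimate within 3% of p, trial-divide the window.
def factor_from_estimate(n, p_est, rel_err=0.03):
    lo = int(p_est * (1 - rel_err)) | 1   # start on an odd candidate
    hi = int(p_est * (1 + rel_err))
    for p in range(lo, hi + 1, 2):        # odd candidates only
        if n % p == 0:
            return p, n // p
    return None
```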