For example, ZIP compression works by writing repeated data down once and putting a short back-reference in its place to save space.
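Roughly, the back-reference idea looks like this. A toy LZ77-style sketch, not the actual DEFLATE format ZIP uses; the window size and minimum match length here are made up for the demo:

```python
# A toy LZ77-style sketch of the back-reference idea (not the real DEFLATE
# format ZIP uses): repeats are replaced with (offset, length) pointers
# back into text that has already been seen.
def lz_compress(data: str, window: int = 64) -> list:
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # search the sliding window for the longest earlier match
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 3:                     # a pointer only pays off past a few chars
            out.append((best_off, best_len))  # back-reference token
            i += best_len
        else:
            out.append(data[i])               # literal character
            i += 1
    return out

def lz_decompress(tokens: list) -> str:
    s = ""
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):
                s += s[-off]                  # copy one char from earlier output
        else:
            s += t
    return s
```

For instance, `lz_compress("abcabcabcabc")` turns the last nine of the twelve characters into the single token `(3, 9)`.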
But that will only take you so far.

There is a legend of a form of compression so advanced that entire gigabytes could be transferred on a sheet of paper: not in the sense of handing someone a hyperlink to the file, but that the information itself could be reconstructed from a string short enough to write down by hand.
My question to you is: how did your compression algorithm work?
Hash it. Then do an exhaustive, ordered search of all collisions, in order of increasing length. Then send the hash and the collision index.
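For what it's worth, the scheme actually runs, just absurdly slowly for anything real. A minimal sketch, assuming a shortlex enumeration over a two-letter alphabet so the demo finishes, and a deliberately truncated 8-bit hash so collisions actually occur:

```python
import hashlib
from itertools import product

ALPHABET = "ab"  # tiny alphabet so the exhaustive search terminates in a demo

def all_strings():
    """Yield every string over ALPHABET in shortlex order: '', 'a', 'b', 'aa', ..."""
    length = 0
    while True:
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)
        length += 1

def short_hash(s: str) -> str:
    # Truncated to 8 bits (2 hex digits) so distinct strings collide.
    return hashlib.sha256(s.encode()).hexdigest()[:2]

def compress(msg: str):
    """Return (hash, collision index): msg's position among same-hash strings."""
    h, idx = short_hash(msg), 0
    for cand in all_strings():
        if cand == msg:
            return h, idx
        if short_hash(cand) == h:
            idx += 1

def decompress(h: str, idx: int) -> str:
    """Replay the same ordered search until the idx-th collision is reached."""
    seen = 0
    for cand in all_strings():
        if short_hash(cand) == h:
            if seen == idx:
                return cand
            seen += 1
```

A round trip like `decompress(*compress("abba"))` is instant here, but the search cost explodes with message length — which is the catch.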
Actually, this should show you the trade-off: the longer the hash, the shorter the collision index, and the shorter the hash, the longer the index. So you don't gain anything.
The best you can do is exploit some known constraint on the input — say, that the input is a valid C program. Then the index would be a lot shorter, since you could skip invalid inputs without counting them.
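To put a number on that, here is a sketch with an assumed toy constraint (binary strings with no two adjacent 1s, standing in for "is a valid C program"), comparing how many index bits are needed when the decoder does and doesn't know the constraint:

```python
import math
from itertools import product

def valid(bits: str) -> bool:
    # An assumed toy constraint, standing in for "is a valid C program".
    return "11" not in bits

n = 12
all_n = ["".join(p) for p in product("01", repeat=n)]
valid_n = [s for s in all_n if valid(s)]

# Bits needed for an index over arbitrary strings vs. constrained ones:
bits_all = math.ceil(math.log2(len(all_n)))      # 12 bits for any length-12 string
bits_valid = math.ceil(math.log2(len(valid_n)))  # 9 bits: only 377 strings are valid
```

The more restrictive the constraint, the fewer strings you have to count past, and the shorter the index gets.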
Actually, the hash is wasted effort. Just enumerate all valid strings and send the index, and you're back to square one: 1:1 "compression".
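The square-one conclusion is just the pigeonhole principle, and the counting behind "you don't gain anything" is a two-line sanity check:

```python
# There are 2**n bit strings of length n, but only 2**n - 1 strings of
# length strictly less than n. So no lossless scheme can shorten every
# n-bit input: at least one input must map to something at least as long.
def count_shorter(n: int) -> int:
    return sum(2**k for k in range(n))  # strings of length 0 .. n-1

for n in range(1, 17):
    assert count_shorter(n) == 2**n - 1  # always exactly one short of 2**n
```

Any scheme that shrinks some inputs must grow others; real compressors only win because real data is far from uniformly random.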
PuttitoutIsGone 1 point Jul 21, 2021 09:06:27 (+1/-0)
AloisH 0 points Jul 21, 2021 05:28:03 (+0/-0)
You could always just index the index's index.
RepublicanNerd 0 points Jul 21, 2021 06:58:54 (+0/-0)
Teefinyomouf 0 points Jul 21, 2021 20:41:21 (+0/-0)