Re: Re: Swap Compression

rmoser (mlmoser@comcast.net)
Sat, 26 Apr 2003 22:24:04 -0400


So what's the best way to do this? I was originally thinking something like
this (sketched in code below the list):

Grab some swap data
Stuff it into fcomp_push()
When you have 100k of data, seal it up
Write that 100k block
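Roughly like this, as a sketch. fcomp_push() is from my code, but the return
convention I give it here (compressed bytes produced by the call), and
read_swap(), fcomp_seal(), and write_block(), are all placeholder names I'm
making up for whatever the kernel side would actually use:

	#include <stddef.h>

	/* Sketch of the loop above.  fcomp_push() is mine; its return
	 * convention here, plus read_swap(), fcomp_seal(), and
	 * write_block(), are placeholders. */
	long   read_swap(unsigned char *buf, size_t len);
	size_t fcomp_push(const unsigned char *buf, size_t len);
	void   fcomp_seal(void);
	void   write_block(void);

	#define SEAL_AT (100 * 1024)	/* seal at ~100k compressed */
	#define CHUNK   4096		/* pull one page at a time */

	void pump(void)
	{
		static unsigned char in[CHUNK];
		size_t out_bytes = 0;
		long n;

		while ((n = read_swap(in, CHUNK)) > 0) {
			out_bytes += fcomp_push(in, (size_t)n);
			if (out_bytes >= SEAL_AT) {
				fcomp_seal();	/* finish this block */
				write_block();	/* ~100k block to disk */
				out_bytes = 0;
			}
		}
	}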

But does swap compress in 64k blocks? 80x86 has 64k segments.
The calculation for compression performance loss, 256/size_of_input,
gives a loss of 0.003906 (0.3906%) for a dataset 65536 bytes long.
So would it be better to just compress segments, whatever size they
may be, and index those? This would, of course, make it much more efficient
to find the data to uncompress (though it would also be system dependent).
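For comparison, assuming that 256-byte analysis section is a flat per-block
cost, the same formula for a few input sizes:

	256 /   4096 = 0.0625   (6.25%, one 4k page)
	256 /  65536 = 0.0039   (0.39%, one 64k segment)
	256 / 102400 = 0.0025   (0.25%, the 100k blocks above)

so the overhead amortizes away fast once blocks get segment-sized.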

The algo is flexible; it doesn't care about the size of the input. If you
pass full segments at once, you could gain a little more speed. Best of
all, if you rewrite my decompression code to do the forward calculations
for a straight data copy (I'll likely do this myself before the night is up),
you avoid a lot of realloc()'s in the data copy between pointers. It would
also optimize out the decrements of the distance-to-next-pointer counter,
and all of this together gives a big speed increase over my original code.
Since this is a logic change, the compiler can't do the optimization for
you.
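I don't have the rewritten decoder written yet, so purely as an illustration
of the forward-calculation idea (none of these names are real fcomp code):
the output buffer is allocated once at the full uncompressed size, and the
distance to the next pointer is used up front to move the whole literal run
in one call, instead of growing the buffer and counting down per byte:

	#include <string.h>

	/* Illustration only -- not the real fcomp decoder.  "out" was
	 * allocated once at the full 64k, and "dist" (distance to the
	 * next pointer) is known up front, so the whole literal run
	 * moves in one memcpy() and nothing realloc()s or decrements
	 * per byte. */
	static size_t copy_run(unsigned char *out, size_t out_pos,
			       const unsigned char *in, size_t in_pos,
			       size_t dist)
	{
		memcpy(out + out_pos, in + in_pos, dist);
		return dist;	/* caller advances both cursors by this */
	}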

Another thing: I did state the additional overhead before (which is now
going to be 64k + code + 256-byte analysis section + 64k uncompressed
data), but you can pull in less than the full block, decompress it, put it
where it goes, and pull in more. So on REALLY small systems, you can
still do pagefile buffering and not blow out RAM with the extra 128k you
may need. (Heck, all the work could be done in one 64k segment if you're
that determined. Then you could compress the swap on little 6 MB boxen.)
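For example (fdecomp_pull() and put_pages_back() are made-up names; the
point is just the small working window):

	#include <stddef.h>

	/* Sketch of partial pull-in: both functions are hypothetical.
	 * Each fdecomp_pull() call decompresses up to WINDOW more bytes,
	 * remembering its position inside the compressed block, so the
	 * box never holds the whole 64k + 64k at once. */
	#define WINDOW 8192	/* small working buffer */

	long fdecomp_pull(unsigned char *buf, size_t len);
	void put_pages_back(const unsigned char *buf, long n);

	void restore_block(void)
	{
		static unsigned char buf[WINDOW];
		long n;

		while ((n = fdecomp_pull(buf, WINDOW)) > 0)
			put_pages_back(buf, n);
	}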

--Bluefox Icy
