Re: gzip vs. bzip2 (mainly de-)compression "benchmark"

Peter Samuelson (peter@cadcamlab.org)
Tue, 7 May 2002 10:44:13 -0500


[Mark Mielke]
> For floppy disks, it is doubtful that the image could be read off the
> floppy faster than even older computers could un-bzip2 the images,
> meaning that execution time is irrelevant in this context.

Well, first, the disk reads are performed by the boot loader, so to run
the two steps concurrently you'd have to do the decompression stage in
the boot loader as well. In which case what you really want is an
uncompressed image format, bzip2'd after the fact. A hypothetical
bImage.bz2, if you will.

The other problem with your statement is that bzip2 (unlike gzip) is
block-oriented: you can't start decompressing until you have a pretty
big chunk of the input data. (I could be wrong, but I believe the
decompressor needs an entire compressed block - whatever 900k of input
compresses down to, in the case of maximum compression - before it can
emit any output.) So that would limit the amount of parallelisation
you could get within LILO anyway.
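The streaming-vs-block difference is easy to demonstrate with the
incremental decompressors in Python's standard library (purely an
illustration of the format behaviour - obviously not something a boot
loader would run): feed each decompressor the compressed stream one
byte at a time and note how much of the input it has to consume before
it emits its first byte of output.

```python
import bz2
import zlib

# Sample payload: ~100 KB, well under bzip2's maximum 900k block size,
# so the whole bz2 stream is a single block.
data = bytes(range(256)) * 400

def first_output_fraction(compressed, decompressor):
    """Feed `compressed` one byte at a time; return the fraction of the
    stream that had to be consumed before any output appeared."""
    for i in range(len(compressed)):
        if decompressor.decompress(compressed[i:i + 1]):
            return (i + 1) / len(compressed)
    return 1.0

gz_frac = first_output_fraction(zlib.compress(data), zlib.decompressobj())
bz_frac = first_output_fraction(bz2.compress(data), bz2.BZ2Decompressor())

# zlib/gzip (a stream format) starts producing output long before the
# input is exhausted; bzip2 must buffer essentially the whole block
# (here, nearly the whole stream) before it can invert the BWT and
# emit anything.
print(f"zlib emitted output after {gz_frac:.0%} of the input")
print(f"bz2  emitted output after {bz_frac:.0%} of the input")
```

On a payload this size the zlib decompressor typically produces output
after a small fraction of the stream, while the bz2 decompressor stays
silent until nearly the end - which is exactly why overlapping disk
reads with bzip2 decompression buys you so little.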
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/