If 10% of the disk is bad, I wouldn't continue using it.
> What is so bad about the simple way: the one who wants to write
> (e.g. fs) and knows _where_ to write simply uses another newly
> allocated block and dumps the old one on a blacklist. The blacklist
> only for being able to count them (or get the sector-numbers) in
> case you are interested. If you weren't you might as well mark them
> allocated and that's it (which I would presume a _bad_ idea). If
> there are no free blocks left, well, then the medium is full. And
> that is just about the only cause for a write error then (if the
> medium is writeable at all).
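The scheme quoted above can be sketched in a few lines. This is a hypothetical model, not code from any real filesystem: a writer keeps a blacklist, and on a write error it retries on a freshly allocated block and blacklists the old one, so the only remaining write failure is a full medium.

```python
class FlakyDevice:
    """Toy device whose writes fail on a fixed set of bad blocks."""
    def __init__(self, bad_blocks):
        self.bad = set(bad_blocks)
        self.data = {}

    def write(self, block, data):
        if block in self.bad:
            return False            # simulated media error
        self.data[block] = data
        return True

class BlockAllocator:
    """Writer-side allocator implementing the quoted blacklist scheme."""
    def __init__(self, total_blocks):
        self.free = list(range(total_blocks))
        self.blacklist = set()

    def allocate(self):
        if not self.free:
            # With bad blocks handled below, this is the only
            # write-time failure left: the medium is simply full.
            raise IOError("medium full")
        return self.free.pop(0)

    def write(self, block, data, device):
        while True:
            if device.write(block, data):
                return block        # final location of the data
            # Write failed: blacklist the bad block, retry elsewhere.
            self.blacklist.add(block)
            block = self.allocate()
```

Keeping the blacklist (rather than silently marking the blocks allocated) is what lets you count the bad sectors later, as the quoted text suggests.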
Modern disks generally do this kind of thing themselves. By the time
a disk actually reports write errors to the host, its internal spare
pool is usually exhausted, and I wouldn't want to continue using it.
Preferably, I want to know _before_ then, generally by watching the
S.M.A.R.T. data.
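As an illustration of "knowing before then": the attribute table printed by smartctl (from the smartmontools package) can be scanned for the counters that usually climb before outright write errors. The attribute names and the zero threshold below are my assumptions, not anything from this thread:

```python
# Attributes that commonly grow before a disk starts failing writes.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable"}

def failing_attributes(smartctl_output):
    """Return {attribute: raw_value} for watched attributes with a
    nonzero raw value, given `smartctl -A` text output."""
    found = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows have at least 10 columns; the attribute name
        # is column 2 and the raw value is column 10.
        if len(fields) >= 10 and fields[1] in WATCH:
            raw = int(fields[9])
            if raw > 0:
                found[fields[1]] = raw
    return found
```

A nonzero Reallocated_Sector_Ct means the drive has already quietly used up part of its spare pool, which is exactly the early warning you want.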
> Don't make the thing bigger than it really is...
The problem you are describing largely doesn't exist any more. Modern
hard disks do not have fixed physical areas permanently tied to
specific logical blocks - the firmware maps logical blocks onto the
available media as required. When a block that sits on a defective
part of the media is re-written, the disk simply remaps it to a spare
area, transparently to the host.
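That remapping can be modelled in a few lines. This is an illustrative sketch of the idea, not how any real firmware is written: the drive keeps a logical-to-physical map plus a spare pool, and a rewrite of a block whose physical area has gone bad is redirected to a spare.

```python
class Disk:
    """Toy model of firmware-level defect remapping."""
    def __init__(self, blocks, spares):
        # Initially every logical block maps to the same-numbered
        # physical area; extra areas form the spare pool.
        self.map = {lba: lba for lba in range(blocks)}
        self.spares = list(range(blocks, blocks + spares))
        self.defective = set()
        self.media = {}

    def mark_defective(self, physical):
        self.defective.add(physical)

    def write(self, lba, data):
        phys = self.map[lba]
        if phys in self.defective:
            # Transparent remap: the host never sees the bad area.
            phys = self.spares.pop(0)
            self.map[lba] = phys
        self.media[phys] = data

    def read(self, lba):
        return self.media[self.map[lba]]
```

Note the remap only happens on a write - which is why, on a real drive, a pending (unreadable) sector typically clears only after the block is re-written.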