Re: [Ext2-devel] disk throughput

Linus Torvalds (torvalds@transmeta.com)
Mon, 5 Nov 2001 15:50:38 -0800 (PST)


On Mon, 5 Nov 2001, Alexander Viro wrote:
>
> Ideally we would need to predict how many (and how large) files
> will go into a directory. We can't - we have no time machines. But
> the heuristic you've mentioned is clearly broken. It will end up
> with mostly empty trees squeezed into a single cylinder group, and
> when they start to get populated that will be pure hell.

Why? Squeezing into a single cylinder group is _good_. Fragmentation is
bad.

Where's the hell? You move over to the next cylinder group once you have
filled 90% of the current one (pick the percentage randomly) and there
are other groups that are clearly less used.
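
To be concrete, here's a rough sketch of the kind of group selection I
mean. The names (pick_group, group_info) and the hardcoded 90% threshold
are made up for illustration - this is not the actual ext2 allocator:

/*
 * Illustrative sketch only, not the real ext2 allocator.
 * Stay in the current cylinder group until it is ~90% full,
 * then jump to the group with the most free blocks.
 */
#define FULL_THRESHOLD_PCT 90

struct group_info {
	unsigned long free_blocks;
	unsigned long total_blocks;
};

static int pick_group(const struct group_info *g, int ngroups, int cur)
{
	unsigned long used = g[cur].total_blocks - g[cur].free_blocks;
	int best, i;

	/* Keep squeezing into the current group while it has room. */
	if (used * 100 < g[cur].total_blocks * FULL_THRESHOLD_PCT)
		return cur;

	/* Otherwise move over to a clearly less used group. */
	best = cur;
	for (i = 0; i < ngroups; i++)
		if (g[i].free_blocks > g[best].free_blocks)
			best = i;
	return best;
}

The exact threshold doesn't much matter; the point is that allocations
stay dense in one group until it's nearly full, instead of being spread
out preemptively.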

> Benchmarks that try to stress that code tend to be something like
> cvs co, tar x, yadda, yadda. _All_ of them deal only with the
> "fast-growth" pattern. And yes, the FFS inode allocator sucks for
> that scenario - no arguments here. Unfortunately, the variant you
> propose will suck for the slow-growth one, and that is going to
> hurt a lot.

I don't see any arguments for _why_ you claim it would hurt a lot.

Fast growth is what we should care about from a performance standpoint;
slow growth can be sanely handled by slower-paced maintenance
(including approaches like "defragment the disk once a month").

By definition, "slow growth" is amenable to defragmentation. And by
definition, fast growth isn't. And if we're talking about speedups on the
order of _five times_ for something as simple as a "cvs co", I don't see
where your "pure hell" comes in.

A fivefold slowdown on real work _is_ pure hell. You haven't shown a
credible argument that the slow-growth behaviour would ever result in a
fivefold slowdown for _anything_.

Linus
