Re: [RFC][DATA] re "ongoing vm suckage"

Linus Torvalds (torvalds@transmeta.com)
Fri, 3 Aug 2001 21:47:16 -0700 (PDT)


On Sat, 4 Aug 2001, Ben LaHaise wrote:
>
> Using the number of queued sectors in the io queues is, imo, the right way
> to throttle io. The high/low water marks give us decent batching as well
> as the delays that we need for throttling writers. If we remove that,
> we'll need another way to wait for io to complete.

Well, we actually _do_ have that other way already - that should be, after
all, the whole point of the request allocation.

It's when we allocate the request that we know whether we already have too
many requests pending... And we have the batching there too. Maybe the
current maximum number of requests is just way too big?

[ Quick grep later ]

On my 1GB machine, we apparently allocate 1792 requests for _each_ queue.
Considering that a single request can have hundreds of buffers allocated
to it, that is just _ridiculous_.

How about capping the number of requests to something sane, like 128? Then
the natural request allocation (together with the batching that we already
have) should work just dandy.

Ben, willing to do some quick benchmarks?

Linus
