Re: [patch 9/21] batched addition of pages to the LRU

Rik van Riel (riel@conectiva.com.br)
Sun, 11 Aug 2002 12:23:25 -0300 (BRT)


On Sun, 11 Aug 2002, Andrew Morton wrote:

> Also. This whole patch series improves the behaviour of the system
> under heavy writeback load. There is a reduction in page allocation
> failures, some reduction in loss of interactivity due to page
> allocators getting stuck on writeback from the VM. (This is still bad
> though).

> It appears that this simply had the effect of pushing dirty, unwritten
> data closer to the tail of the inactive list, making things worse.

This nicely points out the weak point in having the VM block
on individual pages: the fact that there are so damn many
of those pages.

Every time the VM shifts workloads we can run into the
problem of having a significant part (or even the whole) of
the inactive list full of dirty pages; with the inactive
list being 1/3rd of RAM, you could easily run into 200 MB of
dirty pages.
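(Back-of-the-envelope, with the machine size assumed purely
for illustration: an inactive list at 1/3rd of RAM reaches
200 MB on a box with roughly 600 MB of memory, since
600 MB * 1/3 = 200 MB.)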

If you insist on deferring the "waiting on IO" for these
pages for too long, you'll end up blocking in __get_request_wait
instead, until you've scheduled all 200 MB of dirty pages
for IO.
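
For concreteness, here is a minimal sketch of the reclaim
pattern being described.  This is not the actual 2.4/2.5
shrink_cache() code; start_async_writeback() and
free_clean_page() are made-up helper names, and only
__get_request_wait() (named above) and the list/PageDirty
primitives are real kernel interfaces:

#include <linux/list.h>
#include <linux/mm.h>

/* Illustrative sketch only, not real reclaim code. */
static int shrink_inactive_sketch(struct list_head *inactive,
				  int target, int max_scan)
{
	int freed = 0, scanned = 0;

	while (freed < target && scanned++ < max_scan &&
	       !list_empty(inactive)) {
		struct page *page = list_entry(inactive->prev,
					       struct page, lru);

		if (PageDirty(page)) {
			/*
			 * Queue the page for IO and rotate it to
			 * the head of the list.  Nothing here waits
			 * for the IO to finish, but once the
			 * device's request queue is full the
			 * submission itself sleeps in
			 * __get_request_wait() until a request
			 * slot frees up.
			 */
			start_async_writeback(page);
			list_del(&page->lru);
			list_add(&page->lru, inactive);
			continue;
		}

		/* Clean and unused: reclaimable right away. */
		free_clean_page(page);
		freed++;
	}
	return freed;
}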

By the time you've gotten out of __get_request_wait and
scheduled the last page for IO, you can be pretty sure the
first 120 MB of dirty pages have already been written to disk
and would have been reclaimable for many seconds.
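
A toy userspace model of that timeline makes the same point.
All the numbers in it (200 MB of 4 kB pages, a 128-slot
request queue, strict FIFO completion) are assumptions for
illustration, not measurements of any real kernel; in this
deliberately simple model everything except one queue's worth
of pages has completed by the time the last page is queued,
which is if anything stronger than the 120 MB figure above:

#include <stdio.h>

int main(void)
{
	const int total_pages = 200 * 256;	/* 200 MB of 4 kB pages */
	const int queue_depth = 128;		/* assumed request slots */
	int in_flight = 0;			/* queued, not yet written */
	long completed = 0;			/* pages written to disk */
	int queued;

	for (queued = 0; queued < total_pages; queued++) {
		if (in_flight == queue_depth) {
			/*
			 * Queue full: the submitter blocks (think
			 * __get_request_wait) until the disk
			 * retires one request.
			 */
			in_flight--;
			completed++;
		}
		in_flight++;		/* submit the next dirty page */
	}

	printf("when the last page was queued, %ld of %d MB had "
	       "already been written\n",
	       completed / 256, total_pages / 256);
	return 0;
}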

I really need to look at fixing this thing right ...

kind regards,

Rik

-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/ http://distro.conectiva.com/
