Re: qsbench, interesting results

Daniel Phillips (phillips@arcor.de)
Tue, 1 Oct 2002 20:35:28 +0200


On Tuesday 01 October 2002 20:04, Andrew Morton wrote:
> I'm fairly happy with 2.5 page replacement. It's simple, clean
> and very, very quick to build up a large pool of available memory
> for whatever's going on at the time.
>
> Problem is, it's cruel. People don't notice that we shaved 15 seconds
> off that three minute session of file bashing which they just did.
> But they do notice that when they later wiggle their mouse, it takes
> five seconds to pull the old stuff back in.
>
> The way I'd like to address that is with a "I know that's cool but I
> don't like it" policy override knob. But finding a sensible way of
> doing that is taking some head-scratching. Anything which says
> "unmap pages much later" is doomed to failure I suspect. It will
> just increase latency when we really _do_ need to unmap, and will
> cause weird OOM failures.
>
> So hm. Still thinking.

What you're describing is process RSS management, which Rik has
bravely volunteered to tackle. The goal would be to respond to
nice values as sanely as possible, so that a reasonable portion of
the pages touched by the mouse tends to stick around in memory under
all but the highest-pressure loads.
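
To make the idea a little more concrete, here is a tiny user-space
sketch of one possible policy shape: a hypothetical mapping from a
process's nice value to the share of RAM the replacement code would
try to keep resident for it. The function name, the linear mapping
and the 25% ceiling are all illustrative assumptions on my part, not
anything that exists in 2.5.

	/* Hypothetical sketch: map a nice value (-20..19) to the share of
	 * physical memory that page replacement would try to preserve as
	 * resident set for that process. The linear mapping and the ceiling
	 * are illustrative assumptions, not the kernel's actual policy. */
	#include <stdio.h>

	static double rss_protect_fraction(int nice, double max_fraction)
	{
		/* nice -20 (most favored) -> max_fraction, nice 19 -> 0 */
		if (nice < -20)
			nice = -20;
		if (nice > 19)
			nice = 19;
		return max_fraction * (19 - nice) / 39.0;
	}

	int main(void)
	{
		int nice;

		for (nice = -20; nice <= 19; nice += 13)
			printf("nice %3d -> protect up to %.0f%% of RAM\n",
			       nice, 100.0 * rss_protect_fraction(nice, 0.25));
		return 0;
	}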

Then there is the 'updatedb paged out my desktop in the middle of the
night' problem, which is related but even harder because of the long
timeframe. To fix this really well without hurting other, more
critical loads requires some kind of memory of what was paged out and
when, so that when updatedb goes away, something approximating the
former working set pops back in. Some low-hanging fruit can be had by
simply reading all of swap back in once the load disappears, which
will work fine as long as swap is smaller than RAM and there isn't
too much shared memory.
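
As a rough illustration of that low-hanging fruit, a hypothetical
user-space sketch follows, using the simplest trigger I can think of:
when the load average drops, cycle swap off and on so that everything
swapped out gets read back into RAM. The 0.1 threshold, the 60-second
poll and the use of swapoff/swapon are my assumptions for the sake of
illustration; it needs root and only makes sense while swap usage
fits in free memory.

	/* Hypothetical sketch of "read all of swap back in when the load
	 * disappears": poll the load average and, once the machine looks
	 * idle, cycle swap off and on to pull swapped pages back into RAM.
	 * Threshold and poll interval are arbitrary illustrative choices. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		double load[1];

		for (;;) {
			if (getloadavg(load, 1) == 1 && load[0] < 0.1) {
				/* System looks idle: force swapped pages back in. */
				if (system("swapoff -a && swapon -a") != 0)
					fprintf(stderr, "swap cycle failed (need root?)\n");
				break;
			}
			sleep(60);	/* check again in a minute */
		}
		return 0;
	}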

-- 
Daniel