Re: [patch 3/4] slab reclaim balancing

Andrew Morton (akpm@digeo.com)
Fri, 27 Sep 2002 12:52:15 -0700


Manfred Spraul wrote:
>
> Andrew Morton wrote:
> >
> >>* After flushing a batch back into the lists, the number of free objects
> >>in the lists is calculated. If freeable pages exist and the number
> >>exceeds a target, then the freeable pages above the target are returned
> >>to the page buddy.
> >
> >
> > Probably OK for now. But slab should _not_ hold onto an unused,
> > cache-warm page. Because do_anonymous_page() may want one.
> >
> If the per-cpu caches are enabled on UP, too, then this is a moot point:
> by the time a batch is freed from the per-cpu array, it will be cache cold.

Well yes, it's all smoke, mirrors and wishful thinking. All we can
do is to apply local knowledge of typical behaviour in deciding whether
a page is likely to be usefully reused.
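For reference, the trimming rule quoted at the top can be sketched like
this (the struct, field names and target value are all illustrative, not
the patch's actual code):

```c
#include <stddef.h>

/* Hypothetical sketch of the rule described above: after a batch is
 * flushed back to the slab lists, count the wholly-free pages and
 * return any excess over a target to the page buddy allocator. */
struct cache_stats {
	int free_pages;		/* slab pages with every object free */
	int free_target;	/* free pages we are willing to keep */
};

/* Returns how many pages should be handed back to the buddy. */
static int pages_to_release(const struct cache_stats *c)
{
	if (c->free_pages > c->free_target)
		return c->free_pages - c->free_target;
	return 0;	/* at or below target: keep them as a reserve */
}
```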

> And btw, why do you think a page is cache-warm when the last object on a
> page is freed? If the last 32-byte kmalloc is released on a page, 40xx
> bytes are probably cache-cold.

L2 caches are hundreds of kbytes, and a single P4 cacheline is 1/32nd of
a page, so parts of such a page may well still be resident. Things are
tending in that direction.

> Back to your first problem: You've mentioned excess hits on the
> cache_chain_semaphore. Which app did you use for stress testing?

I think it was dd-to-six-disks.

> Could you run a stress test with the applied patch?

Shall try to.

> I've tried dbench 50, with 128 MB RAM, on uniprocessor, with 2.4:
>
> There were 9100 calls to kmem_cache_reap, and in 90% of the calls, no
> freeable memory was found. Altogether, only 1300 pages were freed from
> the slabs.
>
> Are there just too many calls to kmem_cache_reap()? Perhaps we should
> try to optimize the "nothing freeable exists" logic?

It certainly sounds like it. Some sort of counter which is accessed
outside locks would be appropriate. Test that before deciding to
take the lock.
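Something along these lines, sketched in userspace C11 with a counter
standing in for taking cache_chain_semaphore (all names here are
illustrative, not kernel code):

```c
#include <stdatomic.h>

/* Approximate count of freeable slab pages, maintained outside any
 * lock.  A stale read is harmless: zero just means "probably nothing
 * to do", and the real state is re-checked under the lock. */
static atomic_int freeable_pages;

/* Stand-in for taking cache_chain_semaphore; counts acquisitions so
 * the "lock avoided" case is observable in this sketch. */
static int lock_taken;

/* Returns pages reclaimed; skips the lock when the counter reads 0. */
static int cache_reap(void)
{
	/* Cheap, lock-free test first: in the dbench run above, 90% of
	 * calls found nothing freeable, so most calls return here. */
	if (atomic_load(&freeable_pages) == 0)
		return 0;

	lock_taken++;	/* here the real code would take the semaphore */
	/* Re-check and reap under the lock; another CPU may have
	 * drained the caches since the lockless test. */
	return atomic_exchange(&freeable_pages, 0);
}
```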
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/