Re: [patch 1/13] misc fixes

Daniel Phillips (phillips@arcor.de)
Mon, 29 Jul 2002 23:51:24 +0200


On Monday 29 July 2002 22:00, Andrew Morton wrote:
> > The idea I'm playing with now is to address an array of locks based on
> > something like:
> >
> > spin_lock(pte_chain_locks + ((page->index >> 4) & 0xff));
> >
> > so that 16 consecutive filemap pages use the same lock and there is a limited
> > total number of locks to keep cache pressure down. Since we are creating the
> > vast majority of the pte chain nodes while walking across page tables, this
> > should give nice locality.
>
> Something like that could help.
>
> At some point, when the reverse map is as CPU efficient as we can make it,
> we need to decide whether the remaining cost is worth the benefit. I
> wonder how to do that.

We need measurements for a few other loads, I think.

> > For this to work, anon pages will need to have something in page->index.
> > This isn't too much of a challenge. A reasonable value to put in there is
> > the creator's virtual address, shifted right, and perhaps mangled a little to
> > reduce contention.
>
> Well you want the likely-to-be-temporally-adjacent anon pages to
> use the same lock. So maybe
>
> page->index = some_global_int++;

Yes, that's better.
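
For concreteness, a rough sketch combining the hashed lock array from above
with the incrementing-counter idea might look like the following; the array
size, the pte_chain_lockp()/page_set_anon_index() names and the bare counter
are illustrative placeholders, not existing code:

	#include <linux/spinlock.h>
	#include <linux/mm.h>

	#define NR_PTE_CHAIN_LOCKS 256	/* keep the array small for cache pressure */

	static spinlock_t pte_chain_locks[NR_PTE_CHAIN_LOCKS] =
		{ [0 ... NR_PTE_CHAIN_LOCKS - 1] = SPIN_LOCK_UNLOCKED };

	/* 16 consecutive ->index values hash to the same lock */
	static inline spinlock_t *pte_chain_lockp(struct page *page)
	{
		return pte_chain_locks +
			((page->index >> 4) & (NR_PTE_CHAIN_LOCKS - 1));
	}

	/* anon pages get temporally-adjacent ->index values at creation time */
	static unsigned long anon_page_counter;

	static inline void page_set_anon_index(struct page *page)
	{
		page->index = anon_page_counter++;
	}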

> Except ->index gets stomped on when the page gets added to swapcache.
> Which means that the address of its lock will change. I can't immediately
> think of a fix for that.

We'd have to hold the lock while changing page->index. Pte_chain_lock would
additionally have to check page->index after acquiring the lock and, if it
has changed, drop the lock and take the new one. I don't think the overhead
of that check is significant.
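
As a sketch, assuming the hashed lock array above, the recheck could go
something like this (pte_chain_lock()/pte_chain_unlock() are just
illustrative names):

	static spinlock_t *pte_chain_lock(struct page *page)
	{
		spinlock_t *lock;
		unsigned long index;

		for (;;) {
			index = page->index;
			lock = pte_chain_locks +
				((index >> 4) & (NR_PTE_CHAIN_LOCKS - 1));
			spin_lock(lock);
			if (page->index == index)
				return lock;	/* index stable, right lock held */
			spin_unlock(lock);	/* ->index moved, rehash and retry */
		}
	}

	static inline void pte_chain_unlock(spinlock_t *lock)
	{
		spin_unlock(lock);
	}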

Add_to_page_cache would want a new flavor that shortens the pte chain lock
hold time, but it looks like it should have a swap-specific variant anyway.
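
To show the ordering on the swap side, a minimal sketch using the helpers
above (the function name is made up and the swap cache insertion itself is
elided):

	static void pte_chain_move_index(struct page *page, unsigned long new_index)
	{
		/* take the lock hashed from the old ->index */
		spinlock_t *old_lock = pte_chain_lock(page);

		/* from here on, pte_chain_lock() callers rehash to the new lock */
		page->index = new_index;

		pte_chain_unlock(old_lock);

		/* ... then do the swap cache insertion proper ... */
	}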

-- 
Daniel