Re: [2.4.17/18pre] VM and swap - it's really unusable

Martin Knoblauch (Martin.Knoblauch@TeraPort.de)
Mon, 14 Jan 2002 17:25:06 +0100


> Re: [2.4.17/18pre] VM and swap - it's really unusable
>
>
> Ken,
>
> Attached is an update to my previous vmscan.patch.2.4.17.c
>
> Version "d" fixes a BUG due to a race in the old code _and_
> is much less aggressive at cache shrinkage, or conversely more
> willing to swap out, though not as much as the stock kernel.
>
> It continues to work well wrt high vm pressure.
>
> Give it a whirl to see if it changes your "-j" symptoms.
>
> If you like you can change the one line in the patch
> from "DEF_PRIORITY" which is "6" to progressively smaller
> values to "tune" whatever kind of swap_out behaviour you
> like.
>
> Martin
>
Martin,

looking at the "d" version, I have one question on the piece that calls
swap_out:

@@ -521,6 +524,9 @@
}
spin_unlock(&pagemap_lru_lock);

+ if (max_mapped <= 0 && (nr_pages > 0 || priority < DEF_PRIORITY))
+ swap_out(priority, gfp_mask, classzone);
+
return nr_pages;
}

Curious about the conditions under which swap_out is actually called, I
added a printk and found cases where you call swap_out when nr_pages is
already 0. What sense does that make? I would have thought that
shrink_cache had done its job in that case.

shrink_cache: 24 page-request, 0 pages-to swap, max_mapped=-1599, max_scan=4350, priority=5
shrink_cache: 24 page-request, 0 pages-to swap, max_mapped=-487, max_scan=4052, priority=5
shrink_cache: 29 page-request, 0 pages-to swap, max_mapped=-1076, max_scan=1655, priority=5
shrink_cache: 2 page-request, 0 pages-to swap, max_mapped=-859, max_scan=820, priority=5

Martin

-- 
------------------------------------------------------------------
Martin Knoblauch         |    email:  Martin.Knoblauch@TeraPort.de
TeraPort GmbH            |    Phone:  +49-89-510857-309
C+ITS                    |    Fax:    +49-89-510857-111
http://www.teraport.de   |    Mobile: +49-170-4904759