Re: page_launder() on 2.4.9/10 issue

Daniel Phillips (phillips@bonn-fries.net)
Thu, 6 Sep 2001 19:51:26 +0200


On September 6, 2001 03:10 pm, Stephan von Krawczynski wrote:
> > Blindly delaying all the writes in the name of better read performance isn't
> > the right idea either. Perhaps we should have a good think about some
> > sensible mechanism for balancing reads against writes.
>
> I guess I have the real-world proof for that:
> Yesterday I mastered a CD (around 700 MB) and burned it, I left the equipment
> to get some food and sleep (sometimes needed :-). During this time the machine
> acts as nfs-server and gets about 3 GB of data written to it. Coming back today
> I recognise that deleting the CD image made yesterday frees up about 500 MB of
> physical mem (free mem was very low before). It was obviously held for 24 hours
> for no reason, and _not_ (as one would expect) exchanged for the nfs-data. This
> means the caches were full of _old_ data, which explains why nfs performance has
> dropped remarkably since 2.2. There is too little memory around to get good
> performance (no matter whether reading or writing). Obviously aging did not
> work at all: there was not a single hit on these (CD image) pages during 24
> hours, compared to lots on the nfs-data. Even if the nfs-data had had only a
> single hit, the old CD image should have been evicted, because it is inactive
> and _older_.

OK, this is not related to what we were discussing (IO latency). It's not too
hard to fix: we just need to do a little aging whenever there are allocations,
whether or not there is memory_pressure. I don't think it's a real problem,
though; we have at least two problems we really do need to fix (oom and
high-order failures).

--
Daniel
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/