We will be fine up until the point that the set of all readahead fills the
entire cache; at that point we will start dropping *some* of the readahead.
This degrades gracefully: if the readahead set is twice as large as the cache,
half the readahead will be dropped. We drop the readahead in coherent chunks
so that it can be re-read in one disk seek. This is not such bad behaviour.
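To make the "graceful degradation" claim concrete, here is a toy model (my
own sketch, not kernel code - page numbers stand in for struct pages, and the
eviction policy is simplified to FIFO over whole chunks): with a readahead
stream twice the cache size, exactly half the chunks get dropped, and every
dropped chunk is a contiguous run of pages, i.e. re-readable in one seek.

```python
from collections import deque

def simulate(cache_pages, chunk_size, total_chunks):
    """FIFO cache of readahead chunks; evicts whole chunks at a time."""
    cache = deque()   # each entry is a (start_page, chunk_size) chunk
    dropped = []
    for i in range(total_chunks):
        cache.append((i * chunk_size, chunk_size))
        # evict oldest whole chunks once we exceed cache capacity
        while len(cache) * chunk_size > cache_pages:
            dropped.append(cache.popleft())
    return cache, dropped

# Readahead set twice the cache size: half the chunks survive, half drop.
cache, dropped = simulate(cache_pages=1024, chunk_size=32, total_chunks=64)
```

Each entry in `dropped` covers `chunk_size` contiguous pages, so the cost of
losing it is one seek, not `chunk_size` seeks.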
All this assumes you don't enforce the 1 second size limit on the inactive
queue, of course.
We probably could squeeze a little more performance out of this case by
magically knowing that no input page will ever be reused, as you suggest.
But we would risk buying that improvement at the expense of other, more
typical loads.
That said, I think I might be able to come up with something that uses
specific knowledge about readahead to squeeze a little more performance out
of your example case without breaking loads that are already working pretty
well. It would require another lru list, though - not something we want to
do right now, don't you agree?
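For what it's worth, the shape of that extra-list idea is roughly this (a
hypothetical sketch only - class and method names are mine, and real page
reclaim is far more involved): readahead pages go on their own list, get
promoted to the main list on first use, and unused readahead is reclaimed
before anything that has actually been touched.

```python
from collections import OrderedDict

class TwoListCache:
    """Toy cache with a separate list for speculative readahead pages."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.main = OrderedDict()       # pages that have been used at least once
        self.readahead = OrderedDict()  # speculative pages, not yet used

    def _reclaim(self):
        while len(self.main) + len(self.readahead) > self.capacity:
            # reclaim unused readahead before touching the main list
            victim = self.readahead if self.readahead else self.main
            victim.popitem(last=False)  # drop the oldest entry

    def add_readahead(self, page):
        self.readahead[page] = True
        self._reclaim()

    def access(self, page):
        if page in self.readahead:      # first use: promote to the main list
            del self.readahead[page]
        self.main[page] = True
        self.main.move_to_end(page)     # LRU: mark as most recently used
        self._reclaim()
```

With this, a pile of never-used readahead can't push out pages the load is
actually reusing, which is exactly the property your example case wants.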
-- 
Daniel