Good point. Even with a FIFO queue we can handle this nicely by modifying
the insertion step to scan forward past other pages of the same file. The
readahead pages then end up inserted in reverse order locally, while
chunkwise we still have a FIFO.
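As a rough sketch of that insertion rule (plain Python, not kernel code; the queue class, its list layout with index 0 as the newest end, and the drop-from-tail policy are all illustrative assumptions):

```python
class ReadaheadQueue:
    """Toy FIFO of (file_id, page_no) pairs.

    Index 0 is the newest end; pages are dropped from the tail
    (oldest end).  Insertion scans forward past any run of pages
    belonging to the same file, so each readahead chunk stays
    contiguous: page order within a chunk is the reverse of what a
    plain insert-at-head FIFO would give, while the chunks
    themselves keep FIFO order among each other.
    """

    def __init__(self):
        self.pages = []

    def insert(self, file_id, page_no):
        # Skip past the run of same-file pages at the newest end,
        # then insert just after it.
        i = 0
        while i < len(self.pages) and self.pages[i][0] == file_id:
            i += 1
        self.pages.insert(i, (file_id, page_no))

    def drop_oldest(self):
        # Evict from the tail, i.e. the oldest chunk.
        return self.pages.pop()


q = ReadaheadQueue()
for page in (1, 2, 3):      # readahead chunk for file A
    q.insert("A", page)
for page in (1, 2):         # later readahead chunk for file B
    q.insert("B", page)
print(q.pages)
# -> [('B', 1), ('B', 2), ('A', 1), ('A', 2), ('A', 3)]
```

Note that chunk B, inserted later, sits wholly ahead of chunk A, even though each page was inserted one at a time.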
> This seems even more important when considering multiple streams,
> as if you drop the least recently 'used' (i.e. read in from disk),
> you will instantly create a thrashing storm.
The objective is to avoid getting into the position of having to drop
readahead pages in the first place, by properly throttling the readahead.
When we do have to drop readahead it is because the active list has
expanded; hopefully we will soon stabilize with a shorter readahead list.
Yes, it may well be better to drop from the head of the queue instead of
the tail, because the dropped pages will come from a smaller set of files.
On the other hand, we would penalize faster streams that way. Furthermore,
some readahead pages may never be used, in which case we would keep them
forever.
> And an idea: when dropping read-ahead pages, you might be better
> dropping many readahead pages for a single stream, rather than
> hitting them all equally, else they will tend to run out of
> readahead in sync.
Yes, this requirement is satisfied by the arrangement described.
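A small sketch of why that falls out of the arrangement (plain Python; the queue contents and layout, with index 0 as the newest end, are hypothetical): because the modified insertion keeps each file's readahead chunk contiguous in the queue, an eviction pass that strips pages from the old end empties one stream's readahead at a time instead of shaving one page off every stream.

```python
# Hypothetical queue state: three streams, each chunk contiguous,
# oldest chunk (file A) at the tail.
queue = [("C", 7), ("B", 5), ("B", 6), ("A", 1), ("A", 2), ("A", 3)]

def evict(q, n):
    """Drop the n oldest pages, taken from the tail of the list."""
    return [q.pop() for _ in range(n)]

victims = evict(queue, 3)
print(victims)
# -> [('A', 3), ('A', 2), ('A', 1)]
# All three evicted pages belong to file A: one stream loses its
# readahead while the others keep theirs intact, so the streams do
# not all run out of readahead in sync.
```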
--
Daniel