Yes, good point.  OK, I'll re-examine the dropping logic.  Bear in mind that 
dropping readahead pages is not supposed to happen frequently under 
steady-state operation, so what we do here isn't that critical, and it will 
be hard to construct a load that shows the impact.  The really big benefit 
comes from not overdoing the readahead in the first place, and not underdoing 
it either.
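
To make the sizing point concrete, here is a toy userspace model of the kind 
of adaptive window I have in mind (names and constants are invented for 
illustration, this is not the actual kernel code): the window doubles while 
the access pattern stays sequential and collapses back to a small minimum on 
the first seek, so we neither flood the cache nor starve a streaming reader.

	/* Toy model of adaptive readahead sizing -- illustration only. */
	#include <stdio.h>

	#define RA_MIN   4	/* pages: smallest window worth issuing   */
	#define RA_MAX 128	/* pages: cap so we never flood the cache */

	struct ra_state {
		unsigned long window;	/* current readahead size in pages  */
		unsigned long next;	/* first page index not yet read in */
	};

	/* Called on each read: grow while sequential, collapse on a seek. */
	static void ra_update(struct ra_state *ra, unsigned long page)
	{
		if (page == ra->next) {
			ra->window = ra->window ? ra->window * 2 : RA_MIN;
			if (ra->window > RA_MAX)
				ra->window = RA_MAX;
		} else {
			ra->window = RA_MIN;	/* seek: start small again */
		}
		ra->next = page + 1;
	}

	int main(void)
	{
		struct ra_state ra = { 0, 0 };
		unsigned long pages[] = { 0, 1, 2, 3, 100, 101, 7, 8 };

		for (unsigned i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
			ra_update(&ra, pages[i]);
			printf("read page %3lu -> window %3lu pages\n",
			       pages[i], ra.window);
		}
		return 0;
	}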
> > > If you are streaming dropping all should be no great loss.
> >
> > The question is, how do you know you're streaming?  Some files are
> > read/written many times and some files are accessed randomly.  I'm trying
> > to avoid penalizing these admittedly rarer, but still important cases.
> 
> For those other cases we have pages on the active list which are properly 
> aged, haven't we?
Not if they never get a chance to be moved to the active list.  We have to be 
careful to provide that opportunity.
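
Here is a tiny model of what I mean (pure userspace toy, made-up names, not 
the actual VM code): a page only gets promoted off the inactive list once it 
has actually been referenced, so if we drop a readahead page before the 
application touches it, it never gets aged at all.

	/* Toy two-list model: only referenced pages earn promotion. */
	#include <stdio.h>
	#include <stdbool.h>

	struct page {
		bool referenced;	/* touched since insertion          */
		bool active;		/* promoted to the active list      */
	};

	/* Reclaim pass: promote referenced pages, the rest are reclaimed. */
	static void scan_inactive(struct page *p)
	{
		if (p->referenced) {
			p->active = true;
			p->referenced = false;
		}
		/* else: candidate for immediate reclaim */
	}

	int main(void)
	{
		struct page untouched = { false, false };
		struct page used      = { true,  false };

		scan_inactive(&untouched);
		scan_inactive(&used);

		printf("untouched readahead page promoted? %d\n", untouched.active);
		printf("used readahead page promoted?      %d\n", used.active);
		return 0;
	}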
> And readahead won't help you for random access unless you can cache it all.
> You might throw in a few extra blocks if you have free memory, but then you 
> have free memory anyway.
Readahead definitely can help in some types of random access loads, e.g., 
when the access size is larger than a page, or when the majority of a file's 
pages are eventually accessed, just in random order.  On the other hand, it 
will only hurt for short, sparse, random reads.  Such a pattern needs to be 
detected in generic_file_readahead.
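
Something along these lines is what I have in mind for the detection (again a 
userspace toy with invented names, not a patch against 
generic_file_readahead): keep per-file counts of readahead pages submitted 
versus actually used, and suppress further readahead once misses dominate.

	/* Toy heuristic for spotting short, sparse, random reads. */
	#include <stdbool.h>
	#include <stdio.h>

	struct ra_stats {
		unsigned long submitted;	/* pages read ahead             */
		unsigned long used;		/* of those, actually consumed  */
	};

	/* Keep reading ahead only while at least half the speculative
	 * pages turn out to be useful; wait for some history first. */
	static bool ra_worthwhile(const struct ra_stats *s)
	{
		if (s->submitted < 16)
			return true;
		return s->used * 2 >= s->submitted;
	}

	int main(void)
	{
		struct ra_stats sparse = { .submitted = 64, .used = 9 };
		struct ra_stats dense  = { .submitted = 64, .used = 60 };

		printf("sparse random reads: readahead %s\n",
		       ra_worthwhile(&sparse) ? "kept" : "suppressed");
		printf("dense random reads:  readahead %s\n",
		       ra_worthwhile(&dense)  ? "kept" : "suppressed");
		return 0;
	}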
--
Daniel