I disagree that the linear (by which I assume you mean a linear walk 
through process memory) and random patterns aren't worth testing.  The 
former should
produce very understandable behaviour and that's always a good thing.  It's 
an idiot check.  Specifically, with the algorithms we're using, we expect the 
first-touched pages to be chosen for eviction.  It's worth verifying that 
this works as expected.
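For concreteness, here's a minimal sketch of the linear walker I have in 
mind.  NPAGES and the pass count are arbitrary placeholders; the point is 
to size the buffer well past RAM so eviction actually happens, then touch 
one byte per page in address order:

#include <stdlib.h>
#include <unistd.h>

#define NPAGES	(1UL << 21)	/* placeholder: ~8GB of 4K pages */

int main(void)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	char *buf = malloc(NPAGES * page);

	if (!buf)
		return 1;
	for (int pass = 0; pass < 4; pass++)	/* placeholder pass count */
		for (size_t i = 0; i < NPAGES; i++)
			buf[i * page]++;	/* touch one byte per page */
	free(buf);
	return 0;
}

If the first-touched pages aren't the ones evicted, something basic is 
broken.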
Random gives us a nice baseline against which to evaluate our performance on 
more typical, localized loads.  That is, we need to know we're doing better 
than random, and it's very nice to know by how much.
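The random baseline is the same harness with the access loop swapped out; 
drand48() is just one convenient uniform source here (seed with srand48() 
for repeatable runs):

/* Replaces the nested loops above: touch pages uniformly at random. */
for (size_t n = 0; n < 4 * NPAGES; n++) {
	size_t i = (size_t)(drand48() * NPAGES);	/* in [0, NPAGES) */
	buf[i * page]++;
}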
The Gaussian distribution is also interesting because it gives a simplistic 
notion of virtual address locality.  We are supposed to be able to predict 
the likelihood of future use from historical access patterns; the question 
is: do we?  Comparing the random distribution to the Gaussian, we ought to 
see somewhat fewer evictions under the Gaussian.  (I'll bet right now that 
we completely fail that test, because we just do not examine the referenced 
bits frequently enough to recover any signal from the noise.)
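And a sketch of the Gaussian case on the same harness: the helper goes at 
file scope, the loop replaces the access loop in main(), and sigma is an 
arbitrary knob for how tight the locality is.  Box-Muller is just one 
convenient way to get normal samples (add <math.h> to the includes and 
link with -lm):

/* Box-Muller: one standard-normal sample from two uniforms. */
static double gauss(void)
{
	double u1 = drand48(), u2 = drand48();

	return sqrt(-2.0 * log(1.0 - u1)) * cos(2.0 * M_PI * u2);
}

/* In main(), replacing the access loop: cluster touches around the
 * buffer's midpoint. */
for (size_t n = 0; n < 4 * NPAGES; n++) {
	double sigma = NPAGES / 16.0;	/* placeholder spread */
	long i = (long)(NPAGES / 2 + gauss() * sigma);

	if (i >= 0 && (size_t)i < NPAGES)
		buf[i * page]++;
}

Counting evictions per run, e.g. deltas of pswpout in /proc/vmstat, would 
be one way to make the random-vs-Gaussian comparison concrete.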
I'll leave the more complex patterns to you and Mel, but these simple 
patterns are particularly interesting to me: not as a target for 
optimization, but as verification that the basic mechanisms are working as 
expected.
-- Daniel