I am using an open-source application on ix86 to perform a task that
is cache-intensive. When I run consecutive iterations of the task on
a fixed input, the variance in per-iteration timing is extremely
high. Needless to say, the test machine is otherwise idle.
On every other OS I have tried (Solaris, HP-UX, FreeBSD and Tru64),
the timing is very consistent across iterations. My question is: are
there known issues with L2 cache reuse in the Linux kernel?
I can provide any necessary information for anyone interested in
addressing this issue, but I purposely skipped most technical details
in this post to keep it simple.
Thanks in advance.