Re: What to expect with the 2.6 VM

Andrea Arcangeli (andrea@suse.de)
Wed, 2 Jul 2003 05:04:43 +0200


On Tue, Jul 01, 2003 at 10:54:56AM -0700, Martin J. Bligh wrote:
> --On Tuesday, July 01, 2003 09:22:04 -0700 William Lee Irwin III <wli@holomorphy.com> wrote:
>
> > At some point in the past, I wrote:
> >>> First I ask, "What is this exercising?" That answer is largely process
> >>> creation and destruction and SMP scheduling latency when there are very
> >>> rapidly fluctuating imbalances.
> >>> After observing that, the benchmark is flawed because
> >>> (a) it doesn't run long enough to produce stable numbers
> >>> (b) the results are apparently measured with gettimeofday(), which is
> >>> wildly inaccurate for such short-lived phenomena
> >
> > On Tue, Jul 01, 2003 at 07:24:03AM -0700, Martin J. Bligh wrote:
> >> Bullshit. Use a maximal config file, and run it multiple times. I have
> >> sub 0.5% variance.
> >
> > My thought here had more to do with the measurements becoming dominated
> > by ramp-up and ramp-down than with the thing literally producing
> > unreliable timings. "Instability" was almost certainly the wrong word.
> >
> > I'm also skeptical of its usefulness for scalability comparisons with a
> > single-threaded phase like the linking phase and the otherwise large
> > variations in concurrency. It seems much more like a binary test of
> > "does it get slower when I add more cpus?" than a measure of scalability.
> >
> > For instance, if you were to devise some throughput measure say per
> > gigacycle based on this and compare efficiencies on various systems
> > with it so as to measure scaling factors, what would it be? Would you
> > feel comfortable using it for scalability comparisons given the
> > concurrency limitations for a single compile as a benchmark?
>
> I'm not convinced it's that limited. I'm getting about 1460% cpu out
> of 16 processors - that's pretty well parallelized in my mind.

yes, especially considering what William said above.

But the point here is not to measure scalability in absolute terms; the
point IMHO is the scalability regression introduced by rmap (w/o
objrmap).
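
For context, a minimal and deliberately oversimplified sketch of the two
bookkeeping schemes in question - the structure names, fields and the 4k
page shift below are illustrative assumptions, not the real 2.5 kernel
definitions:

	/*
	 * Simplified illustration only, NOT kernel code.
	 *
	 * full rmap: every page keeps a chain of the ptes mapping it,
	 * which must be allocated and updated on fork, mmap, cow faults
	 * and exit.
	 */
	#include <stdio.h>

	struct pte_chain_sketch {
		struct pte_chain_sketch *next;
		void *pte;                      /* one entry per mapping */
	};

	struct page_sketch {
		struct pte_chain_sketch *chain; /* extra per-page state */
		unsigned long index;            /* offset within the file */
	};

	/*
	 * objrmap: file-backed pages are found by walking the vmas
	 * already linked off the mapped file, so no per-pte structure
	 * has to be maintained on the fork/exit fast paths.
	 */
	struct vma_sketch {
		struct vma_sketch *next_share;  /* linked off the file */
		unsigned long vm_start;
		unsigned long vm_pgoff;
	};

	/* locate the mapping by arithmetic instead of a chain walk
	 * (assuming 4k pages) */
	static unsigned long page_vaddr_in(const struct vma_sketch *vma,
					   const struct page_sketch *page)
	{
		return vma->vm_start +
		       ((page->index - vma->vm_pgoff) << 12);
	}

	int main(void)
	{
		struct vma_sketch vma = { NULL, 0x08048000UL, 0 };
		struct page_sketch page = { NULL, 3 };

		printf("page 3 of the file is mapped at %#lx\n",
		       page_vaddr_in(&vma, &page));
		return 0;
	}

The extra pte_chain maintenance is paid on exactly the paths a kernel
compile hammers (fork/exec/exit and page faults), which is why this
benchmark shows the regression in the first place.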

The fact that the kernel workload isn't the most scalable thing on
earth simply means that other workloads doing the same thing make+gcc
does - but never serializing in a linking phase - will be hurt even
more.
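
To put rough numbers on it: the 1460% out of 1600% that Martin quotes is
about 91% parallel efficiency, and plain Amdahl's law shows how much the
single-threaded link step alone caps what the compile can report. The
serial fractions below are guesses for illustration, only the 16-cpu
figure comes from Martin's mail:

	/* back-of-the-envelope Amdahl's law sketch, illustrative only */
	#include <stdio.h>

	/* speedup on n cpus when a fraction s of the work is serial */
	static double amdahl(double s, int n)
	{
		return 1.0 / (s + (1.0 - s) / n);
	}

	int main(void)
	{
		int ncpus = 16;
		/* assumed serial fractions, not measurements */
		double fractions[] = { 0.00, 0.01, 0.05 };

		for (int i = 0; i < 3; i++)
			printf("serial %.0f%% -> %.1fx on %d cpus"
			       " (%.0f%% cpu)\n",
			       fractions[i] * 100.0,
			       amdahl(fractions[i], ncpus),
			       ncpus,
			       amdahl(fractions[i], ncpus) * 100.0);
		return 0;
	}

That prints roughly 1600%, 1391% and 914% cpu for 0%, 1% and 5% serial
work, so a workload with no serial phase at all spends essentially all
of its time in the parallel part, i.e. exactly where rmap's extra
per-cpu cost is visible.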

Andrea