Re: Don't use dbench for benchmarks

Nigel Gamble (nigel@nrg.org)
Mon, 28 Jan 2002 08:56:25 -0800 (PST)


On Mon, 28 Jan 2002, Richard B. Johnson wrote:
> It seems that compiling the Linux Kernel while burning a CDROM gives
> a good check of "acceptable" performance. But, such operations are
> not "benchmarks". The trick is to create a benchmark that performs
> many "simultaneous" independent and co-dependent operations using
> I/O devices that everyone is likely to have. I haven't seen anything
> like this yet.
>
> Such a benchmark might have multiple tasks performing things like:
>
> (1) Real Math on large arrays.
>
> (2) Data-base indexed lookups.
>
> (3) Data-base keys sorting.
>
> (4) Small file I/O with multiple creations and deletions.
>
> (5) Large file I/O operations with many seeks.
>
> (6) Multiple "network" Client/Server tasks through loop-back.
>
> (7) Simulated compiles by searching directory trees for
> "include" files, reading them and closing them, while
> performing string-searches to simulate compiler parsing.
>
> (8) Two or more tasks communicating using shared-RAM. This
> can be a "nasty" performance hog, but tests the performance
> of threaded applications without having to write those
> applications.
>
> (9) And more....
>
>
> These tasks would be given a "performance weighting value", a heuristic
> that relates to perceived overall performance.

It sounds like you are describing the AIM benchmark suite, which has
been used for years to compare Unix system performance, and which was
recently released under the GPL by Caldera.

See http://caldera.com/developers/community/contrib/aim.html
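For anyone who wants to experiment before digging into AIM, a minimal
sketch of the kind of shared-memory task described in item (8) above
might look something like this.  It is only an illustration under my
own assumptions: the round count is arbitrary, and real timing,
weighting and error handling are left out.

/*
 * Sketch of a shared-memory ping-pong between two processes (item 8).
 * A counter is bounced through an anonymous shared mapping, guarded by
 * a pair of process-shared POSIX semaphores.
 * Build (on Linux) with something like:  cc -O2 -o shmping shmping.c -lpthread
 */
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define ROUNDS 100000		/* arbitrary number of round trips */

struct shared {
	sem_t to_child;		/* parent -> child */
	sem_t to_parent;	/* child -> parent */
	long counter;		/* data bounced between the two tasks */
};

int main(void)
{
	/* Anonymous shared mapping visible to both parent and child. */
	struct shared *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (s == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	sem_init(&s->to_child, 1, 0);	/* second argument 1 = shared across processes */
	sem_init(&s->to_parent, 1, 0);
	s->counter = 0;

	if (fork() == 0) {		/* child: bump the counter and echo back */
		for (int i = 0; i < ROUNDS; i++) {
			sem_wait(&s->to_child);
			s->counter++;
			sem_post(&s->to_parent);
		}
		_exit(0);
	}

	for (int i = 0; i < ROUNDS; i++) {	/* parent: drive the ping-pong */
		sem_post(&s->to_child);
		sem_wait(&s->to_parent);
	}
	wait(NULL);
	printf("counter = %ld after %d round trips\n", s->counter, ROUNDS);
	return 0;
}

A real benchmark task would wrap the loop in timing calls and feed the
round-trip rate into whatever weighting scheme is chosen; this only
shows the shape of the test.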

Nigel Gamble nigel@nrg.org
Mountain View, CA, USA. http://www.nrg.org/
