kernel benchmark

gis88530 (gis88530@cis.nctu.edu.tw)
Fri, 16 Mar 2001 10:52:27 +0800


Hi,

I use kernprof+gprof to profile the 2.2.16 kernel,
but the resolution is only milliseconds.
So I use the do_gettimeofday() kernel function to measure
the latency instead. (This function has microsecond resolution.)
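(A minimal sketch of the call, assuming the usual struct timeval
interface:)
--------------
struct timeval tv;

do_gettimeofday(&tv);  /* tv.tv_sec = seconds, tv.tv_usec = microseconds */
--------------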

Moreover, I use a SmartBits packet generator to generate
specific network traffic loads. The environment is
shown below. However, the results are very strange. I would
expect the latency to increase progressively as the load
increases, but I cannot explain these results.
Could you give me some hints? Thanks a lot.

1518-byte packets
load    latency (us)
  1%    13.1284
 10%    14.1629
 20%    12.6558
 30%    11.1056
 40%    10.7510
 50%    10.4148
 60%    10.3337
 70%    10.1038
 80%    10.1103
 90%    10.3634
100%    11.2367

64-byte packets
load    latency (us)
  1%     3.6767
 10%     2.7696
 20%     4.3926
 30%     2.8135
 40%     8.2552
 50%     5.3088
 60%     9.3744
 70%    23.6247
 80%     8.5351
 90%     9.7217
100%    13.065

Benchmark Environment:

+---SmartBits<---+
|                |
+---->Linux------+

* The do_gettimeofday() measurement code is as follows:
--------------
do_gettimeofday(&begin);
...
(kernel does something)
...
do_gettimeofday(&end);

/* borrow a second if the microsecond field underflows */
if (end.tv_usec < begin.tv_usec) {
        end.tv_usec += 1000000;
        end.tv_sec--;
}
end.tv_sec -= begin.tv_sec;
end.tv_usec -= begin.tv_usec;

/* elapsed time in microseconds */
return ((end.tv_sec * 1000000) + end.tv_usec);
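
For completeness, the same arithmetic factored into a standalone
helper (elapsed_usec is just a name I made up for this sketch; it
assumes the 2.2-era struct timeval interface and that end >= begin):
--------------
#include <linux/time.h>

/* Microseconds elapsed between two timevals (a sketch only). */
static unsigned long elapsed_usec(struct timeval begin, struct timeval end)
{
        /* borrow a second if the microsecond field underflows */
        if (end.tv_usec < begin.tv_usec) {
                end.tv_usec += 1000000;
                end.tv_sec--;
        }
        return (end.tv_sec - begin.tv_sec) * 1000000
                + (end.tv_usec - begin.tv_usec);
}
--------------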
