I think one of the problems with this entire debate is the lack of meaningful
numbers. Not for the first time, I propose testing with something that
measures application-level benefit instead of internal numbers that may not
mean anything. For example, here is a simple test:
/* user code: repeat a 5 ms delay for one hour and track the jitter */
count = 200*3600;        /* 200 iterations/sec * 3600 sec = one hour */
while (count--) {
        read cycle timer
        delay 5 milliseconds
        read cycle timer
        compute actual delay and its difference from 5 milliseconds
        store the worst case
}
printf("After one hour the worst deviation is %d clock ticks\n", worst);
printf("This was supposed to take one hour and it took %d ticks\n", compute_elapsed());
> Begin working on the worst-case locks. Solutions like Andrew's
> low-latency and my lock-break are a start. Better (at least in general)
> solutions are to analyze the locks. Localize them; make them finer
> grained. Analyze the algorithms. Find the big problems. Anyone look
The theory that "fine grained = better" is not proved. What is obvious is that
fine-grained locking means more time spent in the overhead of acquiring and
releasing locks, potentially more time spent in lock contention, many more
opportunities for cache ping-pong on real SMP, and code that is much harder
to debug. The performance gain that is supposed to balance all of that is
often elusive.
> at the tty layer lately? Ugh. Using the preemptive kernel as a base
> and the analysis of the locks as a list of culprits, clean this cruft
> up. This would benefit SMP, too. Perhaps a better locking construct is
> The immediate result is good; the future is better.
Removing synchronization by removing contention
is better engineering than fiddling about with synchronization
primitives, but it is much harder.
-- 
---------------------------------------------------------
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
www.fsmlabs.com  www.rtlinux.com