There are more serious problems with this implementation:
1) It's *very* imprecise. Even on a 1 GHz CPU with HZ = 100, the precision
is only ~86 nsec. With HZ = 1000 it's almost unusable on *any* CPU.
2) The additional delay caused by integer multiplication, cache misses,
and so on can be large, especially on older processors.
On a 233 MHz PII it's 600-700 nsec (perfectly repeatable); on a 600 MHz
Alpha EV56, 200-300 nsec.
As of the current 2.4 kernel, the only user of ndelay() is
ide_execute_command(), which calls ndelay(400). Why can't udelay(1) be
used instead?
Ivan.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/