Because making ticks shorter/more frequent just increases timer overhead all the time, whether you are actually doing anything that requires it or not. That is a big waste of CPU cycles.
Using the partial-tick method put forward by George, you only pay the price
for higher-resolution timers WHEN YOU WANT TO.
Most things that want, say, 1 usec precision don't want to do something EVERY
usec, just something every now and then with 1 usec precision: things like a
program that wants to block for 350 usec, where waiting 10 or even 1 msec
would be too long.
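
For concreteness, this is roughly what such a program looks like (a generic POSIX sketch, not code from George's patch). On a stock jiffy-based kernel the 350 usec request gets rounded up to at least the next tick, 10 msec at HZ=100 or 1 msec at HZ=1000, which is exactly the problem:

/* A task that needs to block for ~350 usec.  On a plain jiffy-based
 * kernel this sleep is rounded up to at least one tick, far longer
 * than requested. */
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec req = { .tv_sec = 0, .tv_nsec = 350 * 1000 }; /* 350 usec */
	struct timespec before, after;

	clock_gettime(CLOCK_MONOTONIC, &before);
	nanosleep(&req, NULL);
	clock_gettime(CLOCK_MONOTONIC, &after);

	long elapsed_usec = (after.tv_sec - before.tv_sec) * 1000000L
			  + (after.tv_nsec - before.tv_nsec) / 1000L;
	printf("asked for 350 usec, actually slept %ld usec\n", elapsed_usec);
	return 0;
}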
The timer overhead of using fixed-interval timers (as you suggest) to support
that occasional 350 usec block would eat too much CPU to be practical.
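
A quick back-of-the-envelope calculation makes the point. The per-interrupt cost below is a figure I'm assuming purely for illustration (call it 1 usec of handler and accounting work per tick); the exact number doesn't change the conclusion:

/* Rough timer-interrupt load at various fixed tick rates.  The
 * per-interrupt cost is an assumed ballpark, not a measurement. */
#include <stdio.h>

int main(void)
{
	const double cost_usec = 1.0;	/* assumed cost of one timer interrupt */
	const long rates[] = { 100, 1000, 1000000 }; /* HZ=100, HZ=1000, 1 usec fixed tick */

	for (int i = 0; i < 3; i++) {
		double busy = rates[i] * cost_usec / 1e6 * 100.0;
		printf("%8ld ticks/sec -> ~%.2f%% of one CPU in the tick handler\n",
		       rates[i], busy);
	}

	/* Partial-tick model: one 350 usec sleep per second adds exactly one
	 * extra interrupt, so steady-state cost stays at the HZ=100 line. */
	return 0;
}

Even at an optimistic 1 usec per interrupt, a fixed 1 usec tick eats the entire CPU, while the partial-tick model adds one interrupt per actual sub-jiffy event and nothing otherwise.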
Increasing the timer frequency penalizes ALL users/processes with increased
timer overhead, all the time, for the benefit of the small number of tasks
that need better resolution. The sub-jiffy/partial-tick model only pays that
price when there is an actual timed event that needs to occur at that higher
resolution; the rest of the time the timer overhead remains as it is today
(which to my mind is 10 times what it needs to be, but that is an argument
for another day).
Embedded systems in particular need higher resolution, and these are
precisely the systems that can't afford to multiply their timer overhead by a
factor of 10 or more (as increasing HZ to 1000 does).