Seems reasonable to me. Increasing HZ adds overhead, so it
makes sense to take the interrupt hit only when it's actually
needed. In my case, we want to provide fairly precise network
delays (we're building a WAN simulator) and still hit line
rate. Now, I'm pretty far from the code, but I suspect that the
interrupt overhead needed to get the precision the customer is
asking for would be prohibitive. I don't know if we'll get the
precision the customer wants with George's approach, but we'll
get a lot closer than we would by setting HZ to 10000 on our
wimpy little embedded platform.
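
To put rough numbers on it (these are invented figures, just to
show the shape of the problem), here's a quick back-of-the-envelope
in C:

/* Cost of a periodic tick at HZ=10000, with made-up numbers;
 * the real per-interrupt cost depends entirely on the platform. */
#include <stdio.h>

int main(void)
{
	long hz = 10000;      /* hypothetical tick rate */
	double isr_us = 15.0; /* assumed cost per timer interrupt, in us */
	double cpu_frac = (double)hz * isr_us / 1e6;

	printf("HZ=%ld at %.0f us/interrupt burns %.1f%% of the CPU\n",
	       hz, isr_us, cpu_frac * 100.0);
	return 0;
}

On a slow embedded CPU, 15 us per interrupt is probably optimistic,
and 15% of the CPU gone before a single packet moves is hard to
justify.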
George's approach would work a lot better when running lots of UML
VMs on a single box, too, wouldn't it?
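
Same sort of rough arithmetic for the UML case (numbers invented
again):

/* Idle-tick load with many guests, with made-up numbers. */
#include <stdio.h>

int main(void)
{
	int n_vms = 50;  /* hypothetical number of idle UML instances */
	long hz = 100;   /* periodic tick rate inside each guest */

	/* With periodic ticks, every guest wakes HZ times a second
	 * even when it has nothing to do. */
	printf("periodic ticks: %ld host wakeups/sec\n", (long)n_vms * hz);

	/* With one-shot (event-driven) timers, an idle guest schedules
	 * nothing until it actually has a timer due to expire. */
	printf("one-shot timers: ~0 wakeups/sec for idle guests\n");
	return 0;
}

Thousands of host wakeups a second just to keep idle guests
ticking adds up fast.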
- Dan