Re: select() - Linux vs. BSD

Jamie Lokier (lk@tantalophile.demon.co.uk)
Sat, 2 Jun 2001 23:46:40 +0200


lost@l-w.net wrote:
> Of course, not looking at the sets upon a zero return is a fairly obvious
> optimization as there is little point in doing so.

No; a fairly obvious optimisation is to avoid calling FD_ZERO if you
can clear the bits individually when you test them.

When you examine the sets after select() returns, you can clear each
bit as you examine it; once you have done that, you know the set is
zero, and you need only set the relevant bits again before the next
call to select().
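
Here is a rough sketch of that pattern in C; the fds[]/nfds table,
maxfd, handle_input() and poll_once() below are invented for
illustration, not taken from any real program:

    #include <stddef.h>
    #include <sys/select.h>

    /* Hypothetical bookkeeping: fds[] holds the nfds descriptors of
     * interest, maxfd is the largest of them, handle_input() consumes
     * a ready one. */
    extern int  fds[];
    extern int  nfds;
    extern int  maxfd;
    extern void handle_input(int fd);

    static fd_set readfds;          /* FD_ZERO'd once at start-up */

    void poll_once(void)
    {
            struct timeval tv = { 5, 0 };
            int i, ready;

            /* The set is known to be zero here, so just set the
             * interesting bits -- no FD_ZERO on every pass. */
            for (i = 0; i < nfds; i++)
                    FD_SET(fds[i], &readfds);

            ready = select(maxfd + 1, &readfds, NULL, NULL, &tv);

            /* Clear each bit as it is examined, stopping early once
             * all `ready' bits have been seen.  After this loop the
             * set is zero again -- provided the kernel zeroes it on
             * timeout, which is exactly the assumption at issue.
             * (Error handling omitted: on -1 the set contents are
             * unspecified and a real program would FD_ZERO and
             * retry.) */
            for (i = 0; i < nfds && ready > 0; i++) {
                    if (FD_ISSET(fds[i], &readfds)) {
                            FD_CLR(fds[i], &readfds);
                            ready--;
                            handle_input(fds[i]);
                    }
            }
    }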

If you can't rely on the sets being cleared on a timeout, then you
will have to call FD_ZERO in that case, or go through the list of
descriptors and clear their bits individually. (This can be avoided,
but it means keeping track of state between successive calls to
select().) That is unlike the non-timeout case, where you can stop
checking bits as soon as you have counted N of them set, N being
select()'s return value.
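
Concretely, the extra work on a timeout might look like this (a
sketch only, continuing the hypothetical readfds and fds[] from
above):

    if (ready == 0) {
            /* Cannot assume the kernel zeroed the set: clear it
             * wholesale... */
            FD_ZERO(&readfds);
            /* ...or per descriptor, if other bits must survive:
             *   for (i = 0; i < nfds; i++)
             *           FD_CLR(fds[i], &readfds);
             */
    }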

So you see, there is a handy optimisation if you can assume the sets
are zeroed on timeout.

-- Jamie