some questions using rdtsc in user space

Alexandre P. Nunes (alex@PolesApart.wox.org)
Fri, 02 Aug 2002 14:08:51 -0300


Hi,

A friend and I have run into an interesting scenario; maybe someone can
help us.

We have to access a device connected to the parallel port, which works in
the following way: you send a byte to the port to turn some bits on
(reflected on some of the parallel port pins), which the device interprets
as a command. Then you are supposed to sleep about ~200 ns (maybe more, it
just can't be much less), and then you send a byte which the device
receives as data pertinent to that command.

We wrote a program which accomplishes this by doing outb() to the
appropriate address(es), followed by usleep(1), but that sleep seems to
take about 10 ms on average, which is far from good for our application.
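
In rough terms the relevant part looks like the sketch below (simplified
and untested; it assumes the first parallel port at 0x378 with access
granted via ioperm(), and the command/data values are just placeholders):

#include <unistd.h>	/* usleep() */
#include <sys/io.h>	/* outb(), ioperm() */

#define LPT_BASE 0x378			/* first parallel port (assumption) */

int main(void)
{
	unsigned char cmd = 0x01, data = 0x02;	/* placeholder values */

	if (ioperm(LPT_BASE, 3, 1) < 0)		/* needs root */
		return 1;

	outb(cmd, LPT_BASE);	/* bits appear on the data pins: the command */
	usleep(1);		/* should be ~200 ns, but takes ~10 ms here */
	outb(data, LPT_BASE);	/* the data byte for that command */

	return 0;
}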

I read somewhere that putting the process at real-time priority could
bring the average down to 2 ms, but I had the thought that I could solve
this by using the rdtsc instruction, because as far as I know it won't
cause a trap to kernel mode, which may be expensive. Am I right?
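
For reference, by real-time priority I mean something like this sketch
(untested; it needs root, and the priority value is chosen arbitrarily):

#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp;

	sp.sched_priority = 1;

	/* move this process into the SCHED_FIFO real-time class so it
	 * isn't preempted by ordinary time-sharing tasks */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return 1;
	}

	/* ... time-critical port I/O goes here ... */
	return 0;
}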

I don't need to use real-time Linux (though I'm considering real-time
priority), nor do I have desperate timing-precision needs; what I don't
want is to have huge delays. I also cannot rely on the low-latency
patches, if possible (though I know they could help), because the program
will eventually run on standard kernels.

If using rdtsc is a good way, does anyone know how I would do some sort of
loop, converting the rdtsc difference (it is in CPU clocks, right?) to
nano/microseconds, and whether there could be bad behaviour from this (I
believe there could be some SMP issues, but for now that is irrelevant
for us)?
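
Something like the following loop is what I have in mind (untested sketch;
it assumes the TSC counts CPU clock cycles and that cpu_mhz has been read
from /proc/cpuinfo or similar beforehand, and the function names are mine):

#include <stdint.h>

/* read the timestamp counter (x86, GCC inline asm) */
static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* busy-wait roughly `ns` nanoseconds; cycles = ns * MHz / 1000 */
static void ndelay_tsc(unsigned long ns, unsigned long cpu_mhz)
{
	uint64_t start = rdtsc();
	uint64_t cycles = (uint64_t)ns * cpu_mhz / 1000;

	while (rdtsc() - start < cycles)
		;	/* spin */
}

The idea is to call ndelay_tsc(200, cpu_mhz) between the two outb() calls
instead of usleep(1). Does that look sane?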

Thanks!

Alexandre

P.S.: Please carbon-copy me, since I'm not subscribed to the list.
