performance impact of using noapic

Stephane Charette (scharette@packeteer.com)
Tue, 02 Jul 2002 12:22:34 -0700


I was reading through "FreeBSD Versus Linux Revisited" by Moshe Bar at "http://www.byte.com/documents/s=1794/byt20011107s0001/1112_moshe.html".

One paragraph in particular caught my eye:

    On the Linux side, I attached all interrupts coming
    from the network adaptor to one CPU. With the new
    TCP/IP stack in the 2.4 kernels this really becomes
    necessary. Otherwise, you might find the incoming
    packets arranged out of order, because later interrupts
    are serviced (on another CPU) before earlier ones, thus
    requiring a reordering further down the handling layers.

Is this a widely-known issue, or is it just theory? I'd never heard it mentioned until I read the article.

I ran some web-based performance tests with the 2.4.19-pre9 SMP kernel on a dual-CPU 1600 MHz Athlon box, and found that booting with "noapic" actually improved network performance. (Negligibly -- only a 1% improvement in the small WebStone-based test I ran.)

As I wrote in another post on performance earlier today, the actual values from my tests are not important -- it is the trend I wish to highlight.

My questions are:

1) Am I right in thinking that "noapic" forces all interrupts to be handled by one CPU?

2) How would you force all interrupts from a single hardware device (rather than from all devices) to be handled by one processor, as hinted at in the paragraph quoted above? (A rough sketch of what I mean follows these questions.)
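For concreteness, here is a minimal, untested sketch of the interface I believe is involved: /proc/interrupts shows per-CPU interrupt counts, and /proc/irq/<n>/smp_affinity accepts a hex CPU bitmask restricting which CPUs an IRQ may be delivered to. The IRQ number below is a placeholder only -- the real one for the NIC has to be looked up in /proc/interrupts.

    #!/usr/bin/env python
    # Minimal sketch (untested): inspect per-CPU interrupt counts and
    # pin a single IRQ to CPU0.  NIC_IRQ is a placeholder -- look up
    # the real IRQ for the network adaptor in /proc/interrupts.

    NIC_IRQ = 19       # hypothetical IRQ for the network adaptor
    CPU0_MASK = "1"    # hex bitmask: bit 0 set => deliver to CPU0 only

    def show_interrupts():
        # /proc/interrupts has one row per IRQ, one count column per CPU.
        with open("/proc/interrupts") as f:
            print(f.read())

    def pin_irq(irq, mask):
        # Writing a hex CPU bitmask to smp_affinity restricts which CPUs
        # the IO-APIC may route this IRQ to.  (With "noapic" the legacy
        # PIC is used instead, so this should have no effect.)
        with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
            f.write(mask + "\n")

    if __name__ == "__main__":
        show_interrupts()
        pin_irq(NIC_IRQ, CPU0_MASK)

If that reading of smp_affinity is right, it would also bear on question 1: with "noapic" the IO-APIC is bypassed and the boot CPU ends up servicing every interrupt anyway.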

Thanks,

Stephane Charette
