Re: [RFC][PATCH] nbd driver for 2.5.72

Lou Langholtz (ldl@aros.net)
Sun, 22 Jun 2003 08:15:47 -0600


Jens Axboe wrote:

>On Sat, Jun 21 2003, Lou Langholtz wrote:
>
>
>>>Because often that lock protects driver-internal objects that are used
>>>by all queues.
>>>
>>>
>>>
>>Not sure I understand what you're saying... I was going by the kernel
>>blk_init_queue(q, rfn, lock) . . .
>>
>. . . One example of such is IDE, which has two drives on one
>channel and the channel is the synchronization point. So it actually
>makes sense to have one lock per channel, and have that lock be shared
>by the two queues (drives) on that channel.
>
>Seems to me, you are suffering somewhat from the 'more locks is just
>faster' disease. This is often not the case. ->queue_lock being a
>pointer is just more powerful than having the lock embedded, because it
>gives you the option to make your locking hierarchy the way you want it.
>
>So please, leave the single global nbd_lock and just use that for all
>queues until you have anything close to resembling real evidence that
>splitting it up is worth it. I do guarantee you that for X busy queues,
>the patch you sent will be _slower_ than maintaining one single lock due
>to dirty cache line bouncing.
>
IDE is a great example, thanks. In that sense each NIC on the host
(especially a slower one like a 100Mbps NIC) is something of a
synchronization point, analogous to the IDE channel. That's a very
helpful way to picture it. I'll redo the patch shortly with just a
single lock. Most hosts have only one NIC anyway, and the space savings
are a welcome bonus. With the NIC as the sync point one could argue
that I should detect the NICs and use literally one lock per NIC, but
I'm not going to make that argument. Simpler (especially to start with)
is better by me, as is saving the space. :-)

As for the disease... honestly, I didn't understand what you've just
explained until now. I suspect others who suffer from it were thinking,
as I did, that these are in simple terms logically independent queues
and therefore deserve logically independent locks. The cache line
bouncing seems like a separate issue with its own merits (i.e. it's
arguably reason enough on its own to stick with the single lock).
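
For anyone else who was as fuzzy on this as I was, here's how I now
picture it (purely an illustration, not the layout of my patch): with a
per-device lock embedded in each element of the device array, the locks
for different devices sit right next to each other in memory, so two
CPUs driving two different devices can still end up ping-ponging the
same dirty cache line:

	struct nbd_device {
		spinlock_t	queue_lock;	/* per-device lock: neighbouring
						 * entries can share a cache line */
		/* ... */
	};

versus the single shared lock, which adds no per-device space and gives
at most one lock line to bounce around, however many devices there are:

	static spinlock_t nbd_lock = SPIN_LOCK_UNLOCKED;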

Maybe there's a good place to add some cautionary comments about this,
if they aren't there already (perhaps under
Documentation/block/request_queue.txt)? That's just a thought though...
I'd rather stay focused on NBD myself. ;-)
