Re: recursive spinlocks. Shoot.

Peter T. Breuer (ptb@it.uc3m.es)
Mon, 19 May 2003 01:15:16 +0200


In article <20030518202013$5297@gated-at.bofh.it> you wrote:
>>
> > #define nestlock_lock(snl) \
>> >     do { \
>> >         if ((snl)->uniq == current) { \
>>
>> That would be able to read uniq while it is being written by something
>> else (which it can, according to the code below). It needs protection.

> No it does not, look more carefully.

I'm afraid I only see that it does!

> >             atomic_inc(&(snl)->count); \
>> >         } else { \
>> >             spin_lock(&(snl)->lock); \
>> >             atomic_inc(&(snl)->count); \
>> >             (snl)->uniq = current; \
>>
>> Hmm .. else we wait for the lock, and then set count and uniq, while
>> somebody else may have entered and be reading it :-). You exit with

> Nope, think about a case where it breaks. False negatives are not possible,
> because uniq is set by the same task, and neither are false positives.

No. This is not true. Imagine two threads, timed as follows ...

Thread A                               Thread B

   .
   .
   .
   .
if ((snl)->uniq == current) {              .
    atomic_inc(&(snl)->count);             .
} else {                                   .
    spin_lock(&(snl)->lock);               .
    atomic_inc(&(snl)->count);             .
    (snl)->uniq = current;        <->  if ((snl)->uniq == current) {
                                           atomic_inc(&(snl)->count);
                                       } else {
                                           spin_lock(&(snl)->lock);
                                           atomic_inc(&(snl)->count);
                                           (snl)->uniq = current;

There you are. One thread hits the read of uniq at exactly the moment the
other writes it. Bang.

>> Well, it's not assembler either, but at least it's easily comparable
>> with the nonrecursive version. It's essentially got an extra if and
>> an inc in the lock. That's all.

> Well, there's a little difference. In case of contention, you loop with
> your custom try lock while I used the optimized asm code inside spin_lock.
> But again, I believe we didn't lose anything with the removal of this code
> (nested/recursive locks) from the kernel.

We lose hours of programmers' time spent looking for deadlocks caused by
accidentally taking the same spinlock twice without knowing it.

A question in my mind is whether a fault in a third thread, such as
sleeping with a spinlock held, can turn a recursive spinlock into a
source of faults ... no, I don't see any way.

Peter
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/