RE: [PATCH] udev enhancements to use kernel event queue

David Schwartz (davids@webmaster.com)
Fri, 13 Jun 2003 04:17:54 -0700


Paul Mackerras is said to have opined:

> Patrick Mochel writes:

> > +static inline int atomic_inc_and_read(atomic_t *v)
> > +{
> > +	__asm__ __volatile__(
> > +		LOCK "incl %0"
> > +		:"=m" (v->counter)
> > +		:"m" (v->counter));
> > +	return v->counter;
> > +}

> BZZZT. If another CPU is also doing atomic_inc_and_read you could end
> up with both calls returning the same value.
>
> You can't do atomic_inc_and_read on 386. You can on cpus that have
> cmpxchg (e.g. later x86). You can also on machines with load-locked
> and store-conditional instructions (alpha, ppc, probably most other
> RISCs).
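
(For what it's worth, the cmpxchg flavor Paul alludes to would look
roughly like the sketch below; it assumes the kernel's cmpxchg() macro,
i.e. anything post-386, and the function name is made up:

	static inline int atomic_inc_and_read_cmpxchg(atomic_t *v)
	{
		int old;

		do {
			old = atomic_read(v);
			/* retry until nobody changed the counter under us */
		} while (cmpxchg(&v->counter, old, old + 1) != old);

		return old + 1;
	}

The cmpxchg only succeeds if the counter still holds the value we read,
so the value returned is exactly the one our own increment produced.)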

You can also do it with a conditional move instruction, but it's kind of
ugly. No help on a '386 though.

There are ways to do it that work on a 386, but they are all basically
equivalent to (or worse than) acquiring a spinlock, doing the deed, and then
releasing it.
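
A minimal sketch of that spinlock fallback, for reference; the lock
name is made up, IRQ-context issues are ignored for brevity, and
SPIN_LOCK_UNLOCKED was the static initializer of the day:

	static spinlock_t inc_read_lock = SPIN_LOCK_UNLOCKED;

	static inline int atomic_inc_and_read_locked(atomic_t *v)
	{
		int ret;

		spin_lock(&inc_read_lock);
		ret = ++v->counter;	/* serialized: the value we read
					   is the one we just produced */
		spin_unlock(&inc_read_lock);
		return ret;
	}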

You could also do (in pseudo-code):

top:
	ret <- v->counter	; sample the counter
	inc ret			; ret = the value our increment should produce
	LOCK incl v->counter	; atomically bump the real counter
	cmp v->counter, ret	; did it land on the predicted value?
	jz end			; yes: nobody raced us, ret is our result
	LOCK decl v->counter	; no: undo our increment
	jmp top			; and try again
end:
	return ret

This does not strictly guarantee in-order return values, but ordering is
meaningless without a lock anyway.
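
For concreteness, the same retry loop in C, assuming only the atomic_t
primitives a 386 actually has (atomic_read, atomic_inc, atomic_dec);
the function name is hypothetical:

	static inline int atomic_inc_and_read_386(atomic_t *v)
	{
		int ret;

		for (;;) {
			ret = atomic_read(v) + 1;	/* predicted result */
			atomic_inc(v);			/* LOCK incl */
			if (atomic_read(v) == ret)	/* nobody raced us */
				return ret;
			atomic_dec(v);			/* undo, try again */
		}
	}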

DS
