No — you might end up waking a thread that called pthread_cond_wait()
after you issued the pthread_cond_broadcast(), in place of one of the
threads that was already waiting.
I don't believe this is allowed.  
> But:
> static int __pthread_cond_wait(pthread_cond_t *cond,
>                                pthread_mutex_t *mutex,
>                                const struct timespec *reltime)
> {
>         int ret;
> 
>         /* Increment first so broadcaster knows we are waiting. */
>         futex_down(&cond->lock);
>         atomic_inc(&cond->num_waiting);
> (*)     futex_up(&mutex->futex, 1);
> a)      futex_up(&cond->lock, 1);  [move into syscall]
>         do {
> b)              ret = futex_down_time(&cond->futex, ABSTIME); [cond_timed_wait]
>         } while (ret < 0 && errno == EINTR);
>         [futex_up(&cond->lock, 1); /* release condvar */]
> 
>         futex_down(&mutex->futex);
>         return ret;
> }
> 
> With the original code, we have a "signal/broadcast lost window (a->b)" 
> that shouldn't be there:
Where?  Having done the inc, the broadcaster knows we are waiting and
ups the condvar futex, so the futex_down at (b) will fall through,
giving the "thread behaves as if it [signal or broadcast] were issued
after the about-to-block thread has blocked."
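To see why the window is closed, it may help to model the futex the way the quoted code treats it: as a counting semaphore where up increments and down only sleeps at zero. This is a toy single-threaded sketch, not the kernel API — `toy_futex`, `toy_up`, and `toy_down_try` are made-up names. An up issued while the waiter sits between (a) and (b) leaves the counter positive, so the later down falls through instead of sleeping, and the wakeup is not lost.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy model of a futex-as-semaphore: just a counter. */
struct toy_futex { atomic_int val; };

/* The broadcaster/signaller bumps the counter ("futex_up"). */
static void toy_up(struct toy_futex *f)
{
        atomic_fetch_add(&f->val, 1);
}

/* "futex_down" without the sleep: returns 1 if a pending up let us
 * fall through, 0 if the counter was zero and we would have blocked. */
static int toy_down_try(struct toy_futex *f)
{
        int v = atomic_load(&f->val);
        while (v > 0) {
                /* On failure the CAS reloads v, so we just retry. */
                if (atomic_compare_exchange_weak(&f->val, &v, v - 1))
                        return 1;       /* up arrived first: no sleep */
        }
        return 0;                       /* counter at zero: would block */
}
```

Usage: an up delivered "in the window" (before the down runs) makes the subsequent down fall through; only with no pending up would the waiter actually block.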
> So we would need to enhance the futex_down_timed() call to
> atomically release cond->lock on entry, re-acquiring it on exit (because
> of the loop).
> This boils down to a cond_var syscall to me (wouldn't sys_ulock(,,OP)
> be a better name? with OPs like MUTEX_UP, MUTEX_DOWN, SEMA_UPn, SEMA_DOWNn,
> COND_WAIT, COND_TIMED_WAIT, COND_SIGNAL, COND_BROADCAST, RWLOCK_WRLOCK,
> RWLOCK_RDLOCK, RWLOCK_UNLOCK)
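For what it's worth, the race the quoted enhancement is aimed at is also what a compare-and-block wait addresses: sleep only if the futex word still holds the value the caller saw before dropping the lock, otherwise return immediately and let the caller retry. This is a toy sketch of just that user-visible check — `toy_wait_if_equal` is a made-up name, and the actual enqueue-and-sleep step is elided:

```c
#include <assert.h>
#include <stdatomic.h>
#include <errno.h>

/* Compare-and-block sketch: refuse to sleep if the word changed
 * between the caller's read (under the lock) and this call.
 * Returns 0 if we would have gone to sleep, -1 (with errno set)
 * if a wakeup slipped in first. */
static int toy_wait_if_equal(atomic_int *uaddr, int expected)
{
        if (atomic_load(uaddr) != expected) {
                errno = EWOULDBLOCK;    /* signal/broadcast beat us */
                return -1;
        }
        /* ... a real implementation would enqueue and sleep here ... */
        return 0;
}
```

With this shape, a signal that lands after the lock is released but before the wait runs changes the word, so the wait returns -1 instead of sleeping — the same "no lost window" property, without a combined condvar syscall.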
You're talking about a completely different beast.
So the summary is: futexes not sufficient to implement pthreads
locking.  That's fine; I can drop all the "extension for pthreads"
futex patches and leave the code as-is ('cept the UP_FAIR patch, which
is independent of this debate).
Thanks!
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.