Re: spinlock efficiency problem [was 2.5.57 IO slowdown with CONFIG_PREEMPT enabled]

Joe Korty (jak@rudolph.ccur.com)
Mon, 20 Jan 2003 17:58:07 -0500 (EST)


[ resend - forgot to send this to the list, also forgot intro text ]

> Robert Macaulay <robert_macaulay@dell.com> wrote:
>>
>> On Wed, 15 Jan 2003, Andrew Morton wrote:
>>> if you could please test that with CONFIG_PREEMPT=y
>>
>> Reverting that brings the speed back up
>
> OK. How irritating.
>
> Presumably there's a fairness problem - once a CPU goes in there to start
> spinning on the lock, the length of the loop is such that it's easy for
> non-holders to zoom in and claim it first. Or something.
>
> Unless another way of solving the problem which that patch solves presents
> itself we may need to revert it.
>
> Or not. Should a CONFIG_PREEMPT SMP kernel compromise its latency because of
> overused locking??

Andrew, Everyone,

The new, preemptable spin_lock() spins on an atomic bus-locking
read/write, instead of the ordinary read that the original spin_lock
implementation spun on. Perhaps that is the source of the inefficiency
being seen.
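The distinction can be sketched in plain C11 atomics (a minimal
illustration only; names and types here are made up and are not the
kernel's, which uses arch-specific assembly and cpu_relax()). The point
is that the waiters spin on a cheap read, and only issue the expensive
bus-locking read-modify-write once the lock looks free:

```c
#include <stdatomic.h>

/* Hypothetical test-and-test-and-set lock, for illustration. */
typedef struct { atomic_int locked; } ttas_lock_t;

static void ttas_lock(ttas_lock_t *l)
{
	for (;;) {
		/* Read-only spin: no bus lock, the cache line stays
		 * shared among the waiting CPUs. */
		while (atomic_load_explicit(&l->locked,
					    memory_order_relaxed))
			; /* cpu_relax() would go here in kernel code */

		/* Lock looks free: now try the one expensive atomic
		 * RMW. If another CPU beat us to it, go back to the
		 * cheap read loop. */
		if (!atomic_exchange_explicit(&l->locked, 1,
					      memory_order_acquire))
			return;
	}
}

static void ttas_unlock(ttas_lock_t *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

Spinning directly on the atomic exchange, by contrast, keeps pulling the
cache line exclusive on every waiter on every iteration, which is what a
trylock-in-a-loop effectively does.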

Attached sample code compiles but is untested and incomplete (present
only to illustrate the idea).

Joe

--- 2.5-bk/kernel/sched.c.orig 2003-01-20 14:14:55.000000000 -0500
+++ 2.5-bk/kernel/sched.c 2003-01-20 17:31:49.000000000 -0500
@@ -2465,15 +2465,13 @@
 		_raw_spin_lock(lock);
 		return;
 	}
-
-	while (!_raw_spin_trylock(lock)) {
-		if (need_resched()) {
-			preempt_enable_no_resched();
-			__cond_resched();
-			preempt_disable();
+	do {
+		preempt_enable();
+		while (spin_is_locked(lock)) {
+			cpu_relax();
 		}
-		cpu_relax();
-	}
+		preempt_disable();
+	} while (!_raw_spin_trylock(lock));
 }

void __preempt_write_lock(rwlock_t *lock)
