I suspect that a number of these statistical accounting mechanisms
are going to break.  The new irq-affinity code works awfully well:
it steers interrupts away from busy CPUs, and as a consequence the
timer-based kernel profiler in 2.5 doesn't work very well at present.
When investigating this, I ran a busy-wait process.  It attached
itself to CPU #3, and that CPU received precisely zero interrupts
across a five-minute period.  Since the profiler only takes samples
from the timer interrupt, it cunningly avoids profiling busy CPUs,
which is rather counter-productive.  It's fortunate that oprofile
uses NMIs, which are not steered by the affinity code.
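
For reference, the load was nothing fancy; a sketch of the sort of
thing I mean (the spin loop is illustrative, not the exact test):

	/* Spin forever; the scheduler's natural affinity keeps it
	 * parked on a single CPU. */
	int main(void)
	{
		for (;;)
			;
		return 0;
	}

Run that, then watch the per-CPU columns in /proc/interrupts to see
which CPUs are actually receiving interrupts.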
> ...
> ===== fs/buffer.c 1.64 vs edited =====
> --- 1.64/fs/buffer.c    Mon May 13 19:04:59 2002
> +++ edited/fs/buffer.c  Mon May 13 19:16:57 2002
> @@ -156,8 +156,10 @@
>         get_bh(bh);
>         add_wait_queue(&bh->b_wait, &wait);
>         do {
> +               atomic_inc(&nr_iowait_tasks);
>                 run_task_queue(&tq_disk);
>                 set_task_state(tsk, TASK_UNINTERRUPTIBLE);
> +               atomic_dec(&nr_iowait_tasks);
>                 if (!buffer_locked(bh))
>                         break;
>                 schedule();
Shouldn't the atomic_inc cover the schedule()?  As it stands, the
counter is decremented before the task actually sleeps, so a task
blocked in schedule() is never counted as waiting on I/O.
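
Something along these lines would keep the count elevated across the
sleep (a sketch only; the loop tail and cleanup are reconstructed
from the surrounding fs/buffer.c code of the era, which the quoted
hunk doesn't show):

	get_bh(bh);
	add_wait_queue(&bh->b_wait, &wait);
	atomic_inc(&nr_iowait_tasks);	/* task is now waiting on I/O */
	do {
		run_task_queue(&tq_disk);
		set_task_state(tsk, TASK_UNINTERRUPTIBLE);
		if (!buffer_locked(bh))
			break;
		schedule();		/* the sleep is now counted */
	} while (buffer_locked(bh));
	atomic_dec(&nr_iowait_tasks);	/* paired on every exit path */

This over-counts slightly (the task is counted while still runnable
inside the loop body), but it errs on the side of covering the
schedule() however the loop is exited.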