Re: [PATCH] io stalls (was: -rc7 Re: Linux 2.4.21-rc6)

Chris Mason (mason@suse.com)
10 Jun 2003 20:44:14 -0400


On Tue, 2003-06-10 at 20:16, Andrea Arcangeli wrote:
> On Tue, Jun 10, 2003 at 07:13:45PM -0400, Chris Mason wrote:
> > On Mon, 2003-06-09 at 18:19, Andrea Arcangeli wrote:
> > The avg throughput per process with vanilla rc7 is 3MB/s; the best
> > I've been able to do with nr_requests at higher levels was 1.3MB/s.
> > With smaller numbers of iozone threads (10 and lower so far) I can
> > match rc7 speeds, but not with 20 procs.
>
> at least with my patches, I also made this change:
>
> -#define ELV_LINUS_SEEK_COST 16
> +#define ELV_LINUS_SEEK_COST 1
>
> #define ELEVATOR_NOOP \
> ((elevator_t) { \
> @@ -93,8 +93,8 @@ static inline int elevator_request_laten
>
> #define ELEVATOR_LINUS \
> ((elevator_t) { \
> - 2048, /* read passovers */ \
> - 8192, /* write passovers */ \
> + 128, /* read passovers */ \
> + 512, /* write passovers */ \
> \
>

Right, I had forgotten to elvtune these in before my runs. It shouldn't
change the __get_request_wait numbers, except insofar as a different
percentage of merged requests leads to a different number of requests
overall (which my numbers did show).

> you didn't change the I/O scheduler at all compared to mainline, so
> there can be quite a lot of difference in the bandwidth average per
> process between my patches and mainline and your patches (unless you run
> elvtune or unless you backed out the above).
>
> Anyway, the 130671 < 100, 0 < 200, 0 < 300, 0 < 400, 0 < 500 from your
> patch sounds perfectly fair, and that's unrelated to the I/O scheduler
> and the size of the request queue. I believe the most interesting
> difference is the blocking of tasks until the waitqueue is empty (i.e.
> clearing the waitqueue-full bit only when nobody is waiting). That is
> of course the right thing to do; it was a bug in my patch that I merged
> by mistake from Nick's original patch, and I'm going to fix it
> immediately.

Ok, increasing q->nr_requests also changes the throughput in high-merge
workloads.

Basically, if we have 20 procs doing streaming buffered I/O, their
buffers end up mixed together on the dirty list. So assuming we hit the
hard dirty limit and all 20 procs are running write_some_buffers(), the
only way we'll be able to merge the end result efficiently is if we can
get in 20 * 32 requests before unplugging.
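
To put rough numbers on it (this is illustrative arithmetic only, not
kernel code; the 20 writers and 32-buffer batches are the ones above,
and 128 is the default 2.4 request count per queue):

#include <stdio.h>

int main(void)
{
	int nr_procs    = 20;	/* concurrent streaming writers */
	int batch       = 32;	/* buffers per write_some_buffers() call */
	int nr_requests = 128;	/* default 2.4 request slots per queue */

	/* slots we'd need in flight before an unplug for the elevator
	 * to merge each process's buffers back together */
	printf("slots needed:    %d\n", nr_procs * batch);	/* 640 */
	printf("slots available: %d\n", nr_requests);		/* 128 */
	return 0;
}

So we come up roughly 5x short of what full merging would need.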

This is because write_some_buffers grabs 32 buffers at a time, and each
caller has to wait fairly in __get_request_wait. With only 128 requests
in the request queue, the disk is unplugged before any of the 20 procs
has submitted all 32 of its buffers.

It might make sense to change write_some_buffers to work in smaller
units; 32 is a lot of trips through __get_request_wait just for an
atime update.
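
Something like this userspace sketch is the kind of change I mean (the
buffer list and submit_buffer() are stand-ins for the real buffer cache
machinery, not actual fs/buffer.c code; the point is only the batch
cap):

#include <stddef.h>

struct buffer {
	struct buffer *next;
	int dirty;
};

/* stand-in for the real I/O submission path */
static void submit_buffer(struct buffer *bh)
{
	(void)bh;
}

/* Stop after `batch' submissions so the caller goes back through
 * __get_request_wait (and gives the other writers a turn at the
 * request queue) more often than once per 32 buffers. */
static int write_some_buffers_small(struct buffer *dirty, int batch)
{
	int count = 0;

	for (; dirty != NULL && count < batch; dirty = dirty->next) {
		if (!dirty->dirty)
			continue;
		dirty->dirty = 0;
		submit_buffer(dirty);
		count++;
	}
	return count;
}

Calling it with a batch of 8 instead of 32 would mean a process only
holds onto request slots for a quarter as long per trip.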

-chris
