I was excited to report the significant improvement of 2.4.19rc1 plus the 
fsync fix over 2.4.18, and didn't realize that the improvement was not due 
to the fsync patch.  I'm so glad Richard did a careful check; I was on my 
way out the door for my vacation :)
I would like to know what changed in 2.4.19rc1 that gives our block I/O 
benchmark such a significant improvement over 2.4.18.
> > It appears from these results that there is no appreciable improvement
> > using the fsync patch - there is a slight loss of top end on 4 adapters,
> > 1 drive.
>
> that's very much expected; as I said, with my new design, by adding an
> additional (third) pass, I could remove the slight loss that I expected
> from the simple patch that puts wait_on_buffer right in the first pass.
>
> I mentioned this in my first email of the thread, so it looks all right.
> For rc2, accepting the slight loss sounds like the simplest approach.
>
> If you care about it, we can fix it with my new fsync accounting design;
> just let me know if you're interested. Personally I'm pretty much fine
> with it this way too; as said in the first email, if we block it's
> likely because bdflush is pumping the queue for us. The slowdown is most
> probably due to too-early unplugging of the queue at the blocking points.
I don't care about the very slight (and possibly within the noise floor of 
our test) reduction in throughput due to the fsync fix.  I think yours and 
Andrew's assertion that the bdflush / dirty page handling is getting stopped 
up is likely the problem preventing scaling to my personal goal of 250 to 
300 MB/sec on our setup.
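
Just to check that I'm reading the design correctly, here is a rough sketch 
of how I understand the two approaches.  This is illustrative only; it is 
not the actual 2.4 fs/buffer.c code or Andrea's patch, and the dirty-buffer 
iterator is a made-up placeholder:

/*
 * Sketch only.  Baseline fsync makes two passes over an inode's dirty
 * buffers: pass 1 queues the writes (keeping the request queue full so
 * the elevator can merge), pass 2 waits for them.  The simple fix adds a
 * wait_on_buffer() in pass 1 for buffers that are already locked (in
 * flight), and each of those waits is a blocking point that can unplug
 * the queue early, which would explain the slight top-end loss.  My
 * reading of the three-pass design is that it moves all of the waiting
 * until after every write has been queued.
 */
#include <linux/fs.h>		/* ll_rw_block(), buffer_*() (2.4-era) */
#include <linux/locks.h>	/* wait_on_buffer() (2.4-era) */

static int sketch_fsync_dirty_buffers(struct inode *inode)
{
	struct buffer_head *bh;

	/* pass 1: queue the writes */
	for_each_dirty_buffer(inode, bh) {	/* hypothetical iterator */
		if (buffer_locked(bh))
			wait_on_buffer(bh);	/* blocking point added by
						   the simple fix */
		if (buffer_dirty(bh))
			ll_rw_block(WRITE, 1, &bh);
	}

	/* pass 2: wait for everything we queued */
	for_each_dirty_buffer(inode, bh)
		wait_on_buffer(bh);

	return 0;
}

If that reading is right, the win of the third pass is presumably that 
nothing blocks (and so nothing unplugs the queue) until all the writes have 
been queued; please correct me if I have it backwards.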
Thanks,
Mark Gross
PS: I had a very nice time on Mount Hood.  I didn't make it to the top this 
time; too much snow had melted off the top of the thing to allow a safe 
attempt at the summit.  It was a guided (http://www.timberlinemtguides.com) 
3-day climb.