Just to follow up on this. I did some testing on my test machine,
which has two U160 Adaptec SCSI chains (one on the motherboard, one on
a separate PCI card) with a bunch of Seagate ST318451LC 18GB 15000rpm
(Cheetah X15) drives, which claim a transfer rate of 37.4 to 48.9
MB/sec.
That transfer rate is off a single track. This might be a bit
simplistic, but it takes 4ms to read a track and 0.7ms to step to the
next track, so that's roughly a 15% drop in throughput when writing
multiple consecutive tracks. I would therefore expect a maximum
sustained throughput of 31.8 to 41.6 MB/sec when writing.
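That arithmetic can be checked with a quick sketch (the 4ms and 0.7ms
figures, and the single-track rates, are the ones quoted above):

```python
# Sustained-throughput model: the drive reads one full track (4 ms),
# then spends 0.7 ms stepping to the next track before reading again.
track_read_ms = 4.0
track_step_ms = 0.7

# Fraction of time actually spent transferring data.
duty_cycle = track_read_ms / (track_read_ms + track_step_ms)
drop = 1.0 - duty_cycle  # throughput lost to track-to-track steps

# Apply to the single-track rates claimed for the Cheetah X15.
min_rate, max_rate = 37.4, 48.9  # MB/sec off a single track
lo, hi = min_rate * duty_cycle, max_rate * duty_cycle
print(f"drop: {drop:.1%}, sustained: {lo:.1f} to {hi:.1f} MB/sec")
# -> drop: 14.9%, sustained: 31.8 to 41.6 MB/sec
```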
If I create a raid1 array with one drive on each SCSI chain, I get a
rebuild rate of about 40MB/sec, which is what I would expect.
If I create a second while the first is still rebuilding, the rates drop
to about 38MB/sec. I guess there is a bit more bus contention, as we
are now at about 50% bus utilisation.
With a third, the speeds drop to around 31MB/sec, which is 93MB/sec
on each bus.
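As a sanity check on those bus figures (the only number below that isn't
from the tests above is the nominal 160MB/sec U160 limit):

```python
# Three raid1 rebuilds at once: each array has one drive on each of the
# two SCSI chains, so each bus carries three drives' worth of traffic.
per_array_rate = 31.0  # MB/sec observed per rebuild with three running
arrays = 3

per_bus = per_array_rate * arrays  # rebuild traffic on each bus
u160_limit = 160.0                 # nominal U160 bandwidth, MB/sec
print(f"{per_bus:.0f} MB/sec per bus, {per_bus / u160_limit:.0%} of nominal U160")
# -> 93 MB/sec per bus, 58% of nominal U160
```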
If I create a raid5 array using 9 drives (4 on one channel, 5 on the
other), and create it with 8 working drives, one failed, and one
spare, then reconstruction starts on the spare at about 22MB/sec.
This puts 110MB/sec over one of the SCSI channels.
So the drives are not maxing out, and neither are the SCSI busses.
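The 110MB/sec figure follows from the drive layout (a rough check,
assuming every member streams at the reconstruction rate):

```python
# Raid5 reconstruction onto the spare: the 8 working drives are read and
# the spare is written, so every member moves data at the rebuild rate.
rebuild_rate = 22.0          # MB/sec onto the spare
drives_on_busy_channel = 5   # the channel holding five of the nine drives

channel_traffic = rebuild_rate * drives_on_busy_channel
print(f"{channel_traffic:.0f} MB/sec on the five-drive channel")
# -> 110 MB/sec on the five-drive channel
```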
I'm curious to know where the speed loss is coming from, but I think
that on the whole, the raid layer is doing quite a good job of
keeping the drives busy.
Note that if I create a raid5 array without any failed or spare
drives, then reconstruction speed is much lower: I get 13MB/sec.
This is because the "resync" process is optimised for an array that is
mostly in sync. "Reconstruction" is a much more efficient way to
create a new raid5 array.
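For reference, reconstructing a missing raid5 member is just an XOR of
the same-offset chunks from the surviving members, which streams
sequentially very well. A toy sketch of the idea (not md's actual code):

```python
from functools import reduce

def xor_chunks(chunks):
    """Recover a missing raid5 member (data or parity alike) by
    XOR-ing the same-offset chunk from every surviving member."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        chunks))

# Toy example: three data chunks plus their parity; lose one, rebuild it.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0e"]
parity = xor_chunks(data)                   # parity = XOR of data chunks
lost = data.pop(1)                          # pretend the second drive failed
recovered = xor_chunks(data + [parity])     # XOR the rest to get it back
assert recovered == lost
```

A resync, by contrast, has to read every member, check the parity, and
only write where it finds a mismatch, which is why it runs slower here.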
NeilBrown