Re: kernel freeze on 2.4.0.prerelease (smp,raid5)

Neil Brown
Fri, 5 Jan 2001 08:53:37 +1100 (EST)

On Wednesday January 3, wrote:
> On Tue, 02 Jan 2001 18:19:41 +0100, Otto Meier wrote:
> >>Dual Celeron (SMP,raid5)
> >> As stated in my first mail, I actually run my raid5 devices in degraded mode,
> >> and as I remember, some raid5 stuff changed between
> >> test13p3 and newer kernels.
> >So tell us, why do you run your raid5 devices in degraded mode?? It
> >cannot be good for performance, and certainly isn't good for
> >redundancy!!! But I'm not complaining as you found a bug...
> I am actually in the middle of the conversion process to raid5, but it takes a while.
> I am too lazy :-) to free up the next drive and get raid5 into
> fully running mode.

If "necessity is the mother of invention", then I think laziness is
the father :-)

> btw what does this message in boot.msg mean?
> <4>raid5: switching cache buffer size, 4096 --> 1024
> <4>raid5: switching cache buffer size, 1024 --> 4096

The raid5 module maintains a stripe cache. The width of this cache
needs to be the same as the size of requests that are received.
The initial default size is 4096 bytes.
When you mkfs or fsck, the I/O requests that arrive are 1024 bytes
long, so the cache is flushed and rebuilt with a different size.
After you mount a filesystem, requests start coming at filesystem
blocksize, which is typically 4096 bytes.
If you happen to use LVM to partition a raid5 device, and have a
1K-block filesystem in one partition and a 4k-block filesystem in
another, then requests of different sizes will arrive mixed together,
the stripe cache will constantly be flushed and rebuilt, and you
will get lots of these messages, together with a performance hit, as
lots of requests get serialised by the cache flushing.
