Re: "bio too big" error

Wil Reichert (wilreichert@yahoo.com)
12 Dec 2002 12:33:06 -0500


> ie. previously we were accidentally comparing bytes with sectors to
> verify the device sizes. So either I'm being very stupid (likely) and
> the above patch is bogus, or you really don't have room for this lv.
> Can you send me 3 bits of information please:
Well, it works fine w/ the 2.4 kernel & prior 2.5's, so I think my lv is
fine...
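
(If I understand the bug right, that's a factor-of-512 mix-up, since a
sector is 512 bytes - e.g. the 80G disc below is 80039116800 bytes but
only 156326400 sectors, so comparing one unit against the other makes
things look ~512x too big or too small.)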

> 1) disk/partition sizes for your PVs
It spans 4 entire discs and one partition:
Disk /dev/discs/disc4/disc: 80.0 GB, 80039116800 bytes
Disk /dev/discs/disc1/disc: 123.5 GB, 123522416640 bytes
Disk /dev/ide/host2/bus1/target0/lun0/disc: 100.0 GB, 100030242816 bytes
Disk /dev/ide/host2/bus0/target1/lun0/disc: 10.1 GB, 10110320640 bytes
/dev/discs/disc0/part4 40072 119150 39855816 8e Linux LVM

Dunno if it matters, but the 80G is 2 striped 40s on a 3ware controller,
the 120, 100, and 10 are on a Promise U133 card, and the 40 gig
partition is on the native VIA controller. To top it all off, this is an
SMP box.
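
As a rough cross-check (assuming 512-byte sectors, and that extent_size
and pe_start in the backup below are sector counts), the fdisk sizes do
line up with the PE counts; e.g. for the 80G disc:

  80039116800 bytes / 512 = 156326400 sectors
  (156326400 - 33152) / 32768 = 4769 extents (rounded down)

which matches pv0's pe_count of 4769 (74.5 Gigabytes) below.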

> 2) an LVM2 backup of the metadata (the nice readable ascii one).
/etc/lvm/backup/cheese_vg -

# Generated by LVM2: Tue Dec 10 21:11:37 2002

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgconvert -M2 cheese_vg'"

creation_host = "darwin" # Linux darwin 2.4.19 #1 SMP Wed Nov 13 16:54:28 EST 2002 i686
creation_time = 1039572697 # Tue Dec 10 21:11:37 2002

cheese_vg {
    id = "WF3vAx-k1r3-NUjU-az7z-I4SM-oorx-rvoYSt"
    seqno = 1
    status = ["RESIZEABLE", "READ", "WRITE"]
    system_id = "darwin1025684717"
    extent_size = 32768 # 16 Megabytes
    max_lv = 256
    max_pv = 256

    physical_volumes {

        pv0 {
            id = "XFexK7-KqnW-dt7I-JHfB-gC8t-8Z45-RLiCEW"
            device = "/dev/discs/disc4/disc" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 33152
            pe_count = 4769 # 74.5156 Gigabytes
        }

        pv1 {
            id = "7swUJv-wGiq-9uCz-xiKK-owvf-p77g-zGU1C5"
            device = "/dev/discs/disc1/disc" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 33152
            pe_count = 7361 # 115.016 Gigabytes
        }

        pv2 {
            id = "z1Zxq5-X1JX-q08r-epqS-T0V7-003q-admD5T"
            device = "/dev/ide/host2/bus1/target0/lun0/disc" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 33152
            pe_count = 5961 # 93.1406 Gigabytes
        }

        pv3 {
            id = "zfuSRQ-mYYI-pGHR-9Mu2-uFWu-JQiH-JZyO7I"
            device = "/dev/discs/disc0/part4" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 33152
            pe_count = 2431 # 37.9844 Gigabytes
        }

        pv4 {
            id = "TNYATl-VrjS-Dt4T-e906-Ilb3-bgu7-JQS1sb"
            device = "/dev/ide/host2/bus0/target1/lun0/disc" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 33152
            pe_count = 601 # 9.39062 Gigabytes
        }
    }

    logical_volumes {

        blah {
            id = "000000-0000-0000-0000-0000-0000-000000"
            status = ["READ", "WRITE", "VISIBLE"]
            allocation_policy = "next free"
            read_ahead = 1024
            segment_count = 7

            segment1 {
                start_extent = 0
                extent_count = 4769 # 74.5156 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv0", 0
                ]
            }
            segment2 {
                start_extent = 4769
                extent_count = 1409 # 22.0156 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv2", 4552
                ]
            }
            segment3 {
                start_extent = 6178
                extent_count = 2255 # 35.2344 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv3", 0
                ]
            }
            segment4 {
                start_extent = 8433
                extent_count = 7361 # 115.016 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv1", 0
                ]
            }
            segment5 {
                start_extent = 15794
                extent_count = 4552 # 71.125 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv2", 0
                ]
            }
            segment6 {
                start_extent = 20346
                extent_count = 176 # 2.75 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv3", 2255
                ]
            }
            segment7 {
                start_extent = 20522
                extent_count = 601 # 9.39062 Gigabytes

                type = "striped"
                stripe_count = 1 # linear

                stripes = [
                    "pv4", 0
                ]
            }
        }
    }
}
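
Adding the numbers from that backup up myself (just a sanity check):

  PV extents:       4769 + 7361 + 5961 + 2431 + 601              = 21123
  LV 'blah' extents: 4769 + 1409 + 2255 + 7361 + 4552 + 176 + 601 = 21123

so the lv uses every extent in the vg exactly once (segment7 ends at
20522 + 601 = 21123), about 330G at 16MB per extent - which is why I
don't think it's actually out of space.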

> 3) The version of LVM that *created* the lv.
1.0.3, I *think*. I know it rev'd to 1.0.4 or 1.0.5 before I added
another disc or two. I upgraded to LVM2 a couple of months back, and
just recently did a 'vgconvert -M2 cheese_vg' to see if that helped
things, but it didn't seem to matter.

Wil
