Re: Horrible drive performance under concurrent i/o jobs (dlh problem?)

Torben Frey (kernel@mailsammler.de)
Thu, 19 Dec 2002 15:29:43 +0100


Ok, so now I have set up a backup software RAID 0, formatted with

mke2fs -b 4096 -j -R stride=16

and mounted that device. After I started backing up data from the 3ware
controller to the software RAID, I soon had complaints from my colleagues
because the system load went up to 4 and they saw the same bad
responsiveness as before. Of course I CTRL-C'ed my "cp -av" while
watching "vmstat 1" in another window - and this is what surprised me:
after I stopped the copy job, data was still being written to the backup
software RAID for another 22 seconds. Is this a hint as to where the
problem could be? I see the same "feature" when I write to my 3ware.
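
For reference, the setup and test looked roughly like this (device names
and mount points are only illustrative, not necessarily the ones I used;
stride=16 with 4k blocks assumes a 64k md chunk size, i.e. 16 * 4096):

  mke2fs -b 4096 -j -R stride=16 /dev/md0    # /dev/md0 = backup software RAID 0
  mount /dev/md0 /mnt/backup
  vmstat 1                                   # running in a second window
  cp -av /mnt/3ware/data /mnt/backup/        # the copy I later CTRL-C'ed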

My kernel is 2.4.20 with Andrew's patch from last night.

Greetings,
Torben

procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 1 3 25292 2188 72056 759644 0 0 4168 23732 1069 780 6 26 68
0 1 3 25292 2176 72056 759644 0 0 0 15728 523 147 1 10 89
2 0 4 25292 2292 72056 759548 0 0 30404 20084 820 1149 4 85 11
0 1 3 25292 2828 72048 759012 0 0 40716 23772 845 1307 2 62 36
0 1 2 25292 3208 72280 758372 0 0 0 16532 573 231 6 13 81
1 0 2 25292 3216 72276 758404 0 0 4880 23800 530 264 2 10 88
0 1 3 25292 2224 72224 759420 0 0 22596 15620 695 602 5 38 57
0 1 3 25292 3996 72204 757692 0 0 26704 23808 765 924 3 38 59
1 0 3 25292 3932 72208 757760 0 0 14380 23996 651 548 3 23 74
0 1 3 25292 2180 72296 759408 0 0 39024 15948 850 1089 3 56 41
1 0 3 25292 3180 72308 758416 0 0 39028 17568 957 1265 1 57 42
0 1 2 25292 3296 72260 758328 0 0 36976 24000 837 1194 5 47 49
1 0 2 25292 3212 72264 758428 0 0 6164 22000 594 330 2 15 83
0 1 3 25292 2212 72268 759412 0 0 44492 16116 878 1433 2 63 35
1 0 3 25292 2896 72056 758952 0 0 19180 24556 683 912 1 32 67
0 0 2 25292 3300 72068 758672 0 0 10564 24296 636 518 1 23 76
HERE WAS MY CTRL-C
1 0 2 25292 3292 72068 758672 0 0 0 15820 511 146 1 15 84
0 0 2 25292 3276 72068 758672 0 0 0 27720 607 341 0 46 54
0 0 2 25292 3276 72068 758672 0 0 0 15912 529 167 1 12 87
0 0 2 25292 3232 72112 758672 0 0 0 23880 537 199 5 7 88
0 0 2 25292 3232 72112 758672 0 0 0 15872 558 198 0 8 92
0 0 2 25292 3232 72112 758672 0 0 0 23740 517 168 4 6 90
0 0 2 25292 4620 72112 757320 0 0 0 23800 528 1044 8 12 80
0 0 2 25292 4620 72112 757320 0 0 0 16100 522 177 4 6 90
0 0 2 25292 4516 72216 757320 0 0 0 24268 558 192 2 5 93
0 0 2 25292 4516 72216 757320 0 0 0 23552 525 179 3 2 95
0 0 2 25292 4516 72216 757320 0 0 0 15872 521 137 0 9 91
0 0 2 25292 4516 72216 757320 0 0 0 31924 515 179 4 7 89
0 0 2 25292 4516 72216 757320 0 0 0 17368 526 144 2 2 96
0 0 1 25292 4488 72244 757320 0 0 0 25308 533 195 1 8 91
0 0 1 25292 4488 72244 757320 0 0 0 16368 504 145 3 12 85
0 0 1 25292 4488 72244 757320 0 0 0 25320 558 247 0 28 72
0 0 1 25292 4484 72244 757320 0 0 0 16376 529 187 3 6 91
0 0 1 25292 4484 72244 757320 0 0 0 24576 508 140 1 3 96
0 0 1 25292 4464 72264 757320 0 0 0 24356 523 197 4 5 91
0 0 1 25292 4464 72264 757320 0 0 0 16364 516 148 1 1 98
0 0 1 25292 4464 72264 757320 0 0 0 24552 507 138 2 1 97
0 0 1 25292 4464 72264 757320 0 0 0 21020 522 174 2 17 81
0 0 0 25292 4448 72280 757320 0 0 0 656 347 142 0 8 92
0 0 0 25292 4432 72296 757320 0 0 0 796 353 179 4 2 94

HERE THE WRITING OUT STOPPED, 22 seconds later!!!

0 0 0 25292 4432 72296 757320 0 0 0 0 176 141 1 1 98
0 0 0 25292 4432 72296 757320 0 0 0 0 194 184 2 1 97
0 0 0 25292 4432 72296 757320 0 0 0 0 176 137 1 1 98
0 0 0 25292 4432 72296 757320 0 0 0 0 179 175 2 1 97
0 0 0 25292 4292 72312 757328 116 0 124 32 188 165 1 1 98
0 0 0 25292 4292 72312 757328 0 0 0 0 248 310 2 12 86
0 0 0 25292 4292 72312 757328 0 0 0 0 178 148 1 2 97
0 0 0 25292 4292 72312 757328 0 0 0 0 184 144 0 1 99
0 0 0 25292 4292 72312 757328 0 0 0 0 193 185 2 3 95
0 0 0 25292 4272 72332 757328 0 0 0 640 360 200 1 2 97
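
(For anyone who wants to put a number on that residual writeback, something
along these lines should do - the paths are again only illustrative:

  cp -av /mnt/3ware/data /mnt/backup/ &
  sleep 60
  kill $!        # stop the copy, roughly what my CTRL-C did
  time sync      # blocks until everything still queued has hit the disk

The wall-clock time of the sync should roughly match the ~22 seconds of
"bo" traffic visible after the CTRL-C above.)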
