The variance is presumably because of the naive read/write
implementation. It sucks in 16 megs and writes it out again.
With a 100 megabyte file you'll get aliasing effects between
the sampling interval and the client's activity.
You will get more repeatable results using smaller files. I'm
just sending /usr/local/bin/* ten times, with
./zcc -s otherhost -c /usr/local/bin/* -n10 -N2 -S
Maybe that 16 meg buffer should be smaller... Yes, making it
smaller smooths things out.
Heh, look at this. It's a simple read-some, send-some loop.
Plot CPU utilisation against the transfer size:
8192 bytes is best.
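The loop in question is roughly the following, sketched here as a
hypothetical stand-in for zcc's actual code (the `copy_loop` name and
its signature are my invention). It reads from an input fd and writes
to the socket in fixed-size chunks, which is where the transfer-size
sensitivity comes from:

```c
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical sketch of the read-some, send-some loop: copy
 * everything from in_fd to out_fd in bufsize-byte chunks.
 * Returns total bytes copied, or -1 on error. */
static ssize_t copy_loop(int in_fd, int out_fd, size_t bufsize)
{
	char *buf = malloc(bufsize);
	ssize_t total = 0;

	if (!buf)
		return -1;

	for (;;) {
		ssize_t n = read(in_fd, buf, bufsize);

		if (n <= 0) {			/* EOF or read error */
			free(buf);
			return n < 0 ? -1 : total;
		}
		for (ssize_t off = 0; off < n; ) {
			/* write() may be short; loop until the
			 * whole chunk has gone out. */
			ssize_t w = write(out_fd, buf + off, n - off);

			if (w < 0) {
				free(buf);
				return -1;
			}
			off += w;
		}
		total += n;
	}
}
```

With bufsize at 8192 each read/write pair moves two pages' worth of
data, which matches the sweet spot in the plot; a 16 meg bufsize makes
each iteration long enough to beat against the sampling interval.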
I've added the `-b' option to zcc to set the transfer size. Same
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
Please read the FAQ at http://www.tux.org/lkml/