So for a gigabit Ethernet driver, what is the optimal memory configuration
for performance and reliability?
From: Jes Sorensen [mailto:email@example.com]
Sent: Tuesday, May 14, 2002 10:59 AM
To: chen, xiangping
Cc: 'Steve Whitehouse'; firstname.lastname@example.org
Subject: Re: Kernel deadlock using nbd over acenic driver.
>>>>> "xiangping" == chen, xiangping <email@example.com> writes:
xiangping> Hi, When the system is stuck, I cannot get any response
xiangping> from the console or terminal. But basically the only
xiangping> network connections on both machines are the nbd connection
xiangping> and a couple of telnet sessions. That is what shows up in
xiangping> "netstat -t".
xiangping> /proc/sys/net/ipv4/tcp_[rw]mem are "4096 262144 4096000",
xiangping> /proc/sys/net/core/*mem_default are 4096000,
xiangping> /proc/sys/net/core/*mem_max are 8192000, I did not change
Don't do this; setting the [rw]mem_default values that high is just
insane. Set larger buffers in the applications that need them and
nowhere else.
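The per-application approach Jes recommends means requesting a bigger
buffer on the individual socket with setsockopt(), instead of raising
the system-wide defaults for every socket on the machine. A minimal
sketch (the 262144-byte size is illustrative, taken from the tcp_[rw]mem
values quoted above; the kernel may clamp the request to
net.core.[rw]mem_max, and Linux doubles the value internally for
bookkeeping overhead):

```python
import socket

# Ask for a larger send/receive buffer on this one socket only, rather
# than raising net.core.[rw]mem_default for the whole system.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 262144)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)

# getsockopt() reports what the kernel actually granted, which may be
# clamped by [rw]mem_max and doubled by Linux.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(granted)
sock.close()
```

The same call exists in C as setsockopt(fd, SOL_SOCKET, SO_SNDBUF, ...);
the point is that only the sockets doing bulk transfer pay the memory
cost.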
xiangping> The system was low on memory; I started up 20 to 40 threads
xiangping> to do block writes simultaneously.
If you have a lot of outstanding connections and active threads, it's
quite likely you'll run out of memory if each socket eats 4 MB.
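A back-of-the-envelope calculation makes the point concrete. Assuming
one socket per writer thread (which the thread counts quoted above
suggest), and the 4,096,000-byte *mem_default from the sysctl settings
earlier in the thread:

```python
# Worst-case memory pinned by socket buffers under the quoted settings.
threads = 40
per_socket_default = 4_096_000            # net.core.[rw]mem_default, bytes
worst_case = threads * per_socket_default * 2   # send + receive buffer each
print(worst_case // 2**20, "MiB")         # → 312 MiB
```

Roughly 300 MiB pinned in socket buffers alone, on a machine that is
already low on memory, before nbd or the page cache get a byte.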