In a transport protocol I'm implementing, I've adapted this TCP hashing
function:
static __inline__ int dcp_hashfn(__u32 laddr, __u16 lport,
                                 __u32 faddr, __u16 fport)
{
        int h = ((laddr ^ lport) ^ (faddr ^ fport));

        h ^= h >> 16;
        h ^= h >> 8;

        /* make it always < size: */
        return h & (MY_HTABLE_SIZE - 1);    /* MY_HTABLE_SIZE = 128 */
}
Although I am treating it as a black box and it works fine for me, my
professor pointed out the following about this function:
If both IP addresses have the same upper 16 bits (i.e. two hosts on the
same class B / "/16" network), then the first 4-way XOR will put 16 bits
of zero in h.
Then "h ^= h>>16" will preserve the upper 16 bits as zero. Then
"h ^= h>>8" will preserve the upper 24 bits!
Also, he says:
Having the same class B network number seems like a common case. In
that case, I don't see much difference between doing the last two
XORs and leaving them out.
It is indeed true that in the mentioned case one will end up with many
zeros. Does that mean this function is inefficient? Having many sockets
linked to the same hash entry means long lookups... Or is there another
reason for this function being implemented this way?
What are your opinions?
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/