This is exactly what I do.
> Most well constructed
> hash functions perform well when the strings are random, however real
> world data (e.g. directory content) is not random at all.
I think you meant to say there, "even many poorly constructed hash
functions perform well when..."
> Efficiency should measure both space and time resources. If it should
> work in a multithreaded situation then another level of complexity is
Sure, I could have added "how big is it". For me, that's just
another kind of efficiency. Writing the code so it's reentrant is
just good practice. There is no excuse whatsoever for not doing
that for something simple like a hash function, even if you
yourself never expect to run two copies concurrently.
-- Daniel