Hmm, I just re-read your numbers above. Supposing you have 256 GB of
'installed' memory, divided into 256 KB chunks at random places in the
52-bit address space, a hash table with 1M entries could map all that
physical memory. You'd need 16 bytes or so per hash table entry, making
the table 16 MB in size. That's about .006% of installed memory.
More-or-less equivalently, a tree could be used instead, trading a
little better locality of reference for more search steps. The hash
structure can also be tweaked to improve locality by making each hash
entry map several adjacent memory chunks, on the assumption that the
chunks tend to occur in groups, which they most probably do.
I'm offering the hash table, combined with config_nonlinear, as a generic
--
Daniel