Re: Bug: Discontigmem virt_to_page() [Alpha,ARM,Mips64?]

William Lee Irwin III (wli@holomorphy.com)
Thu, 2 May 2002 17:05:00 -0700


On Thursday 02 May 2002 02:20, Anton Blanchard wrote:
>> From arch/ppc64/kernel/iSeries_setup.c:
>> * The iSeries may have very large memories ( > 128 GB ) and a partition
>> * may get memory in "chunks" that may be anywhere in the 2**52 real
>> * address space. The chunks are 256K in size.
>> Also check out CONFIG_MSCHUNKS code and see why I'd love to see a generic
>> solution to this problem.

On Fri, May 03, 2002 at 01:05:45AM +0200, Daniel Phillips wrote:
> Hmm, I just re-read your numbers above. Supposing you have 256 GB of
> 'installed' memory, divided into 256K chunks at random places in the 52
> bit address space, a hash table with 1M entries could map all that
> physical memory. You'd need 16 bytes or so per hash table entry, making
> the table 16MB in size. This would be about .0006% of memory.

Doh! I made all that noise about "contiguously allocated", and relaxing
the contiguous-allocation requirement on the aggregate was the whole
reason I liked trees so much! Regardless, if there's virtual contiguity
the table can work; it's not my patch, and there's probably no real
difference in practice, since the table's ratio to memory size is
probably small enough to cope with either way.
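
Just so we're talking about the same table: here's roughly the lookup
I'm picturing, as a minimal sketch only -- the names and constants are
invented, and it assumes 256K chunks, a power-of-two table sized at
boot, and that every chunk looked up was actually entered:

#define CHUNK_SHIFT     18                      /* 256K chunks */
#define HASH_ORDER      20                      /* 1M entries for 256GB of chunks */
#define HASH_SIZE       (1UL << HASH_ORDER)

struct chunk_ent {
        unsigned long   phys_chunk;     /* chunk number in the 2^52 space */
        unsigned long   virt_base;      /* where the chunk sits in the linear map */
};

static struct chunk_ent chunk_hash[HASH_SIZE];  /* 16MB at 16 bytes/entry */

static inline unsigned long hash_chunk(unsigned long chunk)
{
        /* any reasonably uniform multiplicative hash will do */
        return (chunk * 0x9e370001UL) & (HASH_SIZE - 1);
}

static unsigned long chunk_to_virt(unsigned long phys)
{
        unsigned long chunk = phys >> CHUNK_SHIFT;
        unsigned long i = hash_chunk(chunk);

        /*
         * Linear probe; assumes the chunk was entered at boot.  A real
         * version needs an empty marker and a miss path.
         */
        while (chunk_hash[i].phys_chunk != chunk)
                i = (i + 1) & (HASH_SIZE - 1);

        return chunk_hash[i].virt_base + (phys & ((1UL << CHUNK_SHIFT) - 1));
}

Two unsigned longs per entry is the 16 bytes/entry in your arithmetic;
the statically-declared array is exactly the thing that can't be
physically contiguous at that size, which is where my questions below
come from.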

On Fri, May 03, 2002 at 01:05:45AM +0200, Daniel Phillips wrote:
> More-or-less equivalently, a tree could be used, with the tradeoff being
> a little better locality of reference vs more search steps. The hash
> structure can also be tweaked to improve locality by making each hash
> entry map several adjacent memory chunks, and hoping that the chunks tend
> to occur in groups, which they most probably do.
> I'm offering the hash table, combined with config_nonlinear as a generic
> solution.

Is the virtual remapping that provides virtual contiguity available at
the time this remapping table is set up? A 1M-entry table is larger
than the largest available fragment of physically contiguous memory
even at 1 byte per entry, so if the table is used to direct the virtual
remapping you might need some arch-specific bootstrapping phases.
Also, what is the recourse when a boot-time-allocated table overflows
because enough additional physical memory is onlined? Or are there
pointer links within the table entries to provide collision chains? If
so, the memory requirements are even larger... If you size the table to
consume an entire hypervisor-allocated memory fragment, wouldn't that
require allocating a fresh chunk from the hypervisor at boot time and
virtually mapping the new chunk? And how do you know how big the table
should be if the number of chunks varies dramatically?
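
To make the overflow question concrete, the chained variant I'm
imagining looks something like this (again just a sketch, reusing
HASH_SIZE and hash_chunk() from above; whether the entries can really
be allocated when a chunk comes online is exactly what I'm asking):

struct chained_chunk_ent {
        unsigned long                   phys_chunk;
        unsigned long                   virt_base;
        struct chained_chunk_ent        *next;  /* collision chain: 24 bytes/entry now */
};

static struct chained_chunk_ent *chunk_hash_heads[HASH_SIZE];

/* called when the hypervisor hands us a new 256K chunk at runtime */
static int chunk_online(unsigned long phys_chunk, unsigned long virt_base)
{
        unsigned long i = hash_chunk(phys_chunk);
        struct chained_chunk_ent *ent = kmalloc(sizeof(*ent), GFP_KERNEL);

        if (!ent)
                return -ENOMEM;
        ent->phys_chunk = phys_chunk;
        ent->virt_base = virt_base;
        ent->next = chunk_hash_heads[i];
        chunk_hash_heads[i] = ent;
        return 0;
}

The chain pointers buy you overflow handling but push the per-entry
cost up, and the heads array still has to be sized and virtually
mapped before any of this can run.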

Cheers,
Bill