Ew! 64GB doesn't need or want any of that. It just needs bugfixes more
badly than most machines & page clustering to keep mem_map down to a
decent size IIRC. The "highmem ratio" is irrelevant; it's just a matter
of implementing page clustering and bugfixing. The core kernel doesn't
need to know the page size is fake, and it needs the rest of the bugfixes
for e.g. buffer_heads proliferating out of control anyway. No magic.

32GB has already been booted & verified to run Linux. The only badness
is sizeof(mem_map) and the usual contingent of "buffer_heads are out of
control and shrink_cache() can't figure out when to shoot down a slab"
issues. And, well, performance issues show up now and then, but those
improvements propagate back to the smaller machines, albeit in smaller
measure.

This notion of cutting highmem boxen off at 16GB really does not sound
hot at all. At the very least webserving goes on at 32+GB and that has
no use whatsoever for the hugetlb stuff. Still other workloads, e.g.
client front ends for databases, are in similar positions. People are
actually trying to get things done that need more than 32GB pagecache
on i386 machines.

The big reason the database people are interested in hugetlb stuff is
to work around the pagetable proliferation bugs. They got a backdoor
to take over the VM, so they're even happier than they would be with
a bugfix. But they're not the only workloads around, and the bugfix is
still needed for all 64-bit and more general 32-bit workloads (it was
not fixed by pte-highmem).
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/