It's not the process memory, and it is a whole lot more than a "few meg" 
if your page size is 2M.
Look at "free" output one day, and notice that "cached" line? On my 2G 
machine, I usually have about a gig cached or so. Guess what the most 
common thing in that case is? Yeah, the kernel. 
And my kernel tree (with bk overhead etc) is right now about 25,000 files. 
That's without object files etc. At 2M a pop in the page cache, that's a 
whole lot more memory for caching than I have in my machine.
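Just to put numbers on it, here's the back-of-the-envelope version (the 
file count is from above; the ~10k average file size is my assumption):

	#include <stdio.h>

	int main(void)
	{
		long long files = 25000;	/* kernel tree, bk overhead etc */
		long long avg_file = 10 * 1024;	/* assumed average file size */
		long long page_4k = 4096;
		long long page_2m = 2 * 1024 * 1024;

		/* each cached file rounds up to whole pages, so the tail
		 * of every file burns the rest of its last page */
		long long cache_4k = files * ((avg_file + page_4k - 1) / page_4k) * page_4k;
		long long cache_2m = files * ((avg_file + page_2m - 1) / page_2m) * page_2m;

		printf("4k pages: %lld MB\n", cache_4k >> 20);
		printf("2M pages: %lld MB\n", cache_2m >> 20);
		return 0;
	}

With 4k pages that's roughly 300M of cache; with 2M pages it's on the 
order of 50 _gigabytes_, because every file, no matter how small, pins a 
full 2M page.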
Ok, so assume you compress that, and only actually use the full 2M when 
mapping into user space: you've now added a lot of complexity, but at 
least you've made the ridiculous memory use go down. 
But even in the process space, I quite normally have about 150 processes, 
and while most of them are idle, if we had 2M pages most of them would 
waste at least 2M of memory apiece (probably more - the stack doesn't even 
need half a page, and the data section would probably waste half a page on 
average).
That's 300M just wasted.
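Spelled out, using the estimates above (a full 2M page mostly wasted on 
the stack, plus half a page on average for the data section):

	#include <stdio.h>

	int main(void)
	{
		long long procs = 150;
		long long page = 2 * 1024 * 1024;

		/* stack needs nowhere near half a page, so call its whole
		 * 2M page wasted; data wastes half a page on average */
		long long conservative = procs * page;	/* "at least 2M" */
		long long likely = procs * (page + page / 2);

		printf("conservative: %lld MB\n", conservative >> 20);
		printf("likely:       %lld MB\n", likely >> 20);
		return 0;
	}

That's 300M wasted as the floor, and more like 450M in practice.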
Tell me that's peanuts even if you've got a few gigs of ram on your 
machine. 
Admit it, you're just wrong. 2M page sizes are _not_ useful for the common
case, and won't be for years to come.
In short, you're wrong.
> They say:
> 	Hammer microarchitecture features a flush filter allowing multiple
> 	processes to share TLB without SW intervention.
> 
> Not a lot of technical detail in that.
I suspect it's some special case for Windows, with a special MSR that 
enables something architecturally illegal but that just happens to work 
for whatever patterns Windows uses.
		Linus