Think about it for a while - if a file write (say 1K in length) must carry
10000 group IDs (at 16 bits per group? 32 bits?), then the resulting
transaction becomes 1K + 2*10K -> 21K (or 1K + 4*10K -> 41K). That is over
20 times overhead.
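The arithmetic above can be sketched as a few lines of C; the sizes and
names here are illustrative only (a 1K write carrying 10000 group IDs), not
any actual kernel structure:

```c
#include <stdio.h>

#define DATA_BYTES 1024UL   /* the 1K file write              */
#define NGROUPS    10000UL  /* groups attached to the request */

/* Total transaction size for a given gid width in bytes. */
unsigned long txn_bytes(unsigned long gid_bytes)
{
    return DATA_BYTES + NGROUPS * gid_bytes;
}
```

With 16-bit gids, txn_bytes(2) gives 21024 bytes - roughly 20 times the
data actually written; with 32-bit gids it is 41024, roughly 40 times.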
No, I don't expect the structures to be changed that much.
I am in favor of being able to change the limits, but let's be reasonable.
I can see a use for 64 groups, but not much beyond that. Even that introduces
a practical security problem (leakage of data from one authorized group
to unauthorized groups).
By the time you get one or more people with that many groups, you
may as well put everybody in one group and be done with it.
ACLs would be much more useful, and more controllable, than 10000 groups
anyway. At least the owner of the file would be able to specify exactly
what access is being granted, and to whom.
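A minimal sketch of the idea, not the actual POSIX ACL API - the struct,
tags, and gids here are all hypothetical:

```c
#include <sys/types.h>

/* Hypothetical ACL entry: the tag says who, perms say what. */
enum acl_tag { ACL_GROUP, ACL_OTHER };

struct acl_entry {
    enum acl_tag tag;   /* group entry, or catch-all "other"     */
    gid_t        id;    /* gid the entry applies to (ACL_GROUP)  */
    unsigned     perms; /* 04 = read, 02 = write, 01 = execute   */
};

/* Owner grants read+write to one specific group, nothing to anyone else. */
static const struct acl_entry example_acl[] = {
    { ACL_GROUP, 100, 06 },  /* gid 100: rw- */
    { ACL_OTHER,   0, 00 },  /* everyone else: --- */
};

/* Return 1 if the gid has all the requested permission bits. */
int acl_check(const struct acl_entry *acl, int n, gid_t gid, unsigned want)
{
    for (int i = 0; i < n; i++) {
        if (acl[i].tag == ACL_GROUP && acl[i].id == gid)
            return (acl[i].perms & want) == want;
        if (acl[i].tag == ACL_OTHER)
            return (acl[i].perms & want) == want;
    }
    return 0;
}
```

The point is that the owner names exactly who gets what, instead of relying
on everyone carrying the right supplementary groups.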
BTW, a bsearch algorithm is slow compared to a radix search in
sparse trees...
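For reference, the bsearch approach being compared against is just an
O(log n) membership test over a sorted supplementary-group list - a sketch,
with plain ints standing in for gids (a radix walk would instead descend
the gid's bits, which can behave better on large sparse sets):

```c
#include <stdlib.h>

/* Comparison callback for bsearch over an array of gids (ints here). */
static int cmp_gid(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(log n) membership test; `groups` must be sorted ascending. */
int in_group(int gid, const int *groups, size_t n)
{
    return bsearch(&gid, groups, n, sizeof *groups, cmp_gid) != NULL;
}
```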
-- 
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.