That's my /proc/meminfo:buffermem counter-upper.  I said it would suck ;)
Probably count_list(&inode_unused) can just be nuked.  I don't think blockdev
inodes ever go onto inode_unused.
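For reference, roughly what that counter does - a sketch only, assuming count_list() is nothing smarter than a linear walk of the given inode list under inode_lock (names and details are illustrative, not the actual code):

/*
 * Sketch: every read of /proc/meminfo ends up doing something like
 * this, which is O(number of inodes) with inode_lock held.
 */
static unsigned long count_list(struct list_head *head)
{
	struct list_head *p;
	unsigned long pages = 0;

	list_for_each(p, head)
		pages += list_entry(p, struct inode, i_list)->i_data.nrpages;
	return pages;
}

Hence the profile hit below.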
>   4608 __rdtsc_delay                            164.5714
>   2627 __generic_copy_to_user                    36.4861
>   2401 count_list                                42.8750
>   1415 find_inode_fast                           29.4792
>   1325 do_anonymous_page                          3.3801
> 
> It also looks like there's either a bit of internal fragmentation or a
> missing kmem_cache_reap() somewhere:
> 
>   ext3_inode_cache:    20001KB    51317KB   38.97
>       dentry_cache:     4734KB    18551KB   25.52
>    radix_tree_node:     1811KB     1923KB   94.20
>        buffer_head:     1132KB     1378KB   82.12
That's really outside the control of slablru.  It's determined by the
cache-specific LRU algorithms, and by the order in which the objects
happened to be allocated across the slab pages.
You'll need to look at the second-last and third-last columns in
/proc/slabinfo (boy I wish that thing had a heading line, or a nice
program to interpret it):
ext3_inode_cache     959   2430    448  264  270    1
That's 264 pages in use out of 270 allocated to the cache (one page per
slab here, so slabs == pages).  If a persistent gap opens up between those
two numbers then there is a problem - could well be that slablru is not
finding the pages which the pruners liberated quickly enough.  Calling
kmem_cache_reap() after running the pruners will fix that up.
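To illustrate the ordering (a sketch only: the wrapper name is made up, but
shrink_dcache_memory(), shrink_icache_memory() and kmem_cache_reap() are the
stock interfaces):

/*
 * Sketch: run the cache-specific pruners first, then immediately let
 * kmem_cache_reap() sweep the slab pages they emptied back to the
 * page allocator, instead of leaving them for a later scan to find.
 */
static void prune_and_reap(int priority, unsigned int gfp_mask)
{
	shrink_dcache_memory(priority, gfp_mask);
	shrink_icache_memory(priority, gfp_mask);

	kmem_cache_reap(gfp_mask);
}

That way the pages liberated by the pruning are handed back in the same
pass that did the pruning.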