OK, I see what the problem is. Regular memory users are consuming memory
right down to the emergency reserve limit, beyond which only PF_MEMALLOC
users can go. Unfortunately, since atomic memory allocators can't wait,
they tend to fail with high frequency in this state. Duh.
First, there's an effective way to make these particular atomic failures
go away almost entirely. The atomic memory user (in this case a network
interrupt handler) keeps a private list of pages, starting empty. Each
time it needs a page it takes one from its private list, falling back to
alloc_pages only when the list is empty, and when it is done with a page
it returns it to the private list instead of freeing it. The alloc_pages
call can still fail, of course, but now it will only fail a few times,
while the list expands to the size required for normal traffic. The
effect on throughput should be roughly nothing.
Let's try another way of dealing with it. What I'm trying to do with the
patch below is leave a small reserve of 1/12 of pages_min, above the
emergency reserve, to be consumed only by non-PF_MEMALLOC atomic
allocators.
Please bear in mind this is completely untested, but would you try it
please and see if the failure frequency goes down?
--- ../2.4.9.clean/mm/page_alloc.c Thu Aug 16 12:43:02 2001
+++ ./mm/page_alloc.c Wed Aug 29 23:47:39 2001
@@ -493,6 +493,9 @@
 		}
 		/* XXX: is pages_min/4 a good amount to reserve for this? */
+		if (z->free_pages < z->pages_min / 3 && (gfp_mask & __GFP_WAIT) &&
+				!(current->flags & PF_MEMALLOC))
+			continue;
 		if (z->free_pages < z->pages_min / 4 &&
 		    !(current->flags & PF_MEMALLOC))
 			continue;