Re: [Lse-tech] [PATCH 1/2] node affine NUMA scheduler

Martin J. Bligh (mbligh@aracnet.com)
Sun, 22 Sep 2002 12:20:16 -0700


I tried putting back the current->node logic now that we have the
correct node IDs, but it made things worse (not as bad as before,
though); looks like we're still allocating off the wrong node.

This run is the last one in the list.
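
For reference, a minimal sketch of the two allocation policies being
compared. This is illustrative only, not the patch code itself:
alloc_pages_node() takes an explicit node ID, current->node is the
per-task home node added by the node-affine scheduler patches, and
the exact helper for "the node this CPU is on" varied across 2.5
trees.

/* Sketch of the two policies, assuming 2.5-era helpers. */
#include <linux/mm.h>
#include <linux/sched.h>

/* "alloc from local node": node of the CPU we're running on now */
static struct page *alloc_page_local(unsigned int gfp_mask)
{
	return alloc_pages_node(cpu_to_node(smp_processor_id()),
				gfp_mask, 0);
}

/* "alloc from current->node": the scheduler-assigned home node */
static struct page *alloc_page_homenode(unsigned int gfp_mask)
{
	return alloc_pages_node(current->node, gfp_mask, 0);
}

If the home node disagrees with the node a task is actually running
on, every anonymous fault allocates remote pages, which would fit the
rmqueue and __free_pages_ok blowup in the current->node runs below.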

Virgin:
Elapsed: 20.82s User: 191.262s System: 59.782s CPU: 1206.4%
7059 do_anonymous_page
4459 page_remove_rmap
3863 handle_mm_fault
3695 .text.lock.namei
2912 page_add_rmap
2458 rmqueue
2119 vm_enough_memory

Both numasched patches, just compile fixes:
Elapsed: 28.744s User: 204.62s System: 173.708s CPU: 1315.8%
38978 do_anonymous_page
36533 rmqueue
35099 __free_pages_ok
5551 page_remove_rmap
4694 handle_mm_fault
3166 page_add_rmap

Both numasched patches, alloc from local node:
Elapsed: 21.094s User: 195.808s System: 62.41s CPU: 1224.4%
7475 do_anonymous_page
4564 page_remove_rmap
4167 handle_mm_fault
3467 .text.lock.namei
2520 page_add_rmap
2112 rmqueue
1905 .text.lock.dec_and_lock
1849 zap_pte_range
1668 vm_enough_memory

Both numasched patches, hack node IDs, alloc from local node:
Elapsed: 21.918s User: 190.224s System: 59.166s CPU: 1137.4%
5793 do_anonymous_page
4475 page_remove_rmap
4281 handle_mm_fault
3820 .text.lock.namei
2625 page_add_rmap
2028 .text.lock.dec_and_lock
1748 vm_enough_memory
1713 file_read_actor
1672 rmqueue

Both numasched patches, hack node IDs, alloc from current->node:
Elapsed: 24.414s User: 194.86s System: 98.606s CPU: 1201.6%
30317 do_anonymous_page
6962 rmqueue
5190 page_remove_rmap
4773 handle_mm_fault
3522 .text.lock.namei
3161 page_add_rmap
