Re: [PATCH 2.5.53] NUMA scheduler (1/3)

Michael Hohnbaum (hohnbaum@us.ibm.com)
05 Jan 2003 19:58:46 -0800


On Sat, 2003-01-04 at 21:35, Martin J. Bligh wrote:
> >> Here comes the minimal NUMA scheduler built on top of the O(1) load
> >> balancer rediffed for 2.5.53 with some changes in the core part. As
> >> suggested by Michael, I added the cputimes_stat patch, as it is
> >> absolutely needed for understanding the scheduler behavior.
> >
> > Thanks for this latest patch. I've managed to cobble together
> > a 4-node NUMAQ system (16 700MHz PIII procs, 16GB memory) and
> > ran kernbench and schedbench on this, along with the 2.5.50 and
> > 2.5.52 versions. Results remain fairly consistent with what
> > we have been obtaining on earlier versions.
> >
> > Kernbench:
> >            Elapsed        User     System        CPU
> > sched50     29.96s    288.308s    83.606s    1240.8%
> > sched52    29.836s    285.832s    84.464s    1240.4%
> > sched53    29.364s    284.808s    83.174s    1252.6%
> > stock50    31.074s    303.664s    89.194s    1264.2%
> > stock53    31.204s    306.224s    87.776s    1263.2%
>
> Not sure what you're correlating here because your rows are all named
> the same thing. However, the new version seems to be much slower
> on systime (about 7-8% for me), which roughly correlates with your
> last two rows above. Me no like.

Sorry, I forgot to include a better description of what the row
labels mean.

sched50 = linux 2.5.50 with the NUMA scheduler
sched52 = linux 2.5.52 with the NUMA scheduler
sched53 = linux 2.5.53 with the NUMA scheduler
stock50 = linux 2.5.50 without the NUMA scheduler
stock53 = linux 2.5.53 without the NUMA scheduler

Thus, this shows the NUMA scheduler dropping systime by ~5.6 secs
on 2.5.50, or roughly 6%. So my testing is not showing the increase
in systime that you are apparently seeing.
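To put a finer point on that, here is a quick, purely illustrative C
snippet that recomputes the deltas from the System column of the
kernbench table above (values copied by hand; nothing here is from
the scheduler patch itself):

#include <stdio.h>

int main(void)
{
	/* System times (seconds) from the kernbench runs above. */
	const double stock50 = 89.194, sched50 = 83.606;
	const double stock53 = 87.776, sched53 = 83.174;

	/* Change in systime when the NUMA scheduler is applied. */
	printf("2.5.50: %.3fs saved (%.1f%%)\n",
	       stock50 - sched50, 100.0 * (stock50 - sched50) / stock50);
	printf("2.5.53: %.3fs saved (%.1f%%)\n",
	       stock53 - sched53, 100.0 * (stock53 - sched53) / stock53);
	return 0;
}

That prints a 5.588 second (6.3%) drop on 2.5.50 and a 4.602 second
(5.2%) drop on 2.5.53, i.e. a decrease in both cases, not an increase.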

Michael

-- 
Michael Hohnbaum            503-578-5486
hohnbaum@us.ibm.com         T/L 775-5486
