Re: [patch] O(1) scheduler, -G1, 2.5.2-pre10, 2.4.17 (fwd)

Timothy Covell (timothy.covell@ashavan.org)
Fri, 11 Jan 2002 15:46:24 -0600


On Friday 11 January 2002 15:42, François Cami wrote:
> Ingo Molnar wrote:
> > On Thu, 10 Jan 2002, Mike Kravetz wrote:
> >>If I run 3 cpu-hog tasks on a 2 CPU system, then 1 task will get an
> >>entire CPU while the other 2 tasks share the other CPU (easily
> >>verified by a simple test program). On previous versions of the
> >>scheduler 'balancing' this load was achieved by the global nature of
> >>time slices. No task was given a new time slice until the time slices
> >>of all runnable tasks had expired. In the current scheduler, the
> >>decision to replenish time slices is made at a local (per-CPU) level.
> >>I assume the load balancing code should take care of the above
> >>workload? Or is this the behavior we desire? [...]
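
A minimal sketch of the kind of test program Mike describes, assuming a
POSIX system (the 30-second runtime and the kill-then-wait4() accounting
are illustrative choices, not his actual test). On a 2-CPU box with
per-CPU timeslice replenishment you would expect roughly a 100%/50%/50%
split rather than three equal shares:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

#define NTASKS  3               /* three hogs on a 2-CPU box */
#define RUNTIME 30              /* seconds to let them run */

int main(void)
{
    pid_t pid[NTASKS];
    int i;

    for (i = 0; i < NTASKS; i++) {
        pid[i] = fork();
        if (pid[i] < 0) {
            perror("fork");
            return 1;
        }
        if (pid[i] == 0)
            for (;;)            /* child: pure CPU hog */
                ;
    }

    sleep(RUNTIME);

    for (i = 0; i < NTASKS; i++) {
        struct rusage ru;
        int status;

        kill(pid[i], SIGKILL);
        /* reap the child and read back how much CPU it got */
        wait4(pid[i], &status, 0, &ru);
        printf("hog %d got %ld.%02ld seconds of CPU\n", i,
               (long)ru.ru_utime.tv_sec,
               (long)ru.ru_utime.tv_usec / 10000);
    }
    return 0;
}
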
> >
> > Arguably this is the most extreme situation - every other distribution
> > (2:3, 3:4) is much less problematic. Will this cause problems? We could
> > make the fairness-balancer 'sharper', so that it oscillates the lengths
> > of the two runqueues at a slow pace, but that still loses cache state.
> >
> >>We certainly have optimal cache use.
> >
> > Indeed. The question is: should we migrate processes around just to get
> > 100% fairness in 'top' output? The (implicit) cost of a task migration
> > (caused by the destruction & rebuilding of cache state) can easily be
> > 10 milliseconds on a system with big caches.
> >
> > Ingo
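
A rough back-of-envelope for that figure, with illustrative numbers (the
cache size and miss latency are assumptions, not from Ingo's message): a
4 MB cache holds 65536 64-byte lines, and refilling a line from main
memory costs on the order of 150 ns, so rebuilding a full cache footprint
after a migration is roughly 65536 * 150 ns, i.e. about 10 milliseconds.
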
>
> I do vote for optimal cache use. Squid (a 200MB process in my case)
> can be much faster if it stays on the same CPU for a while instead
> of hopping from one CPU to the other (on a dual PII-350 machine).
>
> François Cami
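
One way to get this behavior deliberately is an explicit CPU affinity
mask. A minimal sketch, assuming a kernel and libc that provide the
sched_setaffinity() call (it was only being introduced around the 2.5
timeframe of this thread; the CPU_SET interface shown here is the later
glibc form, and pin_cpu.c is a hypothetical name):

/* pin_cpu.c - restrict an existing process to a single CPU */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    cpu_set_t mask;
    pid_t pid;
    int cpu;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
        return 1;
    }
    pid = atoi(argv[1]);        /* pid to pin, e.g. squid's */
    cpu = atoi(argv[2]);        /* CPU to pin it to */

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);

    /* restrict the target process to the single chosen CPU */
    if (sched_setaffinity(pid, sizeof(mask), &mask) < 0) {
        perror("sched_setaffinity");
        return 1;
    }
    return 0;
}

Running something like ./pin_cpu `pidof squid` 1 would then tie squid to
the second CPU, so it stops hopping regardless of the balancer's
decisions.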

But, given the above case, what happens when you have Sendmail alone on
the first CPU while Squid shares the second CPU with a third task? That
is not optimal either, or am I missing something?

-- 
timothy.covell@ashavan.org.