Re: linux scheduler limitations?

J . A . Magallon (jamagallon@able.es)
Thu, 29 Mar 2001 23:35:21 +0200


On 03.29 Fabio Riccardi wrote:
>
> I've found a (to me) inexplicable system behaviour: when the number of
> Apache forked instances goes somewhere beyond 1050, the machine
> suddenly slows down almost to a halt and becomes totally unresponsive
> until I stop the test (SpecWeb).
>

Have you thought about pthreads? (When you talk about fork, I assume you
mean a literal 'fork()'.)

I give a course on Parallel Programming at the university, where the practical
work is done with POSIX threads. One of my students caught the idea and applied
it to his assignment for another course on Networks: in a simple ftp server he
had to implement, he replaced the traditional 'fork()' with 'pthread_create()'
and got a 10-30x speedup in connections per second.
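
For illustration, here is a minimal sketch of that swap in a fork-per-connection
accept loop. This is not the student's actual code; handle_client() is an
assumed per-connection handler:

/*
 * Hypothetical sketch: the same accept() loop written with fork()
 * (under USE_FORK) and with pthread_create().
 */
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

extern void handle_client(int fd);   /* assumed per-connection handler */

/* Thread entry point: unwrap the descriptor and serve the connection. */
static void *client_thread(void *arg)
{
    int fd = (int)(long)arg;
    handle_client(fd);
    close(fd);
    return NULL;
}

void serve(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
#ifdef USE_FORK
        /* Traditional approach: one process per connection. */
        if (fork() == 0) {
            handle_client(fd);
            close(fd);
            _exit(0);
        }
        close(fd);                    /* parent keeps only the listener */
#else
        /* Threaded approach: one detached thread per connection. */
        pthread_t tid;
        if (pthread_create(&tid, NULL, client_thread, (void *)(long)fd) == 0)
            pthread_detach(tid);
        else
            close(fd);
#endif
    }
}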

And you will get rid of the process-per-user limit. But you may run into a
threads-per-user limit, if there is one.
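
If you want to see where that per-user limit sits, a minimal sketch (assuming a
Linux box) is to query RLIMIT_NPROC with getrlimit(). Note that with
LinuxThreads each thread is a clone()d task, so threads may well count against
the same limit:

/* Sketch: print the per-user process limit this thread of mail refers to. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NPROC, &rl) == 0)
        printf("processes per user: soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    return 0;
}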

And you can also control their scheduling, to make each thread compete against
the whole system or only against its siblings.
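
That control is the pthread contention scope. A sketch, using a hypothetical
helper start_worker(), of choosing between system-wide and process-local
scheduling; LinuxThreads itself only implements PTHREAD_SCOPE_SYSTEM, so the
process-scope call may return ENOTSUP:

/*
 * Sketch: the contention scope decides whether a thread is scheduled
 * against every task in the system (PTHREAD_SCOPE_SYSTEM) or only
 * against its sibling threads (PTHREAD_SCOPE_PROCESS).
 */
#include <pthread.h>

int start_worker(pthread_t *tid, void *(*fn)(void *), void *arg, int global)
{
    pthread_attr_t attr;
    int rc;

    pthread_attr_init(&attr);
    rc = pthread_attr_setscope(&attr,
                               global ? PTHREAD_SCOPE_SYSTEM
                                      : PTHREAD_SCOPE_PROCESS);
    if (rc == 0)
        rc = pthread_create(tid, &attr, fn, arg);

    pthread_attr_destroy(&attr);
    return rc;
}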

-- 
J.A. Magallon                                          #  Let the source
mailto:jamagallon@able.es                              #  be with you, Luke... 

Linux werewolf 2.4.2-ac28 #1 SMP Thu Mar 29 16:41:17 CEST 2001 i686
