ps performance sucks (was Re: dcache_rcu [performance results])

Martin J. Bligh (mbligh@aracnet.com)
Mon, 04 Nov 2002 17:14:19 -0800


> Clearly ps could do with a cleanup. There is no reason to
> read environ if it isn't asked for. Deciding which files
> are needed based on the command-line options would be a
> start.
>
> I'm thinking that ps, top and company are good reasons to
> make an exception to the one-value-per-file rule in proc.
> Clearly open+read+close of 3-5 "files", each extracting data
> from task_struct, isn't more efficient than one "file" that
> generates the needed data one field per line.

I think it's pretty trivial to make a /proc/<pid>/psinfo that
dumps the garbage from all five files in one place. That makes
it 5 times better, but it still sucks.
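
Roughly what I have in mind - an untested sketch, one field per
line so ps gets everything with a single open+read+close. The
field names are only illustrative; the real task_struct layout
moves around between kernel versions:

static int psinfo_show(struct seq_file *m, void *v)
{
	/* in reality tsk would come from the proc inode,
	 * not ->private - glossed over here */
	struct task_struct *tsk = m->private;

	/* stat, statm and status data in one place, one field per line */
	seq_printf(m, "pid %d\n", tsk->pid);
	seq_printf(m, "state %ld\n", tsk->state);
	seq_printf(m, "comm %s\n", tsk->comm);
	/* ... the remaining stat/statm/status fields would follow ... */
	return 0;
}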

> Don't get me wrong. I believe in the one-field-per-file
> rule, but ps & co are the exception that proves (tests) the
> rule - especially on heavily laden systems with tens of
> thousands of tasks. We could do with something between
> /dev/kmem and five files per pid.

I had a very brief think about this at the weekend, seeing
if I could make a big melting-pot /proc/psinfo file that used
seq_file internally to iterate over the tasklist and read
everything out in one go. The most obvious problem that sprang
to mind is the tasklist locking - you obviously can't just hold
a lock over the whole thing. As I know very little about that,
I'll let someone else suggest how to do it, but I'm prepared to
do the grunt work of implementing it if need be.
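
For what it's worth, one half-formed idea: seq_file's iterator
model means ->start and ->stop bracket each buffer-full handed
back to userspace, so the tasklist lock only has to be held per
chunk, with *pos used to re-find our place between chunks. A
rough, untested sketch (for_each_process/next_task as in current
2.5; pid reuse between chunks is glossed over):

static void *psinfo_start(struct seq_file *m, loff_t *pos)
{
	struct task_struct *p;
	loff_t n = *pos;

	/* the lock covers one chunk only - seq_file calls ->stop
	 * before copying anything out to userspace */
	read_lock(&tasklist_lock);
	for_each_process(p)
		if (n-- == 0)
			return p;
	return NULL;
}

static void *psinfo_next(struct seq_file *m, void *v, loff_t *pos)
{
	struct task_struct *p = next_task((struct task_struct *)v);

	(*pos)++;
	return p == &init_task ? NULL : p;
}

static void psinfo_stop(struct seq_file *m, void *v)
{
	read_unlock(&tasklist_lock);
}

The re-find in ->start is O(tasks) per chunk, so a full read is
O(n^2) on those ten-thousand-task boxes, but at least the lock
hold times stay short.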

M.
