Given the amount of (trivially fixable) breakage we _regularly_ see in the form
of missing #includes, I want to find a way to stop it once and for all.
What about generating one rather big include file out of .config first, and
including it in each .c file? Aha, device drivers... Ok, what about this:
foo.c (core kernel source)
=====
/* All kernel infrastructure structs, funcs, global vars, macros... */
#include <kernel_common.h>
...
ide_viaxxx.c (device driver)
============
/* All kernel infrastructure structs, funcs, global vars, macros... */
#include <kernel_common.h>
/* Device specific bits not belonging to kernel infrastructure */
#include <driver_ide.h>
...
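The ".config -> one big header" generation step could be sketched like this. This is purely hypothetical: the file names, the sed rules, and the CONFIG_ option values below are assumptions for illustration, not the real kbuild machinery:

```shell
#!/bin/sh
# Sketch only: generate kernel_common.h from .config.
# Fabricate a tiny .config for demonstration:
cat > .config <<'EOF'
CONFIG_SMP=y
# CONFIG_DEBUG_KERNEL is not set
CONFIG_HZ=1000
EOF

{
    echo '/* Auto-generated from .config -- do not edit. */'
    # CONFIG_FOO=y       -> #define CONFIG_FOO 1
    # CONFIG_BAR=<num>   -> #define CONFIG_BAR <num>
    # "is not set" lines -> omitted entirely
    sed -n \
        -e 's/^\(CONFIG_[A-Z0-9_]*\)=y$/#define \1 1/p' \
        -e 's/^\(CONFIG_[A-Z0-9_]*\)=\([0-9][0-9]*\)$/#define \1 \2/p' \
        .config
    # ...followed by the core kernel declarations, e.g.:
    echo '#include <linux/types.h>'
} > kernel_common.h

cat kernel_common.h
```

Each .c file would then start with #include <kernel_common.h> and pick up every CONFIG_ macro plus the core declarations, so a forgotten #include can no longer break a rarely-tested config.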
This can slow down compilation on a box with a fast disk and a slow CPU, but can
_speed up_ compilation if the disk is slow and the CPU is fast.
Why? Parsing .h file contents (declarations, almost no code) is much faster
than parsing .c files, so feeding in unneeded declarations is cheaper than it seems.
Note that the CPU/disk speed gap will only widen in the future.
Surely I may have missed a reason why this won't work at all (messed-up
dependencies?); feel free to enlighten me.
-- vda