Re: Linux vs Windows temperature anomaly

Trever L. Adams
05 Mar 2003 19:47:05 -0500

> The behavior you describe is when you increase the power output of a
> chip beyond normal specifications (overclocking) then the temperature of
> failure is lowered. eg. A chip that would run normally at 50C now can
> only run stable at 45-40.

You are the one who is mistaken. Most CPUs do not dissipate a constant
amount of power as heat; it depends on what the CPU is doing. For
example, even an Athlon without bus disconnect cools down somewhat when
it is halted. If a CPU is doing more work than it was at another time,
it has to shed more heat. Hence the external temperature becomes the
limiting factor, along with how good the heat-exchange system
(i.e. heat sink and fan) is.
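The point that power dissipation tracks workload can be sketched with a
small (hypothetical) Python experiment: CPU time consumed is a rough proxy
for energy dissipated, and a sleeping process lets the kernel halt the CPU
while a busy loop keeps it retiring instructions:

```python
import time

def cpu_seconds(fn):
    """Return CPU time consumed by fn() -- a rough proxy for heat produced."""
    start = time.process_time()
    fn()
    return time.process_time() - start

def busy(n=2_000_000):
    # Tight arithmetic loop: the CPU works continuously the whole time.
    total = 0
    for i in range(n):
        total += i * i
    return total

def idle(seconds=0.2):
    # Sleeping lets the scheduler halt the CPU (the 'halt' case above).
    time.sleep(seconds)

busy_cpu = cpu_seconds(busy)
idle_cpu = cpu_seconds(idle)
# busy_cpu is far larger than idle_cpu, even though both calls take
# comparable wall-clock time: same elapsed time, very different work done,
# and therefore very different heat to get rid of.
```

Both calls occupy a similar stretch of wall-clock time, but only the busy
one charges CPU time, which is the distinction the argument rests on.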

I do believe the previous poster was incorrect about the mathematical
relationship between case and CPU temperatures: it is NOT 1:1. However,
he is right that they are mathematically related, just as the heat
dissipated and the work done are related.
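The relationship alluded to here is the standard lumped thermal-resistance
model: the die runs hotter than the case by an amount proportional to the
power dissipated, so the link is linear in power rather than 1:1 in
temperature. A minimal sketch, with entirely hypothetical numbers:

```python
def die_temperature(case_c, power_w, r_theta_c_per_w):
    """Steady-state die temperature: T_die = T_case + P * R_theta.

    R_theta (degC per watt) lumps together the heat sink, fan, and
    thermal interface, so the die-to-case offset scales with power.
    """
    return case_c + power_w * r_theta_c_per_w

# Hypothetical numbers: a 60 W CPU behind a 0.4 degC/W cooler in a
# 30 degC case sits at 54 degC; drop dissipation to 40 W (e.g. by
# halting when idle) and the die falls to 46 degC while the case
# temperature stays the same -- related, but clearly not 1:1.
hot = die_temperature(30.0, 60.0, 0.4)   # 54.0
cool = die_temperature(30.0, 40.0, 0.4)  # 46.0
```

A lower R_theta (better heat sink or fan) flattens the slope, which is why
the quality of the heat-exchange system matters as much as case temperature.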

You do not need to overclock a CPU to see this kind of change. A change
in the efficiency of how the work is done (memory management, task
switching, etc.) can cause the CPU to be worked harder, and when the
CPU is worked harder, so are memory and quite often just about
everything else.


One O.S. to rule them all, One O.S. to find them. One O.S. to bring them
all and in the darkness bind them.
