Re: VM-related Oops: 2.4.15pre1

Eric W. Biederman (ebiederm@xmission.com)
19 Nov 2001 11:03:20 -0700


Linus Torvalds <torvalds@transmeta.com> writes:

> That's fine - if you have two threads modifying the same variable at the
> same time, you need to lock it.
>
> That's not the case under discussion.
>
> The case under discussion is gcc writing back values to a variable that
> NEVER HAD ANY VALIDITY, even in the single-threaded case. And it _is_
> single-threaded at that point, you only have other users that test the
> value, not change it.
>
> That's not an optimization, that's just plain broken. It breaks even
> user-level applications that use sig_atomic_t.
>
> And note how gcc doesn't actually do it. I'm not saying that gcc is broken
> - I'm saying that gcc is NOT broken, and we depend on it being not broken.

Linus, I agree that gcc works. And even if page->flags is written
with two separate write operations, page->flags & PG_locked should
still be true.

However, this case does seem to hurt code clarity. If other users can
be testing PG_locked, it is non-intuitive that you can still do a
plain assignment to page->flags.
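For concreteness, the pattern I mean is roughly this (the call sites
are made up for illustration; only the flag-testing idiom is real):

	/*
	 * Writer: the page is effectively private at this point, so it
	 * uses a plain assignment to page->flags rather than atomic
	 * bit operations.
	 */
	page->flags = (page->flags & ~mask) | new_bits;

	/* Other CPUs only test bits here, they never modify flags. */
	if (test_bit(PG_locked, &page->flags)) {
		/* the page is still locked; go wait for it */
	}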

Would it make sense to add a set_bits macro that is just an
assignment, except on extremely weird architectures or to work
around compiler bugs? I'm just thinking it would make sense
to document that we depend on the compiler not writing some
strange intermediate values into the variable.
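Something along these lines is all I mean; this is only a sketch
(the name set_bits is invented here), not a patch:

	/*
	 * Plain assignment on sane architectures and compilers.  The
	 * macro exists only to document that other CPUs may be testing
	 * individual bits while this store happens, and to give us one
	 * place to hook a workaround if a compiler ever emits bogus
	 * intermediate values.
	 */
	#define set_bits(var, val)	((var) = (val))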

I can't imagine why a compiler ever would, but it is remotely possible
that a compiler might generate an instruction sequence like:

	xorl $0xFFFFFFFF, flags
	xorl $0xFFFFFFFE, flags

to flip the low bit. I would be terribly surprised, and it would
certainly break sig_atomic_t if it were a plain typedef, but stranger
things have happened.
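To spell the hazard out with concrete numbers (purely illustrative
values):

	flags  = 0x00000004;	/* bit 2 set, low bit clear           */
	flags ^= 0xFFFFFFFF;	/* intermediate 0xFFFFFFFB: bit 2     */
				/* briefly reads as clear, all other  */
				/* bits as set                        */
	flags ^= 0xFFFFFFFE;	/* final 0x00000005: only the low bit */
				/* has really changed                 */

A reader testing any bit other than the low one during that window
would see a value the variable never legitimately held.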

Eric

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/