How does a driver writer determine whether their driver can cope with
inconsistent memory? If their view is a 32-byte cache line, and their
descriptors are 32 bytes long, they could well say "we can cope with
inconsistent memory". When 64-byte cache lines become the norm, the
driver magically breaks.
I think we actually want to pass the minimum granularity the driver can
cope with if we're going to allocate inconsistent memory. A driver
writer does not have enough information to determine on their own
whether inconsistent memory is going to be usable on any architecture.
--
Russell King (email@example.com)
The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html