Perhaps provide macros in asm/pci.h that will:
- Take a buffer size and add an appropriate amount (one cache line for
alignment and the remainder to fill out the last cache line) to be
used for kmalloc(), etc, eg:
#define DMA_SIZE_ROUNDUP(size) \
  (((size) + 2*SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))
- Take a buffer address (as returned from kmalloc() with the modified
size from above) and round it up to a cacheline boundary, eg:
#define DMA_BUFFER_ALIGN(ptr) \
  (((unsigned long)(ptr) + SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))
These two, in conjunction, would provide a buffer that's aligned on a
cacheline boundary and ends on a cacheline boundary. Kind of ugly, but
would be sufficient and would hide the cacheline size specifics.
Cache-coherent platforms would just return the original argument.
Thanks,
Will