I have a semaphore serializing allocation already. (-:
>/* PSEUDO-CODE */
>
>while ( 1 ) {
>	disable_preemption();
>	cpu = current_cpu();
>	if ( decompression_buffers[cpu] ) {
>		do_decompression(decompression_buffers[cpu]);
>		enable_preemption();
>		break; /* DONE, EXIT LOOP */
>	} else {
>		enable_preemption();
>		down_sem(allocation_semaphore);
>		/* Avoid race condition here */
>		if ( !decompression_buffers[cpu] )
>			decompression_buffers[cpu] = vmalloc(BUFFER_SIZE);
>		up_sem(allocation_semaphore);
>	}
>}
>
>Note that there is no requirement that we're still on cpu "cpu" when
>we allocate the buffer. Furthermore, if we fail, we just loop right
>back to the top.
What is the point though? Why not just:
if (unlikely(!decompression_buffers)) {
	down_sem(allocation_semaphore);
	if (!decompression_buffers)	/* re-check under the semaphore */
		allocate_decompression_buffers();
	up_sem(allocation_semaphore);
}
And be done with it?
I don't see any justification for the increased complexity...
Best regards,
Anton
--
   "I've not lost my mind. It's backed up on tape somewhere." - Unknown
--
Anton Altaparmakov <aia21 at cantab.net> (replace at with @)
Linux NTFS Maintainer / IRC: #ntfs on irc.openprojects.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/