You mean we could pass the iocb as a parameter all the way down
for the async versions of the ops and do_sync_op() could just do
a wait_for_sync_iocb()?
That was what I'd originally intended to do.
But then I experimented with the current->iocb alternative
because:
1. I wasn't sure how much API fixing we could do at this stage.
(it is after all pretty late in the 2.5 cycle)
If you notice I've been trying to tread very carefully in
terms of the modifications to interfaces, especially anything
that requires changes to all filesystems.
2. I wanted to quickly have something we could play with and run
performance tests on, with minimal changes/impact on existing
code paths and sync i/o operations. Additionally current->iocb
gave me a simple way to detect blocking operations (schedules)
during aio, no matter how deep a subroutine we are in. (I have
been using those indicators to prioritize which blocking
points to tackle)
3. After a first pass of trying to use retries for sync ops
as well, it seemed like being able to continue from a blocking
point directly as we do today would be more efficient (in
this case, we do care more about latency than we do for async
ops). So that meant a switch between returning -EIOCBQUEUED and
blocking depending on whether this was an async or sync
context. I could do that with an is_sync_iocb() check as
well (vs current->iocb), but even that would be changing
semantics.
So if (1) is sorted out, i.e. we still have the opportunity
to alter some APIs, then we could do it that way.
Do we?
Regards
Suparna
--
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Labs, India