The drive is always supposed to generate an interrupt when it sets the
service bit indicating it is ready to receive a service command.
The release interrupt tells the host the drive is doing a bus release
following a queued data command.
The service interrupt tells the host the drive is entering the DRQ state after
receiving the SERVICE command.
Maybe there are drives out there that don't support configuring these
interrupts... if that is the case, TCQ will never work "well" with them,
since the host would have to poll on timer ticks or something like that,
resulting in a huge performance loss.
I can also picture autopolling for speed on a host controller, done by the
firmware/asic in the background, but all the SATA goodness should be
"achievable" using the interrupt architecture in the design.
>Then there's the debate of whether TCQ is worth it at all, in general. I
>feel that a few tags just to minimize the time spent when ending a
>request to starting a new one is nice.
TCQ shouldn't benefit writes significantly from a performance perspective if
the drive is reasonably smart, since write-back caching already lets it
acknowledge and reorder writes internally. TCQ *will* give a huge performance
improvement for random reads, since the drive can order responses to
minimize rotational latency.
Increasing queue depth reduces the average cost of moving between commands,
both in seek distance and in rotational latency. Provided a drive doesn't do
the dumb stuff we discussed earlier, it should be fine.
>> Personally I'd like to see the option stay in there as experimental, it
>> helps us drive folks test stuff when we can just flip an option off/on to
>> get that functionality.
>I agree, besides it just needs a bit of fixing, can't be much.
I'll do what I can to help in my spare time.