Re: 2.5.69, IDE TCQ can't be enabled

Jens Axboe (axboe@suse.de)
Mon, 12 May 2003 21:42:45 +0200


On Mon, May 12 2003, Jeff Garzik wrote:
> On Mon, May 12, 2003 at 01:19:36PM -0600, Mudama, Eric wrote:
> > >You are ignoring the host side of things. PATA TCQ is basically
> > >unsupportable without some hardware support (auto-poll). It's my
> > >understanding that all SATA controllers do that.
> >
> > The drive is always supposed to generate an interrupt when it sets the
> > service bit indicating it is ready to receive a service command.
> >
> > The release interrupt tells the host the drive is doing a bus release
> > following a queued data command.
> >
> > The service interrupt tells you you're going DRQ after receiving the service
> > command.
>
> Most Linux people with TCQ drives seem to have Hitachi (nee IBM)
> ones AFAICS. These do not have a service interrupt (or at least,
> > do not report such).

Nonsense, it supports the service interrupt just fine. It will just
complain if you try to turn it off, iirc.
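
The service path goes roughly like this: the drive raises the service
interrupt with SERV set in status when one of the queued commands is
ready to transfer, the host answers with a SERVICE command, and the
drive returns the tag it wants to complete. A minimal sketch, with the
register offsets, bit masks, and port I/O helpers made up purely for
illustration (the real protocol is ATA/ATAPI-5, and what Linux does is
in drivers/ide/ide-tcq.c):

#define ATA_NSECTOR	2	/* sector count / tag register (assumed offset) */
#define ATA_STATUS	7	/* status on read, command on write */
#define ATA_COMMAND	7

#define STAT_BSY	0x80
#define STAT_SERV	0x10	/* drive wants to complete a queued command */
#define CMD_SERVICE	0xa2

extern unsigned char hw_inb(int reg);		/* hypothetical port I/O */
extern void hw_outb(unsigned char val, int reg);

/* called from the irq handler when nothing is actively transferring */
static void tcq_service_intr(void)
{
	unsigned char tag;

	if (!(hw_inb(ATA_STATUS) & STAT_SERV))
		return;			/* not a service request */

	/* ask the drive which tag it wants to complete */
	hw_outb(CMD_SERVICE, ATA_COMMAND);
	while (hw_inb(ATA_STATUS) & STAT_BSY)
		;			/* the real driver doesn't busy-wait here */

	/* the chosen tag sits in bits 7:3 of the sector count register */
	tag = hw_inb(ATA_NSECTOR) >> 3;

	/* look up the request for 'tag', program its scatterlist and
	   kick the dma engine for it */
}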

> They do have the release interrupt.

Which we don't use. For the release interrupt to be interesting, you
need to speculatively turn on the DMA engine for each command you want
to start. If you don't do that, then it's faster just to poll for
release/no-release at command start time.

You should probably read ide-tcq to see what we do :-)
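
Roughly what that polling looks like at command start (again just a
sketch, not the actual ide-tcq code, reusing the made-up hw_inb()/
hw_outb() helpers and register defines from above; the queued read
opcode and REL bit position are from memory of ATA-5):

#define CMD_READ_DMA_QUEUED	0xc7
#define NSEC_REL		0x04	/* bus release bit in the nsector reg */

static int tcq_start_read(unsigned char tag)
{
	/* the tag goes in bits 7:3 of the sector count register; the
	   real sector count and LBA go in the feature/LBA registers */
	hw_outb(tag << 3, ATA_NSECTOR);
	hw_outb(CMD_READ_DMA_QUEUED, ATA_COMMAND);

	while (hw_inb(ATA_STATUS) & STAT_BSY)
		;	/* wait for the drive to accept or release */

	if (hw_inb(ATA_NSECTOR) & NSEC_REL)
		return 0;	/* released: command is queued in the drive,
				   issue the next one, data comes later via
				   the service interrupt */

	return 1;		/* no release: the drive wants to transfer
				   right away, start the dma engine now */
}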

> > >Then there's the debate of whether TCQ is worth it at all, in general. I
> > >feel that a few tags just to minimize the time spent when ending a
> > >request to starting a new one is nice.
> >
> > TCQ shouldn't benefit writes significantly from a performance perspective if
> > the drive is reasonably smart. TCQ *will* have a huge performance
> > improvement for random reads since the drive can order responses based on
> > minimal rotational latency.
>
> You hit the nail on the head.
>
> With the host interface limitation of a single scatterlist
> particularly, writes do not benefit very much at all. However,
> since reads can be queued and buffered internally in the drive,
> TCQ will definitely show benefits.

I don't think having multiple commands pending _and_ active is that big
a deal, and besides, _everybody_ uses write-back caching on IDE, which
makes TCQ for writes very uninteresting.

> Coming from an OS perspective, I think we really want to be able to
> queue up a bunch of scatterlists, like the new AHCI spec does.

I have to agree with Eric that the largest win is potentially not
getting hit by the rotational latency all the time. I don't think you'll
get much extra from actually having more than one command active from
the DMA POV.
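
Just to put a rough number on that: a 7200 rpm drive does one revolution
in ~8.3 msec, so a random read pays ~4.2 msec of rotational latency on
average. If the drive can pick, out of a handful of queued tags, the one
whose target sector comes under the head first, it can win back a decent
chunk of that, which is far more than you'll ever save by overlapping
the DMA setup for several commands.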

-- 
Jens Axboe
