Re: Is TCP CA_LOSS to CA_RECOVERY possible

Mika Liljeberg (Mika.Liljeberg@welho.com)
17 Jul 2002 19:44:47 +0300


On Wed, 2002-07-17 at 19:18, spy9599 wrote:
> In the present TCP (2.5.x) implementation, the TCP
> sender never exits the TCP_CA_Loss state until all packets
> up to high_seq are acknowledged. But let's say that while
> doing retransmissions, some packets below high_seq
> are lost again. Ideally the TCP sender should just
> enter fast retransmit and fast recovery, but from the
> present implementation it seems the only way to come
> out of it is after a timeout.
>
> Could somebody explain this to me please.

The only reliable way to detect that a retransmitted segment has been
lost is a timeout. You can't use dupacks, because at this point they are
not necessarily caused by lost retransmissions: they might be caused by
duplicated packets or delayed packets, or the ACKs themselves might be
delayed. Acting on them could end up shrinking the congestion window
unnecessarily, or just produce plain bad retransmission behaviour. Note
that if SACK is enabled, the transmitter will not retransmit too many
unnecessary segments anyway.
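The behaviour discussed above can be sketched as a tiny state machine. This is an illustrative model only, not the actual kernel code: the names (CA_LOSS, high_seq, snd_una) mirror identifiers in Linux's tcp_input.c, but the struct, functions, and the dupack threshold handling here are simplified assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the CA_Loss behaviour described
 * above; not the real Linux implementation. */

enum ca_state { CA_OPEN, CA_RECOVERY, CA_LOSS };

struct sock_sketch {
    enum ca_state state;
    uint32_t snd_una;   /* oldest unacknowledged sequence number */
    uint32_t high_seq;  /* highest sequence sent when the RTO fired */
    int dupacks;
};

/* On RTO: enter CA_LOSS and remember how far we had sent. */
static void on_timeout(struct sock_sketch *sk, uint32_t snd_nxt)
{
    sk->state = CA_LOSS;
    sk->high_seq = snd_nxt;
    sk->dupacks = 0;
}

/* On ACK: dupacks are ignored while in CA_LOSS, because they may stem
 * from duplicated or delayed packets rather than a lost retransmission.
 * CA_LOSS is left only once everything up to high_seq is acked. */
static void on_ack(struct sock_sketch *sk, uint32_t ack)
{
    if (ack == sk->snd_una) {           /* duplicate ACK */
        sk->dupacks++;
        if (sk->state == CA_LOSS)
            return;                     /* no fast retransmit here */
        if (sk->state == CA_OPEN && sk->dupacks >= 3)
            sk->state = CA_RECOVERY;    /* classic fast retransmit */
        return;
    }
    sk->snd_una = ack;                  /* ACK advances the window */
    sk->dupacks = 0;
    if (sk->state == CA_LOSS && ack >= sk->high_seq)
        sk->state = CA_OPEN;            /* loss recovery complete */
}
```

Feeding this model three dupacks after a timeout leaves it in CA_LOSS, exactly the situation the question describes; only an ACK covering high_seq (or, in the real stack, another timeout) gets the sender out.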

MikaL

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/