[MUD-Dev] [TECH] TCP fundamental throughput limits?

Jeremy Noetzelman jjn at kriln.com
Tue Oct 7 22:56:00 CEST 2003


On Tue, 7 Oct 2003, ceo wrote:

> Someone recently pointed out to me that TCP is limited to a
> theoretical maximum throughput for any given pair of RTT and
> percent packet loss.

Maximum TCP throughput depends on RTT, link speed, and the buffer
sizes on the client and the server.  Packet loss can certainly reduce
throughput, but TCP will, in general, recover from packet loss and
ramp back up toward the theoretical maximum.

> I'd never thought about it before, but it certainly makes perfect
> sense when using AIMD (basically: when packets are not being
> dropped, it increases speed linearly, when they ARE being dropped,
> it decreases speed exponentially). The decrease massively
> outstrips the increase, so it makes sense that for given RTT +
> packet loss, you will only be able to hit a given throughput, on
> average.

Reno, the most common TCP implementation today, doesn't quite work
that way.  It uses methods called 'slow start' and 'congestion
avoidance'.

Slow Start basically means your connection starts slow, and every
time the connection has successfully transmitted a congestion
window's worth of packets, it doubles the congestion window.  This
results in an exponential increase in speed.  Once a packet is lost,
Reno sets a threshold at half the window it had reached and drops
back to a small window.  It then does the whole 'slow start' dance
all over again until it hits that threshold, at which point it starts
'congestion avoidance'.

Congestion Avoidance is basically a gentle increase of the
congestion window, roughly one segment per round trip, until a
packet is lost.  When a packet is lost, the stack does essentially
the same thing it does in the slow start phase.
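The two phases above can be sketched as a toy simulation.  This is
illustrative only, not a faithful Reno implementation (real stacks
grow the window per ACK and have fast retransmit/fast recovery); the
function name and parameters are mine:

```python
# Toy sketch of Reno-style congestion window growth.
# Window sizes are in segments; one loop iteration is one RTT.

def reno_cwnd_trace(rtts, loss_at, ssthresh=64):
    """Track cwnd over `rtts` round trips; a loss occurs at RTT `loss_at`."""
    cwnd = 1
    trace = []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt == loss_at:
            # On loss: set the threshold to half the window, restart small.
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1
        elif cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth
        else:
            cwnd += 1          # congestion avoidance: linear growth
    return trace

print(reno_cwnd_trace(12, loss_at=6))
# → [1, 2, 4, 8, 16, 32, 64, 1, 2, 4, 8, 16]
```

You can see the sawtooth: exponential ramp, loss, restart, and the
second ramp would flatten to linear growth once it reaches the new
threshold of 32.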

> This of course matters hugely when you have a huge bandwidth - or
> very high RTT or packet loss. The latter comes into play with
> mobile device gaming, and e.g. MUDding from a handheld client over
> 3G systems, where RTT's have been historically very high.

Some things of interest:

You can easily calculate the required TCP buffer size for a given
amount of bandwidth and RTT:

   Bandwidth * RTT = Buffer Size

To fill a gigabit ethernet link at a 70ms RTT (typical for a coast
to coast Abilene connection), the buffer works out to 8.75
megabytes.
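Checking that number in Python (the function name is mine; the
inputs are just the example above):

```python
# Bandwidth-delay product: the buffer needed to keep a link full.

def required_buffer_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth (bits/sec) * RTT (sec), converted to bytes."""
    return bandwidth_bps * rtt_seconds / 8

buf = required_buffer_bytes(1e9, 0.070)  # gigabit link, 70 ms RTT
print(buf / 1e6)  # → 8.75 (megabytes)
```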

If you wanted to calculate the maximum TCP throughput of a given
connection, you'd just tweak the above equation a bit:

  Buffer Size
  ----------- = throughput
     RTT
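Running the rearranged equation against the same gigabit example (the
function name is mine):

```python
# Maximum throughput achievable with a given buffer over a given RTT.

def max_throughput_bps(buffer_bytes, rtt_seconds):
    """Buffer size / RTT, converted from bytes/sec to bits/sec."""
    return buffer_bytes * 8 / rtt_seconds

# The 8.75 MB buffer from above, over the same 70 ms path,
# gives back the full gigabit:
print(max_throughput_bps(8.75e6, 0.070))  # → 1000000000.0
```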

I should note that throughput on a connection is also limited by the
window size.  The TCP window field is a 16 bit uint, so without
window scaling your effective bandwidth (in bytes per second) is:

  2^16
  ----
  RTT

With window scaling (RFC 1323) the window can grow to roughly 2^30
bytes, so substitute that for 2^16.
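To put a number on the unscaled case (the function name is mine):

```python
# Throughput cap imposed by the TCP window field alone.

def window_limited_throughput(window_field_bits, rtt_seconds):
    """Max bytes/sec when capped by an unscaled window of 2^bits bytes."""
    return (2 ** window_field_bits) / rtt_seconds

bps = window_limited_throughput(16, 0.070) * 8  # convert to bits/sec
print(bps / 1e6)  # ~7.5 Mbit/s on a 70 ms path, no matter how fat the pipe
```

That 7.5 Mbit/s ceiling on a gigabit path is exactly why window
scaling matters on long fat networks.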

As you can see, packet loss itself will affect the real world
performance of a connection by triggering back off.  On the other
hand, it doesn't figure into the theoretical numbers at all.

> However, I thought that very few (if any) TCP implementations were
> using simple AIMD, and that most were considerably better. I know
> that my old favourite (TCP Vegas) is not in general usage (sob,
> sob :)), but I thought other improvements were.

> However, there are several companies making grand claims that
> their TCP replacements are "up to 100 times faster than TCP, and
> typically 30 times faster...at least 3 times faster".

The problem with these is that you have to use them on both ends.
Sometimes feasible, often not.

> I was wondering if anyone on this list is acquainted with current
> typical TCP implementations and/or knows anything about quite how
> serious these theoretical limits are in practice?

Until a few months ago when I went to work on high speed video
streaming for Time Warner Cable, I was working heavily on high speed
TCP throughput on Abilene.  We didn't use any special TCP
implementations, and used mostly Linux and Windows devices for our
tests.  We could routinely get full line rate gigabit ethernet (and
OC48 speeds as well) from coast to coast by tweaking window and
buffer sizes appropriately.

> Certainly, it can potentially put a new spin on the "TCP vs UDP"
> debate which I've not seen before...e.g. perhaps something like
> "for mobile devices with 500ms+ RTT's, don't bother with TCP at
> all". (nb: I've not done the maths to work out what the critical
> threshold for RTT/packet losses is, but the sales literature is
> based on 200ms and fairly small packet loss).

The TCP v UDP issue isn't really about throughput as much as it is
consistency.  TCP tends to be very cyclical, with a single lost
packet causing a significant fluctuation in throughput.  A UDP-based
protocol can just retransmit (or skip) the lost packet at the
application's own bitrate.  Thus, for things that are very sensitive
to latency (such as real time games, or my current toys, high speed
video streams) the cyclical nature of TCP makes UDP a requirement.

Hope this helps, and isn't too technical.  This stuff can get a bit
arcane at times, particularly when used in the wild instead of the
lab.

J
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


