Lacking in Performoodles.
So as some of you know, I work at a company that provides satellite data services on aircraft. These services are both slow and expensive, so using them well is mostly an exercise in avoiding using them…if that makes any sense. To complicate things, the company I work for is related to, and gets proceeds from, the amount of bandwidth used. So we want to use as much bandwidth as possible, but also as little as possible. Ugh.
One of our soon-to-be products is an in-flight internet service. It provides a mostly transparent connection for the passenger's laptop or the IFE (in-flight entertainment) screen in the seat.
The connection we are using for this service tops out at 492 kbps. On the surface, that doesn't seem too bad. It's 4 ISDN lines. Or half of a 1 Mbps DSL/cable modem. And it's like eight 56k modems!
The latency is the killer. The satellites are in geosynchronous orbit. That's 22,000 miles away. And 22,000 miles back. And then the data lands at one of three ground stations, none of which are in very good locations, so there is a several-thousand-mile trip over fiber back to my company's data center. Ugh.
That all adds up to between 0.9 and 3 (usually about 1.5) seconds of latency for a round trip. Ugh^2.
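For the curious, the speed-of-light math alone accounts for a big chunk of that. The numbers below are back-of-the-envelope, not measured (the 3,000-mile fiber figure in particular is a guess):

```python
# Back-of-the-envelope propagation delay for one round trip.
# A request travels ground -> satellite -> ground station, and the
# response makes the same trip in reverse: four ~22,000 mile legs.
MILES_PER_LEG = 22_000
C_MILES_PER_SEC = 186_282                 # speed of light in a vacuum

space_delay = 4 * MILES_PER_LEG / C_MILES_PER_SEC      # ~0.47 s

# Plus the terrestrial haul: call it 3,000 miles of fiber each way,
# where light only moves at roughly 2/3 of c.
FIBER_MILES = 3_000
fiber_delay = 2 * FIBER_MILES / (C_MILES_PER_SEC * 2 / 3)  # ~0.05 s

print(round(space_delay + fiber_delay, 2))  # -> 0.52
```

That half second is the physics floor; the rest of the 0.9–3 seconds comes from modem framing, queuing, and scheduling on the satellite link.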
So that means that any time you ask the ground for something, there is usually a second and a half before you are going to START getting a response. The ground won't even know you asked the question for nearly a second!
Of course, it’s actually worse than that.
What if you ask the question over TCP? TCP takes 1.5 round trips before it can even begin transferring data. Ugh.
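Counting in round trips makes the damage obvious. A sketch with the typical 1.5-second RTT (nothing here is measured, just multiplication):

```python
RTT = 1.5  # seconds, typical round trip on this link

# TCP three-way handshake: SYN out (0.5 RTT), SYN-ACK back (1 RTT),
# then the final ACK can carry the request (1.5 RTT total before
# any application data is even on its way).
handshake = 1.5 * RTT

# The first response byte shows up another half round trip later.
time_to_first_byte = 2 * RTT

print(handshake, time_to_first_byte)  # -> 2.25 3.0
```

Three full seconds after the SYN leaves the plane before a single byte of the answer comes back, and that is with DNS already cached.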
Then, of course, TCP is very bad at making efficient use of high-latency links (known as 'long fat pipes'). TCP starts a new connection slowly, so as to not immediately fill the link and risk packet loss. This is called Slow Start, and it is a form of congestion avoidance. The sender waits for the first several packets to be ACKed by the receiving side before ramping up the transmit rate; once the expected ACKs stop arriving, the sender has found the link's capacity and backs off a little to fit.

That works very well for short pipes with low latency. But since we have such long latency, and packet loss is actually rather rare, it is an extremely inefficient way to go about things. If the response data set is small, the connection never makes it out of the Slow Start phase at all: by the time the ACKs that would let the sender speed up arrive, all the data is already in the pipe. What could have been a very short 400 kbps burst turns into a one-second-long 10 kbps dribble.
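To put a number on it, here is a toy simulation of Slow Start. The segment size and initial window are textbook assumptions, not values captured from the real link:

```python
def rtts_to_send(total_bytes, mss=1460, init_cwnd=2):
    """Round trips needed to push total_bytes through Slow Start,
    assuming zero loss and a congestion window that doubles each RTT."""
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < total_bytes:
        sent += cwnd * mss   # one windowful goes out per round trip
        cwnd *= 2            # window doubles when the ACKs come back
        rtts += 1
    return rtts

# A modest 30 KB page needs 4 round trips for the data alone.
rtts = rtts_to_send(30_000)
seconds = rtts * 1.5         # at our usual 1.5 s RTT
print(rtts, seconds)  # -> 4 6.0
```

Six seconds to move 30 KB works out to roughly 40 kbps effective throughput, on a link that could have carried it at 492 kbps.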
Then we have DNS. Before a TCP connection can even be established, you have to do a DNS lookup to turn 'www.google.com' into '22.214.171.124'. Over this link, DNS lookups take about 3 seconds. Thank god they are cached. But still. Ugh.
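Stack it all up and the time before the first byte of a response on a cold cache looks roughly like this. Every number below is a ballpark pulled from the figures above, not a measurement:

```python
RTT = 1.5               # seconds, typical round trip
dns_lookup = 3.0        # uncached lookup over the satellite link

# Request is answered 2 RTTs after the SYN: 1.5 RTTs of handshake,
# then half a round trip for the first response byte to come back.
tcp_setup_and_request = 2 * RTT

time_to_first_byte = dns_lookup + tcp_setup_and_request
print(time_to_first_byte)  # -> 6.0 seconds before any HTML arrives
```

And that is for one connection to one host; a typical page pulls images and scripts from several hosts, each paying some of these costs again.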
So here are some load times for web sites over this satellite link.
These times were collected using a Perl script I wrote for the purpose; I was unable to find any preexisting tools worth a damn. The tool emulates a sort of worst-case situation: no cache, full page load from scratch. However, the script will make use of keep-alives if the server supports them, and it can use a proxy server to avoid the DNS delay.
For comparison, here are a bunch of web sites loaded by the same script over 786k ADSL.