I’ve been messing around with UDP and TCP and am currently working on my own implementation of the BULLET technique…
However, in BULLET, UDP is sent as a window of packets in order to cover the lag spikes in TCP.
Why? I’ve been testing, and I’ve found that UDP is actually quite reliable: I’m getting around 80+ lost packets per 1000, and around 200 packets arriving out of order. Of course, TCP has 0 packet loss and 0 out-of-order packets, but UDP is much faster…
On average UDP latencies are around 10 at best and 100 at worst, with most of them clustered around 30.
In contrast, TCP is pretty inconsistent, anywhere between 30 and 260.
So why not use UDP as the main protocol, and instead have TCP cover UDP’s lost packets rather than the other way around? I’m implementing this right now, so I’ll let you know how it turns out.
At the moment my plan is:
have a constant flow of both TCP and UDP, and whenever a TCP packet is received before its corresponding UDP packet, assume that UDP packet was dropped.
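Roughly, the receiving end could look something like this (a minimal sketch, assuming both streams carry the same sequence-numbered messages; DualStreamReceiver and its methods are made-up names for illustration, not anything from BULLET):

```python
# Sketch: UDP is the fast path, TCP is the reliable backup.
# Whichever stream delivers a sequence number first wins; a TCP-first
# arrival is counted as an assumed UDP loss.

class DualStreamReceiver:
    def __init__(self):
        self.delivered = set()   # sequence numbers already handed to the game
        self.udp_lost = 0        # packets we assume UDP dropped

    def on_udp(self, seq, payload):
        # Fast path: deliver immediately unless TCP already covered this seq.
        if seq not in self.delivered:
            self.delivered.add(seq)
            self.deliver(seq, payload)

    def on_tcp(self, seq, payload):
        # Backup path: if this seq has not arrived over UDP yet, assume the
        # UDP copy was dropped (or is hopelessly late) and use TCP's copy.
        if seq not in self.delivered:
            self.udp_lost += 1
            self.delivered.add(seq)
            self.deliver(seq, payload)

    def deliver(self, seq, payload):
        print(f"delivering #{seq}: {payload!r}")


# Example: seq 2 never shows up over UDP, so the TCP copy fills the gap.
rx = DualStreamReceiver()
rx.on_udp(1, b"state update 1")
rx.on_tcp(1, b"state update 1")   # duplicate, ignored
rx.on_tcp(2, b"state update 2")   # arrives before its UDP copy -> counted as a UDP loss
print("assumed UDP losses:", rx.udp_lost)
```

The point is just that the game consumes each sequence number exactly once, from whichever stream delivers it first.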
Well, you can’t guarantee how reliable UDP will be. On my local network I send 10,000 messages and receive 10,000 messages. On the internet, distance, number of hops, latency issues per hop, etc. all factor into packet loss. Just because you can consistently get a specific reliability in your environment doesn’t mean it will hold across the board.
It just means you need to do a significantly larger amount of re-ordering work. With TCP as the primary you just grab what you need from the window to fill in.
If you really want to use UDP as the primary then you need to do internal queuing of packets to rearrange the order when they arrive out of order.
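Something along these lines, say (purely illustrative; the class and field names are hypothetical):

```python
# Sketch of the internal queuing mentioned above: buffer out-of-order UDP
# packets and release them strictly in sequence.

class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0     # next sequence number we expect to deliver
        self.pending = {}     # out-of-order packets waiting for the gap to fill

    def push(self, seq, payload):
        """Accept a packet in any order; return the packets now deliverable in order."""
        if seq < self.next_seq:
            return []                       # stale duplicate, drop it
        self.pending[seq] = payload
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready


buf = ReorderBuffer()
print(buf.push(0, "a"))   # ['a']
print(buf.push(2, "c"))   # []  (still waiting on 1)
print(buf.push(1, "b"))   # ['b', 'c']
```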
I’m just saying that UDP should be used as the primary because it’s always faster to send/receive than TCP, so surely for performance reasons the slower protocol should be used as a backup when the faster one fails.
This all assumes that out-of-order packets don’t occur very often with UDP, and that the re-ordering process is good.
I guess all I can do is experiment with different ways and see what works best in my situation.
Ya know, I’m a big fan of UDP for its “fire and forget” advantages over TCP, but there really is a lot of TCP bashing from people who obviously haven’t looked at actual performance benchmarks of TCP against UDP.
I think realistically games need both. Use UDP for messages that are not absolutely necessary to reach their destination and do not require a specific order. Then use TCP for aspects that need guaranteed delivery. The implementation of such an idea may be something very much like BULLET, or it may be something completely different, but saying that one is always better than the other completely ignores the fact that both still exist and are still commonly used. There’s a real need for both.
I’m not targeting this specifically at you, phi6, but really also mentioning myself as one of the original TCP bashers in favor of UDP. As I’ve looked into it deeper I’ve realized the necessity of both protocols.
That would depend on the specifics of your network and its operation.
It also depends on what you mean by “faster”. The word has many definitions… are you referring to net latencies? Bandwidth?
Do you have any idea whether you are saturating bandwidth? Because then bandwidth turns into latency. What size are your test packets? Are you close to the MTU? If so, then packet overhead could have a major effect by causing sub-division of one kind of packet and not the other.
Also, what kind of link to the net do you have? Analog? Digital? All of these affect the results of a ping-type test.
Fundamentally, however, TCP uses IP to transfer packets. UDP is just a user interface to IP that layers an EDC (an error-detecting code) on it to detect bad packets.
So TCP and UDP packets move across the net at exactly the same rate, because underneath they are the same thing.
After that it’s a matter of how the packets are processed.
That’s not really true either, is it? I mean, if you had a high-latency connection (say from the UK to the US) and a small window size, TCP may have to wait for an acknowledgement from the receiver before sending any more data on the window.
For instance, say we want to send 128k of data with a window size of 64k.
UDP just spits out a bunch of packets for the 128k, and assuming a latency of N seconds, the 128k gets to the end point N seconds later (assuming no packet loss).
In TCP the 64k can be sent (the window size) and then the sender must wait for an ack before proceeding. Assuming symmetric lag, that’s 64k to the end point in N seconds, the ack comes back in N seconds, and the next 64k reaches the end point N seconds after that. So 3xN time before the whole 128k gets there. From what I understand reading the RFC (for the 400th time), if the receiver only acked 32k of the 64k (say some was still in transit when it was time to send the ACK), the sender still isn’t allowed to send any more until it has received an ack for the full 64k window.
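Putting numbers on that, under the stop-and-wait-per-window assumption above (purely illustrative; the value of N is arbitrary, and real TCP pipelines its windows, as discussed further down):

```python
# Back-of-the-envelope arithmetic for the 128k / 64k-window example, assuming
# the sender stops after each full window and waits for its ACK.
# Transmission time on the wire is ignored, as in the example above.

N = 0.1                      # assumed one-way latency in seconds (100 ms)
data_kb = 128
window_kb = 64

windows = data_kb // window_kb                     # 2 windows of 64k
udp_like_time = N                                  # everything sprayed at once
stop_and_wait_time = N + (windows - 1) * 2 * N     # each extra window costs a round trip

print(f"ack-less (UDP-like) delivery time:       {udp_like_time:.1f} s")
print(f"stop-and-wait windowed delivery time:    {stop_and_wait_time:.1f} s")   # 3 * N
```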
This TCP behaviour is of course completely justified in preventing network congestion, and that in itself may speed things up (low congestion = low packet loss, at least sometimes).
Now, I’m sure Jeff already knows this and he’s going to respond by saying you just reduce the send/receive buffers. I think the minimum they’ll go down to is 8k (which should give you an 8k window). Or turning Nagle’s algorithm off or something (not sure that actually affects the window size rather than how often a packet is sent).
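For reference, those knobs look like this on an ordinary BSD-style socket (shown here with Python’s socket module; whether shrinking the buffers actually shrinks the advertised TCP window is OS-dependent, so treat this as a sketch of where the settings live rather than a guaranteed fix):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm: small writes go out immediately instead of
# being coalesced while ACKs for earlier data are still outstanding.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Request small send/receive buffers (the OS may round these up).
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024)

print("SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```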
Either way, I wonder if these details are where the confusion of “TCP being slow” comes from.
Packet propagation is identical. That much is a given.
If your flow control is well tuned it should not affect your latencies.
You’re going to hit the MTU long before you hit the typical window size (which, as I recall, is adjustable on the fly), and that’s likely to screw up any measurement attempts anyhow.
Look at it this way: if you are spewing packets fast enough to cause a slowdown via flow control, then WITHOUT flow control you are going to start getting massive packet loss. So either way it’s going to hose you, and flow control is likely the better of the two options.
Sorry, I still don’t see how you’re getting to this. I assume I’m just being dense. In TCP the sender has to wait for the ack of a window before sending any part of the next window. However you look at it, that still means the second window in a stream won’t reach the destination as fast as just sending it straight away over a non-lossy link.
Again, AIUI the windows are in fact overlapping: it’s sending the next data while waiting for the acks on the previous data. There is no reason not to -- the pipe is there and available.
This was being done as far back as modem protocols for file transfer, so I find it difficult to believe that TCP doesn’t do it.
But I can look in my Tanenbaum for the details tonight…
I’m just hoping that going into detail about what happens with TCP windows and exponential back-off for congestion control may alleviate some of the reservations about using it for real-time data.
But now imagine a window of two packets:
Sender --> PKT1 --> Receiver
While PKT1 is still travelling to the receiver: Sender --> PKT2 --> Receiver
Now the sender waits…
At (latency of PKT1): Receiver --> ACK1 --> Sender
The receiver gets PKT2 immediately afterwards, and as ACK1 is going back: Receiver --> ACK2 --> Sender
I’m not drawing that in the ideal pictorial manner, but I hope what you can see is that, during the latency time for one packet above plus the time to send one additional packet, two packets have been transferred.
The bigger the window, the bigger the overlap, until at the ideal window size both sides are continuously pumping data, responding to what the other side did one latency ago.
At that point you are never waiting and your transfer time is identical to an ack-less transfer.
As I said, a variation of this technique was in use as far back as ZMODEM.
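If it helps, here’s a toy model of that overlap (no real sockets, just arithmetic with an assumed one-way latency and per-packet send time; the numbers are made up):

```python
# Model: the sender may have up to `window` unacknowledged packets in flight,
# each ACK arrives one round trip after its packet was sent, and putting a
# packet on the wire takes `send_time`.

def transfer_time(packets, window, latency, send_time):
    in_flight = []          # send times of unacknowledged packets
    clock = 0.0
    for _ in range(packets):
        # If the window is full, wait for the oldest ACK (sent_time + RTT).
        if len(in_flight) == window:
            clock = max(clock, in_flight.pop(0) + 2 * latency)
        clock += send_time                  # put the packet on the wire
        in_flight.append(clock)
    return clock + latency                  # last packet still needs to arrive

latency, send_time = 0.05, 0.001            # 50 ms one way, 1 ms per packet
for w in (1, 4, 32, 1000):
    print(f"window {w:4}: {transfer_time(200, w, latency, send_time):.3f} s")
# A window of 1 is dominated by round trips; once the window covers the
# bandwidth-delay product, the total approaches the ack-less transfer time.
```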
Yep, that makes sense to me. That means the correct thing to say is that TCP latency will be the same as UDP assuming you’re not trying to send more data than would fit in one TCP window at any single instant. In that case TCP would send some of the data, wait for the ACK, then send the rest, whereas a naive UDP implementation would just send all the data at once, in theory allowing it to arrive at the other end sooner.
I don’t think the above “non-window” case is very clear (I realise it’s hard in ASCII), but it’s not equivalent to a naive UDP implementation, which the fully windowed case would be.
Well, I believe the windows are adaptive, though I might be wrong. As I say, I’d have to check Tanenbaum to be sure.
A more problematic case for TCP, I think, would be a sudden extreme latency that is bigger than the buffer space provided by the window, such as a modem retrain, which might put a hole in the flow. On the other hand it would stop the UDP flow too, so I’d really need to sit down and think about how much it impacted TCP versus UDP to see how much difference it makes.
Of course there ARE the latency spikes you get when a packet is lost and needs to be resent and inserted into the flow. Again, with clever buffering you can do some of that without stopping the forward march of packets. This is why you generally see a TCP latency spike followed by a rush of packets: they are there, they just aren’t being delivered to YOU until the missing one gets through the flow.
As I say, if what you need is reliable, in-order delivery of packets in a reasonably bandwidth-efficient manner, it’s hard to beat all the time and effort that’s gone into TCP. OTOH, if you can accept limitations in some areas, you might improve on it. Chief among these is trading redundancy, in one form or another, for bandwidth. TCP, being designed for pipes of unknown size, doesn’t do that itself.
Speaking of alternate trade-offs, another interesting recent protocol a friend was just telling me about is STCP. STCP loosens in-order guarantees without eliminating them and gains some flow improvements. Data is grouped into large “bunches”; the bunches are in order, but inside a bunch it is not. The canonical example of something that can use STCP is a web browser, where you don’t really care what order your images show up in as long as it all gets there. Another good example might be streaming 3D world data to a client…
There are limits on the window size, and TCP requires ACKs. That puts a limit on the throughput that becomes somewhat independent of bandwidth, for high bandwidths, and primarily dependent on latency.
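In other words, with at most one window outstanding per round trip, throughput can never exceed roughly window_size / RTT, no matter how fat the pipe is. A quick illustration with arbitrary numbers:

```python
# Throughput cap from the window/ACK limit described above.

window_bytes = 64 * 1024          # 64 KiB window
for rtt_ms in (10, 50, 100, 250):
    max_throughput = window_bytes / (rtt_ms / 1000.0)   # bytes per second
    print(f"RTT {rtt_ms:3} ms -> at most {max_throughput / 1024:.0f} KiB/s")
```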
That’s where UDP-based algorithms can have an advantage. There are a few methods using UDP and FEC or some other fancy error handling to get substantial benefits.
The simplest example is perhaps a file transfer that uses the entire file as the “window” in a data carousel: that way ACKs are allowed without ever really waiting for one… the unacknowledged packets are automatically re-sent after you have been through the entire file once, and this continues until finally there is an ack for all the data in the file.
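A toy sketch of that carousel idea (the datagram sending and ACK plumbing are stubbed out to keep it self-contained; all the names are made up):

```python
# Keep cycling through the file's blocks, skipping the ones already
# acknowledged, until every block has been acked.

def carousel_send(blocks, ack_arrived):
    """blocks: list of payloads; ack_arrived(i) -> True once block i is acked."""
    acked = set()
    passes = 0
    while len(acked) < len(blocks):
        passes += 1
        for i, block in enumerate(blocks):
            if i in acked:
                continue
            send_datagram(i, block)          # fire-and-forget UDP send (stub below)
            if ack_arrived(i):               # asynchronous in reality; polled here
                acked.add(i)
    return passes

def send_datagram(i, block):
    pass                                      # stand-in for sock.sendto(...)

# Example: pretend every block's ACK shows up on the second pass.
seen = set()
def fake_ack(i):
    if i in seen:
        return True
    seen.add(i)
    return False

print("passes needed:", carousel_send([b"a", b"b", b"c"], fake_ack))   # 2
```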
Block-based FEC algorithms perform better in real life… but this is primarily a benefit over TCP for bandwidth-sucking bulk transfers. The steady trickle or reasonable flow of a real-time networked game is hopefully not saturating the bandwidth or requiring such high-speed links that latency affects things this way. Generally I suspect latency issues will affect gameplay in much more drastic ways while the bandwidth requirements are still quite low.