TCP Packets

The checksum is 16-bit, so it isn’t foolproof; it’s just very unlikely that multiple errors occur in a corrupt packet in such a way that they don’t invalidate the checksum. So unlikely that you normally never need to worry about including your own checksum. However, that would be too easy, so apparently they made the protocol’s checksum optional!

UDP already has the packet length in the header, so there’s no point in also sending it in the payload. It does make sense to send the packet length with TCP, because TCP is stream-based and you need some way to frame your messages, not because it helps detect corrupt packets.
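To make the framing point concrete, here’s a minimal sketch of length-prefixed messages over a TCP stream; the class name and the 4-byte length prefix are illustrative choices, not something prescribed above:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Minimal sketch of length-prefixed framing over a TCP stream.
// The writer sends a 4-byte length header followed by the payload;
// the reader uses the header to know where each "message" ends.
public final class Framing {

    public static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length); // 4-byte big-endian length prefix
        out.write(payload);
        out.flush();
    }

    public static byte[] readFrame(DataInputStream in) throws IOException {
        int length = in.readInt();     // blocks until the 4 header bytes arrive
        byte[] payload = new byte[length];
        in.readFully(payload);         // blocks until the whole payload arrives
        return payload;
    }
}
```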

I agree with elias4444 that you can’t rely on the network to be fast, and you should definitely limit the rate at which you periodically send. Keeping that in mind, each send has overhead, so you should send a reasonable amount of data all at once (if needed, and if possible without unacceptable latency). E.g., and someone please correct me if I’m wrong, sending just a few bytes and sending a thousand bytes is going to take roughly the same time. Once you go past the MTU size (typically ~1400 bytes), TCP has to break your data into multiple pieces, and only then do you really get penalized for including more data.
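As a rough illustration of that ‘send in batches’ idea, here’s a hedged sketch that buffers roughly one MTU worth of small messages and flushes them in one go; the class name and message layout are made up for the example:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Sketch: coalesce many small messages into one larger send by buffering
// them and flushing once per tick, instead of flushing per message.
public final class BatchedSender {
    private final DataOutputStream out;

    public BatchedSender(Socket socket) throws IOException {
        // Buffer roughly one MTU worth of data before it hits the wire.
        this.out = new DataOutputStream(
                new BufferedOutputStream(socket.getOutputStream(), 1400));
    }

    public void queuePosition(short entityId, float x, float y) throws IOException {
        out.writeShort(entityId); // buffered in memory, no syscall yet
        out.writeFloat(x);
        out.writeFloat(y);
    }

    public void endOfTick() throws IOException {
        out.flush(); // one write to the kernel for the whole batch
    }
}
```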

Well, no. Split/merged TCP packets are allowed by the spec, and have nothing to do with a ‘corrupting router’. I fear most of your observations are from a general misunderstanding of how TCP works.

If you want to send N bytes and receive N bytes, you basically have to apply your own logic in your TCP handling code. You never have to drop packets (received bytes) in a TCP stream, because you are guaranteed to receive every byte that is sent. The only thing that is not guaranteed is the number of bytes returned by each bulk read operation.
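That ‘own logic’ usually amounts to a read loop. A minimal sketch, assuming a plain blocking InputStream (DataInputStream.readFully does essentially the same thing for you):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public final class StreamUtil {
    // Read exactly n bytes: a single read() may return fewer bytes than
    // requested, so loop until the buffer is full.
    public static byte[] readExactly(InputStream in, int n) throws IOException {
        byte[] buf = new byte[n];
        int off = 0;
        while (off < n) {
            int r = in.read(buf, off, n - off);
            if (r == -1) {
                throw new EOFException("stream closed after " + off + " of " + n + " bytes");
            }
            off += r;
        }
        return buf;
    }
}
```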

Could you elaborate? Hardly anybody uses NIO, because it’s so darn hard. Blocking I/O is ‘good enough’ for just about every application.

I think it shows that you really don’t know what you’re talking about when it comes to TCP. TCP is a stream protocol, not a packet protocol.

post deleted.

I realize you are serious about this and have put a lot of effort into it, but what you’re saying just doesn’t make sense. I’m not trying to be offensive or arrogant or whatever, but what you have been experiencing must have been the result of bugs in your own code.

If that ‘corrupt router’ was as bad as you say it was, it would have been useless: every application that makes TCP connections simply assumes the stream is not corrupted, so a router that bad would have sent all TCP traffic berserk, making things like browsing/mailing/messaging impossible. I really wonder how he could have downloaded your game with such a bad piece of hardware.

I really think there is some syndrome among ‘inexperienced’ people who stumble upon a bug: if they can’t find the bug in their own code, they tend to blame the libraries that are in use throughout the world. It’s like saying there is a bug in ArrayList – it’s ridiculous.

With 12-16 users there is no need whatsoever to use NIO for performance. NIO is actually quite a bit slower than blocking I/O. NIO only shines because it requires fewer threads, which you only notice when dealing with more than ~100 concurrent connections.

Just saying… I run server software handling 300 new TCP connections per second, and I’ve never ever had to drop a ‘PAYLOAD’ or encountered any corrupted data in a TCP stream. Sure, connections get dropped, but there’s no corruption, let alone corruption causing serious problems in a tiny multiplayer session of 16 clients.

Trust me, every problem you had was caused by your own code. You should be grateful too, because that means you can fix it.

How about you guys move this into PMs instead of further derailing CyanPrime’s Showcase post? I can also split this topic if you so desire.

…post deleted.

Sorry about your thread Cyan. Hopefully the moderators can clean up my mess.

I learned in college that the blame for miscommunication must fall on the shoulders of the speaker, so I take full responsibility. I’ve deleted my relevant posts.

Splitting would be nice. Thanks!

It has been split. Now go ahead and debate all you want!

Let the public flogging continue!!! :o

The nice thing about NIO is the ability to process the data in the main game thread without blocking it. However, it takes a whole bunch of classes and lots of trial (and especially error) to make it work. All that selector stuff is hard work. Conventional blocking I/O is easier to set up, but the resulting data needs careful handling to avoid synchronisation problems, as you need a receive thread as well as the main thread. More trial (and once again copious amounts of error) on my part.

plz dont be off topic this thread is about chiken, nothing else, no ?

Yeah.

So Demonpants, could you please split this thread right where Alan_W started?

Offtopic:
Alan_W, you’re absolutely right, too. It was simply my advice, because not only is NIO a can of worms, the architecture is also hard to grasp for most people. Doing all I/O on threads and pushing all ‘I/O events’ into a queue, where one thread pops off the events, is a simple way to ensure there are no synchronization problems.
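A minimal sketch of that pattern, assuming the length-prefixed frames discussed earlier; the class name and the choice of LinkedBlockingQueue are just illustrative:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One thread blocks on the socket and pushes complete messages into a
// queue; the game thread drains the queue, so no shared mutable state
// is ever touched by two threads at once.
public final class NetReader implements Runnable {
    private final DataInputStream in;
    private final BlockingQueue<byte[]> inbox = new LinkedBlockingQueue<>();

    public NetReader(Socket socket) throws IOException {
        this.in = new DataInputStream(socket.getInputStream());
    }

    @Override
    public void run() {
        try {
            while (true) {
                int len = in.readInt();   // length-prefixed frames
                byte[] msg = new byte[len];
                in.readFully(msg);
                inbox.put(msg);           // hand off to the game thread
            }
        } catch (IOException | InterruptedException e) {
            // connection closed or thread stopped; the game will notice the silence
        }
    }

    // Called from the game thread each tick; never blocks.
    public byte[] poll() {
        return inbox.poll();
    }
}
```

Start it with new Thread(new NetReader(socket)).start(); and drain poll() from the game loop each tick until it returns null.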

Remember that the TCP guarantee of reliable delivery only operates at the TCP stream level, where the checksum is checked or generated. There’s a lot of software between your application and the low-level TCP driver, and your only guarantee there is hope and best wishes.

I run an application that keeps its own checksum at the application level, and still get checksum errors at rare intervals, currently running at one error per 200 million transactions. Note that this is an “all causes” count, not limited to transmission errors. Still, it’s sobering to consider that even “reliable” channels are not.
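For anyone who wants to try an application-level checksum like that, a small sketch using java.util.zip.CRC32; how you frame the value on the wire is up to you:

```java
import java.util.zip.CRC32;

public final class AppChecksum {
    // CRC32 over the payload; the sender appends this value after the
    // payload, and the receiver recomputes and compares, rejecting (or
    // re-requesting) the message on a mismatch.
    public static long checksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue();
    }
}
```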

Don’t forget RAM and disk errors.

Which is why things like ZFS are important, no?

Please don’t remove my simplicity plz…

Any more than it already is.

This is something that recently scared me, when I discovered the ‘Resource Monitor’ on Windows showing hardware memory errors… so many errors, up to 12/min for some processes :o … It scared me so much that I just closed the monitor window! While it stays closed I can’t see any errors and all seems to be OK :slight_smile:

Do you guys think it is OK to do NIO selects on the game’s render thread? I have been running a separate thread to do network reads/writes, then queuing any data read to be processed in the game thread. Which approach would be preferred? I was worried that the network I/O might affect the framerate.

In ‘C land’ network code is (or used to be?) part of the main loop. Everything is easier with only one thread.

It certainly makes the code simpler… but network I/O is rather ‘expensive’ due to interacting with the kernel, so why not give it its own thread? In the end it doesn’t really matter, as you probably have one low-traffic connection to the server in the typical multiplayer game.
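For the single-thread approach, here’s a hedged sketch of polling NIO from the game loop, assuming the SocketChannel was configured non-blocking and registered with the selector during setup:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public final class NetPump {
    // Call once per frame from the game/render thread. selectNow() never
    // blocks, so a quiet network costs almost nothing per frame.
    public static void pump(Selector selector, ByteBuffer buf) throws IOException {
        if (selector.selectNow() == 0) {
            return; // nothing ready, back to rendering
        }
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isReadable()) {
                SocketChannel ch = (SocketChannel) key.channel();
                buf.clear();
                int n = ch.read(buf); // non-blocking: reads whatever has arrived
                if (n == -1) {
                    key.cancel();
                    ch.close();
                    continue;
                }
                buf.flip();
                // ...hand buf to a per-connection frame assembler here...
            }
        }
    }
}
```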