[FIXED][TCP] Movement delays

I’ve been trying to fix this annoying little headache for a while now. I’ve got a completely tile-based, networked game (entities/the player move one tile at a time) which runs pretty much fine in Eclipse, but there are two problems that can occur.

  1. Whilst running in Eclipse, a client can sometimes see delays between hitting a key and the player moving if the client has been running for a while.
  2. When running from an exported Jar file, any client running for a while will generally notice the delay.

As far as I can tell this is not related to the speed at which the server is sending/receiving data to/from the client. Essentially the process is like this:

  1. The client’s KeyListener picks up input and sends it to the server using a PrintWriter.
  2. A server input thread picks this input up and looks up the world the player is in, using a key stored in the player class to find the world in a hash list (the server supports multiple worlds (maps) being run simultaneously).
  3. A move method in the world class moves the player in the “world” depending on the key pressed.

As far as I can tell this can take between 20,000 and 200,000 nanoseconds, averaging about 50,000 ns (0.00005 seconds).
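In case it helps, here’s a minimal sketch of the lookup/move flow described above; the class and field names (`World`, `Player`, `worldKey`, `handleInput`) are made up for illustration, not taken from the actual project:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the server-side input flow; names are illustrative.
class Player {
    String worldKey;   // key identifying which world this player is in
    int x, y;          // tile coordinates
    Player(String worldKey, int x, int y) { this.worldKey = worldKey; this.x = x; this.y = y; }
}

class World {
    // Move the player one tile based on the key the client sent.
    void move(Player p, char key) {
        switch (key) {
            case 'w': p.y--; break;
            case 's': p.y++; break;
            case 'a': p.x--; break;
            case 'd': p.x++; break;
        }
    }
}

public class ServerInput {
    static final Map<String, World> worlds = new HashMap<>();

    // Called by the server input thread for each key the client sends.
    static void handleInput(Player p, char key) {
        World w = worlds.get(p.worldKey);   // look up the player's world by its key
        if (w != null) w.move(p, key);      // step one tile
    }

    public static void main(String[] args) {
        worlds.put("overworld", new World());
        Player p = new Player("overworld", 5, 5);
        handleInput(p, 'w');
        System.out.println(p.x + "," + p.y);
    }
}
```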

  4. The serverOutput thread for that client (which is continuously sending data at about 50 fps) then sends a serialized object, containing a string representation of a small chunk of the map around the player, to the ClientInput thread (which receives the data at the same fps as the server output) and stores it.
  5. The client thread handling graphics continuously renders that stored information to a JPanel at roughly 43 fps.
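The per-frame send/receive described above might look roughly like this; a minimal sketch assuming the chunk is just a String serialized with ObjectOutputStream (the method names are illustrative, and a byte buffer stands in for the socket streams):

```java
import java.io.*;

public class ChunkSend {
    // Serialize a chunk the way a serverOutput thread might write it each frame.
    static byte[] serializeChunk(String chunk) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(chunk);   // on the real server this would go to socket.getOutputStream()
            out.flush();
        }
        return buf.toByteArray();
    }

    // Deserialize on the client side, as a ClientInput thread would, then store it.
    static String readChunk(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (String) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        String chunk = "##..@..##";   // tiny string representation of map tiles around the player
        System.out.println(readChunk(serializeChunk(chunk)));
    }
}
```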

As far as I can tell there shouldn’t be any delay here; when the delay occurs, it does not seem to be affected by multiple clients joining/disconnecting.

I apologise that this is not a lot to go on; I’m probably doing something really stupid and blindingly obvious, but I can’t seem to find it.
If you’re willing to take a look at the source code, it can be found here (the two jars in the top directory are the most recently compiled client/server; the server needs to be terminated in Task Manager afterwards):

Source: (Removed due to fix)
(Hit 1 to get rid of the intro pic)
Video showing bug with terrible voice commentary ^^: http://www.youtube.com/watch?v=0qChX1uzQ6Y&feature=youtube_gdata

I’ve only been programming for around 5 months now, so forgive the rough design; I mostly made up how it was going to work as I went along. The reason the world loading is so awkward is that I’m going to implement loading from object streams later, with data already in the entities; I haven’t got round to implementing spritesheets yet either ^^.

Any help or advice on architecture is appreciated :smiley: I don’t expect anyone to look through my code for me, but a helpful pointer on how to solve it would be awesome; I’ve been banging my head against a wall for a while now.

Try calling this method (with true) on every socket you open: http://docs.oracle.com/javase/1.4.2/docs/api/java/net/Socket.html#setTcpNoDelay(boolean). That could be it. Sorry, I don’t want to bother looking through your whole code…
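For anyone finding this later, a self-contained sketch of where the call goes; a loopback socket pair stands in for the real client/server connection:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class NoDelayDemo {
    public static void main(String[] args) throws IOException {
        // Loopback pair just to show where setTcpNoDelay belongs; in the real game
        // you'd call it on the client's socket and on each socket accept() returns.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setTcpNoDelay(true);    // disable Nagle's algorithm on both ends
            accepted.setTcpNoDelay(true);
            System.out.println(client.getTcpNoDelay() && accepted.getTcpNoDelay());
        }
    }
}
```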

It worked!
Thank you kind sir, you’ve just made my day :smiley:
*Shakes fist at minimum posts needed to “appreciate” someone

And after all that trouble I only needed two lines of code :stuck_out_tongue:

Yeah, I remember ripping my hair out about 5 years ago when I added networking to a tile-based MMO game (I was still a newbie, but an MMORPG shouldn’t be that hard to make, right?). It worked perfectly when I tested it locally, but over LAN (cable, not wireless) I randomly got either 0 or 300-1000 ms delays, though I don’t remember any build-up of the problem (I probably didn’t notice it xD). When I finally found out what the problem was and added that magic line… Kind of annoying, but I guess TCP isn’t really made primarily for games.

That method controls Nagle’s algorithm, which is basically an algorithm that improves throughput at the cost of latency. Simplified, it seems to use both a size threshold and a time limit on the data you send, meaning that if you don’t write “enough” data it will buffer it for a few milliseconds, waiting for more data to put in the same TCP packet and avoid the overhead of sending many small packets. I suppose what’s happening for you is that because you’re not sending much data per second, it’s increasing the time it waits before actually sending the data.
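A complementary trick (besides disabling Nagle) is to batch everything for one game tick into a single buffer, so the socket sees one write per tick instead of many tiny ones. A rough sketch, with a byte buffer standing in for the socket stream and a hypothetical `batch` helper:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FrameBatch {
    // Build one frame's worth of messages into a single buffer, so the
    // OS sees one write per tick rather than one per tiny message.
    static byte[] batch(String... messages) throws IOException {
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        for (String m : messages) {
            frame.write(m.getBytes(StandardCharsets.UTF_8));
            frame.write('\n');   // newline-delimited, as with a PrintWriter
        }
        // In the real game: socket.getOutputStream().write(frame.toByteArray());
        return frame.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] frame = batch("pos 5 4", "hp 10");
        System.out.println(frame.length);   // "pos 5 4\n" (8) + "hp 10\n" (6) = 14 bytes
    }
}
```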

That’s pretty much what happened for me :D; thanks for the info. I guess they weren’t thinking of games when they created it, more things like big database transfers or something; the local map I’m sending only really changes a few string values per send.

I wish they’d put stuff like this in the networking trail, but at the same time I should probably have looked through all the documentation anyway ^^

Thank you so much, I don’t think I could’ve found the method on my own :stuck_out_tongue: