Good latency proxy? (TMnetsim causing stream corruption!)

I’m currently writing an unofficial multiplayer mod for the game Starsector. (great game btw! and a bargain for $10)

For simplicity* I’m making the game simulation (and its floating-point math) deterministic, syncing the peers, and transmitting just user-originated events using a lock-step model, with an input delay appropriate to the latency between peers (rough sketch of the scheme below).
*(Simplicity is of paramount importance because the entire mod is going to be bytecode injected, meaning there are practical limits on how much of the codebase I can interact with.)
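Roughly, the frame-scheduling side of what I have in mind looks like the sketch below. (Illustrative only, not the mod’s actual code: the class, the fixed four-frame input delay, and the String event type are all made up for the example.)

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class LockstepScheduler {

	// In the real thing this would be derived from the measured latency between peers.
	static final int INPUT_DELAY = 4;

	private final int peerCount;
	// frame number -> (peer id -> the events that peer issued for that frame)
	private final Map<Long, Map<Integer, List<String>>> inputsByFrame = new HashMap<Long, Map<Integer, List<String>>>();
	private long currentFrame = 0;

	LockstepScheduler(int peerCount) {
		this.peerCount = peerCount;
		// Pre-seed the first INPUT_DELAY frames with empty packets so the simulation can start at once.
		for (long f = 0; f < INPUT_DELAY; f++) {
			for (int p = 0; p < peerCount; p++) {
				onInputPacket(p, f, Collections.<String>emptyList());
			}
		}
	}

	// Every peer sends exactly one (possibly empty) packet per frame, tagged this many frames ahead.
	synchronized long targetFrameForLocalInput() {
		return currentFrame + INPUT_DELAY;
	}

	// Record a peer's packet (local or received over the network) for a future frame.
	synchronized void onInputPacket(int peerId, long frame, List<String> events) {
		Map<Integer, List<String>> forFrame = inputsByFrame.get(frame);
		if (forFrame == null) {
			forFrame = new HashMap<Integer, List<String>>();
			inputsByFrame.put(frame, forFrame);
		}
		forFrame.put(peerId, events);
	}

	// Advance the deterministic simulation only once every peer's packet for the current frame is in.
	synchronized boolean tryAdvanceFrame() {
		Map<Integer, List<String>> inputs = inputsByFrame.get(currentFrame);
		if (inputs == null || inputs.size() < peerCount) {
			return false; // stall until the slowest peer's packet arrives
		}
		// applyInputs(currentFrame, inputs); // hypothetical hook into the deterministic game step
		inputsByFrame.remove(currentFrame);
		currentFrame++;
		return true;
	}
}

If a peer’s packet hasn’t arrived by the time its frame is due, the simulation simply stalls, which is why the input delay needs to track the actual latency between peers.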

To this end I created a prototype to test how well the lock-step model holds up at various latencies, using the tool TMnetsim as a latency proxy.
It seemed to work fine at low latencies, but exhibited weird data corruption at higher latencies.
After much hair-pulling I determined it couldn’t possibly be my code, and wrote the code below as a sanity test to confirm that TMnetsim is a useless piece of crap.

So 2 questions:

  1. The below code should work perfectly, right? (obviously with the relevant ports redirected)
  2. As TMnetsim appears to be a piece of unreliable crap, can anyone suggest a good tool for introducing semi-realistic latency between connections? (I could of course write one… but given the plethora of multiplayer games surely such tools already exist?!)

import java.io.IOException;
import java.net.Socket;

public class Client {

	public static void main(String[] args) throws IOException {
		Socket s = new Socket("localhost", 9665); // connect through the latency proxy, which redirects to the server's port

		new Thread(new SocketReader(s)).start();

		new Thread(new SocketWriter(s)).start();
	}

}

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {
	public static void main(String[] args) throws IOException {
		Socket s = new ServerSocket(9765).accept(); // accept the connection forwarded by the latency proxy

		new Thread(new SocketReader(s)).start();

		new Thread(new SocketWriter(s)).start();
	}

}


import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

class SocketReader implements Runnable {

	Socket s;

	public SocketReader(Socket s) {
		this.s = s;
	}

	@Override
	public void run() {
		try {
			DataInputStream dis = new DataInputStream(s.getInputStream());
			int counter = 0;

			// Each received int must match the locally kept counter; any mismatch means
			// the byte stream was corrupted or reordered somewhere along the way.
			while (true) {
				int val = dis.readInt();
				if (val != counter) {
					throw new RuntimeException("stream corruption! Expected: " + counter + " received: " + val);
				}
				counter++;
			}
		} catch (IOException e) {
			// A closed or broken connection ends the test.
			e.printStackTrace();
		}
	}

}
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class SocketWriter implements Runnable {

	Socket s;

	public SocketWriter(Socket s) {
		this.s = s;
	}

	@Override
	public void run() {

		try {
			DataOutputStream dos = new DataOutputStream(s.getOutputStream());
			int counter = 0;
			
			// Write a strictly increasing sequence of ints as fast as possible;
			// TCP should deliver them to the reader in exactly this order.
			while (true) {
				dos.writeInt(counter);
				counter++;
			}
		} catch (IOException e) {
			// A closed or broken connection ends the test.
			e.printStackTrace();
		}
	}

}

I agree, the code listed should always be in sync.

I am not familiar with TMnetsim, but if I were to simulate latency I would do it very simply: extend Socket and override getInputStream() to return a wrapped parent.getInputStream(). The wrapper calls the inner InputStream's methods but waits for the desired delay before returning the result.
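
Something along these lines, for example (untested sketch; the DelayedSocket name and the fixed per-read delay are just for illustration):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

class DelayedSocket extends Socket {

	private final long delayMillis;

	DelayedSocket(String host, int port, long delayMillis) throws IOException {
		super(host, port);
		this.delayMillis = delayMillis;
	}

	@Override
	public InputStream getInputStream() throws IOException {
		// Wrap the real input stream so every read waits before handing its result back.
		return new FilterInputStream(super.getInputStream()) {
			@Override
			public int read() throws IOException {
				int result = super.read();
				delay();
				return result;
			}

			@Override
			public int read(byte[] b, int off, int len) throws IOException {
				int result = super.read(b, off, len);
				delay();
				return result;
			}

			private void delay() {
				try {
					Thread.sleep(delayMillis);
				} catch (InterruptedException e) {
					Thread.currentThread().interrupt();
				}
			}
		};
	}
}

A fixed delay per read also throttles throughput, so it is only a crude stand-in for real network latency; a fancier simulator would add jitter and packet loss as well.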

What does it mean for the code to be “bytecode injected”? Sounds painful :wink:

bytecode instrumentation / code weaving.
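
(For the curious: one common way to do that in Java is a java.lang.instrument agent plus a bytecode library such as ASM. Very rough illustration below; the ModAgent name is made up and this isn’t necessarily how the mod itself is wired up.)

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class ModAgent {

	// Launched with -javaagent:<agent jar>; the transformer sees every class as it is loaded
	// and may return rewritten bytecode for the ones the mod hooks.
	public static void premain(String args, Instrumentation inst) {
		inst.addTransformer(new ClassFileTransformer() {
			@Override
			public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
					ProtectionDomain protectionDomain, byte[] classfileBuffer) {
				// Return modified bytecode for targeted classes, or null to leave a class untouched.
				return null;
			}
		});
	}
}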