General Architecture of Server

Somewhat of a general question but I’m new at this. I have a client applet and a server application set up. It uses TCP sockets and works pretty well.

This is an outline of the server architecture:

The main class creates a server thread that loops on a ServerSocket, accepting new clients, making an object out of each, and adding them to a ConcurrentLinkedQueue of waiting players.
The main class also creates a thread that continually checks the queue of waiting players and, once two are connected, creates a game object out of the two players.

Each client is a new thread that loops on that client's in-stream.

Each game object is also a thread that handles the protocol back and forth between the clients during game play.
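Roughly, that setup could be sketched like this (Player and Game are hypothetical stand-ins for the real client and game classes):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the architecture described above; not the actual code.
public class GameServer {
    // stand-ins for the real client/game classes
    static class Player extends Thread {
        final Socket socket;
        Player(Socket socket) { this.socket = socket; }
        public void run() { /* loop on the client's in-stream */ }
    }
    static class Game extends Thread {
        final Player p1, p2;
        Game(Player p1, Player p2) { this.p1 = p1; this.p2 = p2; }
        public void run() { /* relay protocol between the two players */ }
    }

    final ConcurrentLinkedQueue<Player> waiting = new ConcurrentLinkedQueue<>();

    // acceptor thread: loops on the ServerSocket
    void acceptLoop(ServerSocket serverSocket) throws IOException {
        while (true) {
            Socket client = serverSocket.accept(); // blocks until a client connects
            Player p = new Player(client);
            p.start();                             // one thread per client
            waiting.add(p);
        }
    }

    // matchmaker thread: repeatedly tries to pair up waiting players
    void matchmakerLoop() {
        while (true) {
            Game g = tryMatch();
            if (g != null) g.start();              // one thread per game
        }
    }

    // pairs the two oldest waiting players, or returns null if fewer than two
    Game tryMatch() {
        if (waiting.size() >= 2) {
            return new Game(waiting.poll(), waiting.poll());
        }
        return null;
    }
}
```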

As I said before, it works pretty well as is, but I have two questions:

  1. I’m worried that I’m creating too many threads and that the server may crash if it has to handle a lot of players. Is the architecture I have set up a solid one, or is there a better approach?
  2. What’s a good way to track whether or not a client is still connected? Clients close their browsers, reload, etc.; from a server back-end standpoint I don’t need to be manipulating client objects that no longer exist.

Hopefully I explained myself pretty well, but essentially I just need some guidance on the general setup of a server designed to handle many clients and many games going on at one time.

  1. You’ve described the acceptor/reactor pattern; you can google that for more info. Your server shouldn’t crash from too many players, but it may slow down. Unless you have thousands of players, you’re probably fine. You can use NIO or an NIO library (see KryoNet in my signature). This can simplify threading by using only one thread for network stuff, but using NIO is a major pain. Note you don’t need NIO to scale; you can scale with the one-thread-per-client approach you are doing, especially with NPTL.

That said, it doesn’t sound ideal to use a thread solely to see if two clients are connected. You should know that the moment the second client connects. Though if you have this working it probably isn’t worth rewriting.

You may also be able to eliminate using a thread per game. Maybe you can share thread safe objects/code between the two client threads.
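For example, the pairing can happen right in the accept path, with no polling thread at all. A sketch (Player and Game are hypothetical stand-ins):

```java
import java.net.Socket;

// Sketch: pair players in the acceptor thread itself, so there is no
// thread that busy-checks the waiting queue. Player/Game are stand-ins.
public class Matchmaker {
    static class Player { final Socket socket; Player(Socket s) { socket = s; } }
    static class Game {
        final Player p1, p2;
        Game(Player p1, Player p2) { this.p1 = p1; this.p2 = p2; }
    }

    private Player waiting = null; // only the acceptor thread touches this

    // Called from the accept loop for each new connection; returns a
    // Game the moment the second player shows up, else null.
    Game onConnect(Player p) {
        if (waiting == null) {
            waiting = p;            // first player waits for an opponent
            return null;
        }
        Game g = new Game(waiting, p);
        waiting = null;
        return g;
    }
}
```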

  2. TCP is a stateful connection. It does magic to detect when the other end has been smoked and throws an exception on your next read/write, or if you are blocked reading/writing.

The simplest form of server is really just an I/O multiplexer that accepts messages from clients and retransmits them to the appropriate sets of clients.

If you use non-blocking I/O you can do everything in one thread, and IMO that is both easier to predict the performance of and easier to develop and maintain than a server with a thread per client.
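For illustration, a minimal sketch of that single-threaded approach with java.nio (port number, buffer size, and the `pump` structure are all arbitrary choices, not anyone's actual code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal single-threaded NIO sketch: one Selector watches the listen
// socket and every client, so no thread-per-client is needed.
public class NioServer {
    final Selector selector;
    final ServerSocketChannel server;
    final ByteBuffer buffer = ByteBuffer.allocate(4096);

    NioServer(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port)); // port 0 = pick any free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    // One pass of the event loop; a real server calls this forever from a
    // single thread. Returns the number of bytes read in this pass.
    int pump() throws IOException {
        int bytesRead = 0;
        selector.select(100); // wait up to 100 ms for ready channels
        Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
        while (keys.hasNext()) {
            SelectionKey key = keys.next();
            keys.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                SocketChannel client = (SocketChannel) key.channel();
                buffer.clear();
                int n = client.read(buffer); // never blocks
                if (n == -1) {               // clean disconnect
                    key.cancel();
                    client.close();
                } else {
                    bytesRead += n;
                    buffer.flip();
                    // parse the bytes and route them to the right game here
                }
            }
        }
        return bytesRead;
    }
}
```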

@Nate

So, just as an example, what exception would the server “see” if an applet was closed? The game is 1 vs 1, so in the event that a client has left or lost its connection, the game should not continue.

@ddyer

In essence my server is just multiplexing the I/O. All the game thread does is continuously check whether either player has sent anything and, in the event that they have, send it to the other player.

public class game extends Thread
{
	player p1, p2;
	boolean playing = true;
	String message = null;

	public game(player player1, player player2)
	{
		p1 = player1;
		p2 = player2;
	}

	public void run()
	{
		// tell each client which side it is playing
		if(everyoneHere())
		{
			p1.send("player1");
			p2.send("player2");
		}

		while(playing && everyoneHere())
		{
			// relay "turn" messages from p1 to p2...
			message = p1.getMessage();
			if(message != null && message.startsWith("turn"))
			{
				p2.send("turn");
			}

			// ...and from p2 to p1
			message = p2.getMessage();
			if(message != null && message.startsWith("turn"))
			{
				p1.send("turn");
			}

			// without a short sleep this loop busy-waits and pins a CPU core
			try
			{
				Thread.sleep(10);
			}
			catch(InterruptedException e)
			{
				break;
			}
		}

		System.out.println("game over");
	}

	public boolean everyoneHere()
	{
		return p1.isConnected() && p2.isConnected();
	}
}

I think I may be missing something, though. I did not know there was such a thing as non-blocking I/O; hence my need to make a thread for each player. The player thread is blocked on in.readLine(), and in the event that something is sent it takes the message and puts it in a queue. The game thread then checks each player's queue for messages. How do you implement non-blocking I/O?

You will see IOException; specifically, the JDK does: throw new SocketException("socket closed");
Actually, sometimes the connection simply times out, if a timeout is set. Otherwise it will be in limbo. Local connections will definitely throw an IOException, but once you send data over the interwebs it’s a lot less predictable.

At some point you are calling InputStream#read on the socket. If the connection has been smoked, you should get an exception. What do you mean by limbo? What does the InputStream#read call do?

If the connection is dropped (no clean disconnect) during a read(), and no timeout is set, your call will block indefinitely. I mean, how would you be able to tell the difference between a really long read delay (like hours) and a lost connection where nothing is received? There is no keep-alive I/O.

Ok, I think I understand. My game is turn based and each player has a time limit per turn, so it sounds like I should implement some sort of read timeout. Something like twice the length of a turn?

Sorry for being a noob, but how would I implement a timeout? This is how I’m looping on in.readLine():

public void run()
	{
		while(connected)
		{
			try
			{
				String line = in.readLine();
				if(line == null) // end of stream: the client closed the connection
				{
					connected = false;
					closeConnection();
					break;
				}
				message = line.trim();
				if(message.startsWith("closing"))
				{
					if(opponent != null)
						opponent.handleQuit();
					connected = false;
					closeConnection();
				}
				else if(opponent != null)
					opponent.send(message);
			}
			catch(IOException e)
			{
				System.out.println("I/O exception");
				connected = false;
				closeConnection();
			}
		}
	}

Check out the JavaDoc for Socket. You can set the timeout with Socket.setSoTimeout(milliseconds).
Waiting 2x the turn time doesn’t sound too bad; another way is to have the client send a stay-alive message every so often to reset the timeout.
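Sketched out, it looks something like this (the helper and the timeout value are just for illustration; setSoTimeout and SocketTimeoutException are the real Socket APIs):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch: a read timeout on the client socket. If no line arrives within
// the timeout, readLine() throws SocketTimeoutException instead of
// blocking forever.
public class TimedReader {
    public static String readLineWithTimeout(Socket socket, int timeoutMillis)
            throws IOException {
        socket.setSoTimeout(timeoutMillis); // applies to all subsequent reads
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        try {
            return in.readLine(); // null means the other end closed cleanly
        } catch (SocketTimeoutException e) {
            return null; // treat "silent for too long" as a disconnect too
        }
    }
}
```

Catching SocketTimeoutException separately from IOException is the point: it lets you decide that a too-quiet client counts as gone.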

Another question… kind of off topic but if I wanted to put my game out on the web are there any hosting sites yall would suggest? Also what all do I need? I have an html page with an applet that would need to be served up to all the clients and then a java application that would need to be running on the server. I know most hosting sites will allow the html page but for the server application do I need root access or anything specific?

Also is google app engine worth looking into? Will it work for what I’m trying to do?

That doesn’t sound right… TCP should know the connection is smoked and throw an exception even if there is no timeout on the read.

Welcome to reality :wink: TCP can only figure out the connection is smoked on a write(), not on a read().

Again, how would you know whether the server intentionally isn’t sending bytes for a long time, or the server suddenly lost power? Or a switch blew up… if the ‘remote end’ doesn’t send a FIN packet, you’ll never know the TCP connection is lost (unless you write(), which requires an ACK from the other end).

I’m surprised you don’t know about this… have you never had TCP timeouts where the server was up and running, but according to its socket state the connection was lost, while the client thought it was still connected (or vice versa)? Of course, when testing locally these things never happen, but ‘in the real world’ it’s a real problem.

What he said. There is a reason for PING messages :wink:

Hmm. Nope, never had the pleasure of losing my connection that spectacularly. Even when a process gets killed the socket is closed properly. You’d need to pull a plug or otherwise have some catastrophe, as you mentioned.

KryoNet does send keep alive messages. The default for TCP is 59000ms and UDP is 19000ms.

Not at all. This is quite common on the internet.

I had this tcp-tunnel setup through Spain, and it would ‘catastrophically’ fail a couple of times per hour.

So Riven how do you handle that sort of thing? Yall are scaring me. Is TCP really that unreliable? I guess I should implement a timeout but are my players going to be disconnecting left and right?

Nah, make your protocol agnostic of TCP…

Create a TCP channel that handles both your traffic and ping/pong behind the scenes (interleave the streams). If the ping fails or the pong times out, reconnect. Again, this is all behind an abstraction layer. At the game-code level you’re simply sending and receiving bytes (or high-level game packets), regardless of all the mess that occurs behind the scenes.

It’s fairly easy. The only thing you need to do upon reconnect is determine which packets got lost, or renegotiate some basic state.
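As a sketch, the bookkeeping behind that abstraction might look like this (the class, method names, and numbers are all made up; the actual sending of ping/pong messages is omitted):

```java
// Sketch of the heartbeat bookkeeping inside the connection layer.
// The layer sends "ping" on an interval and records the time of each
// "pong"; once no pong has arrived within the timeout, it declares the
// connection dead and reconnects. The game code never sees any of this.
public class Heartbeat {
    private final long timeoutMillis;
    private long lastPongMillis;

    public Heartbeat(long timeoutMillis, long nowMillis) {
        this.timeoutMillis = timeoutMillis;
        this.lastPongMillis = nowMillis;
    }

    // Call when a "pong" (or any traffic) arrives from the other end.
    public void onPong(long nowMillis) {
        lastPongMillis = nowMillis;
    }

    // The connection layer checks this each tick; true means reconnect.
    public boolean isDead(long nowMillis) {
        return nowMillis - lastPongMillis > timeoutMillis;
    }
}
```

Any packets not yet acknowledged when isDead() fires are the ones to resend (or the state to renegotiate) after reconnecting.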

Oh, pity, and here I was thinking it was super easy and the client and server always got interruptions if the other side disappeared (I only tested my socket on my home intranet). I guess I’ll have to put some ping-pong mechanism in there every minute or so.

Thanks for the heads up!

I’d do a ping/pong every (few) second(s). Not to keep the tcp session active, but to be notified immediately if something breaks.