How does JGO use long polling?

I’m wondering exactly how the “there are X new posts” notification at the top of this very site works.

I know the basics: it uses AJAX to request the number of new posts, and I think it uses asynchronous long polling on the server side to return the number only when it changes.

But on the server side, doesn’t that mean there are a bunch of threads (or maybe a thread pool) waiting on that update? More to the point: if I refresh the page 500 times (sorry Riven), doesn’t that mean there are 500 threads (or thread pool tasks) waiting around, 499 of which are now pointless, since I’ll only ever receive the last one? Isn’t that a potential memory leak? If I get a bot to constantly refresh the page and keep creating those waiting objects, won’t it eventually break?

I’m asking about this site, but what I’m actually trying to figure out is the correct way to do this in Spring. I’m not sure how different JGO’s PHP server is from a Java server, but I’m hoping the above questions are general enough to apply to my situation as well.

I read the HTTP header, parse some shit, and put the Socket in a List. The socket is the only resource I need to retain in order to send a response when a notification has to go out. When an event occurs, all objects in the list are iterated, the list is cleared, and responses are written to every socket that is still connected. New connections are put into that list again. The thread pool reading the requests and writing the responses is tiny.

This all happens right at the socket level. There are no servlets, no PHP scripts, just a List that holds pending requests.
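In rough Java it looks something like this (just a sketch of the idea, not the actual JGO code; class and method names are made up):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: park raw sockets in a list, flush them all when an event fires.
public class CrudeLongPoller {
    private final List<Socket> pending = new ArrayList<>();

    // Called by the (tiny) accept/read loop once the request headers have been consumed.
    public synchronized void park(Socket socket) {
        pending.add(socket);
    }

    // Called when something changed (e.g. a new post id was detected).
    public synchronized void fireEvent(String body) {
        byte[] payload = body.getBytes(StandardCharsets.UTF_8);
        String header = "HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/plain\r\n"
                + "Content-Length: " + payload.length + "\r\n"
                + "Connection: close\r\n\r\n";
        for (Socket socket : pending) {
            try (OutputStream out = socket.getOutputStream()) {
                out.write(header.getBytes(StandardCharsets.UTF_8));
                out.write(payload);
            } catch (IOException disconnected) {
                // the client went away while it was parked; nothing to do
            } finally {
                try { socket.close(); } catch (IOException ignored) {}
            }
        }
        pending.clear(); // new connections fill the list again
    }
}
```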

That’s interesting, thanks for the reply.

That’s a little lower level than I assumed, but I guess the question is the same: won’t you end up with a big list of sockets? Wouldn’t it be possible (or even easy) for bad guys to overload that list until something breaks? And aren’t you keeping around references to a bunch of sockets that have been disconnected, since users make a request every time they load a page? Am I missing something, or are those references so tiny that it’s not a real concern?

Apologies for the dumb questions, I’ve just been trying to wrap my head around this stuff for a few days now. It’s been an unproductive weekend!

Apache is easier to overload than this crude long-poller, so why would I even bother? You can do pretty effective damage control at the firewall level, so why burden the application with it?

If you look at the raw network traffic you’ll see I do slightly more than this behind the scenes, but in the end the entire setup is mostly single threaded. Naturally this thread manages a JDBC connection that polls MySQL for the latest post id at 1 Hz. Once that id changes, it is treated as an event, and the responses are sent.
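The detector loop is about as dumb as it sounds; something along these lines (again only a sketch, reusing the hypothetical CrudeLongPoller from above):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the 1 Hz detector loop; it owns the single JDBC connection.
public class NewPostDetector implements Runnable {
    private final Connection connection;
    private final CrudeLongPoller poller; // the long-poller from the earlier sketch
    private long lastSeenId = -1;

    public NewPostDetector(Connection connection, CrudeLongPoller poller) {
        this.connection = connection;
        this.poller = poller;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                long latestId = queryLatestPostId();
                if (lastSeenId != -1 && latestId != lastSeenId) {
                    // the id changed: treat it as an event and flush all parked sockets
                    poller.fireEvent(String.valueOf(latestId));
                }
                lastSeenId = latestId;
                Thread.sleep(1000L); // poll MySQL at 1 Hz
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (Exception e) {
                e.printStackTrace(); // keep the loop alive on transient DB errors
            }
        }
    }

    private long queryLatestPostId() throws Exception {
        try (Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("SELECT MAX(ID_MSG) FROM jgo_msg")) {
            return rs.next() ? rs.getLong(1) : -1;
        }
    }
}
```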

My HTTP long-polling system works slightly differently. I know which users are logged into the game, so I collect updates until the next request comes in (normally there is always a connection open; in practice a reconnect might take a few millis). The connection knows its user ID and a special session key (so that nobody can hijack a player’s session).

To suspend the long-polling sockets I use Jetty continuations or the new Servlet 3 features and create an async context. That returns the underlying thread to the pool until I need the socket to send results back. Most of the time I don’t have to hold the sockets for long anyway, since game content changes rapidly, but the update rate depends heavily on the game.
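With plain Servlet 3 the suspension part looks roughly like this (a sketch with made-up names, not my actual code):

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: suspend long-poll requests with Servlet 3 async support.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // parked requests, waiting for the next batch of updates
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync(); // frees the container thread
        ctx.setTimeout(30_000L);             // give up after 30 s and let the client reconnect
        waiting.add(ctx);
    }

    // Called by whatever produces updates (game loop, DB poller, ...).
    public void pushUpdate(String json) {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write(json);
            } catch (IOException disconnected) {
                // client went away while suspended
            } finally {
                ctx.complete(); // hands the response back to the container
            }
        }
    }
}
```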

When the client sends an event to the server (over the second connection), the server immediately responds with an event ID that the client can then wait for among the long-polling events.
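The event connection is basically just this (sketch only, illustrative names):

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of the second connection: the client POSTs an action and immediately gets
// back an event id it can watch for on the long-polling channel.
@WebServlet("/action")
public class ActionServlet extends HttpServlet {

    private final AtomicLong nextEventId = new AtomicLong();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long eventId = nextEventId.incrementAndGet();
        // ... queue the actual work, tagged with eventId, for the update loop ...
        resp.setContentType("text/plain");
        resp.getWriter().write(Long.toString(eventId)); // client waits for this id in the poll responses
    }
}
```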

Riven: That’s even more interesting. So behind the scenes you’re still doing “active polling” (or whatever it’s called). That’s probably easier than trying to add a hook into the message posting system (especially for complicated sites with multiple user actions), but I wonder if you’re concerned at all about making unnecessary calls to the database, especially if a good portion of the time there are no connected users? (I assume there are a few hours each day where nobody is connected, though that might be a false assumption.)

noctarius: That sounds closer to what I’m actually doing. I’m using Spring’s DeferredResult, which I believe uses Servlet 3’s async stuff. You make an interesting point about incorporating session data into the data structure that holds waiting objects. That way you can do it by session instead of by request: if that session then makes a new request, you can probably throw away any old requests by that session.

That will probably not be correct if the user has multiple pages open though. Hmph.
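For what it’s worth, here is roughly what I mean by DeferredResult keyed by session (just a sketch with hypothetical names, not production code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.http.HttpSession;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

// Sketch: one pending DeferredResult per session; a newer request from the same
// session replaces (and answers) the older one.
@RestController
public class NotificationController {

    private final Map<String, DeferredResult<Integer>> pendingBySession = new ConcurrentHashMap<>();

    @GetMapping("/newPosts")
    public DeferredResult<Integer> newPosts(HttpSession session) {
        DeferredResult<Integer> result = new DeferredResult<>(30_000L, 0); // 30 s timeout, default 0
        String sessionId = session.getId();

        DeferredResult<Integer> previous = pendingBySession.put(sessionId, result);
        if (previous != null) {
            previous.setResult(0); // the old request is pointless now; answer it and let it go
        }
        result.onCompletion(() -> pendingBySession.remove(sessionId, result));
        return result;
    }

    // Called by whatever detects new posts.
    public void notifyNewPosts(int count) {
        for (DeferredResult<Integer> pending : pendingBySession.values()) {
            pending.setResult(count);
        }
    }
}
```

Which also makes the multi-tab problem concrete: two tabs share one session ID, so the second tab’s poll would kick out the first one’s.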

Active polling through a JDBC connection ([icode]SELECT MAX(ID_MSG) FROM jgo_msg[/icode] at 1Hz) is not worthy of optimization. I’ve got better things to do than to worry about 1 second of CPU time per day :point:

JGO is never deserted :slight_smile: and even if it were, so be it. It is a constant load, so if it is okay when the server is under heavy load, it’s certainly acceptable when the server is idle. The most important resource to consider is my limited time, not getting a near-idle CPU to fully idle. It would be trivial to add, but I can’t be bothered.

What you have to consider though is minimizing the Socket send/recv buffer sizes, as that can take quite a bit of RAM.
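For example (sketch only; the exact sizes are illustrative and the OS may round them):

```java
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: shrink per-socket buffers so thousands of parked long-poll sockets
// don't eat RAM.
public class SmallBufferAccept {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            server.setReceiveBufferSize(2 * 1024); // hint for accepted sockets
            while (true) {
                Socket client = server.accept();
                client.setSendBufferSize(2 * 1024);    // responses are tiny anyway
                client.setReceiveBufferSize(2 * 1024); // requests are just a GET + headers
                // ... hand the socket to the long-poller ...
            }
        }
    }
}
```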