Render/Update Threads

[quote]If you can reliably confirm a minimum number of clock schedules that your threaded code will get per second to execute your AI, on Linux, Win32, MacOS X, on both 1.4 and 1.5, then you’ve sold the idea to me.

But you can’t, so you won’t :)

Cas :)
[/quote]
You don’t need guaranteed clock cycles; you only need a constant speed regardless of geometry, and the code above will give you that.

[quote]Part of the problem there, though, is that he was multithreading multiple intel chips.
[/quote]
IMHO the really big problem is just desktop OSs.

Typical game usage of threads would be to schedule a percentage of CPU time to different, separate tasks. But your regular OS just isn’t designed for this kind of scheduling (in fact Windows, and probably Linux, doesn’t really guarantee anything about threading and the time allocated to each thread) and is far too unpredictable. It’s perfectly possible for one thread to get 5 solid seconds of CPU and then switch to another for 5 seconds. As far as the OS is concerned that’s a perfectly acceptable 50/50 split. As far as your game is concerned you’ve just rendered the same frame 300 times. Yay.

Threading for games only works if you’ve either:

  • Got really low-priority background work where you really don’t care how soon it gets done (level streaming),
    or
  • Got some kind of interrupt handler so the OS actually wakes up your thread (a sketch combining both follows below).

Anything else is just going to suck big time.
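
For what it’s worth, here is a minimal sketch of the two workable patterns from the list above combined: a background loader running at minimum priority that blocks on a queue, so the OS only wakes it when there is actually work queued. The names (LevelStreamer, loadLevel, etc.) are just placeholders, not anyone’s actual code.

[code]
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LevelStreamer {
    private final BlockingQueue<String> requests = new LinkedBlockingQueue<String>();

    public LevelStreamer() {
        Thread loader = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        // take() blocks until a request arrives - no busy polling,
                        // so the OS wakes the thread only when there is work to do.
                        String level = requests.take();
                        loadLevel(level);
                    }
                } catch (InterruptedException e) {
                    // asked to shut down; fall out of the loop
                }
            }
        });
        loader.setPriority(Thread.MIN_PRIORITY); // we don't care how soon it runs
        loader.setDaemon(true);
        loader.start();
    }

    public void request(String level) {
        requests.offer(level);
    }

    private void loadLevel(String level) {
        // stream geometry/textures from disk here
    }
}
[/code]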

The ‘solution’ is either to get a real-time OS that can actually do reliable, predictable scheduling (like a non-pre-emptive one) or to manually time slice things in your app (like practically every game ever does).
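
As a concrete example of the manual time-slicing approach (just a sketch; updateInput, stepAI, render and friends are hypothetical stand-ins for your own subsystems): one thread, one loop, with the leftover time in each frame handed to whatever background work you have.

[code]
public class TimeSlicedLoop {
    private static final long FRAME_MS = 1000 / 60; // ~16 ms budget per frame

    public static void main(String[] args) throws InterruptedException {
        long frameEnd = System.currentTimeMillis() + FRAME_MS;
        while (true) {
            updateInput();
            updateWorld();
            // Spend whatever is left of this frame's budget on AI,
            // one small bounded step at a time.
            while (System.currentTimeMillis() < frameEnd && hasPendingAIWork()) {
                stepAI();
            }
            render();
            long now = System.currentTimeMillis();
            if (frameEnd > now) {
                Thread.sleep(frameEnd - now); // wait out the rest of the frame
            }
            frameEnd += FRAME_MS;
        }
    }

    // Stubs so the sketch compiles; replace with real subsystems.
    private static void updateInput() {}
    private static void updateWorld() {}
    private static boolean hasPendingAIWork() { return false; }
    private static void stepAI() {}
    private static void render() {}
}
[/code]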

I guess I just don’t understand why people are having so many problems with threading in Java.

Are you guys setting priorities or something? Are you using Thread.sleep (<— ABSOLUTE EVIL… can’t stress this enough) instead of Thread.yield? Are you using synchronized methods instead of synchronized blocks? Are you using a lock object at all?
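
To illustrate those last two questions (this is just a sketch of the pattern, not anyone’s actual game code): synchronize on a small private lock object around only the shared state, instead of marking whole methods synchronized, which locks `this` for their entire duration.

[code]
public class SharedState {
    private final Object lock = new Object(); // dedicated lock object

    private float playerX;
    private float playerY;

    // Engine thread: hold the lock only while writing the shared fields.
    public void update(float x, float y) {
        synchronized (lock) {
            playerX = x;
            playerY = y;
        }
    }

    // Render thread: hold the lock only while copying the values out.
    public float[] snapshot() {
        synchronized (lock) {
            return new float[] { playerX, playerY };
        }
    }
}
[/code]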

If you use yield and default priorities, and put timers in both your threads, you will see that they actually get almost identical amounts of CPU, always.
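
If you want to check that claim yourself, a quick (hypothetical) test along these lines will do it: two yielding worker threads with per-thread counters, printed once per second.

[code]
import java.util.concurrent.atomic.AtomicLong;

public class YieldFairness {
    public static void main(String[] args) throws InterruptedException {
        final AtomicLong a = new AtomicLong();
        final AtomicLong b = new AtomicLong();
        startWorker(a);
        startWorker(b);
        long lastA = 0, lastB = 0;
        for (int i = 0; i < 10; i++) {
            Thread.sleep(1000);
            long ca = a.get(), cb = b.get();
            System.out.println("A: " + (ca - lastA) + "/s  B: " + (cb - lastB) + "/s");
            lastA = ca;
            lastB = cb;
        }
    }

    private static void startWorker(final AtomicLong counter) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    counter.incrementAndGet(); // "timer": count units of work done
                    Thread.yield();            // give the other thread its turn
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
[/code]

How close the two counts come out will depend on your OS scheduler.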

To control which gets more, use yield like I have in the code above. The engine thread gets only what it needs, no more, no less; the renderer and AWT threads get everything else. The engine stays at a constant (I stress CONSTANT) 62 FPS regardless of whether the renderer is doing 1400 FPS of nothing or 70 FPS of 28K tris. When the renderer is doing 70 FPS the engine is working heavily as well, calculating geometry for it… yet it remains constant, because it can do all it needs to do in less than 15 ms.
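
Since the “code above” isn’t quoted in this thread, here is a minimal reconstruction of the pattern being described (the period and method names are my guesses): a fixed-rate engine thread that does its work, then yields until the next tick is due, so the renderer and AWT threads soak up everything left over.

[code]
public class EngineThread extends Thread {
    private static final long PERIOD_MS = 16; // ~62 updates per second

    public void run() {
        long next = System.currentTimeMillis();
        while (!isInterrupted()) {
            updateGameState(); // the <15 ms of engine work per tick
            next += PERIOD_MS;
            // Hand the CPU to the renderer/AWT threads until our next tick.
            while (System.currentTimeMillis() < next) {
                Thread.yield();
            }
        }
    }

    private void updateGameState() {
        // AI, physics, and geometry calculations for the renderer go here
    }
}
[/code]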

If you find one thread is getting the CPU for 5 seconds and then the other is getting it, then something is wrong with how you have implemented your threading, such as an unreleased lock or a busy loop that never yields.

I didn’t say I was actually getting this, but the point is that threading is totally unreliable in terms of scheduling (on PC). It’s impossible to have a 100% smooth, 100% predictable game using threads for anything non-trivial.

You don’t need an RT OS, not by a long way.

There are well-known and actually mature (in their own way) OSes and threading libraries that support this kind of stuff, but the problem is they aren’t mainstream. A lot of research OSes (some of which can be used as a workstation OS!) have this kind of thing. My favourite is/was Nemesis, aka Pegasus v3 (IIRC). It allowed you to do precisely what you say: ask for a percentage of CPU time. It also allowed you to state a minimum and a preferred share, and it would give you something between the two, depending upon system load.

When I learnt that a JVM had been made to run on it, I momentarily considered using it as a real workstation OS. Unfortunately, the team at Cambridge dropped development of it. I have no idea why; probably because they got bored, or some professor left to go somewhere else. Very sad. By removing the overhead of unnecessary thread switching/thrashing it was able to significantly increase performance. The classic example was that two spamming network servers under Linux competed for I/O and CPU so erratically that Nemesis could achieve 10% higher throughput simply by scheduling them much, much more precisely: the graph of bandwidth usage for Linux see-sawed between the two processes, each getting more and then less time, whereas for Nemesis it ran almost dead level for each.

NB: I believe the 10% gap has closed a lot since then, probably to 1% or less, because of particular improvements I recall going into the Linux kernel.

Well, maybe you don’t need a full RT OS, but what I’m saying is that native threading on Windows etc. just doesn’t cut it.

Whether you get around this with an RT OS or an external threading lib, you’re basically doing the same thing either way.

Incidentally, I don’t think anyone here is saying that they have problems working with threads. We can all do it to one degree or another… it’s just that it doesn’t fit gaming (at least the sort of primitive crap games that I write) very well.

Kev

Intel is moving support for threading into their processors. You should prepare for it to become the preferred way of coding. I hope Java will be extended with at least as nice threading support as Ada has. I haven’t checked the latest additions to Java, so part of it could already be there; it’s just the support from OSes and CPUs that is missing.
Alas, I believe there will be a similar delay as there was with the SSE instructions.
SSE2 only from the 1.4.2 server VM… aww.
AI could be improved with threads that are scheduled outside the control of the main program. Nice random data could be very useful for properly designed AI. Of course, we are talking about smart AI and NP-hard problems, like the assault problem.
As for my current design, I’m completely uninterested in whether threads are scheduled unpredictably, for some of the threads. The important thing is that they should be scheduled at least once per half hour, meaning at least partially run once per half hour. I originally designed it targeting the Earth Simulator, so it’s a heavily multithreaded design.
And of course higher-priority threads are, as their name suggests, higher-priority threads: get in, do the work, get out, and don’t fight among themselves too much. Yes, it’s true that windoze threading is very strange, so you must test, and possibly polish, the game on the majority of OSes, or else write a more robust scheduler.
As for Thread.sleep(xxx): it can be very nice to the CPU. The CPU cools down a little, and the cooler can be quieter. There is also the issue of laptop users; laptop CPUs are balanced more toward power savings, if you let them save some power. (Of course, motherboard-integrated GFX cards are crap.) I used Thread.sleep(xxx) and then recalculated the deltas. It used 1/5 of the CPU power and looked very nice. Of course, it was a rather simple program, so not too much of a real application.
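
For what it’s worth, a minimal sketch of the sleep-then-recalculate-deltas loop described above (the numbers are illustrative): sleep to give the CPU a rest, then scale all movement by the time that actually elapsed, so animation speed stays right even though sleep timing is imprecise.

[code]
public class SleepLoop {
    private static float x = 0;

    public static void main(String[] args) throws InterruptedException {
        long last = System.currentTimeMillis();
        while (true) {
            Thread.sleep(50); // ~20 updates/sec; the CPU idles in between
            long now = System.currentTimeMillis();
            float delta = (now - last) / 1000.0f; // seconds actually elapsed
            last = now;
            update(delta);
        }
    }

    private static void update(float delta) {
        // move by velocity * elapsed time, so sleep jitter doesn't change speed
        x += 100.0f * delta; // 100 units per second regardless of jitter
    }
}
[/code]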