Are you talking about how a GA could be used?
Sorry, I misinterpreted and misread your post.
How it can be used will heavily depend on your game.
I’m not going to know where it can be used unless you give out all the details.
Yes, that was my point. I was wondering what type of in-game problems could be solved by applying GA techniques that wouldn’t be easier to solve another way.
I’m not thinking about any particular game, but in terms of the types of problems you encounter in game construction, I was finding it very hard to see how GAs could be used to solve them.
It occurs to me that it would be possible in an Action RPG or similar to use genetic techniques to choose enemy strategies, pushing enemy behaviour towards previously successful methods and making the game more challenging as a user progresses, so that’s one example. Any others welcome…
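Something like roulette-wheel selection over a pool of enemy strategies would do it. A minimal sketch in Python (the strategy names, the scoring, and the update rule are all made up for illustration):

```python
import random

# Hypothetical sketch: each enemy "strategy" carries a fitness score
# updated from play results, and new encounters pick strategies with
# probability proportional to past success (roulette-wheel selection).

strategies = {"ambush": 1.0, "swarm": 1.0, "ranged": 1.0, "flank": 1.0}

def pick_strategy():
    # Roulette-wheel selection: better-performing strategies get picked
    # more often, pushing enemy behaviour toward what has worked before.
    total = sum(strategies.values())
    r = random.uniform(0, total)
    for name, fitness in strategies.items():
        r -= fitness
        if r <= 0:
            return name
    return name  # floating-point fallback: return the last strategy

def report_result(name, damage_dealt_to_player):
    # Fitness update: exponential moving average of how well this
    # strategy performed against this particular player.
    strategies[name] = 0.8 * strategies[name] + 0.2 * damage_dealt_to_player

# Example: the player keeps getting caught by ambushes, so "ambush"
# gets chosen more and more often as the game goes on.
for _ in range(100):
    s = pick_strategy()
    report_result(s, 5.0 if s == "ambush" else 1.0)
print(max(strategies, key=strategies.get))  # most likely: "ambush"
```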
Tactical simulation in any shooter or action RPG.
Counter Strike Source could even benefit from a GA.
You could use a GA to determine the best pattern of attack based on previous results.
E.g. on Dust, everyone loves to defend the three entrances, but the majority of humans tend to gang up in a single area.
I don’t think CS:S would benefit from it with the current maps because they don’t allow much variation due to size.
I remember an article in Edge a few years ago about a company that was making a Ferrari-licenced racing game. They were using neural nets as driver AI. Each generation was given some track time, and then scored on how far they got around the track.
Eventually, the AI drivers were putting in lap times only a few seconds short of the developers themselves.
At any rate, the developer subsequently went bust, or was bought out and the project aborted, or Ferrari pulled the licence, or some such depressing fate.
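The evaluate-select-mutate loop behind that kind of driver AI is simple to sketch. A toy Python version, with the track simulation stubbed out and all the numbers invented:

```python
import random

# Minimal evolutionary loop: each "driver" is a vector of neural-net
# weights; fitness is how far it got around the track. The track
# simulation here is a dummy stand-in for real track time.

POP_SIZE, N_WEIGHTS, GENERATIONS = 20, 50, 30

def track_distance(weights):
    # Stand-in fitness function; a real game would run a lap here.
    return sum(w * (i % 3 - 1) for i, w in enumerate(weights))

def mutate(weights, rate=0.1):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    scored = sorted(population, key=track_distance, reverse=True)
    elite = scored[:POP_SIZE // 4]  # keep the best quarter
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=track_distance)
print("best distance:", track_distance(best))
```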
Another interesting example is the SodaRace competition. In this, humans and computers vie to create the fastest-moving structure. IIRC, the GA-based solutions quickly began exploiting flaws in the physics system, creating unstable structures that would “explode” across the screen.
This always tends to happen with GA solutions… two occurrences of note:
An artificial life simulator of a friend’s, with some simple animals. The GA was used to evolve a ‘balanced’ ecosystem where no one species would dominate. Unfortunately, one of the earliest solutions evolved ‘rocks’ - all the animals figured out that if they stayed still they never expended energy, so they could live forever.
I was evolving a simple football AI to play an equally simple game of football. After some time one team started flicking the ball straight up from the center, then heading it into the goal, scoring every time. I still don’t know how this happened, as the only available actions were pass and shoot. :o After tweaking the behaviour of the pass-interception code, this magically went away…
It is common with optimisation algorithms of all types — flaws in the model get exploited in unexpected ways.
I remember a case where someone decided to improve performance by rounding times to integer minutes, the theory being that in the real world, times to the nearest minute would easily be accurate enough. Unfortunately, the algorithm being used was minimising time, and thus tended to choose values which had been rounded down. It was necessary to work in seconds to eliminate the unwanted artifacts.
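The artifact is easy to demonstrate. A tiny Python illustration (the route names and times are made up):

```python
# If an optimiser minimises times truncated to whole minutes, it will
# systematically favour values whose true times round down the most.

routes = {"A": 9.9, "B": 9.1}  # true times in minutes

# Truncating to integer minutes makes both routes look identical, so
# the optimiser can no longer tell that B is really 48 seconds faster.
print({name: int(t) for name, t in routes.items()})
# {'A': 9, 'B': 9}

# Working in seconds keeps the distinction.
print({name: round(t * 60) for name, t in routes.items()})
# {'A': 594, 'B': 546}
```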
GAs are used only when no other reasonable solution exists. Such algorithms might share some similarities with a GA, but they often are (or should be) optimised so that they don’t have speed/memory problems.
The memory overhead of a GA can be drastic: 1000:1 compared to a reasonable algorithm. CPU demands are high too. So one way they could be used is to pregenerate some solutions on the developer’s computer. That way the speed problems are reduced, and the memory demands are lower as well.
The fitness function can be another problem with GAs. While some problems have an easily definable fitness function, others are easier to solve directly than it is for a programmer to write a function that can decide whether a solution is optimal or not. Actually, sometimes it’s hard to create a function that even reasonably approximates the effectiveness of a solution.
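For example, something like route length has a trivial fitness function, while “is this fun?” does not. A quick Python sketch of the contrast (the fun_score formula is pure invention for illustration):

```python
import math

# Easy case: for a route, fitness is simply the total length
# (lower is better). Nothing ambiguous about it.
def route_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

print(route_length([(0, 0), (3, 4), (3, 8)]))  # 9.0

# Hard case: for "is this enemy behaviour fun to play against?" there
# is no obvious formula; any scoring function is a rough proxy.
def fun_score(player_deaths, player_wins):
    # Assumed proxy: "fun" peaks when the player wins about 2/3
    # of encounters. This is a guess, not a measurement.
    total = player_deaths + player_wins
    return 1.0 - abs(player_wins / total - 0.66) if total else 0.0
```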
The funny thing is that some GA methods are based on myths that grew out of Darwinism, and also on some racist ideas.
If you look at the link I posted a few posts ago, you’ll find another problem with GAs: getting stuck in a local optimum.
“Being stuck in local optimum.”
That’s not a problem when it comes to complex games.
Humans are lucky if they can even reach a local optimum under most circumstances.
Multi-core CPUs could increase the effectiveness of a GA.
One CPU core would be playing the game while the other runs an evolutionary AI, maybe even training an AI (run on core 1) over time.
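A rough sketch of that split in Python (all names and numbers are assumptions; Python threads share one core because of the GIL, so in a real engine this would be a native thread pinned to another core — treat this as structure only):

```python
import threading, time, random

# Main loop stands in for the game; a daemon thread keeps mutating AI
# parameters in the background, sharing the best set via a lock.

best = [random.random() for _ in range(8)]
lock = threading.Lock()

def fitness(params):
    # Dummy objective: parameters closest to 0.5 score highest.
    return -sum((p - 0.5) ** 2 for p in params)

def evolve_forever():
    global best
    while True:
        with lock:
            snapshot = list(best)
        candidate = [p + random.gauss(0, 0.05) for p in snapshot]
        if fitness(candidate) > fitness(snapshot):
            with lock:
                best = candidate
        time.sleep(0.001)  # yield so the "game" thread stays responsive

threading.Thread(target=evolve_forever, daemon=True).start()

for frame in range(5):  # stand-in for the game loop on core 1
    with lock:
        params = list(best)
    print(f"frame {frame}: AI params[0] = {params[0]:.3f}")
    time.sleep(0.05)
```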
“That’s not a problem when it comes to complex games. Humans are lucky if they can even reach a local optimum under most circumstances.”
What makes you think that local optimum is anywhere near what a human is capable of? That local optimum might as well be running into a wall most of the time.
Ahh. I mistook what a local optimum was.
If that’s the case then EP should be used elsewhere or differently.
“What makes you think that local optimum is anywhere near what a human is capable of? That local optimum might as well be running into a wall most of the time.”
If your EP gets stuck in a local optimum, you are not a competent EP programmer. It’s a problem that was solved around 20 years ago, so I really don’t think it’s something to be worrying about in the 21st century.
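(“Solved” here presumably means the standard counter-measures: random restarts and adaptive mutation rates. A toy Python sketch of both, with an invented two-peak objective:)

```python
import math, random

# Toy objective with two peaks: a low one at x=2 and a higher one at
# x=8. Plain hill climbing from the wrong start gets stuck on x=2.
def f(x):
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def climb(start, steps=500):
    x, stall, rate = start, 0, 0.1
    for _ in range(steps):
        cand = x + random.gauss(0, rate)
        if f(cand) > f(x):
            x, stall, rate = cand, 0, 0.1
        else:
            stall += 1
            if stall > 50:   # stagnating: widen the search radius
                rate *= 2
                stall = 0
    return x

# Random restarts: the best of several random starting points usually
# finds the higher x=8 peak rather than the local x=2 one.
best = max((climb(random.uniform(0, 10)) for _ in range(10)), key=f)
print(round(best, 2))  # ~8.0
```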
What? The travelling salesman problem has been solved optimally? I wasn’t aware of that.
As for a GA for the JVM: that would do more harm than good. CPU pipelines are fixed, and optimisation isn’t that difficult. If someone were writing a compiler for some better computer like the Hero, they might have some reasons for it, but they should be careful to avoid the Hero’s internal recompiler, which could be dazzled by “improvements” in the code.
The way to improve compilers isn’t only to develop better, smarter compilers, but also to force CPU developers to accept some rules and add some needed functionality to new CPUs.
Fast random memory access is one of the needed things. There are others:
- a barrel shifter (reintroduced for Prescott after it was removed, for strange reasons, in Northwood);
- low cache latency;
- addressable cache;
- multicore CPUs;
- low energy consumption;
- instructions for memory-to-memory addressing (by, for example, keeping an address cache, possibly outside the registers);
- a reasonable timer, independent of CPU cycles and as far as possible of system temperature;
- a cache for thread states;
- improved bandwidth between the CPU and other components;
- an improved, or doubled, memory controller to avoid multi-CPU write storms;
- more parallelisation on the XMM registers (2x1 SIMD instructions are pretty low-end).
The current situation is that Intel or AMD add something, then pretend it’s useful for something completely different from what would be reasonable, and then everyone has to bite the bullet and try to invent a way it could reasonably be used.