The difference is in politics and support. DirectX has a larger user and code base, and more game engines support it. There’s also mighty Vole pushing it.
DirectX was a bane to computing because it wasn’t needed at the time of its inception. DX was OOP, which made it hard to port to other platforms back then.
OpenGL and D3D are virtually identical graphics-wise.
How could you be disappointed in something you haven’t tried yet?
While I am sure we could find some way to argue your first two points, they don’t contradict anything I said in my post.
I am disappointed in the PS4 (and Sony in general) because of the incompetence and poor business ethics they’ve displayed recently. Microsoft isn’t perfect in that respect, but their participation in the console war has been much more mature.
I was led to this conclusion after Sony’s PS4 PR essentially consisted of 50% bashing MS (calling them out implicitly at Gamescom over the DRM policies and ‘inconsistency’ of an unreleased and unofficial product), and after Sony couldn’t even figure out how to safely preload GTA4 content (or maybe it was their inability to promptly notify users when their personal data was ripped out of the PS+ databases). The rest was them riding the irrational internet rage regarding the Xbox One.
Finally, all I see in the PS4 is better hardware. The thing is, when it comes to developing gaming consoles, hardware literally means nothing! Game consoles are designed to run cheaper hardware at over-specification clock rates with optimized games, so arguing about hardware is pointless; it just tells us who managed to achieve what more optimally (who has the crappier hardware with competitive performance).
They didn’t say anything untrue; if this offended anyone, then they are clearly biased and most likely a fanboy. Being upset about that is irrational: a company stating what’s advantageous about their product over the competition is how advertising works.
There was nothing irrational about it: Microsoft’s product was going to be shit, and potential buyers were really upset.
Yeah, because playing COD on the NES is the same as playing COD on the PS4/XBOX360!!!
That’s such a silly statement, I don’t even feel like explaining how silly it is.
Your statements are subjective, driven by what you like/want/feel; there’s no point in arguing further if you are going to argue subjectively and act like it’s objective.
My statement isn’t silly, you completely missed the point of it. Not only that, but you clearly have no concept of why game consoles exist.
The point is this: the games need to be competitive; it doesn’t MATTER how they achieve that level of competitiveness. What matters is that they OPTIMIZE price against performance and software tuning. It is a very common and ignorant misconception that hardware alone is what matters in a game console; it is the product of hardware power, software performance, and architecture tuning that is important TOGETHER, not as individual components. If one of those three gets you the required degree of competitiveness, the other two become less significant.
It is not subjective unless you want to over-engineer a console and lose millions of dollars on it (wait, didn’t Sony do that for PS3?)
Don’t call my statement silly, it makes complete sense and it has been a concept behind developing game consoles for a very long time. It is the entire purpose of a game console and it is what sets them apart from your desktop computer. Game consoles are CHEAP, OPTIMAL and PREDICTABLE. They shouldn’t need THE BEST hardware to be competitive.
Calling my statements subjective doesn’t make them so. I laugh at the ignorant fools who purchase a console based on hardware specification like they’re out shopping for a desktop computer.
Finally, these game consoles have unique architectures that were designed in different ways. I am blown away by the ignorance of people picking random hardware components from either console, deriving an abstract idea of their function, and applying them as “hurr durr x is better than y” when really they have no idea in the world. Not even the engineers at Microsoft or Sony have a full picture: the hardware is benchmarked, reduced, benchmarked, reduced until it reaches the predefined performance objectives with MINIMAL and CHEAP hardware plus clever architecture and software tuning tricks, because it is way cheaper to distribute software optimizations than it is to distribute expensive hardware. It is a constant cycle.
Even in that regard, it’s completely false to say that the PS4 is nothing but upgraded hardware. If that were the case, it would be just as fair to say the XBox One is the exact same thing.
Games make the console, and the PS4 definitely has a lot of games that make it tempting. To that end, though, it’s completely a matter of opinion. If I cared about the games the XBox One had, I’d be more interested in it. This isn’t a point that can really be argued.
I didn’t say the PS4 is nothing but upgraded hardware. I said all I see is a PS3 with upgraded hardware. Like you said, in that respect, it is entirely subjective.
Adding to the point about tools being in C++, from my perspective it seems that many AAA developers reduce costs by resorting to third-party engines, like the Unreal Engine, and as long as those keep using the same technology, users of said engines are somewhat locked in place.
I’m also of the opinion that a negative for Java in the eyes of developers might be the relative ease with which it can be decompiled.
I, personally, am acquiring a sort of “bias” against Java based on the fear that Oracle might end up screwing Java devs up for the sake of monetizing the language. Let’s hope I’m wrong on that one! :clue:
Let’s not forget that Java’s performance comes only after JIT warmup, and on top of that you’ve got the memory footprint and CPU overhead of running an optimizing compiler at runtime. Hope you’re into load times. Sure, there’s AOT compilation, but then where’s the value proposition over C++ again?
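To make the warmup point concrete, here’s a minimal (and deliberately naive) Java sketch: the same method is timed once cold and once after enough calls to cross HotSpot’s compile threshold. The exact numbers and the threshold depend on the JVM and its flags; a real measurement would use a harness like JMH.

```java
public class WarmupDemo {
    // A small hot method for the JIT to chew on.
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i * 31L % 7;
        }
        return total;
    }

    public static void main(String[] args) {
        // First call: most likely interpreted bytecode.
        long t0 = System.nanoTime();
        sum(100_000);
        long cold = System.nanoTime() - t0;

        // Call it enough times that HotSpot decides to compile it.
        for (int i = 0; i < 20_000; i++) {
            sum(100_000);
        }

        // Later call: most likely JIT-compiled native code.
        long t1 = System.nanoTime();
        sum(100_000);
        long warm = System.nanoTime() - t1;

        System.out.printf("cold: %d ns, warm: %d ns%n", cold, warm);
    }
}
```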
There is Java the language, and then there is HotSpot the Java runtime. There is no reason why you couldn’t save the HotSpot state and cross-compile it to a new platform (ignoring the obstacle of huge hardware differences). Runtime recompilation is an option, not a necessity.
Most companies also make games for consoles, and the JVM would be a burden on a console.
The industry is hooked on C++ right now; only indies can take the risk of using Java.
Even if Android has Java, it’s easier to use C++ when porting from iOS.
Java programs do not run in a VM anymore. Java compilers target a virtual machine, meaning a virtual instruction set, not an interpreter. It’s called an intermediate representation because source code is transformed to an intermediate format before being translated again to native code. Compilers can be made to target different instruction sets, and some compile directly to native code. Android also uses an intermediate virtual instruction set, one that differs from the JVM specification. By your logic, C and C++ would also not be suitable for consoles.
The two major open-source C compilers, GCC and Clang, also use virtual machine specifications. GCC uses two intermediate languages called GENERIC and GIMPLE. Clang targets an intermediate instruction set for a made-up machine called LLVM, the “Low Level Virtual Machine”. The JVM, GIMPLE, and LLVM all use the same strategy. (Did I also mention IL for the .NET platform?) It enables x source languages to target y platforms with only x + y translators instead of x * y unique compilers, and most generic optimizations are done on the intermediate representation to further reduce code duplication.
People hear virtual machine and they think interpreter. They should think hypothetical computer. They are as well defined as a real computer architecture (usually better because there are no undocumented quirks) and are designed to be portable and make optimization easy. The JVM is special because it is a computer that supports heap allocation without using pointers, which has a lot of benefits, though there are also some flaws in the platform that are unfortunate in hindsight. The JVM is a fully functional machine specification. Real world physical computers (hardware) that conform to the specification exist and run Java bytecode directly, but the norm is for bytecode to be (re)compiled to native machine code before it is run.
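For anyone who hasn’t looked at it, the virtual instruction set is easy to see for yourself. Here’s a trivial Java method and, in the comments, the stack-machine instructions javac emits for it, as printed by `javap -c`:

```java
class Adder {
    static int add(int a, int b) {
        return a + b;
    }
}

// Output of `javap -c Adder` for the method above. These are
// instructions for the JVM's hypothetical stack machine, not for any
// physical CPU; a JIT or AOT compiler translates them to native code.
//
//   static int add(int, int);
//     Code:
//        0: iload_0    // push the first int argument
//        1: iload_1    // push the second int argument
//        2: iadd       // pop both, push their sum
//        3: ireturn    // return the top of the stack
```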
Patent nonsense. The JIT compiles methods that have run several thousand times, and it uses the tracing information collected during bytecode execution to do so. That’s why methods in the server VM require more calls as bytecode before they’re ever compiled. And it’s still designed to back out to bytecode whenever it needs to, such as when new overloads appear.
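The “back out to bytecode” behaviour is observable, by the way. Here’s a rough sketch of the idea: while only one implementation of an interface has ever been loaded, HotSpot can devirtualize and inline the call; loading a second implementation invalidates that speculation. Run it with `-XX:+PrintCompilation` and watch for the “made not entrant” lines (exact output varies by JVM version).

```java
interface Shape { double area(); }

class Square implements Shape {
    public double area() { return 4.0; }
}

class Circle implements Shape {
    public double area() { return Math.PI; }
}

public class DeoptDemo {
    static double total(Shape s, int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape square = new Square();
        // Monomorphic phase: only Square is ever passed in, so
        // HotSpot may inline Square.area() into total().
        for (int i = 0; i < 20_000; i++) total(square, 100);

        // A second receiver type appears; the speculative inlining
        // is no longer valid and the compiled code is thrown out.
        Shape circle = new Circle();
        System.out.println(total(circle, 100));
    }
}
```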
JVM Bytecode is starting to resemble something like an IR, but that doesn’t mean it is one.
Okay, you’re right: there is an interpreter in HotSpot. That said, the majority of the time it won’t be in use and really doesn’t need to be used at all. It’s not that much different from having a scripting engine, and 1000 is a small number in the computer world. Of course, that is how HotSpot treats Java bytecode; not every platform, device, or compiler does it that way. You wouldn’t just port HotSpot and call it good.
For future reference, I will try to find out whether the interpreter works at the instruction level or the method level. For tracing information you only need to watch which branch is taken, so you do not necessarily need to emulate a CPU to get it. How IR is used in one part of one program does not change the fact that it is IR: LLVM has an interpreter, and bytecode may be used as IR for Android and GCJ.
LLVM is designed from the start to support static compilation and includes things like load and store opcodes for arbitrary memory addresses. The high level at which the JVM operates gives HotSpot a lot of flexibility in the optimizations it makes, but it complicates the hell out of AOT compilation, usually necessitating a lot of extra runtime support and instruction scheduling that isn’t always optimal: something resembling C++ code that’s extra-heavy on vtables.
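To illustrate why AOT for Java tends toward conservative vtable dispatch: the receiver’s concrete class can arrive at runtime, so a closed-world compiler cannot prove the call target. A toy example, where the default class name `java.lang.StringBuilder` is just a stand-in:

```java
public class OpenWorld {
    public static void main(String[] args) throws Exception {
        // The class name arrives at runtime, e.g. from a config file
        // or a user-made mod, so it may not exist at compile time.
        String name = args.length > 0 ? args[0] : "java.lang.StringBuilder";
        Object o = Class.forName(name).getDeclaredConstructor().newInstance();

        // A JIT can speculate on the types seen so far and deoptimize
        // if it guessed wrong; a static compiler must emit a plain
        // virtual dispatch here, because the target is unknowable
        // ahead of time.
        System.out.println(o.toString());
    }
}
```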
It’s not an impossible problem, but C++ compilers haven’t exactly sat still in the meantime either.
I don’t disagree that LLVM is a better IR. I said there are flaws in the JVM and the same is true for the Java language.
If you wanted to make JIT compilation or runtime profiling part of your final distribution, just do it before it reaches your customer. Supposing your game was deterministic, what technical constraint prevents you from reusing trace information gathered on one PC and sharing it with a different PC, a tablet, or a console? Testing and profiling are part of the development process; why not make them part of the build process? The same statistics could be reused on multiple machines. You could put compiler hints in comments, in annotations, or in a type of bytecode, as in the sketch below.
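As a purely hypothetical illustration of what such build-time hints could look like as annotations: nothing here exists in any real toolchain; `@CompileEagerly` and the build step that would consume it are invented for the sketch.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical hint: kept in the class file for a build-time compiler
// to read, but not needed at runtime.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface CompileEagerly {
    /** Invocation count observed during profiling runs. */
    long observedCalls();
}

class Physics {
    // A build step could record from test runs that this method is hot
    // and bake the result in, instead of rediscovering it on every console.
    @CompileEagerly(observedCalls = 1_250_000L)
    static float integrate(float pos, float vel, float dt) {
        return pos + vel * dt;
    }
}
```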
You could do what Microsoft did to address long boot times. (The first time you run Windows on new hardware it takes a long time to boot; after it boots successfully, it dumps RAM to the hard drive and later loads certain parts the same way it would when restoring from hibernation.) You could have console game programmers (or PC programmers, since Java for consoles would presumably still be cross-platform) run incremental stress tests and dump the results of JIT compilation to a format that a console can read. You could even strip away the dynamic portions of the JRE, the same way support structures are stripped from clay and plastic 3D-printed models before they harden. (You would have to get rid of reflection and disable the optimistic HotSpot optimizations that require deoptimization when an exceptional case is found, but those aren’t used in console games anyway.)
Normally people insist that AOT compilers are a more practical way to do anything a JIT does. You rarely hear anyone argue that, given the amount of compilation time console developers get, you could not reproduce with AOT what a JIT gives you.
If console developers collectively wanted to use Java, they probably could make something happen. There are flaws in every language, and Java’s are mainly things that can be avoided if you are willing to let your performance-critical code look more like C code than clean Java code (see the sketch below). Of course, it still might not be worth it. Personally, I would invest the energy in a new language, because there is still lots of room for improvement over all existing languages.
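For what it’s worth, a common example of “Java that looks like C” in game code is avoiding per-object allocation in hot paths: flat primitive arrays instead of object graphs, so the GC has nothing to do per frame. A rough sketch:

```java
public class ParticleSystem {
    private final int capacity;
    // Structure-of-arrays layout, the way a C programmer would lay
    // out a struct array, instead of one Particle object per entity.
    private final float[] posX, posY;
    private final float[] velX, velY;

    public ParticleSystem(int capacity) {
        this.capacity = capacity;
        posX = new float[capacity];
        posY = new float[capacity];
        velX = new float[capacity];
        velY = new float[capacity];
    }

    // No allocation in the per-frame hot path, so no GC pressure.
    public void step(float dt) {
        for (int i = 0; i < capacity; i++) {
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
        }
    }
}
```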