Things you disagree with in the Java language

@princec, @actual, @kaffienne

Thank you for the thumbs up and the link and example. I have bookmarked the link. My brain is a little overfull from reading about the math for curves and having to keep going back to review things like how to invert matrices. Tomorrow morning I should be fresher and able to read the enum link.

I thought I was doing okay with enums until I hit an example certification question about the “constant-specific class body”, at which point my brain went tilt. (Some of the questions in the Sierra/Bates SCJP book are just nasty hard.)

It is a lot easier to deal with classes if the rules for classes stay the same, rather than have special cases and exceptions. This is why Clojure appeals to me so much more than Scala. (Haven’t checked out the other new language people are mentioning here… ugh, tired, can’t recall its name.)

Mostly, I think Java is the best. I am perfectly fine with verbose and explicit. (Can you tell from my tldr posts?) I also am coming from a Microsoft background, and find that world very frustrating and arbitrary. Consistency is a good thing, not at all a hobgoblin, in language design.

The important thing about enums in Java is that they are consistent. It’s all implemented under the hood just like ordinary classes, and prior to Java 5 that’s what everyone did by hand. What enums do is take out all the boilerplate, do it all for you, and make sure you can’t trip over the various gotchas that befall you if you do it manually (e.g. forgetting to make them serialize properly).
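For anyone who never wrote one by hand, the pre-Java 5 “typesafe enum” idiom Cas is describing looked roughly like this (a sketch of the pattern, with `Suit` as a made-up example, not code from any particular library):

```java
import java.io.Serializable;

// The pre-Java 5 "typesafe enum" idiom (a sketch): private constructor,
// public static final instances. This is the boilerplate that the enum
// keyword now generates for you automatically.
final class Suit implements Serializable {
    public static final Suit HEARTS = new Suit("HEARTS");
    public static final Suit SPADES = new Suit("SPADES");

    private final String name;

    private Suit(String name) { this.name = name; }

    @Override public String toString() { return name; }

    // One of the classic gotchas: without readResolve(), deserialization
    // creates a second instance and == comparisons silently break.
    private Object readResolve() {
        return name.equals("HEARTS") ? HEARTS : SPADES;
    }
}
```

With `enum Suit { HEARTS, SPADES }` all of the above, including correct serialization, comes for free.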

Cas :slight_smile:

Enums are really nice (and I don’t see this used much) for ordered anonymous classes.
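A sketch of what I mean, with a made-up `Op` enum: each constant gets a constant-specific class body, which behaves like an anonymous subclass but with a guaranteed declaration order:

```java
// Each constant carries a "constant-specific class body": effectively an
// anonymous subclass of the enum. Unlike plain anonymous classes, the
// constants have a defined order (ordinal()) and can be iterated with
// values().
enum Op {
    ADD { public int apply(int a, int b) { return a + b; } },
    SUB { public int apply(int a, int b) { return a - b; } },
    MUL { public int apply(int a, int b) { return a * b; } };

    public abstract int apply(int a, int b);
}
```

This is also exactly the “constant-specific class body” that trips people up on the SCJP questions mentioned above.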

Enums never seem to quite fit my specific needs when I look at them, but I’m blaming the lackluster reference I often have to work with.

If I may go on a bit of a tangent, it bothers me a lot when I’m researching a way to solve a problem using Java and most of the documentation I find settles for the least-effort, most-generic approach (and thus the least flexible one), barely scratching the surface of what the language is capable of.

I think I’m going to start pointing people to this thread so they realize how little they really know about Java. (I’m saying this as someone who is humbled by the knowledge on display here)

My goal is to always spend the least time possible building systems that are just sufficient for my currently needed usage…and move on.

Certainly, but there are situations when the requirements go beyond the possibilities of the standard solutions.

For example, most tutorials and books on Java game programming I’ve come across rely on Java2D objects for image presentation. That is fine and dandy for most users, but I may want to do some manipulation that goes beyond what those classes can handle, and it becomes frustrating when a sizable portion of the community simply states that it isn’t possible, when in reality they mean “I don’t know”.

And in case you’re wondering, my frustrations in this regard come from the StackOverflow forums mostly.

You can get most of the way there with the @NonNullByDefault annotation - you can apply this at the package, type or method level so it’s not a lot of work to cover the entire codebase, and you can leave bits out if you want.
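For reference, at package level it’s a single annotation in `package-info.java` (a sketch; this assumes the `org.eclipse.jdt.annotation` jar is on the build path, and `com.example.game` is a made-up package name):

```java
// package-info.java: every method, field and parameter in this package is
// treated as @NonNull unless a declaration is explicitly marked @Nullable.
@org.eclipse.jdt.annotation.NonNullByDefault
package com.example.game;
```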

Shame it’s not in 3.8 though :frowning:

Cas :slight_smile:

Good idea; sadly I can’t implement it with alpha :frowning:
I also found that the main time cost is array access. For example:
The code from the link takes 2594962 ns in a loop (for 1000000 iterations).
An empty loop takes 2101255 ns,
while a simple int pixel = ar_Ints[0]; takes 3466137 ns in the same loop.
And you need three array accesses: two to get pixels and one to put the result back:
5512280 ns.
As you can see, the main problem is not the math calculation :wink:
Thanks all. I was really getting tired trying to optimize rendering; at this point it works at
9482898 ns =)
and gives 100-300 fps in game :wink:

I would always trade fancy language features for great libraries and tools.

Wait: are you saying that there are already bytecodes for signed/unsigned conversion? If so, someone needs to tell Oracle to update the VM spec. Or are you proposing that unsigned types would be added at the VM level without changing the type system, just adding some kind of annotation to indicate which stack slots hold untyped primitives in which instruction ranges, à la LocalVariableTable?

As Roquen said:

Make every (arithmetic) operation a method call, and there is no need whatsoever to introduce new bytecodes.

JVM bytecodes (and likewise CLR bytecodes) are very misleading. These designs are very good for quickly creating (with little compiler/architecture knowledge) small-footprint VMs based on interpretation and/or trivial JITs, including the loading/linking/verification stages. They are really bad for being fed to “real” compiler frameworks such as HotSpot and .NET. So you cannot look at the bytecodes, assuming a real compiler, to determine much about what the final result is going to look like. Bytecodes are simply a transport intermediate representation (IR), or a high-level language if you will. Nor can you simply examine the “source” of library calls: a fair number of library calls have source-code implementations which are only designed as software fallbacks for hardware that doesn’t support the operation, and/or to make porting to new architectures easier to get up and running. (See the wiki page listing intrinsics.)

Conversions between integer widths within a larger bit-width integer are all trivial transforms and don’t need any special-case handling, say via an intrinsic (to be more precise in terminology than my previous statement). So here is some M-expression pseudo-code for transforms that pretty much any halfway-sane compiler should always be able to handle:
int t = (x<<24)>>24; -> t = LoadSignedByte[FieldOffset[this, “x”]]
int t = x & 0xFF; -> t = LoadUnsignedByte[FieldOffset[this, “x”]]
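In plain Java source, the two patterns look like this (a small self-checking sketch of the equivalence, not a claim about what any particular VM emits):

```java
final class ByteWidth {
    // Sign extension: push the low byte to the top bits, then
    // arithmetic-shift back down. Same result as the cast (int)(byte) x.
    static int signExtend(int x) { return (x << 24) >> 24; }

    // Zero extension: keep only the low 8 bits, i.e. read the byte
    // as unsigned (0..255).
    static int zeroExtend(int x) { return x & 0xFF; }

    public static void main(String[] args) {
        System.out.println(signExtend(0xFF)); // -1
        System.out.println(zeroExtend(0xFF)); // 255
    }
}
```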

On the other hand, many transforms that might seem easy cannot be performed, because the compiler may not be able to prove that they are legal:
int t = x/2; // if x cannot be statically known to be >= 0, this cannot be converted into a shift
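Concretely, the two forms disagree for negative operands, which is exactly why the compiler needs the x >= 0 proof:

```java
final class DivVsShift {
    public static void main(String[] args) {
        // Integer division truncates toward zero; an arithmetic right
        // shift floors toward negative infinity. They only agree for
        // non-negative values.
        System.out.println(-3 / 2);   // -1
        System.out.println(-3 >> 1);  // -2
        System.out.println(6 / 2);    // 3
        System.out.println(6 >> 1);   // 3
    }
}
```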

Floating point operations are a prime example as the rules are strict, so tons of things which might seem possible are in fact not (esp. since Java’s design does not allow for any relaxation of rules…well almost)

Intrinsics, in the context of JVM/CLR-like systems, are specially marked methods that are auto-magically replaced at load/link/verify time by some transform, assuming they are supported on the target architecture. The folks working on HotSpot “might” make some of these conversion methods into intrinsics, even though it isn’t needed, simply to lower the burden further down the chain. However, the real purpose of intrinsics is to allow code sequences which cannot be expressed (at all, or in simple-to-identify patterns) to be converted into high-performance versions (frequently a single native opcode): sin, cos, sqrt, etc. In this context unsigned compares, multiplies and divides have many possible formulations as complex chains of operations, so they will be supported as intrinsic methods which will all end up as a single opcode, just like their signed equivalents.

Yeah, I think it would be nice to have operators for these unsigned ops, but then I’m a weirdo that considers operator overload to be full-of-awesomeness (just to beat a dead horse even more).

You’re not the only one. :wink:

Also, great post. Very informative.

I still haven’t quite pinned you down, Roquen.

Is your proposal for implementing unsigned types without extra bytecodes to not introduce any new types at the VM level, and have the compiler box normal signed types in new unsigned wrappers? It would still need some kind of annotation so that it’s known at method resolution time whether the types in the signature are the virtual unsigned primitives or the wrapper classes, so there would still be some changes required to the VM spec.

Or do you add types at runtime for signature purposes only and have conversion between signed and unsigned go via an unsafe method?

The root of the “problem” here is the notion that signed and unsigned integers are different. They are not (given two’s complement form). Computer integers are neither signed nor unsigned; that’s just an interpretation of the bit pattern that you’re choosing to make at any given time. As such you have different sets of operators based on which representation you’re currently thinking in. As an example, Java has always had “unsigned” right shifts; it’s just a different operator than the “signed” right shift. Same for compares, multiplies and divides. So-called conversion is just bit twiddling when expanding the width.
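Java’s shift operators are the clearest illustration: same bit pattern, two interpretations, two operators:

```java
final class TwoReadings {
    public static void main(String[] args) {
        int x = -1; // all 32 bits set; 4294967295 if read as unsigned
        // Arithmetic shift: interprets x as signed, replicates the sign bit.
        System.out.println(x >> 4);   // -1
        // Logical shift: interprets x as unsigned, shifts in zeros.
        System.out.println(x >>> 4);  // 268435455
    }
}
```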

[quote=“Roquen,post:276,topic:39645”]
No it isn’t. The difference between datatypes is indeed the interpretation of the bits, but it’s precisely the fact that bits can be interpreted in more than one way which makes the type system necessary.

Now, it’s true that unsigned arithmetic can be performed without adding any primitive types to the language, but I’m taking it as read that “adding unsigned types to Java” means adding unsigned primitive types to the JLS - otherwise people would just write a library rather than a JCR. And that needs to be done in a way which is consistent with expectations for primitive types: in particular, they can’t be null, and they’re distinct for the purposes of method overloading and resolution. As such they need some VM support: if they’re implemented by compilation down to object types, they will be nullable and e.g. can’t be treated as primitives in the context of reflection; and if they’re implemented by compilation down to existing primitive types, method overloading becomes impossible.

Well then, from my perspective I’m getting unsigned integer support in JDK8, and they’re doing it in a way that requires the least work: just adding methods for the missing operators. Adding them as new types to the specification would be a massive undertaking which would result in pretty much nothing (other than a new set of possible signatures), which IMHO would cause more problems than it would solve. The whole point of not supporting unsigned types in the first place was to prevent ambiguous statements.
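A sketch of what those library-level operators look like, using the static methods on java.lang.Integer slated for JDK8 (divideUnsigned, compareUnsigned, toUnsignedString):

```java
final class UnsignedOps {
    public static void main(String[] args) {
        int x = -2; // bit pattern 0xFFFFFFFE; 4294967294 read as unsigned
        // Same bits, unsigned interpretation; no new primitive type needed.
        System.out.println(Integer.toUnsignedString(x));       // 4294967294
        System.out.println(Integer.divideUnsigned(x, 3));      // 1431655764
        System.out.println(Integer.compareUnsigned(x, 1) > 0); // true
    }
}
```

The int stays a plain int in every signature; only the operation, not the type, carries the “unsignedness”.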

People have written libraries, but 3rd-party library calls are not converted into intrinsic methods. I’m not sure where you’re coming from with objects and null here. None of this is VM-aware (from a specification standpoint). It all happens under the hood of the compiler, which simply ensures none of the JVM rules are violated by its transforms.

I’m certain JDK8 is only getting unsigned integer support as library operations that just happen to become intrinsics. No larger literals, no ‘unsigned’ keyword for us now or likely in the foreseeable future, for much the same reasons given: adding a new primitive makes signatures a bear. From the way Gosling mentioned it, though, it really wasn’t a principled decision to leave out unsigned so much as something they just never got around to.

I was taking my context from replies #242, #243, and #245, which are about adding new primitive types at the language level, whereas you were talking about an implementation of “unsigned types” which doesn’t actually add any types, even at the language level. No wonder we’ve been talking past each other for 3 days. (I’m still not sure what Riven was proposing in #245, but never mind).