Well, the problem is that null reference problems are a subset of a larger class of errors, not the other way around. Not using nulls to your advantage just masks one of the most telltale symptoms of logic errors when working with objects. Put your hand on a hot stove and you feel pain. Take away the burning sensation and you have a worse problem: you’re just as likely to come close to accidentally burning yourself either way, but now, without the sensation of heat or pain, you’re going to hurt yourself sooner and more often.
There is a need to strike a balance between compile-time and run-time checks. Java does a better job than most (all?) other languages in that regard, with a few exceptions (“optional” methods in interfaces and not providing read-only interfaces to collections, for example). Using null in a program is not a problem, though; it’s very useful. Java’s interfaces and single-inheritance model are very good for defining contracts to humans. Static typing helps the compiler and other programmers understand your code better, so there’s a huge benefit to both the human and the computer. Knowing how an interface works isn’t the compiler’s job, though. Enforcing just one part of that contract, such as nullability, wouldn’t provide much marginal benefit to the compiler and would not remove the need for the human to refer to a class’s documentation.
Edit: Speaking of assembly and null pointers, I’ve programmed in assembly on a system that used addresses 0 through 127 for debugging purposes. There is nothing an assembler could do to prevent you from writing to those addresses, but it would have been nice if the hardware treated writes to addresses 0 and 1 as invalid. Not only did it fail silently instead of failing fast, but when you tried to debug code suffering from a “null pointer” write, it would screw up the behavior of the debugger and act differently than if you ran it without a debugger.
I’m kinda ambivalent about checked exceptions. Yes, they add a bit of boilerplate. Is that the end of the world? No, but it is pretty ugly. Is there a good side? Yup - sometimes being forced to deal with an exception is a reminder about something you’ve forgotten. But that said, most of the time it’s just a PITA.
I think that checked exceptions are clever - it’s smart getting the compiler to tell you what might be thrown from library code - but in practice it’s usually just extra work. When you want to know that you’ve handled all expected kinds of exceptions, they’re great. The other 95% of the time, not so much.
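A minimal sketch of the boilerplate being complained about: a checked IOException that the caller can’t do anything useful with, so it just gets wrapped and rethrown (the class and file names here are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CheckedExceptionBoilerplate {

    // The checked IOException forces a try/catch even though the caller
    // has no sensible way to recover; the usual move is wrap-and-rethrow.
    static String readConfig(String path) {
        try {
            return new String(Files.readAllBytes(Paths.get(path)));
        } catch (IOException e) {
            throw new RuntimeException("Could not read " + path, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readConfig("settings.properties"));
    }
}
```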
Yes! Exactly! I had a look at Scala and Clojure and I really like them both (esp. Clojure as I love Lisp) but Kotlin’s the only one I can see working for Java programmers as a whole.
@Cas: Yes, I think Design by contract could be an awesome feature for Java. It might even solve the few cases where you do want checked exceptions, perhaps? Maybe you could assert that a method will or will not propagate certain classes of exceptions?
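Java has no built-in design-by-contract support, so the closest thing today is hand-rolled assert statements. A rough sketch of that, under the assumption that the exception-propagation contract from the post stays an imagined comment (the Account class is invented for illustration):

```java
public class Account {

    private long balanceCents;

    // Imagined contract from the post: "this method never propagates
    // ArithmeticException" - no such annotation exists, so it stays a comment.
    public void deposit(long amountCents) {
        assert amountCents > 0 : "precondition: amount must be positive";
        long before = balanceCents;
        balanceCents += amountCents;
        assert balanceCents == before + amountCents : "postcondition: balance increased by amount";
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(500);                      // run with -ea to enable the assertions
        System.out.println(a.balanceCents);  // 500
    }
}
```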
LOL. I get the feeling you think the second half of that sentence still holds true.
Without wishing to hijack this thread (which I’m finding quite interesting) with a further discussion on pixel bit operations, that example from ra4king is wrong (it doesn’t do clamping). I recommend this old thread, which contains loads of working pixel blend modes based on bit-shift operations. Maybe start a new thread if you want to discuss further - I’d be tempted to claim it’s off-topic, though this thread seems to be pretty much anything-goes. ;D
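For what it’s worth, here’s a tiny illustration of the clamping point - not the code from the linked thread, which uses pure bit-shift tricks - just an additive blend of two 32-bit ARGB pixels where each channel saturates at 255 instead of wrapping:

```java
public class PixelBlend {

    // Additive blend of two 32-bit ARGB pixels with per-channel clamping to 255.
    static int addClamped(int a, int b) {
        int r  = Math.min(255, ((a >>> 16) & 0xFF) + ((b >>> 16) & 0xFF));
        int g  = Math.min(255, ((a >>>  8) & 0xFF) + ((b >>>  8) & 0xFF));
        int bl = Math.min(255, (a & 0xFF) + (b & 0xFF));
        return 0xFF000000 | (r << 16) | (g << 8) | bl;
    }

    public static void main(String[] args) {
        // Without clamping, the red and green channels here would overflow
        // into their neighbours; with it, they simply saturate at 0xFF.
        System.out.printf("%08X%n", addClamped(0xFFCC8040, 0xFF808080)); // FFFFFFC0
    }
}
```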
[quote=“ctomni231,post:216,topic:39645”]
Ahaaarrr, I actually disagree with every part of this post. As I sup another freshly drawn pint of foaming virtual ale, I counter with:
Sproingie is dead right about telling people to just be better at it. No point in trying to be some sort of righteous idealist. If you were right, then nobody’s programs would ever crash, would they, because we’re all perfect. It would seem that the empirical evidence points to exactly the opposite conclusion: we are highly fallible. Let a machine do the job of telling me whether I’m doing it wrong or right. The longer I do this (32 years and counting), the more I wish computers told me I was doing things wrong sooner rather than later. And the only people who get punished by programs that crash are users, not developers.
null object pointers are critically important. null means something. It means: this is pointing at nothing. I have maybe not allocated it. Quite probably I do not want to waste the memory, because memory is indeed still a finite resource. I specifically make use of the null “pattern” for things such as lazy instantiation of expensive-to-construct objects and things which may take up a lot of RAM. It’s fine, for example, to have a million objects, but what if each of those million objects was forced to have some reference in each of 4 fields which were effectively useless? You’d have to point them instead at some stupid NullThingy instance which threw… RuntimeExceptions, probably, on every method you tried to call, because it’s not supposed to be there. It’s doable, but it means every class effectively needs a NullInstance which throws exceptions when any of its methods are called… hideous. null is a trivial solution to a real problem: saving space and being trivial to detect (OS signal).
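A small sketch of that lazy-instantiation use of null: the expensive field stays null until first use, so holders that never touch it pay only for a single reference (the Unit/pathCache names are made up):

```java
public class Unit {

    private int[] pathCache;  // null means "not computed yet" and costs just one reference

    int[] pathCache() {
        if (pathCache == null) {
            pathCache = new int[1024];  // expensive data, only built on demand
        }
        return pathCache;
    }

    public static void main(String[] args) {
        Unit u = new Unit();                       // no cache allocated yet
        System.out.println(u.pathCache().length);  // built lazily on first access
    }
}
```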
Another interesting tidbit about @NotNull - research quite a few years back on Java programs discovered that the majority of cases where an object was referenced actually assumed @NotNull rather than @Nullable. There was therefore a reasonable school of thought that the default should be @NotNull (or rather, undecorated), and you’d specifically have to annotate with @Nullable to allow nulls.
That’s exactly the problem with Java right now; you can’t have @NotNull as the default. We’re at a point where we have a “dumb” Java compiler and we’re supposed to use tools (intelligent IDEs, bytecode transformers, etc.) for everything. OK, that’s fair, but the only way to protect myself from passing null to something that expects non-null is to explicitly annotate that something with @NotNull. But that’s like 95% of the codebase! So we end up using @Nullable only and not @NotNull at all, to avoid the code mess, and only gain half the benefit of compile-time null safety.
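To make that concrete, here’s roughly what the annotation noise looks like with the JetBrains annotations (assuming org.jetbrains.annotations is on the classpath; the class and methods are invented for illustration):

```java
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;

public class PlayerRegistry {

    // The "95% case": nothing here may legitimately be null, so every
    // parameter and return type ends up decorated with @NotNull.
    public @NotNull String displayName(@NotNull String firstName, @NotNull String lastName) {
        return firstName + " " + lastName;
    }

    // The rarer case that actually carries information: null means "no clan".
    public @Nullable String findClanTag(@NotNull String playerName) {
        return null;  // e.g. no clan registered for this player
    }
}
```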
Assuming we don’t want to use another language like Kotlin, this could be solved with an IDE that supports a “null-safe Java mode”. When it’s on, @NotNull doesn’t exist; everything is treated as annotated with it automatically, and you use @Nullable where necessary. While you’re at it (time to order more virtual ale? :)), make everything final as well (except methods/classes, ofc) and have a @Mutable annotation to indicate mutability.
Yes, that’d be a great option for Eclipse to support.
Not sure about the mutability idea - may as well put the const keyword to use if you’re going to go that far. And look what a mess that seems to have made of C++.
No need to go all the way to const. It would just be very convenient if all primitives/references were immutable by default. Maybe I should have said @NonFinal and not @Mutable.
This is what I think:
You want to initialize class fields in the constructor and never change their values (more immutable classes = good). This means final fields by default.
Method code that changes the value of passed arguments is confusing and a frequent source of bugs. This means final method arguments by default.
Local variables that get assigned more than once are relatively rare and again may lead to confusion (e.g. when coupled with long if/then/else chains). One might say that mutable loop variables are very common, but there’s no reason to worry about that these days with the enhanced for loop, forEach, map/reduce, etc. This means final local variables by default.
Anyway, only the first one might be a bit problematic with injection frameworks, but in general it should lead to cleaner code.
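Spelled out by hand in today’s Java, where every final has to be written explicitly, those three points look something like this (the Rectangle class is just an illustration):

```java
public final class Rectangle {

    private final int width;   // point 1: fields assigned once, in the constructor
    private final int height;

    public Rectangle(final int width, final int height) {  // point 2: arguments never reassigned
        this.width = width;
        this.height = height;
    }

    public int area() {
        final int area = width * height;  // point 3: locals assigned exactly once
        return area;
    }

    public static void main(String[] args) {
        System.out.println(new Rectangle(3, 4).area());  // 12
    }
}
```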
I understand the viewpoint, and mostly agree with it, and in a new language it would be a great move. It’s never going to happen in Java, and it would be a bad idea if it did, because it would change the semantics of the language.
I can definitely see the benefit in compile-time annotations for classes/fields though, so that warnings are produced if fields are neither final nor marked as mutable - that’s something I could definitely envisage using.
Slight aside on the importance of final fields - I was interested to find out during the recent thread on double-checked locking that final fields have different assignment semantics in the new Java 5+ memory model. They can never be seen from another thread in an invalid state (unlike mutable fields).
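For reference, a sketch of the double-checked locking idiom from that thread: the volatile field is what makes the idiom valid under the Java 5+ memory model, while the final fields inside the held object are guaranteed to be seen fully initialised by any thread that gets hold of the reference (class names are invented):

```java
public class ConfigHolder {

    static final class Config {
        final int maxPlayers;     // final fields: seen fully initialised by any
        final String serverName;  // thread that obtains the Config reference

        Config(int maxPlayers, String serverName) {
            this.maxPlayers = maxPlayers;
            this.serverName = serverName;
        }
    }

    private static volatile Config instance;  // volatile is what makes DCL valid post-Java 5

    static Config getInstance() {
        Config local = instance;
        if (local == null) {                   // first check, no lock taken
            synchronized (ConfigHolder.class) {
                local = instance;
                if (local == null) {           // second check, under the lock
                    local = new Config(64, "jgo");
                    instance = local;
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(getInstance().serverName);
    }
}
```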
Null is still a reference, it’s just an all-zero bit pattern (it doesn’t have to be, but that’s how every JVM does it). And it throws NPE for every method you try to use it on. Scala’s None is a global value, so it’s not taking up any extra space other than the single None object. And since None is a subclass of Option[T], but a sibling of Some[T], and Option[T] is of course a different class than T, there’s never any danger of mixing them up – and it’s the compiler that tells you when you do, not the runtime.
Scala doesn’t actually try to solve the null problem globally – null is still there, and any reference can still be null. It’s just not used that much because there are type-safe alternatives. Option isn’t a panacea either: for one, it doesn’t solve the problem of “where did this null come from”, since now it’s “everything’s resulting in None, where did this None come from?”. It just makes handling it a lot more explicit and, if you use it monadically, easier to swap out with something better like Validation.
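Not Scala, but the same idea can be sketched with Java 8’s Optional, a rough analogue of Option[T]: absence becomes part of the type, so the caller has to acknowledge it instead of discovering it via an NPE at runtime (the HighScores class is made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class HighScores {

    private final Map<String, Integer> scores = new HashMap<>();

    // An empty Optional says "no score recorded" without handing back a raw null.
    Optional<Integer> scoreFor(String player) {
        return Optional.ofNullable(scores.get(player));
    }

    public static void main(String[] args) {
        HighScores hs = new HighScores();
        hs.scores.put("cas", 9001);

        // The caller has to decide what "missing" means; no null slips through.
        System.out.println(hs.scoreFor("sproingie").orElse(0));  // 0
    }
}
```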
It seems to me that being perplexed by null is like how mathematicians used to get annoyed with zero because it breaks so many things and keeps cropping up…
If your problems only involved positive integers and you had no subtract or divide operation, you too would have a right to fret over seeing a zero involved. Compare by analogy to operations involving only known valid objects that return other valid objects: when you assert that condition, it’s nice to have the compiler prove that it’s true.
I’m late to this response, but I really have to say: every time I log in to JGO, I check this thread.
I’m not getting tired of this… I’m so surprised this is not a flame war…
The thread just gives me sympathy for James Gosling and the other language designers. All the devs at Sun telling them Do This, Do That… A pretty thankless task. All I know is I found C and C++ stressful and Java a joy to work with. As a self-taught programmer, I feel I can do creative things with Java, while with the C-team I spent my time worrying about memory allocation and pointer math.