Things you disagree with in the java language

Well, what you want != what the code does. :L

It’s better not to manipulate images on a per-pixel basis unless you know exactly what you’re doing.

http://death-mountain.com/2010/04/rgb-hsv-and-hue-shifting/
http://www.dig.cs.gc.cuny.edu/manuals/Gimp2/Grokking-the-GIMP-v1.0/node52.html

[quote=“Best_Username_Ever,post:199,topic:39645”]
So, just because we can’t fix every possible source of bugs at compile-time, we shouldn’t try fixing any of them? What kind of logic is that? And really, I don’t think I have to further justify why focusing on null-safety is useful. Have you ever written a Java program without thinking about whether something can or cannot be null every few minutes?

As for the rest of your post, you’re not making much sense. Null-safety isn’t about removing nulls, or the ability to set a reference to null. It’s about making nullability explicit at the source level, so the compiler can guarantee safe usage of such data. It’s beneficial to noobs and experts alike.
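One way to make nullability explicit at the source level, as described above, is `java.util.Optional` (Java 8+): absence becomes part of the type, so the caller is forced to deal with it instead of tripping over a surprise null later. A minimal sketch with a hypothetical `findUser` lookup:

```java
import java.util.Optional;

public class Lookup {
    // The return type says "may be absent" instead of silently returning null.
    public static Optional<String> findUser(int id) {
        return id == 1 ? Optional.of("alice") : Optional.empty();
    }

    public static void main(String[] args) {
        // The caller must unwrap explicitly; no hidden null escapes.
        String name = findUser(2).orElse("unknown");
        System.out.println(name); // unknown
    }
}
```

This is weaker than compiler-checked `@NotNull` annotations (nothing stops you calling `get()` on an empty `Optional`), but it makes the contract visible in the signature.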

Bitwise operators. Instant migraine.

A PositiveInteger subtype? That’d be awesome, can you get to work on that? In the meantime, let’s get back to things that are actually tractable problems, like nullity.

NPEs are anything but “fail fast”. They’re the result of a value that could have originated from anywhere, including something that’s long out of scope. Since nothing is ever forced to deal with nulls, and nothing can demand a non-null value without manually checking, the decision to handle an anonymous failure represented by a returned null gets kicked down the road until some unsuspecting code trips over it.

And no, using nonsensical defaults to avoid NPEs isn’t how you deal with nullity. The type system is one way, but a separate static constraint system in the compiler (of which a type system is one example) would be another approach, if not a perfectly ideal one.

At this point: what I want = what my code does :wink:
It sounds so selfish when I talk about it ^^, sorry for that =)

It already works well; it will be even better if I manage to optimize it XD

Why, it’s so simple :L it’s like playing with Legos.

    0101 (decimal 5)
AND 0011 (decimal 3) // AND is useful for "masking", i.e. grabbing only the specific bits you care about
  = 0001 (decimal 1)

   0101 (decimal 5)
OR 0011 (decimal 3) // OR is useful for combining bits together
 = 0111 (decimal 7)
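The two worked examples above can be run directly in Java; binary literals (Java 7+) make the bit patterns visible:

```java
public class BitOps {
    public static void main(String[] args) {
        int a = 0b0101; // decimal 5
        int b = 0b0011; // decimal 3

        // AND keeps only the bits set in both operands (masking).
        System.out.println(a & b); // 1 (0b0001)

        // OR keeps the bits set in either operand (combining).
        System.out.println(a | b); // 7 (0b0111)
    }
}
```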

bit shifting is so straightforward.

0100 >> 1 becomes 0010. (move the bits to the right 1 time)

0001 << 1 becomes 0010. (move the bits to the left 1 time)

an int is made up of 32 bits (= 4 bytes (1 byte = 8 bits (1 bit is either 0 or 1)))

The first (least significant) byte of an int is “int & 0xFF” (bytes are counted from right to left).

The second byte of an int is either “int & 0xFF00” (which keeps the byte in its original position) or “(int >> 8) & 0xFF” (which shifts it down into the lowest byte).
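Note that those two expressions give different numeric values; a small sketch to make the difference concrete (class and method names here are just illustrative):

```java
public class SecondByte {
    // Keeps the second byte where it is, zeroing everything else.
    public static int inPlace(int v) { return v & 0xFF00; }

    // Moves the second byte down into the lowest byte position.
    public static int shifted(int v) { return (v >> 8) & 0xFF; }

    public static void main(String[] args) {
        int v = 0x1234;
        System.out.println(Integer.toHexString(inPlace(v))); // 1200
        System.out.println(Integer.toHexString(shifted(v))); // 12
    }
}
```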

ARGB pixel (32 bits): AAAA AAAA RRRR RRRR GGGG GGGG BBBB BBBB

“int b = pixel & 0xFF”:

    pixel 0000 0000 0000 0000 0000 0000 0000 0000
    AND   0000 0000 0000 0000 0000 0000 1111 1111  ( & 0xFF )
    b   = 0000 0000 0000 0000 0000 0000 0000 0000

“int g = pixel & 0xFF00” or “int g = (pixel >> 8) & 0xFF”

    pixel 0000 0000 0000 0000 0000 0000 0000 0000
    AND   0000 0000 0000 0000 1111 1111 0000 0000  ( & 0xFF00 )
    g   = 0000 0000 0000 0000 0000 0000 0000 0000

or

    pixel      0000 0000 0000 0000 0000 0000 0000 0000
    pixel >> 8 0000 0000 0000 0000 0000 0000 0000 0000
    AND        0000 0000 0000 0000 0000 0000 1111 1111  ( & 0xFF )
    g        = 0000 0000 0000 0000 0000 0000 0000 0000

“byte r = (byte) ((pixel & 0xFF0000) >> 16)” (the cast is needed because the shifted result is an int)

`pixel 0000 0000 0000 0000 0000 0000 0000 0000
AND   0000 0000 1111 1111 0000 0000 0000 0000  ( & 0xFF0000 )
=     0000 0000 0000 0000 0000 0000 0000 0000

>> 16 0000 0000 0000 0000 0000 0000 0000 0000

r =   0000 0000 (a byte can only hold 8 bits)`
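Pulling the above together, a small sketch that extracts and repacks the four channels of a 32-bit ARGB pixel. The unsigned shift (`>>>`) matters for alpha, since a plain `>>` would smear the sign bit across the result:

```java
public class Argb {
    public static int alpha(int p) { return p >>> 24; }          // unsigned shift!
    public static int red(int p)   { return (p >> 16) & 0xFF; }
    public static int green(int p) { return (p >> 8) & 0xFF; }
    public static int blue(int p)  { return p & 0xFF; }

    // Reassemble a pixel from its channels (each assumed to be 0..255).
    public static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int p = pack(0xFF, 0x10, 0x80, 0xC0);
        System.out.println(alpha(p)); // 255
        System.out.println(red(p));   // 16
        System.out.println(green(p)); // 128
        System.out.println(blue(p));  // 192
    }
}
```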

Don’t forget the signed cases with bit shifting :persecutioncomplex:

Here’s an interesting article on Phantom Types (I’ve never heard of them before). It looks like another way to use the type system to enforce certain constraints.

@Spasi
The last time I did not stop to think about what my code did was five years ago. With the exception of the time I already described, I never have trouble with nulls. If you can’t remember whether the type of map you’re using can store null, whether it makes sense for your program to try to store null, or what purpose storing a null key or value serves, then you’re also going to have trouble remembering whether System.arraycopy will function correctly if you give it a String and a List (because, after all, the method header takes Objects), whether it makes sense to use a negative index to access a list, whether it is okay to add an object to a Collection while in a for-each loop, and whether it’s okay to swap the values of two parameters named “min” and “max”.

I think you read the latter half of my post with a mental filter. I know you can make it so you can turn nullability on and off. I just don’t think it solves a problem (for any language not using pointers) that professional or long-time programmers haven’t already learned to avoid under broader circumstances. You make programming with nulls sound like the punishment in the myth of Sisyphus. It does not take any extra energy to remind yourself “Don’t use null here” if you already have to think “Don’t use zero there. Make sure this collection isn’t empty. Make sure I trim the spaces in this String before trying to parse it.” All those things require virtually no effort anyway.

Having to continuously debug code because you do not check the JavaDoc of a class, on the other hand, is much worse: code, run, fail, debug, code again, run, fail, debug, code some more. There’s a saying in carpentry and engineering: measure/calculate twice, build once. You don’t need to check twice in programming unless you cannot remember something, but if you follow the author’s instructions for using a class, you never see NullPointerExceptions unless someone else’s code causes the bug.

@sproingie
Pascal lets you create subtypes of integral and floating-point numbers. Something like “type PositiveShort = 1..65535;” would try to fit that in two bytes and let you use it like a normal type.

I already said NullPointerExceptions don’t occur on the same line that null gets assigned, but that’s true for most errors. In practice the stack trace of most NullPointerExceptions will direct you pretty close to where the actual mistake was made, if not to the same function that caused the problem. Even when it does not, it wouldn’t be a problem if you did parameter checking or assertions. Having a known value to check against makes debugging with assertions or a software debugger much easier. You can even deliberately set a reference to null when you’re done with it to give yourself an extra opportunity to fail “fast” and make it easier to hunt down other unrelated bugs like concurrent access issues. (Similar to how streams throw an exception if you use them after closing them, but with less code, easier debugging, and some help for the garbage collector.) That’s why I related NullPointerExceptions to a person’s immune system response.
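The “deliberately set a reference to null when you’re done with it” idea might look like this; `Session` and its methods are hypothetical names, just to illustrate the pattern:

```java
public class Session {
    private StringBuilder buffer = new StringBuilder();

    public void write(String s) {
        buffer.append(s); // throws NPE immediately if called after close()
    }

    public String close() {
        String result = buffer.toString();
        buffer = null; // fail fast on any further use; also helps the GC
        return result;
    }
}
```

Any use-after-close now blows up at the call site with an NPE, instead of silently appending to a stale buffer and corrupting state somewhere far away.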

I confess to not having a great deal of trouble with NPEs - they are usually trivial to spot and occur early on and usually during testing. There are times when it’d be nice to be able to specify though.

BUE would almost certainly be in the same camp as myself and Roquen in that a design-by-contract system for Java would go a long way to preventing a lot of bugs from occurring. Whether something can or cannot be null is just one small part of a possible contract; you could just as easily specify the domain of an int to prevent it ever being < 0, for example. Pre and post conditions, constness, and invariants are nice things to be able to specify. Like generics though they have a remarkable domino effect on your code.
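A minimal sketch of those pre- and post-conditions using plain Java asserts (enabled with `-ea`); real design-by-contract tools would generate checks like these from declarations, and `Range.clamp` here is just an illustrative example:

```java
public class Range {
    /** Precondition: 0 <= min <= max. Postcondition: result is within [min, max]. */
    public static int clamp(int value, int min, int max) {
        assert min >= 0 && min <= max : "precondition violated: bad range";
        int result = Math.max(min, Math.min(max, value));
        assert result >= min && result <= max : "postcondition violated";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(clamp(5, 0, 3));  // 3
        System.out.println(clamp(-1, 0, 3)); // 0
    }
}
```

The difference from true DbC is that these are runtime checks, not compile-time guarantees, which is exactly the distinction drawn later in the thread.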

Cas :slight_smile:

I have eliminated a lot of NPEs by using semi-immutable objects with member variables marked ‘final’, and asserting that they’re non-null in the c’tor. It’s not perfect but it tends to fail faster and make the root cause obvious.
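That pattern might look like the following sketch (the `Player` class is hypothetical); `Objects.requireNonNull` does the assertion in the c’tor, so a bad argument fails right there rather than at some distant use site:

```java
import java.util.Objects;

public final class Player {
    private final String name; // final: can never be reassigned after construction

    public Player(String name) {
        // Fails here, in the constructor, if name is null.
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    public String name() { return name; }
}
```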

I’ll have to give one of the design-by-contract annotation processors a go at some point - I worry you could waste a lot of time adding constraints rather than getting things done, but you can abuse any tool if you really try. :slight_smile:

@princec
What do you mean by domino effect?

So you tweak a bit of code to add a constraint… then some other code without DBC which it relies upon violates that constraint. So you go through and attempt to put constraints in there, too… and the whole thing snowballs, and you end up spreading contracts throughout your code until you get to code which you have no control over and are forced to either suppress DBC or write constraining wrappers around it all.

Cas :slight_smile:

Hmm. That sounds really bad. How does that apply to Generics? Is it still a problem?

Yes, contracts will infect a great deal of the rest of your code in much the same way that static types do. If you end up having to work around the contracts you stick in your code, then perhaps you need to review why you’re violating the contracts in the first place, or whether you were really providing that contract at all in every case.

As for nulls and asserts and sentinels and parameter checking and all that … those are all runtime checks. The purpose of @NotNull and its moral equivalents is to do it all at compile time.

NPEs are very annoying to track down, but I agree… they are not a problem except for beginners who are new to the language, or experts who completely forgot to initialize something. Seriously, it is because Java is “Object” based that everything in it is capable of causing these exceptions. But when I moved to the language, I got used to it, just like I got used to not having access to pointers, or having to name my class exactly like my file name.

If you are creating a bunch of NPEs anyway, you really don’t have any business coding. No program works well when it creates a bunch of references you aren’t going to use. That bad practice of not initializing your Objects/variables is why there are so many segmentation faults and heap errors in C/C++. How are programmers supposed to learn anything if we aren’t getting punished for writing bad code?

Funnily enough, I am in favor of an annotation that gives this functionality (like @Nonnull). It is a very good idea. However, I don’t think it should be the default, because it produces lazy code where programmers don’t have to code defensively. Leave that notation for people who want to make their code behave in a certain way.

If there is one thing I like Java for, it’s that it makes good programmers. The language has very high standards when it comes to design and how it was written. The fact that it actively prevents us from causing segmentation faults is one of the best fail-safe designs I’ve ever witnessed in a language to date. NPEs are not a problem at all. They get beginners writing better code faster. That is something I can always get behind.

I wouldn’t normally respond, but it goes on and on with this sentiment. This sort of argument has been advanced against things as basic as memory protection, and some advocates of “duck typed” languages even now put it forth against static typing: to prevent errors, just Be A Better Programmer. It’s facile and bankrupt. No compiler technology has ever advanced due to moral condemnation of programmers who just aren’t manly and robust enough to deal with Things As They Are.

@sproingie: Please don’t take things too far out of context.

Programmers have to take a little bit of responsibility for the code they write. We can’t just bandage and cover up bad coding design because “we are not manly enough”. No computer language is perfect, and no computer language is ever going to be perfect. We are the ones responsible for creating the next generation of programmers. If we make it all “rainbows and butterflies and roses”, then how are we going to get robust, workable code?

People learn how to code better through failure. That is a fact of life. I would rather throw someone an NPE and have them learn to be better than bandage the error and have them post on a forum “Why isn’t this working?” or “Can you help me find the error in this code?”. Bandages make life harder for everyone because they don’t produce stack traces. Getting code to run error-free is frustrating enough for you, me, and the entire community as it is.

We don’t fix anything this way; all we do is move compiler errors to logic errors. People have to learn to program, and compiler errors are the best way to learn. The fact that only that one statement was singled out means there is a lot of passion in this subject and your programming skills are way above novice. So please, take a step back and try to remember how it was when you were a beginner. The best way to learn is from mistakes… always.

I simply don’t agree that you throw out any notion of contracts that enforce against a whole class of mistakes, statically, before the program is ever run, simply because of some notion of “learning through mistakes”. And neither does anyone writing in anything above assembly.

So, I see that the “human element” is completely “null” in coding practice, then. If many computer science people are thinking this way, then we are just setting up another language for failure. Of course you want to prevent errors. All good coders want to prevent errors.

However, you can’t code against errors if errors never show up.

It is like putting yourself in a bubble so you never get sick. There are fail-safes for code… and then there is just overdoing it. NPEs are something very trivial that any good programmer can easily prevent. Seriously, why should we guarantee that programmers will never see errors for not initializing Objects? That is bad coding practice in “all languages”, not just assembly.