Generics solve a problem I’ve never had: I’ve never had a collection and wondered what I’d put in it. I use generics because they’re there and they save me some casts, but I wasn’t especially worried about those in the first place. I come from a C/C++ background, so casts don’t seem like the end of the world to me.
Every error that generics catch for me at compile time, I would have seen pretty much immediately as a ClassCastException at run time. Yes, catching errors earlier is better. Do I think the language needed a new feature to turn a runtime exception into a compile-time error? Meh. I don’t really care either way.
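For concreteness, here’s a minimal made-up snippet of the kind of error in question - the raw-type version compiles and fails at run time, the generic version doesn’t compile at all:
import java.util.*;

public class CastDemo {
    public static void main(String[] args) {
        // Pre-generics: the mistake compiles, then fails at run time.
        List raw = new ArrayList();
        raw.add("hello");
        Integer n = (Integer) raw.get(0);        // ClassCastException here

        // With generics the same mistake never compiles:
        // List<String> typed = new ArrayList<String>();
        // Integer m = (Integer) typed.get(0);   // compile-time error: inconvertible types
    }
}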
They have other uses too, I expect, but I do find that generics save me a lot of ugly cruft when actually using Collections (and Comparators, for that matter). They’re also nice for covariant return types. The icing on the cake is the new <> syntax, which removes most of the remaining verbosity as well. Win!
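For what it’s worth, here’s the kind of cruft that disappears - a hypothetical before/after, not code from the thread:
import java.util.Comparator;

// Old style: Comparator works on Object, so every comparison needs casts.
class ByLengthOld implements Comparator {
    public int compare(Object a, Object b) {
        return ((String) a).length() - ((String) b).length();
    }
}

// Generic style: declare the types once and the casts disappear.
class ByLength implements Comparator<String> {
    public int compare(String a, String b) {
        return a.length() - b.length();
    }
}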
wrt. Integer, Float, etc - they actually do behave similarly to primitives in some ways. The number 1 is the number 1; you cannot change its value to 2 such that 1 == 2, which is what it would mean if you could do one.add(one); - this is why they are immutable. They are also very important as keys in Maps - a HashMap’s behaviour is unspecified if a key is mutated in a way that changes its equals()/hashCode() - and very important in sorted collections as well. There’s a whole load of reasons they are designed the way they are, and once this is fully understood, along with other requirements in Java such as its need to get “close to the metal” when required, you also realise why there are separate things called primitives as well - they’re what the computer manipulates in raw form. Etc.
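A made-up sketch of why that matters for hashed keys: mutate a key after insertion and the map can no longer find the entry.
import java.util.*;

// MutableKey is a hypothetical class, just to show the failure mode.
class MutableKey {
    int value;
    MutableKey(int value) { this.value = value; }
    public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).value == value;
    }
    public int hashCode() { return value; }
}

public class KeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<MutableKey, String>();
        MutableKey key = new MutableKey(1);
        map.put(key, "one");
        key.value = 2;                      // mutate the key after insertion
        System.out.println(map.get(key));   // prints null - the entry is lost
        // An immutable Integer key can never be broken like this.
    }
}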
Bit late, but of course that is phantom code. But business requirements + tight budgets tend to lead to code that is not “perfect”. As an example: you might just be mapping to a partner’s data format, not something you design yourself…
Thought of another.
If doing the language now, I think that the new type inference in generics should be the default.
new ArrayList<>() would become new ArrayList(), and if you wanted a type argument different from the one declared on the left, you’d declare it explicitly. <> goes away.
Probably wouldn’t be accepted because it’s not explicit.
Thinking about it there is no sane reason why that isn’t already the case. But it’s 2am and I wouldn’t know a sane reason if it came and bit me on the arse.
Interesting idea. I assume you’re referring to this? The main reason is backwards compatibility. I think you would no longer get unchecked type warnings in some cases because you could not tell if something were legacy code or code using the new semantics.
That’s actually a decent proposal. I like it so much better than the ones that advocate chopping off the left half of a variable declaration. I don’t think people realize how annoying it would be to try to read, or to have to go back and make a declaration explicit as soon as you start changing or reusing your code. Still, eliminating verbosity is a terrible argument. First, it’s easier to read, and programmers tend to type fast. Second, autocomplete saves even more time and accomplishes the same thing. Third, generics didn’t make your code more verbose. It’s one line! It made code less verbose. When I was a kid we didn’t have templates or generics. We had to make every function call in the form ((ClassName)object).someMethod((OtherClass)otherObject):
// New original
Map<String, List<String>> anagrams = new HashMap<String, List<String>>();
// Old original (Why can't we have explicit type safety without the explicit type part??? The new original is so verbose!!!)
Map anagrams = new HashMap();
// Proposed change
Map<String, List<String>> anagrams = new HashMap<>();
// Alternative proposal (not compatible with old code)
Map<String, List<String>> anagrams = new HashMap();
The example he used isn’t that great. It would be nicer to have multimaps. Then you could write Multimap<String, String> anagrams = new HashMultiMap<>(); And there are ways to make your code cleaner. It’s more investment at the start, but it’s shorter in the long run, improves readability, and makes it easier to rewrite portions of your library later. And if it’s not worth doing, maybe it’s not worth making a language extension. (Although I like the main part of the proposal, even if the alternatives aren’t great and his reasoning starts from the wrong premise.) For example:
public class AnagramTable implements Map<String, Collection<String>>
Or
public class AnagramTable extends HashMultiMap<String, String>
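And for reference, a minimal sketch of the anagram grouping itself - hypothetical code, using the diamond syntax from the proposal:
import java.util.*;

public class Anagrams {
    public static Map<String, List<String>> group(Iterable<String> words) {
        Map<String, List<String>> anagrams = new HashMap<>();
        for (String word : words) {
            char[] letters = word.toCharArray();
            Arrays.sort(letters);                // canonical form: sorted letters
            String key = new String(letters);
            List<String> bucket = anagrams.get(key);
            if (bucket == null) {
                bucket = new ArrayList<>();
                anagrams.put(key, bucket);
            }
            bucket.add(word);                    // "stop" and "pots" share a bucket
        }
        return anagrams;
    }
}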
If I can find it, I’ll post a link to someone’s proposal to change the way substring worked, because he was unhappy with its behaviour in one specific program of his, based on a naive assumption about its implementation.
[quote]In my opinion, this implementation of substring(…) should be replaced as soon as possible. We recently spent almost one man-month of engineering time tracking down a memory leak that was related to String.substring(…).
String.substring(…) is a very commonly used method and it does not seem too clever to me to choose such a biased implementation for a “general purpose” method. I understand the advantages of simply sharing the char[] and just using a different offset and length. It is fast, and, if both the original string and the extracted substring are supposed to be kept in memory anyway, it is actually also very memory efficient (getting rid of the duplication). However, if the original string is no longer of interest once the substring has been extracted the current implementation causes a memory leakage nightmare extraordinaire.
In our case, the original strings happened to be Java source files (megabytes and megabytes of them), and the extracted substring was the @author tag in the class’s javadoc (a few characters only). Instead of just keeping a couple kilobytes of author names in memory the complete set of source files never got garbage-collected and was kept forever. This memory leak cost a lot of time and money to track down.
My main criticism is that we are dealing with a general purpose method whose implementation is optimized towards a certain use case (i.e. both strings are supposed to be kept in memory anyway, or at least the substrings are not significantly smaller than the original strings). I find that such a bias is not acceptable for such a common method.
It also makes me wonder how this implementation choice can be reconciled with Java’s general philosophy of simplicity and ease of use. Generally, Java makes it hard for you “to shoot yourself in the foot”, but here it seems to offer a pre-loaded pump gun that already points straight to your big toe.
I am aware of the performance implications if such a frequently used method is suddenly replaced with a potentially much slower implementation, but still I’d recommend getting rid of the shared char[] as soon as possible. I wonder how many Java newbies cause huge memory leaks every day by using String.substring().
If there is strong opposition to getting rid of the shared char[], maybe there is some clever way of solving this with weak/soft references? That might be a good alternative (if it’s feasible). Or the algorithm could decide, based on the length ratio between the original string and the substring, whether a shared char[] should be used or not (thus hopefully achieving a more balanced memory usage profile).
Did anyone else experience costly memory leaks due to this bug?
[/quote]
So… not to be arrogant, but this is why most suggestions to “fix” the Java language horrify me. I know it’s not written in bold in the Javadoc, so it’s almost a reasonable mistake to make, but the error is in assuming anything when it’s reasonable to imagine it implemented either way. If it’s not stated whether a standard String uses one technique or another, it should be considered undefined. Assuming his organization wasn’t cutting corners and it just slipped everyone’s mind, I wonder why it wasn’t caught in the first hour of debugging or profiling. Extracting the @author tag is something that could be assigned for homework in an introductory Java class.
Consider what he did. 1) Open a file. 2) Read its entire contents into a byte array. 3) Construct a String containing a copy of that array. 4) Presumably use indexOf and substring to isolate the information he wants. 5) Store lots of duplicate Strings long term without interning them… Then consider the alternative. 1) Open a file. 2) Stream it using StreamTokenizer. 3) Close the file early if the author tag is found or omitted.
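Something along these lines - a made-up sketch, using a BufferedReader rather than StreamTokenizer for brevity: stream the file line by line and keep only a defensive copy of the tag’s value, so nothing pins the whole file’s contents in memory.
import java.io.*;

public final class AuthorScanner {
    // Returns the @author value from a source file, or null if the tag is absent.
    public static String findAuthor(File source) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(source));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                int i = line.indexOf("@author");
                if (i >= 0) {
                    // new String(...) copies the characters, so the line's
                    // backing char[] (shared by substring pre-7u6) can be GC'd.
                    return new String(line.substring(i + "@author".length()).trim());
                }
            }
            return null;
        } finally {
            in.close();  // close early; no need to read the rest of the file
        }
    }
}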
Given that his team could not find the problem, made some poor design decisions, and spent more time trying to fix problematic code than it would have taken to rewrite it, I assume we are not hearing from an expert. Not that that is a problem, or that the documentation shouldn’t include a warning, but I’m glad no one took advice from someone saying “This is how I assumed it would work, without looking at the documentation, thinking about how I would design a String class, looking under the hood, or thinking about the implications of using one method or the other. I don’t know what the benefits of the current method are, but I have anecdotal evidence of one specific use case where ignoring the possibility of a problem eventually led to a problem that was more expensive to track down after the fact than it would have been to do right the first time.” Heh. “Engineering.” I wonder what they thought the copy constructor of String was for?
I think it’s sad that that proposal (or one along those lines) actually made it, was accepted, implemented and ready to be released.
That new implementation of String will be one of Java 7’s updates, and I’m sure some applications will see their memory usage explode: this time not from memory leaks but from the duplication.
I for one have such an application / webservice that makes heavy use of the shared char[] to cut RAM usage by over a factor of 20. With the upcoming JRE I can either choose not to upgrade, or create my own String class and refactor basically the entire application, creating a mess - and who’s going to pay for that? So not updating the JRE it is, then…
I’m feeling all dirty that my app will be broken by a minor update. :emo:
@OP I really see no problem with varargs; they are convenient in some select situations.
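For anyone who hasn’t used them, a trivial made-up example of the convenience: one signature handles any arity without hand-built arrays.
public class VarargsDemo {
    static int sum(int... xs) {
        int total = 0;
        for (int x : xs) total += x;   // xs is just an int[] inside the method
        return total;
    }
    public static void main(String[] args) {
        // sum(), sum(1) and sum(1, 2, 3) all compile; no caller builds an array.
        System.out.println(sum(1, 2, 3));   // prints 6
    }
}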
The only things that really irk me with Java currently are:
We need a better alternative to JNI.
Additionally, some of the Java classes, such as the ones involved with screen capturing, are horribly slow (pretty sure; haven’t looked into it in a while).
I’ve never looked at JNA… if it’s just a wrapper for JNI, then it doesn’t address the real problem. Don’t get me wrong, cleaner is good… but the cost of crossing the boundary is the problem.
Yeah… but when you look at how the JVM is specced, you realise that the JNI APIs are the way they are for a reason. Some C++ helper wrappers around JNI would just make things more manageable. I think I saw such a thing maybe 10 years ago.
@Cas: While it’s true that some functions do require a heavyweight boundary, the vast majority do not. Be that as it may, they’re thinking about a lightweight replacement… I’d expect it sometime after Jigsaw and before structs.
@Native(library = "somelibrary", function="someTrivialCThing")
private native void someTrivialCFunction(int a, int b, float x);
would go a long, long way to making everyone’s lives that much easier. That’s 50% of the OpenGL API right there. It’s rather surprising this wasn’t in there from the start, perhaps by adding “parameters” to the native keyword, but now that we’ve got annotations it’s easy to retrofit. A few more syntactic niceties to allow easier marshalling of arrays and Strings in and out would also be a great boon and would cover about another 40% of OpenGL.
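For comparison, here’s roughly what the same binding costs today - a made-up sketch with placeholder names: the Java declaration looks similar, but each function also needs hand-written C glue compiled into the library.
public final class GLBindings {
    static {
        System.loadLibrary("somelibrary"); // placeholder library name
    }
    // Today this declaration alone is not enough: javah generates a header,
    // and someone must hand-write the mangled C function
    //   Java_GLBindings_someTrivialCFunction(JNIEnv*, jclass, jint, jint, jfloat)
    // that forwards to someTrivialCThing. The proposed annotation would
    // generate that glue automatically.
    private static native void someTrivialCFunction(int a, int b, float x);
}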