It’s really so entertaining watching people break their wrists wringing their hands every time there’s a new feature in the language. You’d think they asked you for a three-way with bondage.
You have to admit there was a certain elegance in the simplicity of the language back in the 1.4 days. There was very little magic, just simple components whose inner workings were very easy to understand.
Or put it this way: I am still constantly amazed that even though Java is about as simple as an OOP language can get, truly basic fundamentals, such as how (and why) Java’s object model and GC work, are often totally beyond many C++ programmers.
Cas
Lambdas have been around since the ’30s…this ain’t new-skool stuff. In “real” usage since the late ’50s…truly not new-skool stuff. Since I haven’t said so in this thread: I couldn’t care less about Java having lambdas or not, because it doesn’t address any of my personal major concerns. But it’s been a glaring hole in the language…and it only seems fad-like because C++ and Java have been the rare exceptions in not having lambdas.
[quote]You have to admit there was a certain elegance in the simplicity of the language back in the 1.4 days. There was very little magic, just simple components whose inner workings were very easy to understand.
[/quote]
Certainly!
And although I really do appreciate using some of the additions to java (and I wouldn’t want to go back to 1.4 for generics alone, flawed as they might be), they often come with some ‘gotchas’.
They often obfuscate what you’re actually doing while not adding anything worthwhile beyond code brevity. They often feel like workarounds for problems that are inherent to the platform and arguably shouldn’t be fixed by just hiding them behind nicer syntax.
And imho annotations are often really abused (especially in some enterprise stuff), to the point that they open up a whole Pandora’s box of potential runtime issues that could/should have been caught by the compiler (or live in configuration instead of source code). But hey, we do test-driven ‘agile’ development now, right? That makes everything better :-\
But I digress.
[quote]Or put it this way: I am still constantly amazed that even though Java is about as simple as an OOP language can get, truly basic fundamentals, such as how (and why) Java’s object model and GC work, are often totally beyond many C++ programmers.
[/quote]
Maybe that’s because C++ isn’t necessarily an OOP language, while Java aims to be strictly that. As such, I actually like them both, but for very different reasons.
I often learn a lot of non-OO but really useful things from C++ programmers too.
I believe lambdas were going to be in the original Java, but they didn’t make it due to time constraints.
I think when we get used to them, they will be a good thing, removing unnecessary verbosity.
As I understand it, they will open up other possibilities, such as a Java version of C#’s Rx (http://msdn.microsoft.com/en-us/data/gg577609.aspx), which is some cool stuff.
DK.
Just as long as they don’t break backwards compatibility, I’m fine.
I want them to release a Java that breaks backwards compatibility… adds operator overloading, structs, unions, removes the generic erasure bull crap and implements generics the same way C++ templates are (minus the HORRENDOUS syntax - keep that the same). Several core Java API developers have mentioned how the chains of backwards compatibility are going to restrict Java from competing with new languages that have learned from its horrible design decisions. And for god’s sake, give me some function pointers… I know that is ACTUALLY coming in the next version of Java (under a different name).
Isn’t Java 8 going to have lambda functions?
Yeah, you’ll be able to do stuff like this:
class Person {
    private final String name;
    private final int age;

    public static int compareByAge(Person a, Person b) {
        return Integer.compare(a.age, b.age);
    }

    public static int compareByName(Person a, Person b) {
        return a.name.compareTo(b.name);
    }
}

Person[] people = ...
Arrays.sort(people, Person::compareByAge);
I hope the VM treats this syntax like C does as far as efficiency goes… I’m sure it will.
Person::compareByAge is a method reference. A lambda function would mean you don’t even create a method at all. Both syntaxes are in 1.8 which you can try now if you like.
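To make the distinction concrete, here’s a minimal, self-contained sketch contrasting the two forms (the `Person` class and field names here are just illustrative, not actual 1.8 API):

```java
import java.util.Arrays;

class Person {
    final String name;
    final int age;
    Person(String name, int age) { this.name = name; this.age = age; }

    static int compareByAge(Person a, Person b) {
        return Integer.compare(a.age, b.age);
    }
}

public class SortDemo {
    public static void main(String[] args) {
        Person[] people = { new Person("Bob", 40), new Person("Ann", 25) };

        // Method reference: reuses an existing named method as the comparator.
        Arrays.sort(people, Person::compareByAge);

        // Lambda: the comparator body is written inline, no named method at all.
        Arrays.sort(people, (a, b) -> a.name.compareTo(b.name));

        System.out.println(people[0].name); // Ann sorts first by name
    }
}
```

Both expressions produce a `Comparator<Person>`; the only difference is whether the behavior lives in a named method or inline at the call-site.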
@princec: I have zero doubt that we would be much better off with transputer-like configurations. The cost vs. power of GPUs vs. CPUs demonstrates this. But I can’t see it happening, as it’s too much of a paradigm shift…see how people get their knickers in a knot over tiny changes like (fill in the blank of any recent or near-future Java addition). Transputers for the desktop require software to be completely rewritten (from the base languages & OS up), and the same for the hardware.
And back to the language issue…it seems really tricky to me. Taking sproingie’s suggestion of message passing: OK, you could do some actor-based language (for instance) and that could address a subset of use-cases (pretty good for general-purpose programming), but it doesn’t seem like you’d get good coverage of, say, signal processing. For stuff like that it seems like you’d really want some data-oriented language. Could the two be merged? Perhaps. The cost of R&D and the horror stories of PS3 programming certainly seem unlikely to motivate some company into adventuring down this road. (The rumored PS4 specs indicate they’ve gone back to a classical architecture.)
As for GPUs exposing something along these lines…that’s more likely, but to do so means exposing some hardware details which are currently hidden behind the scenes. Since supercomputers these days are being built out of a f*ckton of high-end GPUs…never say never.
@ClickerMonkey: forward compat is a bigger problem than back-compat. Jigsaw and defender methods should greatly help with both. Unions? C-style unions can’t happen…not type safe. Structs: on the table in some undefined way. Operator overloading: too much resistance…but it doesn’t really matter, because you can roll it yourself (or use an alternate JVM language). Getting rid of type erasure: in the works. C++ templates: not at all related to generics. Templates are poorly designed macros…having macros would be very nice.
back on lambda: reducing verbosity is only one facet.
for(Foo foo: fooList) { foo.doSomething(...); }
fooList.forEach(foo -> { foo.doSomething(...); });
Both call ‘doSomething’ on every ‘Foo’ in ‘fooList’, but the first has stricter requirements than the second: the first must be processed sequentially in the natural order of ‘fooList’; the second need not be. Additionally, chained statements can be transformed if written in the second style, where they cannot if written in the first.
I wouldn’t say that naturally follows: doSomething would have to be a pure function to be reordered like that, which would make a foreach kind of pointless as opposed to a map. Scala requires you to specifically state your parallel intent with “par”: [icode]fooList.par.map(_.doSomething())[/icode]. Even Data Parallel Haskell requires you to use parallel arrays (and gives you syntax for them) instead of doing it automagically. The only language I can think of off the top of my head that’s parallel by default is Fortress, and it was designed that way from the start.
I suppose you could say GLSL is also parallel by default, but more in the sense of executing multiple instances of an otherwise serial shader program. Still, what with the output capabilities increasing with things like transform feedback, that might just be good enough.
Same with Java 8, except with parallel()
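For instance, a sketch of what that looks like with Java 8 streams, where the only difference between sequential and parallel execution is the choice of entry point:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);

        // Sequential: elements are processed in encounter order.
        int seqSum = nums.stream().mapToInt(n -> n * n).sum();

        // Parallel: the runtime is free to split the work across threads;
        // the reduction gives the same result because sum is associative.
        int parSum = nums.parallelStream().mapToInt(n -> n * n).sum();

        System.out.println(seqSum + " " + parSum); // 55 55
    }
}
```

The call-sites are otherwise identical; the stated intent (“parallel”) is the whole difference, much like Scala’s `par`.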
I’m not sure you can call a PS3 a transputer, or even close to that.
I might be wrong, but my understanding of the PS3’s Cell processor is that it’s basically a traditional IBM PowerPC-like CPU with 8 satellite cores that are good for signal processing and such. I thought those horror stories had mostly to do with the state of the development tools in the early years, and that the GPU was sort of underpowered and needed ‘help’ from the Cell processor to offset that, which meant hardcore low-level programming of the Cell’s SPUs (which was obviously not on many developers’ CVs).
Well, the architecture is kinda alien to C++ - there are no inherent built-in language features to make programming such an architecture easy.
Cas
Looks like Ruby.
@ReBirth: Looks like lots of languages.
@erikd: The PS3 model isn’t transputer based, but it’s much closer than what we currently have and has similar issues. And yeah, a big part of the problem seems to have been tools…and I want to think the estimates on R&D costs were around 2 billion USD…they probably would have been better off spending a bit more on the software side. The failure of a non-traditional architecture isn’t going to help encourage folks to take the huge risk of walking that path. Especially since Sony is moving back to a traditional architecture system…it’s a pretty strong statement that the experiment was a failure and that it’s too risky to continue down that road.
And a (non-embedded) transputer-based system is a harder nut to crack, since it would have to deal with scalability (as opposed to a fixed-hardware embedded system): changes in scratch memory sizes per cell; the number and configuration of the communication channels between cells (say going from 4-way planar, to 6-way planar slices, to a 16-way hypercube); potentially moving from individual cell configurations to blocks of cells with common resources; etc. Now, I don’t think these issues can’t be handled, but it would require (as I said) ground-up retooling, including the base languages, which would be a major paradigm shift for programmers (who never resist change). Couple in the R&D cost and risk and I can’t see anyone attempting it. Except, potentially, if GPUs, which are ever increasingly being used for general-purpose programming, get close to being able to perform basic global illumination in realtime, and usage by supercomputers again causes them to start making baby steps in this direction.
@sproingie & nsigma: Yeah, I’m doing a poor job of saying what I’m attempting to say. I think the root of the problem is that understanding why lambdas/closures are interesting requires personal experience. Like, how do you explain to someone without deep experience with LISP why it’s so powerful? You could say: “Well, code really IS data” and “Well, it’s trivially meta-circular”…but that doesn’t really explain anything, does it? Well, lambdas allow another mechanism to treat code as data. And behavior can be passed as a variable to methods. No good…sounds like function pointers. OK, in my first example the action (iteration in this case) is handled by the user of the type’s code…in the second it’s handled by the exact type of the ‘fooList’ in question. If multiple types are called at that site, then how the action is performed is type dependent; or if you change a type, then you don’t have to rewrite all of your call-sites to change the behavior, which you would if you go the first route.
I’ll make another attempt, which is doomed to failure. In my trivial example, the action being performed in the first case is user code on some type. In the second case it is a mixture of language-dependent behavior (what transforms can legally be applied with a specific runtime & specification) AND the concrete type of the ‘fooList’. In my example the code is mandating that the action is sequential in the first case…in the second it is not. (Remember, this is a trivial example poorly attempting to illustrate a point.) The implementation of ‘forEach’ on a given type is free to do whatever it wants to perform the action…it’s the programmer’s job to choose the given type (so it could use fork/join, as a single example). Clearer? I doubt it.
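A hypothetical sketch of that point: two types sharing the same ‘forEach’ contract, where one mandates sequential order and the other is free to fan out. (The `FooList` interface and class names here are made up for illustration; they are not real library types.)

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical contract: the call-site only says WHAT to do, not HOW to iterate.
interface FooList<T> {
    void forEach(Consumer<T> action);
}

class SequentialFooList<T> implements FooList<T> {
    private final List<T> items;
    SequentialFooList(List<T> items) { this.items = items; }
    public void forEach(Consumer<T> action) {
        for (T t : items) action.accept(t); // strictly in natural order
    }
}

class ParallelFooList<T> implements FooList<T> {
    private final List<T> items;
    ParallelFooList(List<T> items) { this.items = items; }
    public void forEach(Consumer<T> action) {
        items.parallelStream().forEach(action); // free to use fork/join workers
    }
}

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");
        FooList<String> list = new ParallelFooList<>(data);
        // Identical call-site; swapping the concrete type swaps the strategy.
        list.forEach(s -> System.out.print(s.toUpperCase()));
    }
}
```

The lambda at the call-site never changes; choosing `SequentialFooList` vs. `ParallelFooList` is what decides how the action is performed.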
Let it break backwards compatibility! It is a curse, not a blessing. The mindset that languages must grow while maintaining source-level backwards compatibility is ridiculous. It is not as if compilers for the old language versions disappear, or that you could not write tools to at least partially automate conversion. On the other hand, if you have an ever-growing “standard” language definition, you will not only have a poorly designed ad hoc language (like C++), but you will have a single de facto “official” compiler with lots of bloat and corporate lock-in. (Think of Oracle’s HotSpot being so irreplaceable, and the battle of industry titans to prevent write-once-compile-anywhere from becoming a reality for anything besides their own basically proprietary technologies: Flash, Objective-C, HTML5, C#.) New changes keep making code harder for humans and computers to read. The alternative is to make language changes that express programmer intent better, so code is still easy for humans to read (even if it is now marginally harder to write the first time) and easy for the compiler to optimize (even if you had to add extra reserved words).
So instead of having stateless types with static methods to mirror classes and interfaces, you get lambdas. Instead of structs, you get escape analysis (which is great, but would be expected even if the language had structs.) Instead of improving Generics, you get type inference. All these things make Java more complicated and make alternative runtime implementations harder. It is irritating that de facto standard makers waste their time pursuing half baked improvements when the same amount of time invested in improving the language’s old features would solve the same problems in a straightforward way.
I agree with the sentiment that Java should not look like JavaScript, but that makes super-paranoid assumptions about the direction of Java and the incompetence of Oracle. Could Java 1.X (as X approaches infinity) turn into that very same language? Wouldn’t it be worse if Java gave you no option because there was never a fork?
Ideally Java 2.X would look like Java 1.7 without the mistakes and annoyances that are obviously undesirable in hindsight, and less like Java 1.∞. Java 2.X could feature conservative changes to the language and major changes to the standard API. Java 1.∞, with its abhorrent mix of C#, Python, JavaScript, and other nightmares, could be developed as a new JVM language with a new name. (Hey, that’s a good idea!)
I like the prospect of Java 2.x breaking off into a new language.
However, one has to consider that the credibility of this language is already at a low point. Completely ditching the old framework of backwards compatibility will shut out a lot of customers. People might question Oracle’s actions even more, as to why they had to ditch the concept. If anything, a whole new language should develop from Java under a different name with the changes you described, and leave this version to sink with the ship.
The biggest problem with forking anything is that the support is also split. A small number of people will update their systems to accept the new changes. The vast majority will just run older versions of the code, expecting everything to work as it should. You can see this happening with OpenGL: half of the user base has 3.x, and the rest are running 2.x and below. Splitting the user base like that makes it twice as annoying for developers, because now we have to work twice as hard learning both ways just to reach the entire user base.
So, in one way, Java’s decision to uphold backwards compatibility is a noble one. It keeps the user base fairly unified, so we can be sure the programs we create reach a vast majority of the people. It is one of the major reasons I’m standing by Java: you know your code is going to work when you distribute it.
A lot of the features Java has today, though, are really taken for granted. I actually approve of the little improvements here and there, and I accept the risks Java is taking to keep the user base consistent across all platforms.
That is a good point. Maybe Oracle should be ditched. Of course, backwards compatibility is only helpful in the short run. C++11 is backwards compatible with B by virtue of being backwards compatible with C++, which is backwards compatible with non-standard C++, which is backwards compatible with C, which is backwards compatible with pre-standardized C, which is backwards compatible with B. Some backwards compatibility issues of Java are hurting us now, and others may hurt us in the long term.
OpenGL is a little different, you need different hardware if you want to code in one or the other and need to code in whichever one your hardware supports. Java would not have that problem, since its syntax is sane and well structured enough that updating code would not be as insane as attempting to fix C code. And you could run both on the same computer or the same VM.
I usually think of a language as being a tool. It’s okay to have more than one tool in a toolbox. I think that I would rather use multiple languages in the same project (for example, Java + OpenGL shaders, or C + SQL, or Java + ANTLR) instead of integrating junk that hinders efficiency just in case you need a hammer that can also file your taxes. Community is something I hadn’t considered. Do you think it is worse to have language diversity (for lack of a better/neutral term) or to have internal fragmentation like C++ is infamous for?