Java 7 to get Closures!

and I am afraid we will get as many new Java language feature releases as we have had JRE releases over the last 4 years (*), which is symptomatic of too much eagerness, like an irrepressible need to evolve/change even when everything is going well.

wow, I never thought I would be able to write such complex sentences in English :slight_smile:

EDIT: (*) I mean with lots of bugs or poorly thought-out stuff

EDIT2: can you imagine how complex Java will get in ten years? It is the same as old software that has evolved for several years: it just sucks. Java does what it has to do very well… maybe just start a new language, Java2000!!

Nothing to do with what I said.

Maybe one day there will be a Java 2.0 that would be rewritten to fix all the mistakes and get everything right.
Legacy stuff could be included to still work with older code.

I agree. Generics and closures should have been included from day one. I think they were left out because of lack of time. The problem now is to retrofit these features.

I am not taking it lightly; I am asking Notch what is ‘fake’ about it - that word just doesn’t make any sense to me in this context.

It’s more about trying to understand his criticism than to refute it.

-Ido.

Like I wrote, we can call it J++ and no, that name is no coincidence.

If they want to get rid of boilerplate code perhaps they might consider adding some type inferencing in the form of automatic casting. Combined with generics that’d probably see off most of the remaining casts in code, like:


String s = (String) someThing;

would just become


String s = someThing;

and the compiler would simply infer there’s a cast in there and barf if the cast was impossible. Just as it does now. I just don’t see why I should have to tell it that I want it cast to a String.

Cas :slight_smile:

Yes!
I mean you could still use the cast, but if the destination type is clear, why the need?
Though I might add it if the assignment is away from the declaration, just to show what I am doing.

euh… :o no, are you crazy, is it really necessary?! Imagine how many errors it would introduce: like in maths, stuff would get cast from double to int/float, or even String<=>int? And in many other cases too. Casting is IMO necessary to avoid a lot of confusion. With auto-casting Java might become, like PHP/JavaScript, very hard to use in areas that require high precision/rigour.

and why not make every variable type “var”, but hey! wait, the language you are defining already exists and its name is JavaScript, not Java! :stuck_out_tongue:

EDIT: and when it is absolutely clear, there is already no need to cast; for example:

float f=3;
double d=f;

works without any warning, while the inverse doesn’t.
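
To spell out that rule, here is a minimal, self-contained sketch (my own example) of the widening/narrowing behaviour being described:

public class Widening {
    public static void main(String[] args) {
        float f = 3;          // int -> float is a widening conversion: no cast, no warning
        double d = f;         // float -> double likewise

        // float g = d;       // the inverse does not compile: "possible loss of precision"
        float g = (float) d;  // narrowing needs the explicit cast
        System.out.println(g);
    }
}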

A cast is either impossible, or it’ll throw a CCE at runtime. Me sticking it in brackets is just a waste of my typing. When I assign something to a variable I’m already asserting that it fits without trouble.

Cas :slight_smile:

Well, C# has ‘var’, which must be assigned at the same time as it is declared, and the type is inferred from that first use, at which point it is static. Nice little sweetener to avoid irritating boilerplate, particularly with complex generics. So unlike JavaScript, you can’t have


var s = "Hello";
s = new Array();

or whatever, since ‘s’ is a string. Same with Scala, I think. Compiler says no.

C# is also introducing a new ‘dynamic’ type: “The type is a static type, but an object of type dynamic bypasses static type checking.” I haven’t tried it out.

Automatic casts sound like a world of pain. Maybe they wouldn’t be so bad. One thing I will say is that the way co- and contravariance has been handled in C# is less good than in Java, which seems to lead to more casts being needed in my experience (again, this is being revised in C# 4; I'm not entirely sure of the new spec). So I guess it could be nice to save having to type them manually… It tends to make me feel a little wrong inside having to cast things in certain situations, and I wouldn’t want any genuine potential wrongness to be hidden.

As for how closures help with parallelism: I guess it has something to do with lazy evaluation, and with allowing the runtime to more easily come up with a strategy for when the function should be executed, for example for each element of a list in parallel.
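
A guess at what that looks like in today's Java, as a minimal sketch (my own example, not from any proposal): one anonymous Callable per element, submitted to an ExecutorService. A real closure would shrink each Callable down to just its body, and a library could hide the pool entirely.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelEach {
    public static void main(String[] args) throws Exception {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();

        for (final Integer n : numbers) {
            // Each anonymous Callable plays the role the closure would play.
            results.add(pool.submit(new Callable<Integer>() {
                public Integer call() { return n * n; }
            }));
        }
        for (Future<Integer> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}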

Actually I was thinking more along the lines of casting objects to types they are instances of.
F.i.
StringBuffer implements Serializable, Appendable, CharSequence and extends Object.
So any of these would be OK (presuming object is actually a StringBuffer):
StringBuffer sb = object;
Serializable ser = object;
Appendable append = object;
… (you get the idea)

Now Writer also implements Appendable, so :
Appendable append = object;

for me it is just a short form of
if (object instanceof Appendable )
Appendable append = (Appendable) object;

Think about it, does anyone do this?
interface A
interface B extends A
interface C extends A

A a = (C) o; (basically casting to the top-level type)
Not really, right? If o is an instance of A then just use A.
If it is not, you are SOL anyway.

Though maybe I am seeing it wrong (been a loooong week)

I was NOT talking about casting objects between unrelated types.
(except in certain cases)
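
To make that concrete, a compilable sketch (my own) of the StringBuffer example as it has to be written today, with the proposed implicit form in comments; object is assumed to actually hold a StringBuffer, so each cast succeeds:

import java.io.Serializable;

public class ImplicitCastSketch {
    public static void main(String[] args) throws Exception {
        Object object = new StringBuffer("hi");

        // Today every assignment down from Object needs the cast spelled out:
        StringBuffer sb   = (StringBuffer) object;   // proposal: StringBuffer sb = object;
        Serializable ser  = (Serializable) object;   // proposal: Serializable ser = object;
        Appendable append = (Appendable) object;     // proposal: Appendable append = object;
        CharSequence cs   = (CharSequence) object;   // proposal: CharSequence cs = object;

        append.append('!');
        System.out.println(sb + " now has length " + cs.length() + ", serializable: " + (ser != null));
    }
}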

[quote]for me it is just a short form of
if (object instanceof Appendable )
Appendable append = (Appendable) object;
[/quote]
ha yes, this one makes more sense, and would have been less tedious than generics, with a CCE thrown when required.

but the problem it may introduce is that types would then be checked at runtime and no longer at compile time

WRT: Make Java pure OO.

Then the only option (in general) available to the ahead-of-time compiler is to unbox on incoming edges and box on outgoing. Not good.

WRT: Operator overloading.

I’d call these examples of style recommendations, not contracts. Your examples of “(a+b)-b = (a-b)+b = a” and “(a*b)/b = (a/b)*b” require certain algebraic properties, and that simply doesn’t make sense to me. As a simple counter-example: vector analysis cannot meet the second requirement. My basic thinking here is that you cannot dictate good style. And, of course, good style is subjective. It’s in the nature of high level languages to provide features which can be abused.

WRT: Closures. I have no feeling one way or the other about adding them to Java. However there seems to be some confusion…they are not simply sugar for anonymous functions. Faking a closure in Java requires a fair amount of boilerplate code.
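
One way to picture that boilerplate (my sketch, not from any proposal): a hand-rolled single-method interface, an anonymous class at every use site, and the one-element-array trick because captured locals must be final.

import java.util.Arrays;
import java.util.List;

public class FakeClosure {
    interface Block { void run(int value); }   // hand-rolled single-method "function type"

    static void forEach(List<Integer> list, Block block) {
        for (int v : list) block.run(v);
    }

    public static void main(String[] args) {
        final int[] total = { 0 };   // captured locals must be final, so mutation needs a holder array
        forEach(Arrays.asList(1, 2, 3), new Block() {
            public void run(int value) { total[0] += value; }
        });
        System.out.println(total[0]); // prints 6
    }
}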

WRT: Generics. I don’t understand the complaints against them. They don’t do anything, other than allow additional user-defined type checking. I also like being able to drop tons of casts. My complaints would be: I don’t know how to properly define a generic type which is cyclic. They really should have been enforced by the VM instead of only being used by the ahead-of-time compiler. And the new inference (on creation) should have been in the first release.
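
For reference, one common reading of a “cyclic” generic type is a self-referential bound, the idiom the JDK itself uses in java.lang.Enum<E extends Enum<E>>. A rough home-grown sketch of that idiom (whether this counts as defining it “properly” is debatable):

// Self-referential bound: T must be the subclass itself.
abstract class Node<T extends Node<T>> {
    T next;

    @SuppressWarnings("unchecked")
    T self() { return (T) this; }   // the unchecked cast is the awkward part
}

class IntNode extends Node<IntNode> {
    int value;
}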

WRT: Other language additions. Stuff like closures, tuples, dynamic invoke, et al. have all been on the table for a long time. It simply takes a while for a small team to design and implement each.

WRT: Inferred type assignments.

The original argument would be that the cast is an indication that the assignment may potentially cause an exception.

The ahead-of-time compiler may not remove type assignment checks anyway.

No, but I prefer to do

for (int i=0; i<entities.size(); i++) {
   Entity e = entities.get(i);
   e.tick();
   if (e.destroyed()) {
      entities.remove(i--);
   }
}

rather than use the enhanced for, which insists on using an Iterator on an ArrayList and doesn’t expose any way to remove (or add) entries while iterating over it.
Enhanced helps keep code short in some cases, but it doesn’t make the language more powerful in any way, shape or form.
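
For comparison, a sketch of the explicit-Iterator form of the same loop (same hypothetical Entity type as above); Iterator.remove() does allow removal, though it is arguably no shorter than the index loop:

for (Iterator<Entity> it = entities.iterator(); it.hasNext();) {
   Entity e = it.next();
   e.tick();
   if (e.destroyed()) {
      it.remove();   // the one structural change the iterator contract allows
   }
}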

Closures would, except as proposed now they’re just hacks over single-method interfaces. Why make them just syntactic sugar that eats up extra resources, when one could change the JVM to make the language ACTUALLY support them instead of just faking it?
I could write a preprocessor for BASIC, then claim BASIC supports closures as well.

Sun’s been doing this a lot lately. Generics are deeply flawed because of it.

I’d like to keep explicit instanceof checking and make the compiler track known instanceof information: so

Object foo = foo();
if (foo instanceof Appendable)
{
    // In this scope, as long as foo has not potentially been reassigned, I can call
    foo.append(bar);
}

But I suspect it would be hard to do this in a way which didn’t make maintenance a pain.
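
For contrast, a minimal self-contained sketch (my own example) of what that has to look like today, with the cast repeated at the call site:

import java.io.IOException;

public class InstanceofToday {
    static Object thing() { return new StringBuilder("foo"); }

    public static void main(String[] args) throws IOException {
        Object foo = thing();
        if (foo instanceof Appendable) {
            // The compiler does not narrow foo's type inside the branch,
            // so the cast has to be written out again:
            ((Appendable) foo).append("bar");
        }
        System.out.println(foo);
    }
}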

Can you define * and / for arbitrary vectors (other than component-wise)?

Only by moving to a higher level algebra (for want of a better term).

Rewriting the second to use a single binary operator:

(a*b)*(1/b) = a*(b*(1/b))

Requires the existence of a unique multiplicative inverse and that the product associates. These are not generally true.

+1 to that - I think a lot of people seem to think that a closure is just a shorter syntax for an anonymous class, which is not the case.

At the moment it’s not clear whether what they’re proposing actually qualifies as a “closure” or not, though; plenty of folks are, in fact, pushing for no more than a simpler anonymous class syntax, and I really have to take issue with calling that a closure, since the more powerful things you can do with closures are not possible if that’s all we get.

And FWIW, none of this stuff is even remotely new, even in OO languages, so I don’t really buy the complaint that Java is trying to “keep up” or anything like that. It’s more that computing is necessarily pushing towards parallelism, and functional style becomes vastly more important there, so Java has to make some concessions in that direction to remain viable. Without a serious effort to ease parallel programming, Java will fall further behind productivity-wise as we need to scale horizontally, which would be a shame. Real closures would allow a lot of the important logic and optimization to be put into easy to use libraries instead of the rather low level concurrency support we have today, and I think that will be a major win if it’s done right.

The current syntax seems kind of bizarre, though, and some of the discussions on the mailing lists make me worry that this implementation will be pretty seriously crippled, so I don’t know how I feel about the addition overall…I’ll have to see how the finalized proposal looks, I think.

Cas, as far as automatic conversions go, Scala does let you do that (basically, just write a conversion function and mark it implicit, and then it Just Works), and it is extremely convenient - it makes dealing with unit-ful values a breeze, whereas it’s a major pain in the ass in Java because of the explicit conversions all over the place.

It would never make it into Java, though, because people in the Java community are psychotically paranoid about potential abuse of language features, and this one actually has some real potential for confusion and bug-hiding.

(Roquen already correctly noted that division requires a unique inverse, which is definitely true)

Sure, you can do anything you want, but it wouldn’t necessarily have all the properties you might want from those operators, or a particularly natural meaning. With vectors in N dimensions, there’s the inner (dot) product, which returns a scalar, and an outer product, which returns an N-dimensional matrix in the obvious way. Neither of these permit division as the inverse, since they don’t create vectors. If you want vector*vector = vector, then in 3D we usually use the dual of the exterior or wedge product of 2 vectors (which boils down to the cross product), but this doesn’t generalize to other dimensionalities, since you need N-1 vectors as inputs to make the dual of a wedge come out as a vector (Google “exterior algebra” if you care what any of that means). In 2D you can use complex multiplication, but that may or may not let you do something useful, depending on context.

Usually people define * and / between vectors and numbers, but not between vectors themselves, when operator overloading. If you’re dealing with complex numbers, you define them all. Matrices get * defined, but oftentimes not /, because the operation of inverting a matrix can fail sometimes and it’s usually best not to hide that by making it look like regular old division.

This is pretty much exactly how mathematicians use the objects and symbols as well, at least for these simple cases, except that sometimes it’s implicit that if you multiply two vectors you’re taking the inner product, which is a lot more clear in written math because you use a dot for multiplication anyways.

Personally, I’d love to see Java get some built in primitives (preferably stack-allocated and immutable) for vectors, matrices, and complex numbers, with these overloadings baked in; if that happened, I wouldn’t care about operator overloading one bit, though I’d also like the big/arb. precision number classes to get the same treatment. I can’t come up with many other compelling or common use cases for operator overloading that make sense, but those ones cause a lot of pain if you work in areas that require them…
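
To make that wish concrete, a rough sketch (names and layout my own invention) of the kind of immutable vector value being described, limited to the operations that have an unambiguous meaning - note there is deliberately no vector/vector division:

public final class Vec3 {
    public final double x, y, z;

    public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    public Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
    public Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }      // vector * number
    public double dot(Vec3 o)   { return x * o.x + y * o.y + z * o.z; }        // inner product -> scalar
    public Vec3 cross(Vec3 o) {                                                // dual of the wedge, 3D only
        return new Vec3(y * o.z - z * o.y,
                        z * o.x - x * o.z,
                        x * o.y - y * o.x);
    }
}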

Short and inaccurate history:

  1. Hamilton liked 3D, so he invented Quaternions (which are a 3D bivector plus a scalar)
  2. Around the same time Grassmann was into multiple dimensions, so he tossed out an algebra in n dimensions that has the vector and bivector parts. Cool syntax…too bad he was just a high school teacher…oh well
  3. Clifford came along: grokked Quaternions, grokked Exterior algebra. Figured out how to merge them, filled in the missing parts, and made it work for topologies other than Euclidean. Then he died young. He wasn’t a rock star, so he was promptly forgotten…oh well.
  4. Gibbs comes along and doesn’t really understand 1 or 2, but likes the bits that he does and probably never heard of 3. (FYI: He also didn’t like operator overloading.) So he created a broken version of Quaternions after stealing syntax from Grassmann. And calls it vectors. Yeah! Too bad that the ‘vector’ part of a quaternion is not a Gibbs vector…oh well. It takes the world by storm. The best tech as well as math always wins

The end.

(NOTE: I could be confused about the Exterior algebra parts…too lazy to review)

@ewjordan:
If there is any confusion about closures, then that is because of the unclear info being spread around:

In the first link, the example was that closures make anon classes a lot easier.

In the one by the Java dev, he blabs on about parallel programming but does not really explain what closures have to do with it.
Kinda felt like ‘I have this cool feature I want and I need a way to sell it’.

I agree, the now (not the future) is parallel programming, but then why not think of a way to actually add it to the Java realm, instead of creating some kind of cyborg implant and hoping the body does not reject it.

Come to think of it, aren’t closures just like (jython, bsh, …) script files you want executed?

Ok, Java was not designed with parallel programming in mind and if we cannot adapt the design to it, maybe it is better to make something new?
Having ‘evolving’ languages might not be the best idea.
Once C++ was the cream of the crop, but it was lacking in many departments so some people made Java to overcome those.
Now Java is lacking so why not create something new?