This reminds me of the assembly programmers going on and on about how a compiler could never beat hand-written code. What they didn’t understand is that their correctness on the small scale is what made them horribly wrong on a real-world scale.
BTW: Most of the stuff about how operator overloading must do this and that under the hood is complete bunk. Like I keep saying: Forget everything you think C++ has taught you.
I keep saying something like: a programming language is a tool, not a philosophy or a religion; don’t be afraid to take a hammer to a screw if you need to.
The multiplication operator is the standard mathematical operator for Vectors.
As Cas says, the fact that this is confusing because it’s used in a different way in GLSL only underlines the fact that programming languages don’t actually allow you to overload operators correctly.
I was programming C++ before an official spec was released. When I started with it, it was a bunch of pre-processors for a C compiler (SAS-C, in my case). I do have a reasonable amount of experience with the C++ case, but most of my issues with operator overloading are to do with the fact that you can’t actually represent a mathematical domain properly in code anyway (due to issues with which operators are available and their existing meanings and precedence in code).
No, it isn’t.
Without overloading, “a = b + c” is definitely an operation that does not involve classes, and the definitions of a, b and c do not need to be read to understand the statement.
You will see the vector cross product being defined as “multiplication of two vector fields, yielding a vector field”.
“I’ve always seen inner and outer products having higher precedence than add/sub. I cannot see any logical argument to give them the same.”
Show me a reference where vector cross product gets higher precedence. Wikipedia Vector calculus page doesn’t mention that. Neither does Wolfram, nor any other reference I’ve ever read.
It’s important because the main argument for Operator Overloading is that you should be able to represent the mathematics properly, but in practice you actually can’t. If you can’t do it properly the whole reason for operator overloading is undermined.
The multiplication operator is the STANDARD operator used for cross product. Check that Wikipedia reference, or ANY Wolfram page on Vector maths.
As I’ve stated before, you can’t really know anything about an operator or method sequence without knowing what the types in question are. I find this to be a non-argument, and thankfully one that is moot, because the solution to the supposed problem is trivial for folks that use an IDE. Hold on to yourselves, kids: operators that are overloaded are rendered differently than built-in operators. Go Go language AND source-aware IDEs.
Note that vector is an overloaded term. Vector fields != Gibbs vectors (3D vectors) != 2D vectors != vectors in linear algebra, etc. I bring this up because the “cross product” is really only defined in 3 dimensions. In all other dimensions it is some logical extension…one that matches in the 3D case and is logical in some other way(s). Basically I’m saying let’s forget vector fields. Look here instead: http://en.wikipedia.org/wiki/Cross_product or here: http://mathworld.wolfram.com/CrossProduct.html.
No, it isn’t. The cross product is. If it were, we would write it as AB instead of A×B. (SEE: above links)
There’s a huge gap between what, say, Mathematica provides in terms of operators and what C++ provides. Somewhere in the middle is a sweet spot appropriate for general-purpose languages.
I’ve always rooted for having ASCII method operators, but no one likes the idea for some reason, despite the fact that it doesn’t break readability and makes it obvious what is going on.
@Roquen: Mathematica is a programming environment for mathematics. The choices made there are just programming language choices and don’t constitute a mathematical reference.
The reference you referred to (Wikipedia) uses this notation for the cross product: a X b, which is precisely what I was referring to - that is, the standard notation for the multiplication operation.
@Cas: there’s really very little difference between “a = a dot b” and “a = a.dot(b)”, so… would a change like that be worth it?
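To put the two spellings side by side, here’s a minimal sketch using a hypothetical Vec3 class (my own illustration, not an existing API):

final class Vec3 {
    final float x, y, z;
    Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
}

class Demo {
    static float demo(Vec3 a, Vec3 b) {
        float d = a.dot(b);      // what Java lets you write today
        // float d = a dot b;    // the hypothetical infix spelling under discussion
        //                       // (not valid Java; shown only for comparison)
        return d;
    }
}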
I agree with the first point. Your second comment makes no sense. Technically, a programming language does not have to do much under the hood at all. We could implement classes with hash tables with string-based keys for both variables and functions. Garbage collector? Meh, a semi-space copying garbage collector is good enough. Fast array accesses and fast for loops? Who needs them when you have regular expressions? bytes, shorts, chars, ints, longs? Make everything that isn’t an array or a string a double. While we’re at it, make strings and arrays interchangeable.
C++ is a terrible model for Java, but that’s not the point. C++ has some features (of questionable value) that cannot be sensibly implemented in Java. You can’t take a feature from C++ and drop it into Java, just as you can’t take a feature from Java and drop it into Javascript, just as you can’t take a feature from Javascript and drop it into C++.
Integers are not a replacement for ints. No other class-based implementation could be a replacement either. Adding syntax sugar for arithmetic operators would not change that fact at all, just hide symptoms of the problem. If there were a new user-defined type used specifically as operands to arithmetic operations, then Java could have the syntax sugar everyone craves and at the same time have a practical implementation under the hood. Maybe add a Java-flavored version of C structs and at the same time use those for your custom math-related data types. But as is, no amount of hand-waving and assuring people that the problem is simple to solve can simplify or solve the problems.
If you know a way a JVM could use classes and Objects in place of primitives without adding several bytes of overhead per instance, while preserving immutability of numerical types, avoiding excessive garbage generation in frequently executed blocks of code, and preserving normal object behavior for existing classes, new classes, Generics, and the == operator, then don’t keep it a secret.
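For anyone who hasn’t been bitten by the == part of that, a small demonstration (my own example, not from this thread) of why a boxed type is not a drop-in replacement for a primitive:

public class BoxingSemantics {
    public static void main(String[] args) {
        Integer a = 1000, b = 1000;
        System.out.println(a == b);        // typically false: two distinct Integer objects
        System.out.println(a.equals(b));   // true: compares the wrapped values

        Integer c = 100, d = 100;
        System.out.println(c == d);        // true, but only because of the -128..127 cache

        int x = 1000, y = 1000;
        System.out.println(x == y);        // true: primitives always compare by value
    }
}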
In arithmetic, sure: 10×5. Or sometimes when you’re abstractly talking about some (potentially unspecified) field, but then again it’s almost as common to use a ‘dot’ as well. But we’re talking about a specific algebra:
Standard notation for vectors (let small letters be scalars and capitals be vectors):
product of a vector with a scalar: sV (no cross symbol)
cross product: A×B
dot product: A.B
product AB: doesn’t exist…it isn’t representable in terms of vectors
Don’t like that? How about from MathWorld: “Vector multiplication is not uniquely defined, but a number of different types of products, such as the dot product, cross product, and tensor direct product can be defined for pairs of vectors.”
So, let me get this straight:
Vectors for some reason don’t obey the standard convention of placing product-like structures ahead of addition-like structures.
Mathematica’s InputForm, the sole intent of which is to write equations in the exact same manner as one would on paper, is incorrect for a simple equation using Cross, when that operator has no builtin meaning other than the vector cross product.
All references of the vector identities contain incorrect statements, as they include A+B = B+A, which means that A×B+C = C+A×B; and using your version of ordering, that means (A×B)+C = (C+A)×B, which is incorrect.
Or another: (A+B)×C = A×C+B×C != (A×C+B)×C. And notice in the first that the parens shouldn’t be needed if you’re correct, so why are they there?
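The same point in plain Java arithmetic (my own quick illustration, nothing more), where * binds tighter than +, shows why that grouping is load-bearing:

class PrecedenceDemo {
    public static void main(String[] args) {
        int a = 2, b = 3, c = 4;
        System.out.println(a * b + c);     // 10: parsed as (a * b) + c
        System.out.println(c + a * b);     // 10: safe to commute because * binds first
        System.out.println((a + b) * c);   // 20: the parentheses change the grouping
    }
}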
I wave my hands and invoke Occam’s razor (lex parsimoniae!)
@Best Username Ever: Errr…your post is all over the place.
It depends on what kind of language you’re talking about. The higher-level you get, the more that’s happening “under the hood”. That’s the nature of the game.
Wash your mouth out with soap. Sure, as long as performance and spending twice the required memory are of no concern.
Umm…yeah. Well languages like this exist. But what does this have to do with operators?
Like I keep saying: forget C++. C++ is terrible, period, and has no bearing on this conversation as far as I’m concerned. I’m not advocating adding any C++-like thing. And since I haven’t mentioned it in this thread: personally I couldn’t care less if Java ever gets operator overloading, but I’m able to look beyond my own bellybutton to the needs of other programmers. I can implement operators exactly as I would like them to work. Likewise, lambdas were never high on my list, but adding lambdas is fantastic for the language. (And yeah, I’ll use lambdas, but I’m really much more interested in the JVM changes that this language addition is providing.) Not having operators makes Java a much less attractive option to programmers in some fields who either lack a basic compiler background, lack the willingness to use an extended version (which even I would hate), or can’t be bothered with the additional hassle of either of the previous. That’s a heck of a lot of people.
Structs, arrays of structs, and concrete arrays. Assuming that you’re talking about non-primitive types. There’s significant work currently going on to address auto-magic, approaching-zero-overhead boxing/unboxing, if that’s what you mean. (As an aside, I’m really curious about what they mean by “no more primitive types” as one of the JDK10 bullet points.)
above + contracts + escapes
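Assuming “escapes” there means HotSpot’s escape analysis (my reading, not spelled out above), a rough sketch of the kind of temporary it can sometimes make allocation-free:

// Sketch only: whether the JIT actually scalar-replaces this is an
// optimization decision, not a language guarantee.
final class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
}

class EscapeDemo {
    static double magnitudeSquared(double re, double im) {
        Complex c = new Complex(re, im);   // never escapes this method, so the JIT
                                           // may replace it with two local doubles
        return c.re * c.re + c.im * c.im;
    }
}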
I’m not getting this point. We’re not purely functional, and a type is either mutable or immutable by its definition.
Does that cover everything or did I miss something?
Funny thing about primitives: they’re a compiler optimization; they need not be exposed at the language level. Smalltalk has never really had them, and most of Java’s JIT technology came from Smalltalk derivatives (Self and Strongtalk). Scala also doesn’t have them, yet it also emits bytecode that uses primitives – in fact, because of annotations like @specialized, it’s able to do so in more places than Java (potentially, anyway; it still does a lot of boxing, and the compiler is slow enough without trying to be even smarter than it is).
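For contrast, a small Java-side illustration (my own, not from the post) of where the remaining boxing comes from: generics erase to Object, so a List<Integer> has to hold boxed objects, while an int[] holds raw primitives:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GenericsBoxing {
    public static void main(String[] args) {
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            boxed.add(i);              // each add auto-boxes the int into an Integer object
        }
        int[] raw = {0, 1, 2};         // contiguous primitives, no per-element object header
        System.out.println(boxed + " vs " + Arrays.toString(raw));
    }
}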
Java did not start out with JIT at all, and therefore did not launch with many ambitions to advance the state of its compiler to leave out the primitive/object distinction or even blur it in any way.
Sure you can still hand-optimize quite a lot of code with primitives … just like you can if you wrote everything in C.