JVM bytecodes (and likewise CLR's) are very misleading. These designs are very good for quickly creating small-footprint VMs based on interpretation and/or trivial JITs (with little compiler/architecture knowledge), including the loading/linking/verification stages. They are really bad as input for a "real" compiler framework such as HotSpot or .NET's. So you cannot look at the bytecodes, assuming a real compiler, and determine much about what the final result is going to look like. Bytecode is simply a transport intermediate representation (IR), or a high-level language if you will. Nor can you simply examine the "source" of library calls. A fair number of library calls have source-code implementations which are only designed to be software fallbacks for hardware that doesn't support the operation, and/or to make porting to new architectures easier to get up and running. (See: wiki page on intrinsic list)
Conversions between integer widths within a larger bit-width integer are all trivial transforms and don't need any special-case handling, say via an intrinsic (to be more precise in terminology than my previous statement). So here is some M-expression pseudo-code for transforms that pretty much any halfway-sane compiler should always be able to handle:
int t = (x << 24) >> 24;
-> t = LoadSignedByte[FieldOffset[this, "x"]]
int t = x & 0xFF;
-> t = LoadUnsignedByte[FieldOffset[this, "x"]]
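To see why these are trivially equivalent, here is a minimal Java sketch (class and variable names are mine) showing that the shift pair is exactly a signed narrowing of the low byte, while the mask is the unsigned version:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int x = 0x1FF; // low byte is 0xFF

        // Shift up so the low byte sits in the top bits, then arithmetic
        // shift back down: this sign-extends the low 8 bits.
        int signExtended = (x << 24) >> 24;

        // Masking keeps the low 8 bits with zero-extension.
        int zeroExtended = x & 0xFF;

        System.out.println(signExtended);  // -1, identical to (byte) x
        System.out.println(zeroExtended);  // 255

        // The shift pair is just a signed byte load of the low byte:
        assert signExtended == (byte) x;
    }
}
```

Since both forms reduce to a single narrowing load, no intrinsic is needed; pattern matching in the compiler is enough.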
On the other hand, many transforms that might seem easy cannot be performed, because the compiler may not be able to prove they are legal:
int t = x/2; // if x cannot be statically known to be >= 0, this cannot be converted into a shift
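A quick Java demonstration of why the division-to-shift rewrite is illegal for possibly-negative values: Java's `/` truncates toward zero, while an arithmetic right shift rounds toward negative infinity, so the two disagree on negative odd inputs.

```java
public class SignedDivDemo {
    public static void main(String[] args) {
        int a = -3;
        System.out.println(a / 2);  // -1: division truncates toward zero
        System.out.println(a >> 1); // -2: shift rounds toward negative infinity
        // The two agree only when a >= 0, so the compiler may substitute
        // a shift for a divide-by-2 only if it can prove non-negativity.
    }
}
```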
Floating-point operations are a prime example: the rules are strict, so tons of things which might seem possible are in fact not (especially since Java's design allows for almost no relaxation of the rules).
Intrinsics, in the context of JVM/CLR-like systems, are specially marked methods that are auto-magically replaced at load/link/verify time by some transform, assuming the target architecture supports it. The folks working on HotSpot might make some of these conversion methods into intrinsics, even though it isn't needed, simply to lower the burden further down the chain. However, the real purpose of intrinsics is to allow code sequences which cannot be expressed (at all, or in easy-to-identify patterns) to be converted into high-performance versions (frequently a single native opcode): sin, cos, sqrt, etc. In this context, unsigned compares, multiplies, and divides have many possible formulations as complex chains of operations, so they will be supported as intrinsic methods which all end up as a single opcode, just like their signed equivalents.
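As a concrete illustration, Java 8 added library methods of exactly this shape, such as `Integer.compareUnsigned` and `Integer.divideUnsigned`. Each has a plain-Java fallback implementation, but a JIT is free to recognize the marked method and emit a single unsigned compare or divide instruction (whether a given HotSpot version actually intrinsifies a given one is an implementation detail, not something I'm asserting here):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        int big = 0xFFFFFFFE; // -2 as a signed int, 4294967294 unsigned

        // Signed comparison considers big to be less than 1...
        System.out.println(big < 1); // true

        // ...but the unsigned comparison sees a huge value.
        System.out.println(Integer.compareUnsigned(big, 1) > 0); // true

        // Signed division would give -1; unsigned gives 2147483647.
        System.out.println(Integer.divideUnsigned(big, 2));
    }
}
```

The point is that the caller writes an ordinary method call; the "chain of operations" in the fallback source tells you nothing about the single opcode a real compiler may emit.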
Yeah, I think it would be nice to have operators for these unsigned ops, but then I'm a weirdo who considers operator overloading to be full-of-awesomeness (just to beat a dead horse even more).