HP's "The Machine"

On June 11, Martin Fink revealed that HP has been working on a secret project: a new computing architecture aimed at rethinking how we handle data.

[Video]

“The Machine [is] a processing architecture designed to cope with the flood of data from an internet of things. It uses clusters of special-purpose cores, rather than a few generalized cores; photonics link everything instead of slow, energy-hungry copper wires; memristors give it unified memory that’s as fast as RAM yet stores data permanently, like a flash drive.”

As HP says in the video, this architecture will have “electrons compute, photons communicate, and ions store.” According to the announcement coverage, “A Machine server could address 160 petabytes of data in 250 nanoseconds; HP says its hardware should be about six times more powerful than an existing server, even as it consumes 80 times less energy.”
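
To put the 160-petabyte figure in perspective, here is a rough back-of-envelope check (my own arithmetic, not from HP): byte-addressing that much memory as one flat pool needs about 58 address bits, well beyond the 48-bit virtual addresses most of today’s x86-64 chips effectively use.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Assumption: "160 petabytes" read as 160 * 2^50 bytes (PiB). */
    double bytes = 160.0 * pow(2.0, 50);
    /* Address bits needed to byte-address a flat pool of that size. */
    double bits = ceil(log2(bytes));
    printf("160 PiB = %.3e bytes -> about %.0f address bits\n", bytes, bits);
    /* x86-64 virtual addresses are effectively 48 bits (~256 TiB),
       so a single 160 PB memory pool is far beyond that today. */
    return 0;
}
```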

They plan to have this technology commercially available by 2018 and want to include it in everything from data centers to smartphones.

What are your thoughts? :slight_smile:

Sources:
HP’s Machine technology rethinks the basics of computing
Discover Day Two: The Future Is Now—The Machine from HP
HP Moonshot System

P.S. During the video, they mentioned the HP Moonshot System. I thought this was interesting so I linked it in the sources.

Maybe I’m missing something, but it sounds like the concept basically boils down to creating farms of PLCs connected via private VPNs. A lot of the innovations nowadays seem an awful lot like a return to the days of “big iron”.

I’m not so much interested in the network as in the actual technology, like their “memristors” (ion storage) and photonic buses.

Today’s standard CPUs are already, logically, networks of small computational devices of different kinds. Take Intel-alikes: a core is a CPU that decodes an intermediate language (x86 assembly) into an unspecified RISC-like instruction stream, which is dispatched over an internal network to the individual execution units that actually perform the computation. Those units in turn put their results back onto a network, where they are gathered. The memory architecture is another network: for example, a core speculatively chooses a branch and performs computation, including writing a result, but that write can’t actually happen until it’s known that the speculative branch was correct.
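
As a side note on how much that internal machinery matters in practice, here is a small C sketch (my own illustration, not from the article): summing values behind a data-dependent branch is typically much faster once the data is sorted, because the branch predictor stops mispredicting and the core can speculate down the right path. (An optimizing compiler may replace the branch with a conditional move, which hides the effect, so results vary.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

/* Sum only the "big" values; the if() is a data-dependent branch. */
static long sum_if_big(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] >= 128)
            s += a[i];
    }
    return s;
}

static int cmp_int(const void *p, const void *q) {
    return *(const int *)p - *(const int *)q;
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++) a[i] = rand() % 256;

    clock_t t0 = clock();
    long s1 = sum_if_big(a, N);          /* random order: branch hard to predict */
    clock_t t1 = clock();

    qsort(a, N, sizeof *a, cmp_int);     /* same data, now predictable */

    clock_t t2 = clock();
    long s2 = sum_if_big(a, N);
    clock_t t3 = clock();

    printf("unsorted: %ld (%.3fs), sorted: %ld (%.3fs)\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```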

The real problem with making massively parallel hardware isn’t really the hardware; creating the software is the tricky part.
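
To make that concrete, here is a minimal sketch (my own example, not tied to The Machine) of the kind of thing that makes parallel software hard: two threads incrementing a shared counter without synchronization. Compiled without heavy optimization, the final count is usually short of the expected total and changes from run to run.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unsynchronized */

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* non-atomic read-modify-write: a data race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; without a mutex or atomic increments the
       result is usually lower and varies between runs. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

The fix is a mutex or an atomic increment, but finding and reasoning about every such interaction is exactly the part that scales badly as the hardware gets more parallel.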

I bet they also have a cure for cancer and a 100000x improved battery planned.

Watched the video; the guy mentioned using lots of specialized cores. Could that also mean the same tech could or will eventually be applied to GPUs?
Don’t they already consist of specialized cores?