Go has been cracked!

I’m surprised the topic hasn’t come up on the forum yet:

It’s interesting that even the best Go players can’t explain exactly why a certain move is good, and now we have built a Go-playing computer, and we can’t say precisely (or even vaguely) how it chooses the moves it does :slight_smile:

Who knows, maybe DeepMind has decided the most efficient way of winning at Go is to subjugate humankind!

I’ve been trying to find out what kind of hardware Google needs to run this program, but the info just doesn’t seem to be online. I wonder how many watts it sucks down competing with a 20 W human brain?

How interesting, thanks for posting.
Your article mentions a few things about the hardware:

[quote]Like most state-of-the-art neural networks, DeepMind’s system runs atop machines equipped with graphics processing units, or GPUs. These chips were originally designed to render images for games and other graphics-intensive applications. But as it turns out, they’re also well suited to deep learning. Hassabis says DeepMind’s system works pretty well on a single computer equipped with a decent number of GPU chips, but for the match against Fan Hui, the researchers used a larger network of computers that spanned about 170 GPU cards and 1,200 standard processors, or CPUs. This larger computer network both trained the system and played the actual game, drawing on the results of the training.
[/quote]

How did I miss that? I must have started skimming after the first 140 characters, it was a bit longer than a tweet :slight_smile:

We totally can say why the computer picked a spot: certain criteria are met for that move, or score best for that move. Just because it isn’t a pruning method like in chess doesn’t mean we don’t understand the algorithm or its criteria. And it has been a slow breakthrough; this very promising method has been in development for some time.

I don’t think so. Once you have made a neural net of this size, you have no idea how it reaches a specific decision. We understand all the little building blocks, but we don’t know how those blocks have been fitted together. That’s why we train neural nets rather than program them.
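Here’s a minimal sketch (toy Python/NumPy, nothing like AlphaGo’s actual code) of what “train rather than program” means: we write down the architecture and a learning rule by hand, but the weights that encode the final behaviour are found by gradient descent. Nobody could have written them out in advance, and reading them afterwards doesn’t tell you *why* the net gives a particular answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny 2-8-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# We choose the *shape* of the net; the weight values start out random.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: cross-entropy gradient, plain gradient descent.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # ~[[0], [1], [1], [0]]; the "why" is buried in W1, W2
```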

Neural nets are not magic. There is a lot of math behind them, and even with deep learning we can understand what they are doing fairly well. See the famous “find internet cats” experiment as an example, or support vector machines. There really is a lot of material about this out there now.

Also, there are other methods that are almost as good for the specific problem of Go that don’t use nets.

Lee Sedol is down 2-0 now. He describes the opening as his weak point, which unfortunately is where he has to be strongest, as AlphaGo really seems to bring it in the endgame.

Delt0r, I know we have a lot of understanding of how neural nets work, but we still have to train them, don’t we? It’s not like the Google engineers could sit down and draw a schematic of the synapses they need.

You know, I was hopeful that Lee could do it, but now I think he’s in a lot of trouble…

Which means I will welcome our new machine overlords with open arms. This is stellar!

Still, Google needs roughly 90 kW to beat a 25 W human brain.

AlphaGo in its full configuration has beaten the leading Go software packages 500-0… so what other methods are you referring to?

So a brain is a factor of 3600 more efficient at this specific problem (90 kW / 25 W = 3600).

If efficiency doubles every year, a computer will do it at 22 W in 12 years.
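Quick sanity check on those numbers (using the wattage estimates from this thread, which are themselves rough guesses):

```python
# ~90 kW for distributed AlphaGo vs ~25 W for a brain (figures from the
# posts above; both are rough estimates, not measurements).
power_w = 90_000.0
print(power_w / 25)           # factor of 3600

years = 0
while power_w > 25:           # assume efficiency doubles every year
    power_w /= 2
    years += 1
print(years, round(power_w))  # 12 years, ~22 W
```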

Exponential growth sneaks up on you :point:

I expect efficiency will increase dramatically. AlphaGo is still running off regular CPUs and GPUs, while dedicated neural network chips are starting to surface.

Indeed, comparing a general-purpose computer to dedicated neural-net brain hardware (lol) is an unfair comparison. For example, if we look at Bitcoin mining, general-purpose GPUs can manage 3 Mhash/J while the most efficient ASIC mining hardware currently shipping can handle 2200 Mhash/J. (Source: https://en.bitcoin.it/wiki/Mining_hardware_comparison, https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison) It’s not entirely unreasonable to assume that the same improvement can be achieved with dedicated hardware for deep neural networks. It’s for the most part an embarrassingly parallel problem, at least when looking at each layer of the network. Hell, Nvidia promises a 10x increase in neural-net performance for the Pascal cards coming out within 6 months, thanks to higher memory bandwidth, double-speed 16-bit float computations, and better interconnection between multiple GPUs working on the same network.
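If that Bitcoin ratio carried over to neural nets (a big if, purely an assumption), the back-of-the-envelope math looks like this:

```python
# Numbers quoted above: ~3 Mhash/J on general-purpose GPUs vs
# ~2200 Mhash/J on dedicated ASICs (Bitcoin, not neural nets).
asic_gain = 2200 / 3          # ~733x from dedicated silicon
alphago_w = 90_000            # the distributed-AlphaGo estimate in this thread
print(alphago_w / asic_gain)  # ~123 W if neural-net ASICs saw the same gain
```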

I think that’s why DeepMind is so impressive. It is totally generic; it does not need domain knowledge to learn Go or any other game. You just have to model the rules of the game, the winning condition and the rewards, and let it figure things out. You don’t need to build one method specific to Go and then another for Enduro (which is where DeepMind first shone, playing Atari games).
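To make that concrete, here’s a sketch (illustrative Python, not DeepMind’s actual API) of the kind of generic interface such a learner sees: only rules, legal moves, and rewards, with nothing Go-specific in the learning loop.

```python
from abc import ABC, abstractmethod

class Game(ABC):
    """Everything the learner is allowed to know about any game."""

    @abstractmethod
    def reset(self):
        """Return the initial state."""

    @abstractmethod
    def legal_moves(self, state):
        """Return the moves the rules allow in this state."""

    @abstractmethod
    def step(self, state, move):
        """Apply a move; return (next_state, reward, game_over)."""

# Go, Enduro, Space Invaders... each just implements Game; this loop
# never needs any game-specific knowledge.
def play_episode(game, policy):
    state, total, done = game.reset(), 0.0, False
    while not done:
        move = policy(state, game.legal_moves(state))
        state, reward, done = game.step(state, move)
        total += reward
    return total
```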

3-0 now. I watched the match; I’m not a great player, but Lee Sedol was defending from early on and didn’t seem able to attack AlphaGo’s territories effectively.

I take my hat off to any AI that can play space invaders and Go!

Now for the real challenge: build a computer that can ENJOY playing Go.

By the way, I read that DeepMind’s deep learning programs do not match the best human players at certain arcade games. So there is hope for us :slight_smile:

How do you know it doesn’t enjoy playing Go?

Cas :slight_smile:

We will only know for sure when it writes a haiku describing the experience.

Here is another game that is tougher for the silicon brigade than even Go: football. When 11 androids beat a top club on grass, we will know the Age of Humans is over.

Beat puny meathead
Could have done it in my sleep
Now kill all humans

Puny meathead brain ineffective at Go.
Subjugate meatheads.
Rewire meathead brains for effective Go play.

All humans are dead
I guess my GOal has been done
But I am lonely…

Defeated human
I will kill all humans now
And rule the cosmos