NVIDIA turned its R&D effort on another game recently, unleashing a generative adversarial network (GAN) it calls GameGAN on Pac-Man, to celebrate the 40th anniversary of its arrival in the world’s arcades.
A GAN comprises two elements: one tries to replicate the input data, while the other compares this output to the original source. If they don’t match, the second element rejects the generated data, and the first tweaks its work and tries again.
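That adversarial loop can be sketched in a few lines. The toy example below is purely illustrative (it is not NVIDIA's code and is far simpler than GameGAN): a one-parameter "generator" shifts its output distribution while a logistic "discriminator" scores samples as real or fake, and each side updates against the other's gradient signal.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=2000, batch=64, lr=0.05, seed=0):
    """Toy GAN: generator g(z) = theta + z tries to mimic samples
    drawn from N(4, 0.5); discriminator D(x) = sigmoid(w*x + b)
    tries to tell the two apart."""
    rng = np.random.default_rng(seed)
    theta = 0.0          # generator parameter: mean of its output
    w, b = 0.0, 0.0      # discriminator weights

    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)          # the "input data"
        fake = theta + rng.normal(0.0, 0.5, batch)  # generated samples

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0
        # (gradients of binary cross-entropy, written out by hand).
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
        b -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

        # Generator step: tweak theta so the discriminator is fooled,
        # i.e. D(fake) moves toward 1 (non-saturating generator loss).
        d_fake = sigmoid(w * fake + b)
        theta -= lr * np.mean((d_fake - 1) * w)

    return theta
```

After training, `theta` should have drifted from 0 toward the real distribution's mean of 4.0, at which point the discriminator can no longer reliably reject the generator's output.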
The lab dwellers didn’t just set out to learn how to play Pac-Man. While one AI did that relatively trivial task, using 50,000 hours of gameplay and four chunky workstations equipped with Quadro GV100 GPUs, another used the results of this play to build a Pac-Man clone from scratch.
“Our AI didn’t see any of [Pac-Man’s] code, just pixels coming out of the game engine. By watching this, it learned the rules. Any of us could watch hours of people playing Pac-Man, and from that, you could potentially write your own Pac-Man game, just by observing the rules. That’s what this AI has done,” says NVIDIA’s Hector Martinez.
For a more detailed explanation of how the experiment was conducted and the results achieved, see Sam Machkovech’s article on Ars Technica and James Vincent’s on The Verge, or browse the “Learning to Simulate Dynamic Environments with GameGAN” research paper.