Building Better AI with Kongregate Games

Web games like Kongregate's may be one of the key ingredients for developing powerful artificial intelligence. Read on to find out why this is, where they fit in, and how you can help!

Hi! I'm Jack Clark, strategy and communications director at OpenAI — a non-profit artificial intelligence research lab. I want to tell you about a new AI project from us called "Universe" that relies in part on games from Kongregate.

At OpenAI, we've spent the past few months building software called Universe to let us introduce our nascent artificial intelligence systems to the digital world. We're making the main components of Universe available as open source code, so anyone can use it to develop their own AI systems. Our initial release includes a thousand different environments to train your AI on, including many Kongregate games.

Universe makes it simple to hook up an AI agent to a program running on a computer, and lets the AI system interact with the program the same way a human does — by observing the pixels on the screen and operating the controls.
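That agent-environment loop can be sketched in plain Python. Here a stub class stands in for a real Universe game (the real system streams screen pixels from a remote environment, and the details below — the fake frames, the reward rule, the ten-point goal — are purely illustrative), but the loop has the same shape: observe pixels, send a keyboard event, receive a reward.

```python
import random

class StubGame:
    """Stand-in for a Universe environment: returns 'pixels' and a reward."""
    def __init__(self):
        self.score = 0

    def reset(self):
        self.score = 0
        return [[0] * 4 for _ in range(4)]  # a tiny fake frame of pixels

    def step(self, action):
        # Illustrative reward rule: one point whenever the agent presses 'up'.
        reward = 1 if action == ('KeyEvent', 'ArrowUp', True) else 0
        self.score += reward
        observation = [[random.randint(0, 255)] * 4 for _ in range(4)]
        done = self.score >= 10  # episode ends at an arbitrary score cap
        return observation, reward, done

env = StubGame()
observation = env.reset()
done = False
while not done:
    # A placeholder policy: always press the up-arrow key, expressed as a
    # (event, key, is_down) tuple in the style of a keyboard event.
    action = ('KeyEvent', 'ArrowUp', True)
    observation, reward, done = env.step(action)

print(env.score)  # -> 10
```

The point is the interface: the agent only ever sees pixels and only ever emits keyboard/mouse events, so the same agent code can be pointed at any program.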

This kind of infrastructure is pretty crucial to the development of AI systems, because it lets us expose our AI agents to a variety of different stimuli and lets them interact with a range of novel environments.

If you can't easily introduce your AI to new scenarios then you have a fairly expensive brain in a jar (well, code on a server, but you get the point). With Universe, you can link this AI to a variety of different programs, giving it the same interface each time, letting it learn to perceive and act in a variety of different environments.

A Brief Aside About How Our AI Works

We built Universe so we could teach our increasingly capable AI systems about the world. The technique we're using is called deep reinforcement learning. Deep RL is a popular technology in AI — it's already been used to let software master a range of Atari videogames, beat a world champion at the ancient boardgame of Go, and make data centers more efficient.

The approach has two main ingredients: neural networks, the machinery that lets our AI perceive the world and develop an internal representation of it; and reinforcement learning, which teaches the AI to associate the actions it takes with changes in what it perceives. RL works by rewarding correct courses of action, like gaining a high score, so the neural network components learn which actions generate rewards.
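As a toy illustration of that reward-driven loop (not OpenAI's actual training code, and with made-up reward probabilities): an agent that starts out knowing nothing learns from rewards alone which of two actions pays off.

```python
import random

random.seed(0)

# Toy reward model: 'boost' usually pays off, 'brake' rarely does.
# (Purely illustrative probabilities, not taken from any real game.)
def reward(action):
    return 1 if random.random() < (0.8 if action == 'boost' else 0.2) else 0

# Running estimate of the value of each action, learned from rewards.
values = {'boost': 0.0, 'brake': 0.0}
counts = {'boost': 0, 'brake': 0}

for step in range(2000):
    # Epsilon-greedy: mostly take the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(['boost', 'brake'])
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (r - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent settles on 'boost'
```

Deep RL replaces the lookup table of values with a neural network so the same idea scales to raw pixels, but the core loop — act, observe reward, update estimates — is the same.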

We've also used another trick, called behavior cloning. This lets us take data gathered from human players mastering various games and train our AI to imitate the actions they took. It's helpful for games where a complex sequence of actions stands between the agent and its first reward: pure RL approaches tend to get stuck there, because the chance of stumbling onto that long action sequence at random is slim. So we seed our Universe agents with examples of good play, letting them learn from supervision in combination with RL.
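At its simplest, behavior cloning is supervised learning: predict the human's action from the observation. Here is a minimal sketch with hypothetical demonstration data, where observations are coarse game-state labels and the "classifier" is just a per-state majority vote (real systems would train a neural network on pixel observations instead):

```python
from collections import Counter, defaultdict

# Hypothetical demonstration data: (observation, action) pairs recorded
# from human play. Observations are simplified to coarse state labels.
demonstrations = [
    ('red_car_ahead', 'hit'),
    ('red_car_ahead', 'hit'),
    ('purple_car_ahead', 'swerve'),
    ('straightaway', 'boost'),
    ('straightaway', 'boost'),
    ('red_car_ahead', 'swerve'),  # humans aren't perfectly consistent
]

# Tally which action humans most often took in each state...
tallies = defaultdict(Counter)
for observation, action in demonstrations:
    tallies[observation][action] += 1

# ...and clone that behavior as the agent's starting policy.
policy = {obs: counter.most_common(1)[0][0] for obs, counter in tallies.items()}

print(policy['red_car_ahead'])  # -> 'hit'
print(policy['straightaway'])   # -> 'boost'
```

A policy seeded this way already reaches the game's early rewards, which gives reinforcement learning something to improve on.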

To get an idea of how unsuccessful a random RL agent is, you can view this animation of an algorithm getting stuck on the Kongregate game Neon Racer.

Once we've trained the agent using our Universe system, it attains the same scores a talented human player might. It develops rapid reactions, figures out that it can gain points by hitting the red cars and avoiding the purple ones, and even learns to use the "boost" special ability.

But it doesn't stop there. We can keep training the AI until it achieves mastery of the game, attaining top scores and displaying a decent aptitude for the more strategic elements, like targeting certain types of cars or timing its boost.

Where Kongregate Fits In

So, where does Kongregate come in? With Universe, we want not only to train our AI systems to master individual games, but also to develop a curriculum that lets them learn about more and more complex aspects of reality. That requires a lot of diverse environments, many of them games. Our initial release has over a thousand different environments, giving our machines a diverse world to learn and experiment in.

Web games like those showcased on Kongregate are an excellent resource for this because they consist of genres — say, racing games — with lots of variation, difficulty differences, and differing levels of fidelity. Therefore, we can not only train our brain-in-a-jar to take the correct course of action to win at a single Kongregate game, but can also teach it to master the basic principles of racing by playing a variety of games. We'll have more to share about our transfer learning experiments in the coming months.

Obviously, it's not feasible to train every agent to master every game if it takes several hours to do so for each game. That's where transfer learning comes in. The more games we have in Universe, the easier it is for us to train an agent on one, then transfer some of that knowledge over to another one, then another one, and so on. Each time, the agent should master the new game faster and, hopefully, develop some general "common sense" knowledge about games along the way.
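One way to picture transfer learning (a deliberately simplified sketch — the action names and values are hypothetical, and real transfer happens in neural network weights rather than lookup tables): knowledge learned on one racing game seeds learning on a related one, so the agent starts from informed guesses instead of from scratch.

```python
# Action-value estimates an agent has already learned on game A
# (hypothetical numbers, standing in for trained network weights).
values_game_a = {'accelerate': 0.9, 'boost': 0.7, 'brake': 0.1}

def transfer(source_values, target_actions, neutral=0.5):
    # Copy over estimates for actions the two games share; actions the
    # new game introduces start at a neutral value and must be learned.
    return {a: source_values.get(a, neutral) for a in target_actions}

# Game B shares two controls with game A and adds a new one.
values_game_b = transfer(values_game_a, ['accelerate', 'brake', 'jump'])
print(values_game_b)  # {'accelerate': 0.9, 'brake': 0.1, 'jump': 0.5}
```

The agent trained this way only has to learn the genuinely new parts of game B, which is why each additional game should take less training time than the last.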

So, if you're interested in helping us build the next generation of AI in an open and transparent way, then there's an easy way to help: make your games available to Universe by hopping over to our website and filling out this form. We look forward to building artificial intelligence with you and your peers. Thanks!