Technical director Magnus Nordin discusses how the Search for Extraordinary Experiences Division (SEED) — a team at EA that explores the future of interactive entertainment — built a self-learning AI agent that taught itself how to play Battlefield 1 multiplayer from scratch.
First, tell us about yourself. What’s your background, what do you do and what exactly is SEED?
I joined EA six years ago, after having worked two decades as a computer scientist in various capacities. My first job at EA was with DICE and I later moved to SEED when it was founded two years ago.
At SEED, we explore what interactive entertainment will look like in the longer term. While we do some academic research, we’re not a pure research unit. Trying to guess what the distant future holds has a tendency to become abstract, so we try to be as practical as possible and keep our horizon to technology that we think will impact interactive entertainment three to five years from now.
Our approach is to build functioning prototypes and set up real creative experiences with emerging technologies, such as artificial intelligence, machine learning, virtual- and augmented reality, and large-scale dynamic virtual worlds.
One of your latest projects has been to train a self-learning agent to play Battlefield 1 multiplayer. How did that project come about?
When I learned that an AI created by DeepMind had taught itself to play old Atari games, I was blown away. This was back in 2015, and it got me thinking about how much effort it would take to have a self-learning agent learn to play a modern and more complex first-person AAA game like Battlefield. So when I joined SEED, I set up our own deep learning team and started recruiting people with this in mind.
First we figured out the basics, and built a bare-bones three-dimensional FPS to test our algorithms and train the network. After seeing some good results in our own basic game, we worked with the team at DICE to integrate the agent in a Battlefield environment.
How do you think your self-learning agent performs versus a human Battlefield player?
We have conducted playtests, pitting AI agents against human players in a simplified game mode, restricted to handguns. While the human players outperformed the agents, it wasn’t a complete blowout by any stretch.
The agent is pretty proficient at the basic Battlefield gameplay, and has taught itself to alter its behavior depending on certain triggers, like being low on ammo or health. But Battlefield is about so much more than defeating your opponents. There's a lot of strategy involved: teamwork, knowing the map, and being familiar with individual classes and equipment. We will have to extend the agents' capabilities further before the AI can crack these nuts.
Still, after the playtests, a few participants asked us to clearly mark the agents so that they could be properly distinguished, which to me is a good testament to how well the agents perform and how lifelike they are.
To be fair, the gameplay also shows instances of the AI bots seemingly goofing around and running around in circles. What’s happening there?
At the moment, the agents aren’t very good at planning ahead. If an agent spots an objective, like an enemy player, it will act. But if there’s nothing in sight, it will eventually start to spin around to look for something to do. A better strategy would be to go and search for opponents across the map or find somewhere to hide, but the agents aren’t quite up to that yet. I’m confident they will do less silly stuff in the future, as they become more adept.
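The behavior described above is essentially reactive: engage what you can see, and fall back to scanning when nothing is visible. A minimal, hypothetical sketch of that decision rule (the function and action names are illustrative, not EA's) might look like this:

```python
# Hypothetical sketch of the reactive behavior described above: the
# agent engages a visible objective, and otherwise falls back to
# turning in place to scan (the "running in circles" players noticed).
# A planning layer would replace that fallback with map-level search.

def choose_action(visible_targets, scan_angle):
    if visible_targets:
        # An objective is in sight: act on the first one.
        return ("engage", visible_targets[0])
    # Nothing in sight: keep rotating to look for something to do.
    return ("turn", (scan_angle + 45) % 360)

# With no targets, the agent just spins through scan angles.
idle_actions = [choose_action([], a) for a in (0, 45, 90)]
# With a target visible, it immediately switches to engaging.
engage_action = choose_action(["enemy_at_ridge"], 0)
```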
How long did the self-learning agent train?
You can’t play Battlefield by pressing a single button at a time; it requires players to perform an array of simultaneous actions. So to give the self-learning agent a head start with basic action combinations, we let it observe 30 minutes of human play, a process called imitation learning, before letting it train on its own.
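The imitation-learning step can be sketched as behavioral cloning: fit a policy to recorded human (observation, pressed-buttons) pairs before any self-play begins. Because the game needs simultaneous actions, each button is an independent binary output rather than a single choice. This is a toy sketch under those assumptions; the features, actions and model here are illustrative, not EA's actual setup.

```python
import math

# Behavioral cloning on a multi-label action space: a tiny linear
# policy learns to reproduce human demonstrations. Each action head
# is an independent logistic unit, so several "buttons" can be
# pressed at once.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LinearPolicy:
    def __init__(self, n_features, n_actions):
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.b = [0.0] * n_actions

    def probs(self, obs):
        # One press-probability per action head.
        return [sigmoid(sum(wi * oi for wi, oi in zip(w, obs)) + b)
                for w, b in zip(self.w, self.b)]

    def fit(self, demos, lr=0.5, epochs=1000):
        # demos: list of (observation, pressed-button bits) pairs.
        for _ in range(epochs):
            for obs, acts in demos:
                p = self.probs(obs)
                for a in range(len(self.b)):
                    err = p[a] - acts[a]  # gradient of the log-loss
                    for i in range(len(obs)):
                        self.w[a][i] -= lr * err * obs[i]
                    self.b[a] -= lr * err

# Toy demonstrations: [enemy_visible, low_ammo] -> [forward, fire, jump]
demos = [
    ([1.0, 0.0], [1, 1, 0]),  # enemy in sight, ammo ok: advance and fire
    ([0.0, 0.0], [1, 0, 0]),  # nothing in sight: keep moving
    ([1.0, 1.0], [0, 0, 1]),  # enemy in sight, low ammo: break away
]

policy = LinearPolicy(n_features=2, n_actions=3)
policy.fit(demos)
pressed = [p > 0.5 for p in policy.probs([1.0, 0.0])]
```

After fitting, the cloned policy reproduces the demonstrated button combinations, which is exactly the "head start" the reinforcement-learning phase then builds on.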
The agents that we show in our demo have subsequently practiced for six days against versions of themselves and some simple old-fashioned bots, playing on several machines in parallel. In total, that equates to roughly 300 days of gameplay experience. They’re constantly improving but not particularly fast learners.
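Running many game instances in parallel is how six days of wall-clock time turns into roughly 300 days of experience. A minimal sketch of that pattern, with a thread pool standing in for the fleet of machines and dummy transitions standing in for real gameplay (all names and numbers here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Parallel experience collection: many workers play self-play episodes
# concurrently and feed one shared buffer, multiplying the gameplay
# experience gathered per unit of wall-clock time.

def collect_rollout(worker_id, episode_steps=100):
    # Stand-in for one self-play episode; a real worker would run a
    # game client and return (observation, action, reward) transitions.
    return [(worker_id, step) for step in range(episode_steps)]

def parallel_self_play(n_workers=8, episodes_per_worker=5):
    buffer = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(collect_rollout, w)
                   for w in range(n_workers)
                   for _ in range(episodes_per_worker)]
        for f in futures:  # gather results in submission order
            buffer.extend(f.result())
    return buffer

# 8 workers x 5 episodes x 100 steps = 4000 transitions collected.
buffer = parallel_self_play()
```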
The agent has the same field-of-view as a human player and is assisted by a mini-map. We quickly discovered, however, that Battlefield is too visually complex for the agent to understand, which meant we had to simplify what it sees.
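One common way to simplify what an agent sees is to reduce a high-resolution frame to a coarse grid by block-averaging. The real pipeline's inputs and resolutions aren't public, so this is purely an illustrative sketch of the idea:

```python
# Block-average a 2D intensity frame down to a coarser grid: each
# factor x factor block becomes a single averaged value, cutting the
# visual complexity the network has to digest.

def downsample(frame, factor):
    h, w = len(frame), len(frame[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [frame[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Toy 8x8 "frame" with intensity = row * 8 + column.
frame = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
coarse = downsample(frame, 4)  # 8x8 frame -> 2x2 grid
```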
We’ve seen cases of self-learning agents that have taught themselves to play old arcade games, as well as the original Doom and Go. What makes your work stand out from these examples?
As far as I know, this is the first implementation of deep reinforcement learning in an immersive and complex first-person AAA game. Besides, it’s running in Battlefield, a game with famously elaborate game mechanics.
What’s the practical use of this technology right now?
Our short-term objective with this project has been to help the DICE team scale up its quality assurance and testing, which would help the studio to collect more crash reports and find more bugs.
In future titles, as deep learning technology matures, I expect self-learning agents to be part of the games themselves, as truly intelligent NPCs that can master a range of tasks and that adapt and evolve over time as they accumulate experience from engaging with human players.
When do you think we will see self-learning AI becoming a mainstream technology in games?
I have no doubt in my mind that neural nets will start to gradually make their way into games in the years to come. Self-learning agents aren’t just a good replacement for old-fashioned bots; you can also apply machine learning to a number of fields, such as procedurally generated content, animation, voice generation, speech recognition and more.
Will self-learning agents ever beat professional FPS players? If so, when?
At the risk of going out on a limb with a crazy prediction, I think it’s reasonable to expect AI agents capable of defeating human players in a limited competitive game mode (one that features smaller maps, focused teams and clear objectives) a couple of years from now. However, at SEED we’re not necessarily out to build AI that will defeat human players. Our aim is to help create new experiences that enhance games and make them more fun. Getting owned by a superior AI isn’t necessarily that fun for players in the long run.
SEED is a cross-disciplinary team within EA Worldwide Studios. Its mission is to explore, build and help define the future of interactive entertainment. To learn more about SEED, visit https://www.ea.com/seed.