Ideas for a Better AI

WARNING: This post will probably get extremely technical.
Some news: I haven’t forgotten those tutorials! 3 weeks of break are coming up, and I’ve got 2 videos and 3 new additions to my UScript series in the works.

For the past few weeks I’ve been away from UDK and immersed in a bunch of dusty old books (not to mention talking to a few dusty old professors) at my college. Part of the reason for this hiatus was final exams, but I also wanted to learn about artificial intelligence so that I could apply it to UDK. Today, I’d like to share with you all a bit of what I learned.

Games today increasingly push the idea of “advanced AI”, as if that somehow makes gameplay more enticing or rewarding. One of Crysis 2’s biggest selling points was the claim that its AI was the best ever. Was it? In the end…no, not really. Sure, at some points during the campaign the aliens did exhibit better-than-average strategy, but they still made the same old predictable mistakes that give all AI away. I’m sure you’ve all seen a bot running stupidly into a wall or just standing there, blocking your way; it doesn’t happen often, but when it does it’s downright embarrassing.

The reason for this is that developers actually have to work backwards. A lot of the work that goes into making a “better” AI is actually trying to make it stupider, in the sense that it’s really easy to make an omnipresent bot that knows all, never misses, and thinks faster than the speed of light. Why do you think so many aimbots exist in the bowels of online gaming?

“Intelligence” in the gaming sense is actually just another way of saying more human. And humans aren’t anywhere near as good as a godly aimbot; we make mistakes, fight with our teammates, and have the tendency to let our emotions get in the way of things. To a computer, things like “frustration” and “oops” don’t exist. The AI of today represents our collective progress at faking these things: call it smoke and mirrors if you want, but it isn’t necessarily going to make anything seem smarter.

As computers become increasingly powerful, a lot of new doors suddenly open up for AI developers. For example, one of the things that has always been sadly absent in video games is learning. Why? Because:

  1. Effective ways of making machines learn are not fully understood
  2. The processes and algorithms involved are computationally horrendous

How would you like it if your game ran at 10 FPS because the bots you were fighting against were busy thinking? I know I would probably throw that game in the trash. But now that we are finally approaching a so-called “next generation” in gaming, and even Macs can handle the previously processor-crushing Unreal Engine, I think it’s time to begin adding some of this cool stuff to game AI.

Before we can dive into UnrealScript, we must first understand exactly what the heck we’re doing. What is learning?

In simple terms, learning is “drawing from experience”. As babies we don’t know language, but as we hear sounds and sentences spoken by our parents, we begin to mimic what they say. Letters, words, semantics…all these we know just because we’ve heard it all before.

But it’s slightly more complicated than that. You see, learning is NOT the same thing as remembering. I can remember hearing the word “philatelist” at some point in my life, but do I actually know what it means? Can I use it in a sentence? Have I learned it? In this respect, learning is not so much “drawing from experience” as it is “applying experience to novel situations”.

In principle, turning these concepts into useful code isn’t that hard. It certainly seems hard, though: when I first Googled the topic, a lot of phrases like “neural networks”, “genetic algorithms”, “combinatoric heuristics”, etc. popped up. It was ghastly. Over time, I learned this was nothing more than a smart way of saying what I’ve already said. One project in particular you should check out is Project NERO; these guys have already done much of what I’m talking about (albeit for a different platform).

Still, none of the solutions I’ve found so far address the biggest problem: making AI think like a human. For that, we’re going to have to take the conventional idea of learning a step further.

This system is partly my own idea and partly drawn from a weird computer science/psychology book I skimmed through. It’s easiest to understand with an example; in this case, I’ll use chess.

  • Tier 1: Concrete (Rules) — Imagine sitting down to play a game of chess, having never seen a chess board in your life. You don’t know what any of the exotic pieces do. As you play your first game, your opponent begins teaching you some of the basic rules of the game (e.g., pawns can only move one or two squares, knights move in an L-shape). These rules never change, they’re easy to pick up, and you learn and begin applying them fast.
  • Tier 2: Trusted (Strategy) — After playing a few dozen games, you’re beginning to get the hang of it. Moreover, you’re starting to pick up on some of the finer aspects of the game (e.g. forking, pinning, en passant, effective castling). Are these necessary to play chess? Not at all. But they most certainly make a person better at chess, and they can be trusted to work most of the time.
  • Tier 3: Experimental (Flair) — Every great chess player has their own special way of playing. One of the more famous examples of this is Adolf Anderssen, whose bold penchant for sacrificing his pieces allowed him to win one of the greatest chess games of all time. How exactly do people develop these strategies? Bringing such “flair” or “flavor” to the task at hand can’t come entirely from experience. To be human is to be creative; to be human is to innovate.

As you can see, there are different “types” or “tiers” of learning, and we develop these tiers at different times. In UDK terms, Tier 1 is already pretty much in place since the base AI knows the rules of the game. Basic strategy is also implemented, but it’s hard-coded. In order for the AI to develop its own strategy it will have to play a LOT of games.

The idea is that the AI remembers everything. Over time it begins to recognize patterns in such things as kill-death ratios with different weapons. Tier 3 is developed through experimentation with other methods, such as using a rarely-used weapon more often. Patterns that prove themselves move up to Tier 2 because they are trusted: chances are, if they’ve worked so far, they’ll continue to work. In order to apply these remembered values to actual actions, we must give each tier a weight. In other words: if ( [Base Value] + 0.5 * [Tier 2 Value] + 0.25 * [Tier 3 Value] > [Conditional] ) { [do something] }. The precise values of these multipliers must be determined through (you guessed it) experimentation.
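The tier weighting above can be sketched in a few lines. This is an illustrative Python sketch, not UnrealScript; the function name, the example stat values, and the default weights of 0.5 and 0.25 are all hypothetical placeholders taken from the formula in the text, to be tuned through experimentation.

```python
def should_act(base_value, tier2_value, tier3_value, threshold,
               tier2_weight=0.5, tier3_weight=0.25):
    """Return True if the combined tier score clears the threshold.

    base_value:  hard-coded Tier 1 preference (the rules of the game)
    tier2_value: trusted, pattern-derived score (e.g. kill-death ratio)
    tier3_value: experimental score (e.g. bonus for a rarely-used weapon)
    """
    score = base_value + tier2_weight * tier2_value + tier3_weight * tier3_value
    return score > threshold

# Example: deciding whether to switch weapons based on learned stats.
print(should_act(base_value=0.4, tier2_value=0.6, tier3_value=0.2,
                 threshold=0.6))  # → True (0.4 + 0.30 + 0.05 = 0.75)
```

Keeping the weights as parameters makes the experimentation step easy: the same decision function can be re-run with different multipliers until the bot’s behavior feels right.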

I’ve already started doing some basic stuff with the UDK AI, such as trying to get it to store values in a save file and load them later, but the real heavy lifting is yet to come. No doubt this system can be implemented in UnrealScript (or any other high-level language, for that matter), but the difficulty is getting it to work well. And that will take a great deal of time.
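The “store values and load them later” step might look something like the following. Again this is a hedged Python sketch of the concept, not the UDK implementation; the file name, the weapon name, and the stat fields are invented for illustration, and the real version would go through UDK’s own save facilities.

```python
import json
import os
import tempfile

def save_stats(stats, path):
    """Persist the bot's remembered values (e.g. per-weapon kill/death counts)."""
    with open(path, "w") as f:
        json.dump(stats, f)

def load_stats(path):
    """Load remembered values; on the first run there is nothing learned yet."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

# Hypothetical example: one weapon's remembered performance.
path = os.path.join(tempfile.gettempdir(), "bot_stats.json")
save_stats({"ShockRifle": {"kills": 12, "deaths": 7}}, path)
print(load_stats(path)["ShockRifle"]["kills"])  # → 12
```

The point of the round-trip is that the bot’s Tier 2 and Tier 3 values survive between matches, which is what makes long-term pattern recognition possible at all.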

More to come later.

