Thursday, February 4, 2010

Those good-for-nothin' bandits

A lot of trial and error goes into the development process. Ideas that seem great on paper aren't always so great in practice and, for the sake of the game, you have to just scrap them and start over.

A key feature I mentioned in my last post was that our enemies would be "smarter" than those of other top-down shooters. Well, it isn't hard to develop something more complicated than "move from point A to point B", but we owe it to our fans to take things a step further. After all, the ultimate goal of our AI is to create enemies who appear somewhat human to the player.

So I ignored all conventions and started off big.

My first attempt at AI involved what I call the Enemy Neural Simulacron. The philosophy behind this was that it wasn't enough to simply develop an algorithm that allowed for learning and rational decision making. A human has a lifetime of experiences that have shaped his thought process, determined his life situation, and so forth. For our project to succeed, we would need to allow an individual enemy unit to live an entire lifetime in a simulated Wild West.

Brilliant, I thought. At the start of every game, entire lifetimes would be lived out, although of course in significantly less time. This way, enemies would have distinct personalities and real problems that affected their performance, such as bouts of depression.

But once I got things running, a problem quickly emerged. Very few of my virtual people ended up becoming bandits. I was left with hundreds of useless salesmen, dentists, public officials, and so on. Hundreds of lifetimes lived out, only to create useless people. Not only this, but the people who eventually did become bandits did not perform very differently from one another. Sure, some had practiced more with their virtual guns, and others were more athletic, but I had hoped for more. One bandit's wife had left him earlier that day, but he still fought just as hard as the rest of them. It was a virtual testament to the human spirit, but it didn't add much diversity to the fight. He had a good cry later, though.

What did I learn from all of this? Apparently, in a gunfight the only rational course of action is to shoot your opponent, try not to get shot, and navigate sensibly around objects. I could have just told them to do that from the start.

So I destroyed that disappointing waste of memory and started fresh. The results are very similar, and the process is much more efficient.
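For the curious, here's a rough sketch of what those three rules might look like in practice. This is just an illustration thrown together for this post (the class and function names are made up, not pulled from our actual code): when a bandit can see you, he shoots and sidesteps; when he can't, he walks toward you and veers around whatever is in the way.

    import math
    import random

    class Obstacle:
        def __init__(self, x, y, radius):
            self.x, self.y, self.radius = x, y, radius

    class Player:
        def __init__(self, x, y):
            self.x, self.y = x, y

    def segment_point_distance(px, py, ax, ay, bx, by):
        # Distance from point (px, py) to the segment (ax, ay)-(bx, by).
        abx, aby = bx - ax, by - ay
        apx, apy = px - ax, py - ay
        denom = abx * abx + aby * aby
        t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
        cx, cy = ax + t * abx, ay + t * aby
        return math.hypot(px - cx, py - cy)

    class Bandit:
        SPEED = 2.0

        def __init__(self, x, y):
            self.x, self.y = x, y

        def can_see(self, player, obstacles):
            # Line of sight is clear if no obstacle overlaps the segment to the player.
            return all(
                segment_point_distance(ob.x, ob.y, self.x, self.y, player.x, player.y) > ob.radius
                for ob in obstacles
            )

        def shoot_at(self, player):
            print(f"Bang! Firing at ({player.x:.1f}, {player.y:.1f})")

        def sidestep(self):
            # Strafe in a random direction; good enough for a sketch.
            self.x += random.choice((-1, 1)) * self.SPEED
            self.y += random.choice((-1, 1)) * self.SPEED

        def move_toward(self, player, obstacles):
            # Step toward the player; if an obstacle blocks the next step, veer sideways.
            dx, dy = player.x - self.x, player.y - self.y
            dist = math.hypot(dx, dy) or 1.0
            step_x, step_y = self.SPEED * dx / dist, self.SPEED * dy / dist
            for ob in obstacles:
                if math.hypot(ob.x - (self.x + step_x), ob.y - (self.y + step_y)) < ob.radius:
                    step_x, step_y = -step_y, step_x  # deflect 90 degrees, retry next frame
                    break
            self.x += step_x
            self.y += step_y

        def update(self, player, obstacles):
            if self.can_see(player, obstacles):
                self.shoot_at(player)                # shoot your opponent
                self.sidestep()                      # try not to get shot
            else:
                self.move_toward(player, obstacles)  # navigate around objects

    # One frame of the fight:
    bandit = Bandit(0, 0)
    bandit.update(Player(10, 10), [Obstacle(5, 5, 2)])

No virtual childhoods, no heartbreak, and it runs in a few lines per frame instead of a few lifetimes per game.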

Once everything has been cleaned up I will post a video and explanation of the recent updates. At that time, we will revisit the enemy AI, for out of this failure we have found great success.
