
AI vs AI playtests for game balance

Started by Dan DAMAN, August 04, 2018 12:12 AM
2 comments, last by Dan DAMAN 6 years, 3 months ago

I've been working solo on a robot combat game inspired by Robot Wars, BattleBots and the like. I've attached a clip from an older build of the game, since showing beats telling :)

Players can choose a chassis, drivetrain and weapon for their robots before duking it out in the arena. This is where my problem comes in: I'd like to have a decently wide selection of parts with different appearances and different stats, and I intend to allow players to use any combination of parts they choose. Since I'm working solo, that's way, way more playtesting than I can reasonably manage on my own or with the handful of friends I could scrape together for such a thing.

I've already implemented a basic AI for the game, so I'm considering using it to autonomously test every possible pairing of designs against each other and see whether any are especially good or bad. The game would collect stats for each match (who won, damage taken, damage dealt, etc.) and dump it all into a giant CSV for processing.
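To make that concrete, here's a minimal sketch of what such a harness could look like in Python. Everything here is hypothetical: the part lists, the run_match stub and the CSV columns are placeholders standing in for whatever the game actually exposes.

```python
import csv
import itertools
import random

# Hypothetical part lists; the real game would supply these.
CHASSIS = ["light", "medium", "heavy"]
DRIVETRAINS = ["wheels", "tracks", "shuffler"]
WEAPONS = ["spinner", "flipper", "axe"]

def all_designs():
    """Every chassis/drivetrain/weapon combination a player could build."""
    return list(itertools.product(CHASSIS, DRIVETRAINS, WEAPONS))

def run_match(design_a, design_b):
    """Stand-in for the real AI-vs-AI simulation.

    Replace the random results with a call into the game; the dict
    layout is just one way to shape the per-match stats.
    """
    return {
        "winner": random.randint(0, 1),  # 0 = design_a won
        "damage_a": round(random.uniform(0, 100), 1),
        "damage_b": round(random.uniform(0, 100), 1),
        "duration": round(random.uniform(10, 180), 1),
    }

def run_all_pairings(out_path="results.csv", repeats=10):
    """Round-robin every ordered pairing, repeated to average out AI noise."""
    designs = all_designs()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["design_a", "design_b", "winner",
                         "damage_a", "damage_b", "duration"])
        for a, b in itertools.product(designs, repeat=2):
            for _ in range(repeats):
                r = run_match(a, b)
                writer.writerow(["+".join(a), "+".join(b), r["winner"],
                                 r["damage_a"], r["damage_b"], r["duration"]])

if __name__ == "__main__":
    run_all_pairings()
```

Worth keeping the combinatorics in mind: even three parts per slot gives 27 designs and 729 ordered pairings, so with 10 repeats that's 7,290 matches. Being able to run matches headless and faster than realtime pays off very quickly.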

What I'm wondering is if anyone else has done something like this? And what sort of pitfalls I should be aware of before getting too deeply into AI vs AI playtesting? If anyone has any postmortems on this sort of automation I'd love to see them.

[Attached clip: CurrentSparks.gif]

--------------------------Insert witty comment here!--------------------------
40 minutes ago, Dan DAMAN said:

What I'm wondering is if anyone else has done something like this?

Yes. It's called automated playtesting, and doing it for balance testing is a good idea.

42 minutes ago, Dan DAMAN said:

what sort of pitfalls I should be aware of before getting too deeply into AI vs AI playtesting?

The automation is just to get you a starting point that makes sense. You can't know what's going to happen when players start discovering workarounds or loopholes, or if they simply value certain features in a way that couldn't be computed.

-- Tom Sloper -- sloperama.com

On 8/3/2018 at 8:56 PM, Tom Sloper said:

The automation is just to get you a starting point that makes sense.

That's pretty much what I'm aiming for. What I'm hoping to get out of the data is an indication of things being really out of whack: part combinations with an extreme win/loss ratio in either direction, and/or specific matchups with extreme win/loss ratios. I'd then have real players try those, plus a random selection of "balanced" matchups as a control.
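Roughly the kind of post-processing I have in mind, as a sketch only: the column names match the hypothetical harness above, and the 35%/65% win-rate thresholds are arbitrary placeholders to tune.

```python
import csv
from collections import defaultdict

def flag_outliers(csv_path="results.csv", low=0.35, high=0.65):
    """Return designs whose overall win rate falls outside [low, high]."""
    wins = defaultdict(int)
    games = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            a, b = row["design_a"], row["design_b"]
            # Mirror matches (a == b) always contribute a 50% rate;
            # they could be skipped, but they only dilute, never distort.
            games[a] += 1
            games[b] += 1
            wins[a if row["winner"] == "0" else b] += 1
    flagged = [(d, wins[d] / n, n) for d, n in games.items()
               if not low <= wins[d] / n <= high]
    # Most lopsided first: prime candidates for human playtesting.
    return sorted(flagged, key=lambda t: abs(t[1] - 0.5), reverse=True)

if __name__ == "__main__":
    for design, rate, n in flag_outliers():
        print(f"{design}: {rate:.0%} win rate over {n} matches")
```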

I've found a few papers of interest, such as https://www.cc.gatech.edu/~riedl/pubs/zook-fdg14.pdf and https://www.researchgate.net/publication/323296721_Automated_Playtesting_with_Procedural_Personas_with_Evolved_Heuristics. Are there other good ones you can think of? Less formal articles are also welcome.

Thanks for the info so far.

--------------------------Insert witty comment here!--------------------------

