Good day everyone,
I have been developing a game which can be described as a 'side-scrolling tower defence' or 'reverse platformer', in which players take on the role of the traditional 'bad guy' and deploy traps, obstacles and minions to a side-on level in an attempt to prevent the 'hero' from reaching the end. Obviously the AI for the 'hero' is going to prove extremely important.
I think I've formed a reasonable high-level view of how to approach the problem, though I'm certainly no expert and welcome being told otherwise. It's the implementation of these concepts I'm struggling with the most, however, and I was hoping I may be able to find some assistance here.
The goal of the AI remains the same at all times: to reach the end of the level. In doing so it must navigate the geometry of the level and, to the best of its ability, avoid taking damage or being killed by the array of obstacles placed by the player. In considering the problem, I decided that the AI needs to create a plan, because it has to weigh whether jumping to a platform presents an ongoing path, or whether jumping now will place the character in more danger than jumping later. Once again, I'm new to AI programming, so I fully acknowledge I may be over-thinking or poorly designing my approach here.
How I'm currently approaching the problem can be considered in a number of layers:
- A heuristic. Currently this is simply 'positive movement on the x axis, multiplied by 2'. In the future, levels may include a simple A* graph whose heuristic would be the remaining distance to the 'goal node', which would allow for more complex, multi-route levels. (See the scoring sketch after this list.)
- What could be termed a 'navigation mesh'. At the completion of the game's 'build phase', the level geometry is scanned and a series of 'walkable lines', or 'platforms', is created to inform the AI of walkable areas.
- The AI, when recalculating its plan, determines which platforms it can reach from its current platform, then extrapolates similarly from each of those, excluding back-tracking and known dead-ends, until it has a plan extending at least 700 logical units in either direction. It should only back-track if it has no route forward, and when a platform has no route forward it is listed as a dead-end, to avoid oscillating back and forth over the same area. (There's a rough sketch of this expansion after the list.)
- As these paths are generated they're assigned a cost based upon the movement required, with extra cost for jumping or sliding; the cost is meant to capture both the 'effort' and the 'difficulty' of the plan. They're also assigned a score based on the aforementioned heuristic. When all the paths have been found they're sorted by (score - cost), as in the scoring sketch after the list. I can happily fiddle with the sorting later.
- This is where it begins, perhaps, to exceed my knowledge and experience. The next step is to generate a plan, or series of plans, for the first few paths, or onward until a path is found that doesn't result in death. I'm having a great deal of difficulty determining how to generate this plan. Does one simply simulate every possible action at timed intervals and choose the best result, or does one attempt to direct the simulation toward the goal using various formulae to calculate stopping distances, run-up requirements and so on? Do you then randomly vary that plan to find the optimal solution? How do you choose when to jump, and what if the hero requires a run-up to make a jump? If this approach makes sense, do you have any recommendation on how to implement it? (My best guess so far is sketched after this list.)
- These plans are then sorted based upon the (score - cost) of their underlying paths, with additional penalties imposed for taking damage or needing to avoid additional hazards.
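To make the expansion step above a little more concrete, here's a rough sketch of how I picture it working. The `Platform` class, the `reachable_from` callback and the constants are all made up for illustration; this isn't my actual code.

```python
from collections import deque

PLAN_HORIZON = 700  # logical units the plan should extend ahead of the hero


class Platform:
    """One 'walkable line' produced by the build-phase scan."""
    def __init__(self, left_x, right_x, y):
        self.left_x = left_x
        self.right_x = right_x
        self.y = y


def expand_paths(start, reachable_from, dead_ends):
    """Enumerate candidate platform sequences starting from 'start'.

    reachable_from(platform) -> platforms the hero could plausibly reach from it
    dead_ends                -> set of platforms already known to lead nowhere
    """
    complete = []
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        current = path[-1]
        # Stop growing a path once it covers enough ground ahead of the start.
        if current.right_x - start.left_x >= PLAN_HORIZON:
            complete.append(path)
            continue
        # Exclude back-tracking (platforms already in this path) and known dead-ends.
        candidates = [p for p in reachable_from(current)
                      if p not in path and p not in dead_ends]
        if not candidates:
            # No route forward from here: remember that, so later searches
            # don't oscillate back and forth over the same area.
            dead_ends.add(current)
            complete.append(path)  # keep the partial path; back-tracking is a last resort
            continue
        for nxt in candidates:
            frontier.append(path + [nxt])
    return complete
```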
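The scoring and sorting I described would then be something along these lines. The weights (the x2 multiplier, the jump cost) are just the placeholder values mentioned above, and the y axis is assumed to grow downward:

```python
JUMP_COST = 10.0  # arbitrary extra 'effort' for a jump between platforms


def heuristic(path, start):
    """Placeholder heuristic: positive movement on the x axis, multiplied by 2.
    A 'distance to goal node' value from an A* graph could be swapped in later."""
    progress = max(0.0, path[-1].right_x - start.left_x)
    return 2.0 * progress


def path_cost(path):
    """Rough 'effort/difficulty' of traversing a sequence of platforms."""
    cost = 0.0
    for a, b in zip(path, path[1:]):
        cost += abs(b.left_x - a.right_x)  # horizontal movement between platforms
        if b.y < a.y:                      # y grows downward, so this is a jump up
            cost += JUMP_COST
    return cost


def rank_paths(paths, start):
    """Sort candidate paths by (score - cost), best first."""
    return sorted(paths, key=lambda p: heuristic(p, start) - path_cost(p),
                  reverse=True)
```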
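And for the step I'm stuck on, my best guess so far is a mix of the two ideas in the question: use the standard projectile formulae to work out whether a jump is possible and how much run-up it needs, then verify the resulting action sequence with a short fixed-timestep forward simulation against traps and moving hazards. A sketch of the ballistics half, with invented movement constants:

```python
import math

# Invented tuning values; the real ones would come from the hero's movement code.
GRAVITY = 2000.0    # units / s^2
JUMP_SPEED = 900.0  # vertical take-off speed, units / s
RUN_SPEED = 400.0   # maximum horizontal running speed, units / s
RUN_ACCEL = 1200.0  # horizontal acceleration while running, units / s^2


def jump_reach(horizontal_speed, height_gain):
    """Horizontal distance covered by a jump that must land 'height_gain'
    units above the take-off point (negative means landing lower).
    Returns None if the jump can't gain that much height."""
    disc = JUMP_SPEED ** 2 - 2.0 * GRAVITY * height_gain
    if disc < 0.0:
        return None
    # Later root of: JUMP_SPEED*t - 0.5*GRAVITY*t^2 = height_gain
    airtime = (JUMP_SPEED + math.sqrt(disc)) / GRAVITY
    return horizontal_speed * airtime


def run_up_needed(required_speed):
    """Distance needed to accelerate from a standstill to 'required_speed'."""
    return required_speed ** 2 / (2.0 * RUN_ACCEL)


def can_make_jump(gap_width, height_gain):
    """Quick feasibility check at full running speed. If it passes, the plan
    becomes 'run up at least run_up_needed(RUN_SPEED), jump at the edge',
    which is then confirmed by forward-simulating it against hazards."""
    reach = jump_reach(RUN_SPEED, height_gain)
    return reach is not None and reach >= gap_width
```

The idea would be to generate one or two such action sequences per path segment rather than blindly enumerating every possible action at every timestep, and only fall back to varying them (earlier jump, shorter run-up) when the simulated run takes damage.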
Any advice you could give regarding the approach would be truly appreciated, and I must admit this process has proven rather disheartening thus far. There is obviously a great deal of room for improving efficiency in this model, but flexibility and a 'human-like' process are important. I know that I could bake jump nodes into the navigation data, but I feel this would compromise those important characteristics and prove a sub-optimal solution.
Thanks for any help you can give, once more.
Nathan Runge
Attached: 2 screenshots from a prototype showing the platform detection.
Addendum 1
The hero character has a small selection of options available: Move Left, Move Right, Jump, Duck and Do Nothing. Ducking while moving initiates a slide. 'Do Nothing' will slow the hero to a standstill if the character has momentum. Moving left or right happens at a much slower rate while airborne.
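For what it's worth, I imagine those options boiling down to something like this when fed into a simulation (the tuning numbers are invented):

```python
from enum import Enum


class HeroAction(Enum):
    MOVE_LEFT = 1
    MOVE_RIGHT = 2
    JUMP = 3
    DUCK = 4        # becomes a slide if the hero is already moving
    DO_NOTHING = 5  # momentum bleeds off until the hero is at a standstill


# Invented tuning values, just to show where the rules above would live.
GROUND_ACCEL = 1200.0  # units / s^2 while on a platform
AIR_ACCEL = 300.0      # much weaker steering while airborne
FRICTION = 900.0       # deceleration applied when doing nothing on the ground


def update_horizontal_velocity(vx, action, on_ground, dt):
    """One timestep of the horizontal movement rules described above."""
    accel = GROUND_ACCEL if on_ground else AIR_ACCEL
    if action == HeroAction.MOVE_RIGHT:
        return vx + accel * dt
    if action == HeroAction.MOVE_LEFT:
        return vx - accel * dt
    if action == HeroAction.DO_NOTHING and on_ground:
        # Slow to a standstill without overshooting past zero.
        return max(0.0, abs(vx) - FRICTION * dt) * (1 if vx >= 0 else -1)
    return vx
```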
Addendum 2
I'm considering using a GOAP-like planning system to determine actions, as this would give the AI a way to chain together the actions needed to achieve the task. A number of problems arise, however. The first is that such planning systems work backward from the goal, which makes checking for mobile hazards and obstacles a bit more challenging. The second question is how far to break the actions down: to the 'jump', 'run', 'duck' level, or to something more abstract such as 'run up', 'jump to platform', 'avoid obstacle', 'slide under trap'? My inclination would certainly be the latter; something like the sketch below.
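To illustrate the coarser granularity I'm leaning toward, I imagine the planner's actions looking roughly like this (purely illustrative; nothing here exists yet):

```python
class PlannerAction:
    """A GOAP-style action: symbolic preconditions and effects plus a cost.
    The planner would chain these backward from the goal; each one only expands
    into low-level inputs (run, jump, duck) when it is actually executed."""
    def __init__(self, name, cost, preconditions, effects):
        self.name = name
        self.cost = cost
        self.preconditions = preconditions  # facts that must hold before running
        self.effects = effects              # facts made true if the action succeeds


PLANNER_ACTIONS = [
    PlannerAction("run_up", 2,
                  {"on_platform": True},
                  {"at_speed": True}),
    PlannerAction("jump_to_platform", 5,
                  {"at_speed": True},
                  {"on_platform": True, "made_progress": True}),
    PlannerAction("slide_under_trap", 4,
                  {"on_platform": True, "at_speed": True},
                  {"past_trap": True, "made_progress": True}),
    PlannerAction("avoid_obstacle", 3,
                  {"on_platform": True},
                  {"past_obstacle": True, "made_progress": True}),
]
```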