I'm in a rather tricky situation with a 2D car racing game. In a nutshell, I'm experimenting with a vision-based sensory system. The game already has the following components implemented: racing track geometry, car models, physics (and related dynamics), collision detection, and the vehicle control code. Right now, each AI agent just needs a ghost to connect the sensory data with the control systems. :P A more elaborate description of the problem is given below.
Sensory System
The whole point of the exercise is to explore vision-based computing. Each AI agent uses a compound eye system, roughly like insects do. The system is just a 1D array, where entries correspond to small angle increments within the entire 180-degree view. The field of view has a certain cut-off radius, which corresponds to the computed stopping distance at the vehicle's current velocity. Another analogy is to imagine a 180-degree radar sweep of the scene at certain angle increments, where we retain the following information:
- Distance of ray-object intercepts, which include road barriers and other cars... basically a depth buffer;
- Relative velocity of the intercepted object surfaces;
- Object IDs, which tell us what kind of things we have indexed in the buffers;
- Seek hotspot, essentially we render the road cross section along the racing line up to a certain look-ahead distance. So imagine you are looking at the floor area of a long, winding hallway, with the furthest distance of the hallway being the most desired seek location.
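To make the buffer idea concrete, here is a minimal sketch of the sweep filling the depth and object-ID buffers. It assumes obstacles are circles and all names (`sense`, the tuple layout) are hypothetical, not from the actual game; the real version would also fill the relative-velocity and seek buffers.

```python
import math

def sense(agent_pos, agent_heading, obstacles, num_rays=64, cutoff=100.0):
    """Sweep 180 degrees in front of the agent and fill 1D buffers.

    obstacles: list of (center_x, center_y, radius, object_id) circles,
    a stand-in for whatever geometry the real game uses.
    cutoff: the view radius (stopping distance in the actual system).
    Returns parallel lists: depth per ray, object ID per ray
    (None = nothing within the cutoff radius).
    """
    depth = [cutoff] * num_rays
    ids = [None] * num_rays
    for i in range(num_rays):
        # Angle of this ray: -90..+90 degrees relative to the heading.
        a = agent_heading + math.pi * (i / (num_rays - 1) - 0.5)
        dx, dy = math.cos(a), math.sin(a)
        for (cx, cy, r, oid) in obstacles:
            # Ray-circle intersection: solve |p + t*d - c|^2 = r^2 for t.
            ox, oy = agent_pos[0] - cx, agent_pos[1] - cy
            b = ox * dx + oy * dy
            c = ox * ox + oy * oy - r * r
            disc = b * b - c
            if disc < 0:
                continue  # ray misses this circle
            t = -b - math.sqrt(disc)
            if 0 <= t < depth[i]:  # keep the nearest hit per ray
                depth[i] = t
                ids[i] = oid
    return depth, ids
```

In effect this is the "depth buffer" from the bullet list: each ray index is one eye facet.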
Current AI Implementation
At the moment the AI system works more or less, but with a few problems. Basically the AI is a reactive system whose primary objective is to steer the car toward the most desired seek location. If the seek location is obscured by an object along the car's longitudinal axis, the algorithm searches for gaps between obstacles and chooses the gap closest to the most desired seek location.
The algorithm also takes the relative velocity into account: it ignores objects separating from the AI agent (i.e. treats them as invisible), and attempts to avoid obstacles that are closing in.
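The gap-selection step above can be sketched roughly as follows. This is my reading of the description, not the actual implementation: gaps are runs of "open" rays in the depth buffer, separating objects are treated as open, and the run whose center lies closest to the seek direction wins. The names and the `min_width` chassis proxy are assumptions.

```python
def choose_gap(depth, rel_vel, seek_index, clear_dist, min_width=3):
    """Pick a steering ray index from the 1D eye buffers.

    depth: depth buffer from the eye sweep.
    rel_vel: closing speed per ray (negative = separating, so ignored).
    seek_index: ray index of the most desired seek location.
    clear_dist: depths at or beyond this count as open road.
    min_width: a gap must span at least this many rays to be usable
    (a crude stand-in for the chassis-width check).
    Returns the ray index to steer toward, or None if boxed in.
    """
    open_ray = [d >= clear_dist or v < 0 for d, v in zip(depth, rel_vel)]
    best, best_cost = None, None
    i = 0
    while i < len(open_ray):
        if open_ray[i]:
            j = i
            while j < len(open_ray) and open_ray[j]:
                j += 1  # extend the run of open rays
            if j - i >= min_width:
                center = (i + j - 1) // 2
                cost = abs(center - seek_index)  # distance to seek direction
                if best_cost is None or cost < best_cost:
                    best, best_cost = center, cost
            i = j
        else:
            i += 1
    return best
```

A `None` result is exactly the boxed-in case raised under Issues below: no gap wide enough for the chassis anywhere in the field of view.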
The AI is also doing sweep tests along the longitudinal axis, and applies either braking or acceleration, depending on the distance of the intercept point. Sweep tests take the chassis width into account (basically it is a box sweep test).
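The brake/accelerate decision from the longitudinal sweep can be sketched like this, assuming a simple constant-deceleration stopping-distance model (v²/2a); the margin value and the ramped braking response are my own placeholder tuning, not the game's.

```python
def longitudinal_control(intercept_dist, speed, max_decel, margin=2.0):
    """Decide throttle vs brake from the box-sweep intercept ahead.

    Compares the intercept distance against the stopping distance
    v^2 / (2 * a) plus a safety margin.
    intercept_dist: distance to the nearest sweep hit (None = clear).
    Returns a value in [-1, 1]: negative = brake, positive = throttle.
    """
    stopping = speed * speed / (2.0 * max_decel)
    if intercept_dist is None or intercept_dist > stopping + margin:
        return 1.0  # road is clear: accelerate
    # Brake harder the deeper we are inside the stopping envelope.
    deficit = (stopping + margin) - intercept_dist
    return -min(1.0, deficit / max(stopping, 1e-6))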
Issues
The system is surprisingly good at aiming itself at gaps and navigating through them. My guess is that the biggest problem is the lack of a predictive component for obstacle avoidance. While the system is effective at finding gaps, it has no ability to judge whether taking a given gap is risky. This often results in cars cutting each other off. I could use some ideas on how to resolve this.
The other issue: how do I detect a boxed-in situation? Say the agent is sandwiched between a car and the road barrier, with an obstacle in front as well. How do I avoid getting the agent into this situation in the first place? I guess this is an extension of the aforementioned risk-assessment problem. Traditional path-finding solutions could work, but then we start to deviate from the reactive, vision-based system I'm aiming for.