Calin said:
The problem with projects like Boston Dynamics is that what they have is a blind robot. The robot doesn't have cameras to see the environment, so whatever it does, it does blindfolded.
I don't know what they use for sensing, but I doubt those robots run blind. Sensing and modeling the environment is a key problem for mobile robots.
My ragdoll is indeed blind so far; its only sense is reacting to contacts, i.e. touch. It also knows its velocities and center of mass, of course.
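For reference, that touch sense amounts to something like the sketch below. The names are hypothetical, just to show the shape of the data the physics step already produces:

```cpp
#include <vector>

// One contact event reported by the physics step.
struct ContactSample {
    int   bodyPartId;   // which limb/segment registered the contact
    float normal[3];    // contact normal, pointing away from the other surface
    float impulse;      // how hard the contact was
};

// Everything the ragdoll "feels" per frame: proprioception plus touch.
struct RagdollSenses {
    float centerOfMass[3];
    float comVelocity[3];
    std::vector<ContactSample> contacts; // gathered during the last physics step
};
```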
I'm unsure how to extend this. Rendering a small depth framebuffer is expensive to generate and expensive to analyze. A signed-distance volume would be even more expensive to generate, but easy to analyze.
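The "easy to analyze" part of the SDF option looks roughly like this: one sample gives the distance to the nearest surface, a few more give the gradient, i.e. the direction away from it. This is a minimal sketch with a single sphere standing in for the baked volume (in practice the sampling would be a trilinear lookup into a 3D grid, and baking that grid is the expensive part):

```cpp
#include <cmath>

// Stand-in SDF: one sphere of radius 1 at the origin. A real implementation
// would read a baked distance volume here instead.
float sampleSdf(const float p[3]) {
    return std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]) - 1.0f;
}

// One cheap "obstacle probe": distance to the nearest surface, plus the
// direction away from it via central differences on the field.
void probeObstacle(const float p[3], float& dist, float away[3]) {
    const float h = 0.05f; // finite-difference step; tune to grid spacing
    dist = sampleSdf(p);
    for (int i = 0; i < 3; ++i) {
        float a[3] = {p[0], p[1], p[2]};
        float b[3] = {p[0], p[1], p[2]};
        a[i] += h;
        b[i] -= h;
        away[i] = (sampleSdf(a) - sampleSdf(b)) / (2.0f * h);
    }
    float len = std::sqrt(away[0]*away[0] + away[1]*away[1] + away[2]*away[2]);
    if (len > 1e-6f) {
        away[0] /= len; away[1] /= len; away[2] /= len;
    }
}
```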
Then there's the current solution: tag the environment with walkable surfaces and use pathfinding, plus some raycasts for visibility checks, plus range queries to list nearby dynamic objects and potential interactions. That's fast and good enough for current game mechanics, but we surely don't want to stick with it forever.
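That sensing pass, as a rough sketch. The engine hooks are made-up names, with brute-force stand-ins filling in for a real raycast against static geometry and a real broadphase:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder: always reports a clear line. A real engine would test the
// segment against its static geometry here.
bool raycastBlocked(const Vec3&, const Vec3&) { return false; }

// Brute-force range query; a real engine would use its broadphase instead.
std::vector<int> queryRange(const Vec3& center, float radius,
                            const std::vector<Vec3>& positions) {
    std::vector<int> hits;
    for (int i = 0; i < (int)positions.size(); ++i) {
        float dx = positions[i].x - center.x;
        float dy = positions[i].y - center.y;
        float dz = positions[i].z - center.z;
        if (dx*dx + dy*dy + dz*dz <= radius*radius)
            hits.push_back(i);
    }
    return hits;
}

// List dynamic objects in range, then keep only the ones with an
// unobstructed line from the agent's "eye".
std::vector<int> visibleNearby(const Vec3& eye, float senseRadius,
                               const std::vector<Vec3>& positions) {
    std::vector<int> result;
    for (int id : queryRange(eye, senseRadius, positions))
        if (!raycastBlocked(eye, positions[id]))
            result.push_back(id);
    return result;
}
```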
What would you expect from simulating vision? I've thought about precise cover mechanics or hide-and-seek in a shooter.
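For precise cover, one trick that stops short of full vision simulation is multi-point visibility sampling: cast a ray to several sample points on the target's body instead of one, and score how much of it is actually exposed. A sketch, reusing the same made-up raycast hook as above:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder for the engine's raycast: true if static geometry blocks
// the segment.
bool raycastBlocked(const Vec3&, const Vec3&) { return false; }

// Exposure score: one ray per sample point on the target (head, shoulders,
// hips, feet), returning the visible fraction. 0 = fully hidden,
// 1 = fully exposed.
float exposure(const Vec3& eye, const std::vector<Vec3>& targetSamples) {
    if (targetSamples.empty()) return 0.0f;
    int visible = 0;
    for (const Vec3& p : targetSamples)
        if (!raycastBlocked(eye, p))
            ++visible;
    return (float)visible / (float)targetSamples.size();
}
```

An AI could treat a target below some exposure threshold as "in cover", and the same score would drive hide-and-seek: seekers chase the most exposed target, hiders move to minimize their own score.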