Calin said:
The ball is out of reach, so the robot will determine that it's a case of "moving an object that is out of reach". The standard solution for such a scenario is to use another object (a stick).
What I meant is that there are always multiple problems/tasks at the same time. For the ball example, the robot needs to calculate a pose where its body does not collide with the table or other parts of the environment, so such collisions do not prevent it from reaching the ball. Computing this pose is already very difficult, because a predefined set of solutions can't handle all the furniture arrangements that might be there, each potentially requiring a different pose.
But say we're lucky and finding a working pose is no problem in our case. Getting into this pose isn't easy either, because we have multiple objectives, such as: keep balance; get as close to the table as possible; avoid collisions while moving the stick towards the ball; avoid pushing the ball further out of reach by accidentally poking it with the stick.
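One toy way to picture juggling those competing objectives is a weighted cost over candidate poses. This is just a sketch; all the pose names, penalty values, and weights below are invented for illustration, not taken from any real system:

```python
# Toy sketch: pick the "best" pose from a set of candidates by scoring a
# weighted sum of the competing objectives. Every number here is made up.

def pose_cost(pose, weights):
    """Lower is better. Each penalty is a non-negative number in [0, 1]."""
    return (weights["balance"] * pose["balance_penalty"]
            + weights["distance"] * pose["distance_to_table"]
            + weights["collision"] * pose["collision_penalty"])

candidates = [
    {"name": "lean_left",  "balance_penalty": 0.3, "distance_to_table": 0.5, "collision_penalty": 0.0},
    {"name": "crouch",     "balance_penalty": 0.6, "distance_to_table": 0.1, "collision_penalty": 0.0},
    {"name": "reach_over", "balance_penalty": 0.1, "distance_to_table": 0.3, "collision_penalty": 0.8},
]

# Collisions are weighted hardest; a colliding pose rarely wins.
weights = {"balance": 1.0, "distance": 1.0, "collision": 5.0}
best = min(candidates, key=lambda p: pose_cost(p, weights))
print(best["name"])  # with these toy numbers: crouch
```

The point is exactly your point: the answer depends entirely on the weights the designer picks, and different furniture arrangements would need different candidate sets.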
Then, even for simpler tasks, pretty much any IK problem is underdetermined and has many free variables. For example, press your Enter key and keep it pressed. You can now rotate your elbow around the line between your shoulder and wrist while still pressing the key. You can also move your shoulder around while still pressing it. That means an infinite number of poses solve our IK problem of pressing the key, so we need further objectives to define an optimal solution. Such objectives could be: minimize the energy needed to reach the new pose from the current pose; or minimize the energy needed to hold the target pose (let the shoulder and elbow hang down with gravity).
Both objectives make sense, but they are not the same. It seems natural to minimize movement energy first to reach the button quickly, but then relax the pose while keeping the button pressed. But this is just a personal strategy I came up with; it's not necessarily the 'optimal solution', and there's no way to formulate such an optimal solution at all. Results will be affected by the choices of programmers and designers. Which, in the context of games, is not a bad thing.
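The Enter-key example can be sketched in a few lines. Assume the wrist is pinned on the key and the one free variable is the elbow swivel angle phi around the shoulder-wrist axis; both energy formulas below are invented stand-ins, and a real solver would treat this analytically instead of brute-forcing a grid:

```python
import math

# Toy sketch of the redundant IK problem: infinitely many swivel angles phi
# press the key, so extra objectives pick one. All formulas are invented.

def move_energy(phi, phi_current):
    # Penalize joint travel away from the current swivel angle.
    return abs(phi - phi_current)

def hold_energy(phi):
    # Penalize holding the elbow up against gravity; phi = 0 lets it hang.
    return 1.0 - math.cos(phi)

def best_swivel(phi_current, w_move, w_hold, steps=360):
    # Brute-force the 1-D null space over [-pi, pi).
    angles = [2 * math.pi * i / steps - math.pi for i in range(steps)]
    return min(angles, key=lambda phi: w_move * move_energy(phi, phi_current)
                                       + w_hold * hold_energy(phi))

# Reaching phase: weight movement energy, so we stay near the current angle.
reach = best_swivel(phi_current=1.0, w_move=10.0, w_hold=1.0)
# Holding phase: weight hold energy only, so the elbow drops (phi -> 0).
hold = best_swivel(phi_current=1.0, w_move=0.0, w_hold=1.0)
print(round(reach, 2), round(hold, 2))
```

Swapping the weights flips the answer, which is exactly the point: both solutions press the key, and only the designer's chosen objective decides between them.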
But I also think your example problem of using tools like a stick to reach a ball under a table is too complex. Too much intelligence is required, and a set of predefined solutions, no matter whether it's procedural code or a learned database, may not be enough to handle all the problems that come up.
Reducing expectations seems key. I'll focus on simple, primary locomotion (walk, run, climb, crawl, swim), and it should work on moving platforms. Eventually some interactions like melee fighting.
That's doable, and the flexibility we get should already enable new types of games not possible with current animation tech.
Of course, if your game requires using tools for certain mechanics, you can spend time to make this work by extending those primary abilities. And that's what I expect to happen. So, over time and across multiple games, the character systems will improve their abilities. The number of actions they can perform will increase. Then we have what we want: new games and constant progress over time. We don't want a super-intelligent, super-skilled NPC right now; we only want smarter avatars in our next game, to show technical progress, which is a key motivation to play games, imo.
At the same time, we'll need less and less motion capture, saving costs, which is important too.
So that's why I think robotics is the next big thing in games. But I've been saying this for many years, and so far not much has been shown. An opportunity for the next generation of game developers, maybe ; )