Phosphenes said:
It appears that C++ is the winning language here, as Newton and Unity both support that.
Unity is written in C++, but it does not let you use C++ to create games. You write gameplay code in C#.
The Unity topic came up for off-topic reasons, but of course you could use an engine like Unity, Unreal, CryEngine, Godot, etc. They all have built-in physics simulation. But I guess they do not really expose robotics features. PhysX has them, but I assume the engines do not expose the related interfaces.
If so, Unreal might be a better choice than Unity, because you get the full C++ engine source code, so you could (in theory) make changes if needed.
But I'm just guessing here. You could try multiple engines to see which suits your needs best.
A need for high accuracy comes up with multi-body problems. I made a self-balancing walking ragdoll, which is such a problem: because there are many bodies and joints, typical game physics accumulates too much inaccuracy. E.g. if the robot has to stand on its feet but also grab an object from a table, it becomes hard to control the hand with millimeter precision, because small errors accumulate from ankle → knee → hip → maybe 3 spine bodies → shoulder → upper and lower arm → wrist → fingers.
The error is not just about missing a precise position; there are also oscillations. So your hand may not only end up too low due to gravity, it may also wobble and jitter because oscillations accumulate along the chain.
If you expect problems like this, the recommendation of Newton holds. If you use much simpler robots made from only 2 or 3 joints, a game engine's default physics should be good enough.
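To put a rough number on the accumulation: here is a back-of-the-envelope sketch (no engine API involved; the segment lengths and the 0.2° per-joint error are made-up values for illustration) that propagates a small angle error through a planar joint chain and measures how far the hand drifts:

```cpp
#include <cmath>
#include <cstdio>

// Back-of-the-envelope sketch: propagate a small per-joint angle error
// through a planar chain (ankle -> knee -> hip -> spine x3 -> shoulder ->
// upper arm -> lower arm -> wrist) and measure the hand's drift.
int main()
{
    // Segment lengths in meters (made up, roughly humanoid).
    const double lengths[] = { 0.45, 0.45, 0.15, 0.15, 0.15, 0.20, 0.30, 0.28, 0.08 };
    const int numJoints = sizeof(lengths) / sizeof(lengths[0]);

    // Assume the solver leaves each joint off by only 0.2 degrees.
    const double pi = 3.14159265358979323846;
    const double jointError = 0.2 * pi / 180.0;

    double x = 0, y = 0, angle = 0;  // perturbed chain
    double rx = 0;                   // reference chain: all angles zero
    for (int i = 0; i < numJoints; ++i)
    {
        angle += jointError;         // errors add up along the chain
        x += lengths[i] * std::cos(angle);
        y += lengths[i] * std::sin(angle);
        rx += lengths[i];
    }

    // Distance between the ideal and the perturbed hand position.
    const double dx = x - rx, dy = y;
    std::printf("hand error: %.1f mm\n", std::sqrt(dx * dx + dy * dy) * 1000.0);
    return 0;
}
```

Nine joints at just 0.2° each already put the hand more than 30 mm off target, and on top of that velocity errors make it oscillate rather than sit still.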
Phosphenes said:
To get a feel for it, here are some primitive 2D mouse exercises to see if you can predict where the bricks will go (These are JS demos I hacked with the queue delay Programmer71 suggested):
Hmm, I tried it. Maybe it could work similarly to games like 'The Incredible Machine'. That was a puzzle game: first you have a planning phase (positioning objects), then you start the simulation and just watch it run, observing whether things happen as planned, and trying again if not.
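A minimal sketch of that two-phase structure (all types and functions here are placeholders, not any specific engine's API):

```cpp
// Hypothetical placeholder types; a real engine would supply these.
struct Scene  { /* bodies, joints, placed objects ... */ };
struct Input  { bool startPressed = false; bool retryPressed = false; };

void editScene(Scene&, const Input&) { /* drag & drop objects */ }
void stepPhysics(Scene&, float /*dt*/) { /* advance simulation */ }
bool goalReached(const Scene&) { return false; }

enum class Phase { Planning, Simulating };

// One frame of an 'Incredible Machine' style loop: editing while planning,
// hands-off while simulating, back to the untouched plan on retry.
void updateGame(Phase& phase, Scene& live, Scene& plan, const Input& in, float dt)
{
    switch (phase)
    {
    case Phase::Planning:
        editScene(plan, in);      // player places objects, no physics yet
        if (in.startPressed)
        {
            live = plan;          // snapshot the plan so retries are exact
            phase = Phase::Simulating;
        }
        break;

    case Phase::Simulating:
        stepPhysics(live, dt);    // player only watches from here on
        if (goalReached(live) || in.retryPressed)
            phase = Phase::Planning;  // tweak the plan and try again
        break;
    }
}
```

The important detail is that the simulation runs on a copy of the plan, so every retry starts from exactly the same state.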
Phosphenes said:
Yes I agree input latency may not be fun in a video game, but it might be fun if you are practicing to operate an actual robot building things on the actual Moon. Hundreds of robots can be put into space for the cost of one human astronaut, so if the industry ever gets serious about Artemis they will depend heavily on remote control whether they plan to or not.
It sounds like you're not sure whether you want to make an industrial simulator or a game. Or you might not be aware of the difference between them.
Of course it does not matter if you only make this for yourself, for fun and learning. But if you want to make a commercial game, you may need to refine your game design.
Though experimentation is a good way to arrive at a good design. Just expect that you'll have to change your plans if the first prototype turns out to be no fun. ; )
Phosphenes said:
I'd like to see how far human learning can take this with 3D and 2 hands, without sci-fi level AI.
A big problem is input devices, which are all 2D except for VR controllers. So we never found a generally good way to rotate objects in 3D space. We can make it work, but the result is never intuitive in all cases. (This truly sucks.)
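The usual compromise is a virtual trackball / arcball: project a sphere over the viewport and turn mouse drags on it into rotations. A minimal hand-rolled sketch (my own illustration, not from any engine; the math types are defined inline):

```cpp
#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };

static Vec3 normalize(Vec3 v)
{
    const double l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// Map a 2D mouse position (pixels) onto a virtual sphere covering the
// viewport. Points outside the sphere get clamped to its silhouette.
static Vec3 mouseToSphere(double mx, double my, double width, double height)
{
    const double x = (2.0 * mx - width) / width;    // -1..1
    const double y = (height - 2.0 * my) / height;  // -1..1, y up
    const double d2 = x * x + y * y;
    if (d2 > 1.0)
    {
        const double inv = 1.0 / std::sqrt(d2);
        return { x * inv, y * inv, 0.0 };
    }
    return { x, y, std::sqrt(1.0 - d2) };           // lift onto the sphere
}

// Rotation carrying sphere point a to sphere point b. Call once per
// mouse-move with the previous and current positions, then accumulate:
//   orientation = arcballDrag(prev, cur) * orientation
static Quat arcballDrag(Vec3 a, Vec3 b)
{
    a = normalize(a);
    b = normalize(b);
    const double dot = std::clamp(a.x * b.x + a.y * b.y + a.z * b.z, -1.0, 1.0);
    if (dot > 0.9999)
        return { 1.0, 0.0, 0.0, 0.0 };              // no movement: identity
    const Vec3 axis = normalize({ a.y * b.z - a.z * b.y,
                                  a.z * b.x - a.x * b.z,
                                  a.x * b.y - a.y * b.x });
    const double halfAngle = 0.5 * std::acos(dot);
    const double s = std::sin(halfAngle);
    return { std::cos(halfAngle), axis.x * s, axis.y * s, axis.z * s };
}
```

Even this illustrates the problem above: dragging near the center feels natural, but near the silhouette the same motion twists around the view axis, and no single mapping feels right for every task.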
Haptic feedback is also missing, so we can't feel contact or collisions. Some visual cues and some sound effects are all we can do.
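For the sound-effect substitute, one cheap trick is to drive audio from contact events, with loudness scaled by the collision impulse. A sketch with a made-up contact struct and a stubbed audio call, since every engine exposes these differently:

```cpp
#include <cstdio>
#include <algorithm>

// Hypothetical contact data; every physics engine names this differently.
struct Contact { float normalImpulse; float position[3]; };

// Stub standing in for a real positional audio call.
void playSound3D(const char* name, const float* /*pos*/, float volume)
{
    std::printf("play %s at volume %.2f\n", name, volume);
}

// Since the player can't feel a collision, let them hear it, with
// loudness proportional to how hard the bodies hit.
void onContact(const Contact& c)
{
    const float minImpulse = 0.5f;   // ignore tiny resting contacts
    const float maxImpulse = 50.0f;  // clamp so big crashes don't clip

    if (c.normalImpulse < minImpulse)
        return;

    const float volume = std::min(1.0f,
        (c.normalImpulse - minImpulse) / (maxImpulse - minImpulse));
    playSound3D("impact_metal", c.position, volume);
}
```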
So some AI assistance might not be a bad idea, or you could eventually make it a VR game.