Meta's advances with its Meta Quest 2 VR headset are substantial, particularly in full-body tracking without the need for extra sensors. Its development now focuses on displaying the whole body, including arms, legs, torso, and head, using only the headset.
Note that this pairing of headset and two controllers has typically limited tracking to the position of the head and hands, which hurts the experience once you are in a game. On camera you could only see your hands, and extra sensors were required to read the rest of the body.
However, in a new paper titled QuestSim, Meta researchers demonstrated a neural-network-driven system that estimates the player's body pose without extra sensors. No tracking bands were strapped to the legs, and no external cameras were used.
The researchers claim that this system is even more effective than dedicated motion-capture sensors.
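To make the idea concrete, here is a minimal conceptual sketch, not Meta's actual QuestSim model, of what such a system does: a small neural network maps the only signals a Quest 2 provides, the 6DoF poses of the headset and two controllers, to a full-body pose. The network size, the 17-joint skeleton, and the random (untrained) weights are all illustrative assumptions.

```python
# Illustrative sketch (NOT Meta's actual QuestSim model): map the sparse
# tracked-device signals of a Quest 2 (headset + two controllers) to a
# full-body pose with a tiny feed-forward network.
import numpy as np

N_IN = 3 * 7          # 3 devices x (3 position + 4 quaternion values)
N_HIDDEN = 64         # hidden layer width (arbitrary for this sketch)
N_JOINTS = 17         # hypothetical skeleton size
N_OUT = N_JOINTS * 4  # one rotation quaternion per joint

# Random, untrained weights; a real system would learn these from data.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_IN, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_OUT))
b2 = np.zeros(N_OUT)

def estimate_full_body(device_poses: np.ndarray) -> np.ndarray:
    """Map a (21,) vector of device poses to (17, 4) joint quaternions."""
    h = np.tanh(device_poses @ W1 + b1)          # hidden activations
    out = (h @ W2 + b2).reshape(N_JOINTS, 4)
    # Normalize each quaternion so every joint gets a valid rotation.
    return out / np.linalg.norm(out, axis=1, keepdims=True)

# One frame of fake tracking data in place of real headset input.
frame = rng.normal(size=N_IN)
pose = estimate_full_body(frame)
print(pose.shape)  # (17, 4)
```

The point of the sketch is the input/output shape of the problem: from just three tracked points, the model must infer rotations for every joint, including legs the headset never directly observes.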
Not everything is positive: motion-capture problems remain
Despite these significant advances, Meta Reality Labs admits that more work needs to be done: if you move fast enough, the ML model fails to identify your posture correctly.
On the other hand, the body can adopt countless strange and unusual postures, and the neural network is not yet trained to recognize them all.
Note that, although these older sensors are still the most accurate option, they are usually sold as expensive accessories and are poorly supported in most applications and games.
If Meta achieves results this good with the sensorless Meta Quest 2, developers will surely start incorporating body tracking into their games and apps, which benefits everyone.