Hand tracking, introduced by Ultraleap (formerly Leap Motion) and by Oculus on the Quest, finally enables VR setups where users are free of hand controllers. This opens VR to the whole population instead of a few tech-savvy people. But such setups still require a means to control motion at the feet.
The benefits of hand tracking
There are several benefits of using hand tracking instead of controllers.
Using standard VR controllers like the Oculus Touch or the HTC Vive controllers is complex for many people. When training staff in VR, you want to avoid having to first train them on how to use controllers. Those controllers can actually be immersion killers for many users.
And when you add motion to the equation, the issue of coordination arises. For most people, moving and interacting at the same time is a challenge.
Hand or finger tracking (we use both terms for the same technology of full hand and finger recognition in VR) is the ideal interface. Users get to work and interact with the instrument they master best: their hands. No button for a thumbs up. No trigger for pushing a key… Just the hand, as in real life, and more.
Using hand tracking in consumer and business applications builds on Steve Jobs's vision, explained in the iPhone launch keynote: why bring in an intermediate object for interaction, like a keyboard or gamepad? Users can directly browse pictures, click buttons… on a touch screen or in VR.
The obviousness of using the hand shows in kids. It always amuses me how young children try to interact with the living room TV with their fingers. Using their hands is a primal reflex. And because they recognize the tablet interface, they just go for it and touch the TV screen, without effect but with disappointment.
It’s actually the same for adults in VR. Whether using a training application, manipulating molecules in a simulation, or reviewing a building design in a BIM application, it makes much more sense to do it directly with the hands.
But users still need to move
Taking the controllers away means users need an input mechanism for motion. Room scale, i.e. standing and walking around a delimited space, may be a solution when users navigate a small environment. But it fails to meet the following three constraints that make motion as easy as hand interaction.
First, you want most users seated so that they feel secure and focused; sitting prevents the injuries that could result from users losing their balance or bumping into an object or a person.
Second, you need to make sure the affordance of the motion controller is maximal, i.e. that it is immediate and intuitive, with no learning curve. As easy as walking.
Lastly, it should be so natural that users forget they are controlling motion, just as, in real life, no one cooking in their kitchen thinks about how to get from the fridge to the kitchen counter. This means users need to use their feet.
These three characteristics are what define the 3dRudder. Users are seated. They rest their feet on the device and trigger motion by just tilting or twisting it slightly. It takes anyone, even non-gamers, just a few seconds to start using it and a few minutes to actually forget they are moving with their feet. Users are then fully immersed in the VR application, with their hands fully focused on hand-tracking interactions.
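To make the tilt-to-motion idea concrete, here is a minimal, hypothetical sketch of how a foot controller's tilt could be mapped to a locomotion vector. The deadzone, saturation angle, and speed values are illustrative assumptions, not actual 3dRudder SDK parameters, and the function names are invented for this example.

```python
import math

# Illustrative values only (not 3dRudder SDK constants):
DEADZONE_DEG = 2.0   # ignore tiny tilts so resting feet cause no drift
MAX_TILT_DEG = 15.0  # tilt angle at which speed saturates
MAX_SPEED = 2.0      # meters per second at full tilt

def tilt_to_speed(angle_deg: float) -> float:
    """Convert one tilt axis (in degrees) to a signed speed in m/s."""
    magnitude = abs(angle_deg)
    if magnitude < DEADZONE_DEG:
        return 0.0
    # Normalize the usable range [deadzone, max_tilt] to [0, 1], clamped...
    t = min((magnitude - DEADZONE_DEG) / (MAX_TILT_DEG - DEADZONE_DEG), 1.0)
    # ...and apply a quadratic response curve for fine control near center.
    return math.copysign(MAX_SPEED * t * t, angle_deg)

def locomotion_vector(pitch_deg: float, roll_deg: float) -> tuple:
    """Map pitch (toes up/down) to forward/back, roll to left/right strafe."""
    return (tilt_to_speed(roll_deg), tilt_to_speed(pitch_deg))
```

The deadzone is what lets users "forget" the device: small involuntary foot movements produce no motion, while the quadratic curve keeps slow, precise movement easy near the center of the range.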
The 3dRudder vision for VR is that of applications where the hands are free of controllers to act naturally, and motion is handled effortlessly and intuitively at the feet.