Use webcam to detect and visualize hands (3D World space)

An example on how to get camera feed, detect and visualize hands in real time in 3D world space.

This example demonstrates how to load and display the camera feed in a Unity scene with a WebcamSource and an ImageView, implement hand tracking with the HandTracker, and use the HandManager to render the detected fingers in 3D world space.

This is a code walkthrough of the LightBuzz_Hand_Tracking_3D sample from the Hand Tracking Unity plugin. The plugin also includes a no-code demo with the same functionality.

Step 1: Create a Unity scene

Open the Unity Project you created in the Installation section.

Right-click on the Assets folder and select Create > Scene.

Type the scene's name. In this example, we'll use the name WebcamDemo3D.

After the scene is created, right-click on the scene and select GameObject > UI > Canvas.

Navigate to the LightBuzz Prefabs folder at Assets\LightBuzz Hand Tracking\Runtime\Prefabs.

For this example, you'll need the ImageView, WebcamSource, and HandManager prefabs.

Drag and drop the ImageView and WebcamSource prefabs into the Canvas.

Drag and drop the HandManager prefab on the Hierarchy pane (but not into the Canvas).

To see the 3D detections, the HandManager needs to be outside of the Canvas.

Then, right-click on the Hierarchy pane and select Create Empty.

Give a name to the new component. In this example, we'll use the name Demo.

Then, go to the Inspector pane and select Add Component. In the search bar, type new and select the New script option.

Type the script's name and select Create and Add. For this example, we'll use the name WebcamDemo3D.

Step 2: Initialize the visual components

Double-click on the newly created MonoBehaviour script and import the necessary namespaces.

using LightBuzz.HandTracking;
using System.Collections.Generic;
using UnityEngine;

For this example, we'll need a WebcamSource to get the frames, an ImageView to draw the camera texture, and a HandManager to visualize the detected hands. Declare them as serialized fields in the script:

[SerializeField] private ImageView _image;
[SerializeField] private WebcamSource _webcam;
[SerializeField] private HandManager _handManager;

After adding the serialized fields, go to the Unity Editor to connect these fields with the Demo component.

In the Inspector pane, select the round button next to each serialized field.

Then, at the Scene tab, select the corresponding prefab. For example, for the Image field, select the ImageView prefab.

When all fields are connected, the result should resemble the following image.

Then, select the HandManager prefab. Connect the Image field to the ImageView prefab and uncheck the Is 2D option.

Make sure the Is 2D option is NOT selected to see the hand tracking detections in the 3D world space. If the option is checked, detections are displayed in the 2D space.

After connecting all the fields, navigate to the Canvas to set the render options.

Change the Render Mode to Screen Space - Camera.

Then, set the Main Camera, from the Scene tab, as the Render Camera.

When all the render options are set, the result should look like the following image.

Step 3: Adjust Scene components

To display the hand detections in the 3D world space, we need to adjust the ImageView.

Navigate to the ImageView prefab under the Canvas, go to the Aspect Ratio Fitter section, and change Aspect Mode to None.

Then, go to the Rect Transform section of the ImageView prefab and click on the blue cross to open the Anchor Presets.

Select the middle and center option (the one that looks like a target).

Then, change the Width to 400, the Height to 200, and the Pos X to -600.

The suggested Rect Transform settings are designed to reduce the size of the ImageView and reposition it to the side. Feel free to experiment with different values.

For a proper view, navigate to the Main Camera component and set the Y Position to 0, since the 3D coordinates of the hands are estimated relative to the Cartesian origin point. Also, set the Z Position to -1 to move the camera farther from the hands.
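
If you prefer to set these values from code rather than the Inspector, a minimal sketch could look like the following (assuming the scene's camera keeps the default MainCamera tag; the CameraSetup class name is just an illustration):

using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    private void Awake()
    {
        // Align the camera with the Cartesian origin (Y = 0) and step it
        // back one meter (Z = -1), matching the Inspector values above.
        Camera.main.transform.position = new Vector3(0f, 0f, -1f);
    }
}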

Finally, return to the MonoBehaviour script and instantiate a HandTracker to detect hands.

HandTracker _handTracker = new HandTracker();

Step 4: Open the webcam

Open the webcam to get the live feed.

_webcam.Open();

In this example, the camera is opened in the Start() method. Alternatively, you could open the camera with the click of a button.
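
For instance, here is a minimal sketch of a button-driven start, assuming a UI Button is wired up in the Inspector (the _openButton field is hypothetical, not part of the plugin):

using LightBuzz.HandTracking;
using UnityEngine;
using UnityEngine.UI;

public class WebcamOpenButton : MonoBehaviour
{
    [SerializeField] private WebcamSource _webcam;
    [SerializeField] private Button _openButton;

    private void Start()
    {
        // Open the camera only when the user clicks the button.
        _openButton.onClick.AddListener(() => _webcam.Open());
    }
}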

Step 5: Get the live feed

Check that the camera is open and available for capturing video.

if (!_webcam.IsOpen) return;

Load the new frame from the _webcam object onto the ImageView to show the live feed to your users.

_image.Load(_webcam);

Step 6: Detect hands

Pass the Texture2D object from the camera frame to the HandTracker for processing.

List<Hand> hands = _handTracker.Load(_image.Texture);

The HandTracker will analyze the texture and detect any hands present in the image.

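If you want to inspect the results before rendering them, you can read individual joints through the hand's FingerJointType indexer, as the sorting code below does. A minimal sketch that slots into Update() after detection, using only the Position2D property shown in this example:

foreach (Hand hand in hands)
{
    // Position2D is the joint location in image space.
    var root = hand[FingerJointType.Root];
    Debug.Log($"Detected hand with root joint at {root.Position2D}");
}
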
Step 7: Visualize the hands

To display the detected hands in 3D world space, first sort the hands by position X.

hands.Sort((h1, h2) => 
    h1[FingerJointType.Root].Position2D.x.CompareTo(
        h2[FingerJointType.Root].Position2D.x));
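
Equivalently, if you prefer LINQ, the same ordering can be written as follows (a sketch that assumes a using System.Linq directive at the top of the script):

hands = hands.OrderBy(h => h[FingerJointType.Root].Position2D.x).ToList();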

By default, the HandManager positions the root joint of any detected hand at the origin point (0, 0, 0) within the 3D world space. To track multiple hands and visualize them clearly within the 3D environment, you need to provide an offset value in meters for each hand. Given an offset list, the HandManager will reposition each hand accordingly.

In this example, each new hand will be positioned 25 cm away from the previous one.

List<Vector3> offsets = new List<Vector3>();
float offset_x = 0f;
float step = 0.25f;

foreach (Hand hand in hands)
{
    offsets.Add(new Vector3(offset_x, 0f, 0f));
    offset_x += step;
}

Then, simply pass the hand detections and the offsets to the HandManager. It will manage the rendering and updates required to accurately depict the hands in 3D world space based on the detection data provided.

_handManager.Load(hands, offsets);

Steps 5 through 7 are incorporated into the Update() method.

Step 8: Release the resources

Close the webcam to stop the live feed, preventing further video capture.

_webcam.Close();

Dispose of the HandTracker object to ensure that all associated resources are released.

_handTracker.Dispose();

In this example, the resources are released in the OnDestroy() method. Alternatively, you could do that with the click of a button or in the OnApplicationQuit() method.
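
For instance, the OnApplicationQuit() variant is a one-to-one replacement for the OnDestroy() method shown in the full example below:

private void OnApplicationQuit()
{
    // Release the camera and the tracker when the player quits,
    // instead of when the Demo object is destroyed.
    _webcam.Close();
    _handTracker.Dispose();
}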

Full example code

Here is the full example code that has the same functionality as the Hand Tracking Unity plugin LightBuzz_Hand_Tracking_3D sample.

using LightBuzz.HandTracking;
using System.Collections.Generic;
using UnityEngine;

public class WebcamDemo3D : MonoBehaviour
{
    [SerializeField] private ImageView _image;
    [SerializeField] private WebcamSource _webcam;
    [SerializeField] private HandManager _handManager;
    
    private readonly HandTracker _handTracker = new HandTracker();

    private void Start()
    {
        _webcam.Open();
    }

    private void OnDestroy()
    {
        _webcam.Close();
        _handTracker.Dispose();
    }

    private void Update()
    {
        if (!_webcam.IsOpen) return;

        // 1. Draw the camera texture.
        _image.Load(_webcam);

        // 2. Detect hands in the camera texture.
        List<Hand> hands = _handTracker.Load(_image.Texture);
        
        // 3. Sort hands by position X.
        hands.Sort((h1, h2) => 
            h1[FingerJointType.Root].Position2D.x.CompareTo(
                h2[FingerJointType.Root].Position2D.x));

        // 4. Add offset to show all detected hands.
        List<Vector3> offsets = new List<Vector3>();
        float offset_x = 0f;
        float step = 0.25f;

        foreach (Hand hand in hands)
        {
            offsets.Add(new Vector3(offset_x, 0f, 0f));
            offset_x += step;
        }

        // 5. Visualize the hands in 3D world space.
        _handManager.Load(hands, offsets);
    }
}

By following these steps, you will be able to load the camera feed into your application, detect hands in real time, and finally, render these detections in 3D world space.
