# Use webcam to detect and visualize hands (3D World space)

This example demonstrates how to load and display camera feed in a Unity scene with a [WebcamSource](https://handtracking.lightbuzz.com/ui-reference/streamsource/webcamsource) and an [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview), implement hand tracking with the [HandTracker](https://handtracking.lightbuzz.com/api-reference/handtracker), and use the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) to render detected fingers in 3D world space.

{% hint style="info" %}
This is a code walkthrough of the `LightBuzz_Hand_Tracking_3D` Hand Tracking Unity plugin sample. The plugin also includes a no-code demo with the same functionality.
{% endhint %}

#### Step 1: Create a Unity scene

Open the Unity Project you created in the [Installation](https://handtracking.lightbuzz.com/hand-tracking-unity-plugin/installation) section.

Right-click on the `Assets` folder and select `Create > Scene`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2Fjq6n6qCxjh0OR775jrfm%2F1_create_scene.png?alt=media&#x26;token=f8e0e814-512e-4bab-b17e-c341a8bd2281" alt=""><figcaption><p>Create new scene</p></figcaption></figure>

Type the scene's name. In this example, we'll use the name `WebcamDemo3D`.

After the scene is created, right-click on the scene and select `GameObject > UI > Canvas`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2Fde9seJ1XF1in693SVhhX%2F2_add_canvas.png?alt=media&#x26;token=dc33f608-7150-4fe9-a2c4-07fa0e212c35" alt=""><figcaption><p>Add a Canvas</p></figcaption></figure>

Navigate to the LightBuzz `Prefabs` folder at `Assets\LightBuzz Hand Tracking\Runtime\Prefabs`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FptBTcWOYXEUCIgv8n8Cs%2F3_prefabs.png?alt=media&#x26;token=a8c96402-2def-43f9-87cb-764c0336e3ed" alt=""><figcaption><p>LightBuzz Prefabs</p></figcaption></figure>

For this example, you'll need the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview), [WebcamSource](https://handtracking.lightbuzz.com/ui-reference/streamsource/webcamsource) and [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) prefabs.&#x20;

Drag and drop the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) and [WebcamSource](https://handtracking.lightbuzz.com/ui-reference/streamsource/webcamsource) prefabs into the Canvas.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FuHSEdspeBNngW78blY5D%2F4_add_prefabs_webcam3D.png?alt=media&#x26;token=87de0aec-bb09-468b-8632-431da09bdcd1" alt=""><figcaption><p>Add prefabs into Canvas</p></figcaption></figure>

Drag and drop the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) prefab on the `Hierarchy` pane (but not into the Canvas).

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FdL8YRB7C3tsXMqi8entB%2F4_add_prefabs_webcam3D_h.png?alt=media&#x26;token=8ff11d45-e9dc-4ce2-8d7a-50522dd3d31b" alt=""><figcaption><p>Add HandManager prefab</p></figcaption></figure>

{% hint style="info" %}
To see the 3D detections, the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) needs to be outside of the Canvas.
{% endhint %}

Then, right-click on the `Hierarchy` pane and select `Create Empty`.&#x20;

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2F128JJiRWg5tVPvbu5wAZ%2F5_create_empty.png?alt=media&#x26;token=1388ed19-e6ee-4b1d-b2cc-32d51aa6882e" alt=""><figcaption><p>Create empty component</p></figcaption></figure>

Give the new object a name. In this example, we'll use the name `Demo`.

Then, go to the `Inspector` pane and select `Add Component`. In the search bar, type `new` and select the `New script` option.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FOmSwepgdDduiVmMQubl0%2F6_add_script.png?alt=media&#x26;token=627e78d1-8fd0-4408-933b-2cfe6f2fbb81" alt=""><figcaption><p>Add script</p></figcaption></figure>

Type the script's name and select `Create and Add`. For this example, we'll use the name `WebcamDemo3D`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FoiEnpFBaRhder1Oru5an%2F7_name_script_webcam3D.png?alt=media&#x26;token=112d7c0f-fc6d-45da-aab0-178ac10d3773" alt=""><figcaption><p>Create script</p></figcaption></figure>

#### Step 2: Initialize the visual components

Double-click on the newly created `MonoBehaviour` script and import the necessary namespaces.

```csharp
using LightBuzz.HandTracking;
using System.Collections.Generic;
using UnityEngine;
```

For this example, we'll need a [WebcamSource](https://handtracking.lightbuzz.com/ui-reference/streamsource/webcamsource) to get the frames, an [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) to draw the camera texture, and a [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) to visualize the detected hands.

```csharp
[SerializeField] private ImageView _image;
[SerializeField] private WebcamSource _webcam;
[SerializeField] private HandManager _handManager;
```

After adding the serialized fields, go to the Unity Editor to connect these fields with the `Demo` component.

In the `Inspector` pane, select the round button next to each serialized field.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FyFDGZJg0df7WiGkrgQLT%2F8_connect_serialize_fields_webcam3D.png?alt=media&#x26;token=8e04c33b-c6bf-494f-a2f7-0d6ce896ba88" alt=""><figcaption><p>Serialized fields</p></figcaption></figure>

Then, at the `Scene` tab, select the corresponding prefab. For example, for the `Image` field, select the `ImageView` prefab.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2Fu5hbIW0JSuqVkmPJJ04O%2F9_imageview.png?alt=media&#x26;token=87365163-98eb-424a-af4b-859ca882f91c" alt=""><figcaption><p>Connect prefab</p></figcaption></figure>

When all fields are connected, the result should resemble the following image.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FvBT9pAGGbdyWiQtgKs20%2F10_connected_prefabs_webcam3D.png?alt=media&#x26;token=1bb9826a-9680-487b-979c-5275c65dd8b4" alt=""><figcaption><p>Connected serialized fields</p></figcaption></figure>

Then, select the `HandManager` prefab. Connect the `Image` field to the `ImageView` prefab and uncheck the `Is 2D` option.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2Fu7urA37ZEPAX90jl4OyZ%2F10a_connected_handmanager3D.png?alt=media&#x26;token=e2e8307e-35fa-4f5a-be5f-eb20c7c847c1" alt=""><figcaption><p>Connect field to HandManager</p></figcaption></figure>

{% hint style="warning" %}
Make sure the `Is 2D` option is NOT selected to see the hand tracking detections in the 3D world space. If the option is checked, detections are displayed in the 2D space.
{% endhint %}

After connecting all the fields, navigate to the `Canvas` to set the render options.

Change the `Render Mode` to `Screen Space - Camera`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2F3Nl9z2Beuk8eRI1DZj7G%2F11_render_mode_camera.png?alt=media&#x26;token=e582de34-c7cb-4610-82a7-9d2a3b9f486d" alt=""><figcaption><p>Change Render Mode</p></figcaption></figure>

Then, set the `Main Camera` from the `Scene` tab as the `Render Camera`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FIUjjpjWLwZqDxqRdJ5q9%2F12_main_camera.png?alt=media&#x26;token=d82b6dce-3e08-4777-8bb2-23ed475d6aca" alt=""><figcaption><p>Select Render Camera</p></figcaption></figure>

When all the render options are set, the result should look like the following image.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FqVosuFDLymh0LRB1qK48%2F13_render_camera_settings.png?alt=media&#x26;token=6a83eba6-4c2c-4787-a324-f1c519b26026" alt=""><figcaption><p>Render settings</p></figcaption></figure>

#### Step 3: Adjust Scene components

To display the hand detections in 3D world space, we need to adjust the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview).

Navigate to the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) prefab under the `Canvas`, go to the `Aspect Ratio Fitter` section, and change `Aspect Mode` to `None`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2Fd6rJ9NB4Ns6ZXTTEE1fy%2F14_aspect_mode_settings.png?alt=media&#x26;token=49db2914-1011-4cba-9e2c-5b97710ceefa" alt=""><figcaption><p>Aspect Mode settings</p></figcaption></figure>

Then, go to the `Rect Transform` section of the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) prefab and click on the blue cross to open the `Anchor Presets`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FolfRdVMSjkVHK9yeZ3HR%2F15_anchor_presets.png?alt=media&#x26;token=42e55d36-fd42-496b-9116-a132ce6eea31" alt=""><figcaption><p>Anchor Presets</p></figcaption></figure>

Select the `middle` and `center` option (the one that looks like a target).&#x20;

Then, change the `Width` to `400`, the `Height` to `200` and the `Pos X` to `-600`.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FbtHxZDw3sI2WsivbvDDn%2F16_rect_transform.png?alt=media&#x26;token=cd955da3-a325-491c-96b7-2a4a6989d091" alt=""><figcaption><p>Rect Transform</p></figcaption></figure>

{% hint style="info" %}
The suggested `Rect Transform` settings reduce the size of the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) and reposition it to the side. Feel free to experiment with different values.
{% endhint %}

For a proper view, navigate to the `Main Camera` component and set the `Y Position` to `0`, since the 3D coordinates of the hands are estimated relative to the Cartesian origin. Also, set the `Z Position` to `-1` to move the camera farther from the hands.

<figure><img src="https://1895788644-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyGqKFJQGBmrlsrsolzNU%2Fuploads%2FT6LrQ1RD5YEuftYI4OsR%2F17_main_camera_transform.png?alt=media&#x26;token=74cbc800-4bfb-48f9-a580-3416f5b3166b" alt=""><figcaption><p>Main Camera Transform</p></figcaption></figure>
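If you prefer to make this camera adjustment from code rather than the Inspector, a minimal sketch could look like the following. It assumes your camera is tagged `MainCamera` (so `Camera.main` resolves) and keeps the default `X Position` of `0`; the `CameraSetup` class name is just an example.

```csharp
using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    private void Awake()
    {
        // Align the camera with the Cartesian origin on Y and pull it
        // back on Z, so the hands (rooted near the origin) stay in view.
        Camera.main.transform.position = new Vector3(0f, 0f, -1f);
    }
}
```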

Finally, return to the `MonoBehaviour` script and instantiate a [HandTracker](https://handtracking.lightbuzz.com/api-reference/handtracker) to detect hands.

```csharp
HandTracker _handTracker = new HandTracker();
```

#### Step 4: Open the webcam

Open the webcam to get the live feed.

```csharp
_webcam.Open();
```

{% hint style="info" %}
In this example, the camera is opened in the `Start()` method. Alternatively, you could open the camera with the click of a button.
{% endhint %}
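As a sketch of the button approach, you could wire a standard Unity UI `Button` to the webcam instead of opening it in `Start()`. The `_openButton` field here is a hypothetical button you would assign in the Inspector, not part of the plugin.

```csharp
using LightBuzz.HandTracking;
using UnityEngine;
using UnityEngine.UI;

public class WebcamOpenButton : MonoBehaviour
{
    [SerializeField] private WebcamSource _webcam;
    [SerializeField] private Button _openButton; // hypothetical UI button

    private void Start()
    {
        // Open the camera only when the user clicks the button.
        _openButton.onClick.AddListener(() => _webcam.Open());
    }
}
```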

#### Step 5: Get the live feed

Check that the camera is open and available for capturing video.

```csharp
if (!_webcam.IsOpen) return;
```

Load the new frame from the `_webcam` object onto the [ImageView](https://handtracking.lightbuzz.com/ui-reference/imageview) to show the live feed to your users.

```csharp
_image.Load(_webcam);
```

#### Step 6: Detect hands

Pass the `Texture2D` object from the camera frame to the [HandTracker](https://handtracking.lightbuzz.com/api-reference/handtracker) for processing.

```csharp
List<Hand> hands = _handTracker.Load(_image.Texture);
```

The [HandTracker](https://handtracking.lightbuzz.com/api-reference/handtracker) will analyze the texture and detect any hands present in the image.
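If you want to verify the detections before rendering them, you could log each hand's root joint. This sketch only relies on the `FingerJointType.Root` indexer and `Position2D` property used in the sorting step that follows, and assumes `Position2D` is a `Vector2`.

```csharp
// Optional: log the root joint of every detected hand for debugging.
foreach (Hand hand in hands)
{
    Vector2 root = hand[FingerJointType.Root].Position2D;
    Debug.Log($"Detected hand with root joint at ({root.x}, {root.y})");
}
```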

#### Step 7: Visualize the hands

To display the detected hands in 3D world space, first sort the hands by their X position.

```csharp
hands.Sort((h1, h2) => 
    h1[FingerJointType.Root].Position2D.x.CompareTo(
        h2[FingerJointType.Root].Position2D.x));
```

{% hint style="warning" %}
By default, the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) positions the root joint of any detected hand at the origin point (0, 0, 0) within the 3D world space. To track multiple hands and visualize them clearly within the 3D environment, you need to provide an offset value in meters for each hand. Provided with an offset list, the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager) will reposition each hand according to the offset value.
{% endhint %}

In this example, each new hand will be positioned 25 cm away from the previous one.

```csharp
List<Vector3> offsets = new List<Vector3>();
float offset_x = 0f;
float step = 0.25f;

foreach (Hand hand in hands)
{
    offsets.Add(new Vector3(offset_x, 0f, 0f));
    offset_x += step;
}
```

Then, simply pass the hand detections and the offsets to the [HandManager](https://handtracking.lightbuzz.com/ui-reference/handmanager). It will manage the rendering and updates required to accurately depict the hands in 3D world space based on the detection data provided.

```csharp
_handManager.Load(hands, offsets);
```

{% hint style="info" %}
Steps 5 through 7 are incorporated into the `Update()` method.
{% endhint %}

#### Step 8: Release the resources

Close the webcam to stop the live feed, preventing further video capture.

```csharp
_webcam.Close();
```

Dispose of the [HandTracker](https://handtracking.lightbuzz.com/api-reference/handtracker) object to ensure that all associated resources are released.

```csharp
_handTracker.Dispose();
```

{% hint style="info" %}
In this example, the resources are released in the `OnDestroy()` method. Alternatively, you could do that with the click of a button or in the `OnApplicationQuit()` method.
{% endhint %}
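If you choose the `OnApplicationQuit()` alternative instead, a minimal sketch inside the same `MonoBehaviour` would use the same two cleanup calls, just in a different lifecycle hook:

```csharp
private void OnApplicationQuit()
{
    // Stop the live feed and release the tracker's resources on exit.
    _webcam.Close();
    _handTracker.Dispose();
}
```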

### Full example code

Here is the full example code that has the same functionality as the Hand Tracking Unity plugin `LightBuzz_Hand_Tracking_3D` sample.

<pre class="language-csharp"><code class="lang-csharp">using LightBuzz.HandTracking;
using System.Collections.Generic;
using UnityEngine;

public class WebcamDemo3D : MonoBehaviour
{
    [SerializeField] private ImageView _image;
    [SerializeField] private WebcamSource _webcam;
    [SerializeField] private HandManager _handManager;
    
    private readonly HandTracker _handTracker = new HandTracker();

    private void Start()
    {
        _webcam.Open();
    }

    private void OnDestroy()
    {
        _webcam.Close();
        _handTracker.Dispose();
    }

    private void Update()
    {
        if (!_webcam.IsOpen) return;

        // 1. Draw the camera texture.
        _image.Load(_webcam);

        // 2. Detect hands in the camera texture.
        List&#x3C;Hand> hands = _handTracker.Load(_image.Texture);
        
        // 3. Sort hands by position X.
        hands.Sort((h1, h2) => 
            h1[FingerJointType.Root].Position2D.x.CompareTo(
                h2[FingerJointType.Root].Position2D.x));

        // 4. Add offset to show all detected hands.
        List&#x3C;Vector3> offsets = new List&#x3C;Vector3>();
        float offset_x = 0f;
        float step = 0.25f;

        foreach (Hand hand in hands)
        {
            offsets.Add(new Vector3(offset_x, 0f, 0f));
            offset_x += step;
        }

        // 5. Visualize the hands on a Canvas.
        _handManager.Load(hands, offsets);
    }
}
</code></pre>

By following these steps, you will be able to load the camera feed into your application, detect hands in real time, and finally, render these detections in 3D world space.
