Streaming Server Operator#

Authors: Holoscan Team (NVIDIA)
Supported platforms: x86_64, aarch64
Language: C++
Last modified: August 20, 2025
Latest version: 1.0
Minimum Holoscan SDK version: 3.2.0
Tested Holoscan SDK versions: 3.2.0
Contribution metric: Level 1 - Highly Reliable

The streaming_server operator provides a streaming server implementation that can receive video frames from and send video frames to connected clients. It wraps the StreamingServer interface to provide seamless integration with Holoscan applications.

holoscan::ops::StreamingServerOp#

This operator class implements a streaming server that can:

  • Accept incoming client connections
  • Receive video frames from clients
  • Send video frames to clients
  • Handle multiple client connections (optional)
  • Manage streaming events through callbacks

Parameters#

  • width: Width of the video frames in pixels
    • type: uint32_t
    • default: 1920

  • height: Height of the video frames in pixels
    • type: uint32_t
    • default: 1080

  • fps: Frame rate of the video
    • type: uint32_t
    • default: 30

  • port: Port used for the streaming server
    • type: uint16_t
    • default: 8080

  • multi_instance: Allow multiple server instances
    • type: bool
    • default: false

  • server_name: Name identifier for the server
    • type: std::string
    • default: "StreamingServer"

  • receive_frames: Whether to receive frames from clients
    • type: bool
    • default: true

  • send_frames: Whether to send frames to clients
    • type: bool
    • default: false

  • allocator: Memory allocator for frame data
    • type: std::shared_ptr<Allocator>
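
These parameters can also be pulled from the application's YAML configuration file instead of being passed as individual Arg values. A minimal sketch, assuming the config file passed to the application contains a streaming_server entry whose keys match the parameter names above (the entry name is an assumption):

// Pull width, height, fps, port, etc. from the "streaming_server" section of
// the application's YAML config file (section name assumed for illustration).
auto streaming_server = make_operator<ops::StreamingServerOp>(
    "streaming_server",
    from_config("streaming_server"),
    Arg("allocator") = make_resource<UnboundedAllocator>("pool"));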

Example Usage#

// Create the operator with configuration
auto streaming_server = make_operator<ops::StreamingServerOp>(
    "streaming_server",
    Arg("width") = 1920,
    Arg("height") = 1080,
    Arg("fps") = 30,
    Arg("port") = 8080,
    Arg("multi_instance") = false,
    Arg("server_name") = "MyStreamingServer",
    Arg("receive_frames") = true,
    Arg("send_frames") = true,
    Arg("allocator") = make_resource<UnboundedAllocator>("pool")
);

// Add the streaming_server operator to the application
add_operator(streaming_server);
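
In a complete application, the operator is typically created inside compose() and connected to other operators with add_flow(). Below is a minimal sketch of a loopback-style pipeline; the processing operator (ops::MyFrameProcessorOp) is hypothetical, and the unnamed add_flow() connections assume the server exposes a single output port for received frames and a single input port for frames to send back (check the operator's setup() for the actual port names):

#include <holoscan/holoscan.hpp>

class StreamingServerApp : public holoscan::Application {
 public:
  void compose() override {
    using namespace holoscan;

    auto streaming_server = make_operator<ops::StreamingServerOp>(
        "streaming_server",
        Arg("receive_frames") = true,
        Arg("send_frames") = true,
        Arg("allocator") = make_resource<UnboundedAllocator>("pool"));

    // Hypothetical operator that processes frames received from clients.
    auto processor = make_operator<ops::MyFrameProcessorOp>("processor");

    // Frames received from clients flow to the processor; processed frames
    // flow back to the server, which sends them to connected clients.
    add_flow(streaming_server, processor);
    add_flow(processor, streaming_server);
  }
};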

Building the operator#

To build the server operator, you must first download the server binaries from NGC and add them to the lib directory in the streaming_server operator folder.

Download the Holoscan Server Cloud Streaming library from NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/resources/holoscan_server_cloud_streaming

cd <your_holohub_path>/operators/streaming_server 
ngc registry resource download-version "nvidia/holoscan_server_cloud_streaming:0.1"
unzip -o holoscan_server_cloud_streaming_v0.1/holoscan_server_cloud_streaming.zip

# Copy the appropriate architecture libraries to lib/ directory
# For x86_64 systems:
cp lib/x86_64/*.so* lib/
cp -r lib/x86_64/plugins lib/
# For aarch64 systems:
# cp -r lib/aarch64/* lib/

# Clean up architecture-specific directories and NGC download directory
rm -rf lib/x86_64 lib/aarch64
rm -rf holoscan_server_cloud_streaming_v0.1

Deployment on NVCF#

The Holoscan cloud streaming stack provides plugins with the endpoints required to deploy the server Docker container as a streaming function. You can push the container and create/update/deploy the streaming function from the web portal.

Push Container#

Note: You must first log in to the NGC Container Registry with docker login before you can push containers to it; see https://docs.nvidia.com/ngc/gpu-cloud/ngc-private-registry-user-guide/index.html#accessing-ngc-registry

Tag the container and push it to the container registry:

docker tag simple-streamer:latest {registry}/{org-id}/{container-name}:{version}
docker push {registry}/{org-id}/{container-name}:{version}

For example, if your organization name/ID is 0494839893562652 and you want to push a container to the prod container registry using the name my-simple-streamer at version 0.1.0, then run:

docker tag simple-streamer:latest nvcr.io/0494839893562652/my-simple-streamer:0.1.0
docker push nvcr.io/0494839893562652/my-simple-streamer:0.1.0

Set Variables#

All the helper scripts below depend on the following environment variables being set:

# Required variables
export NGC_PERSONAL_API_KEY=<get from https://nvcf.ngc.nvidia.com/functions -> Generate Personal API Key>
export STREAMING_CONTAINER_IMAGE=<registry>/<org-id>/<container-name>:<version>
export STREAMING_FUNCTION_NAME=<my-simple-streamer-function-name>

# Optional variables (shown with default values)
export NGC_DOMAIN=api.ngc.nvidia.com
export NVCF_SERVER=grpc.nvcf.nvidia.com
export STREAMING_SERVER_PORT=49100
export HTTP_SERVER_PORT=8011

Create the Cloud Streaming Function#

Create the streaming function by running the provided script after setting all the required variables:

./nvcf/create_streaming_function.sh

Once the function is created, export the FUNCTION_ID as a variable:

export STREAMING_FUNCTION_ID={my-simple-streamer-function-id}

Update Function#

Update an existing streaming function by running the provided script after setting all the required variables:

./nvcf/update_streaming_function.sh

Deploy Function#

Deploy the streaming function from the web portal: https://nvcf.ngc.nvidia.com/functions

Pre-deployment Port Check#

Before starting HAProxy or deploying cloud functions, verify that the required ports are available:

# Navigate to the holohub root directory
cd /path/to/holohub

# Check streaming server port (from STREAMING_SERVER_PORT variable, default: 49100)
./check_port.sh ${STREAMING_SERVER_PORT:-49100}

# Check HTTP server port (from HTTP_SERVER_PORT variable, default: 8011)  
./check_port.sh ${HTTP_SERVER_PORT:-8011}

# Check NVCF server port (typically 443 for grpc.nvcf.nvidia.com)
./check_port.sh 443

# Check any custom ports your application uses
./check_port.sh [YOUR_CUSTOM_PORT]

Key ports to verify:

  • Streaming Server Port (49100 by default): Main streaming communication port
  • HTTP Server Port (8011 by default): HTTP endpoint for function management
  • HAProxy Ports: Any custom HAProxy configuration ports
  • NVCF gRPC Port (443): Communication with NVIDIA Cloud Functions

The port checking script will help identify:

  • 🚫 Port conflicts: If ports are already in use by other processes
  • ✅ Available ports: Confirmation that ports can be bound successfully
  • 🔧 Process identification: What applications are using specific ports
  • 📋 Port recommendations: Guidance on port selection

If ports are in use, you can:

  1. Stop conflicting processes: kill [PID] (use caution)
  2. Use different ports: Update environment variables
  3. Configure around conflicts: Modify YAML configurations

Test Function#

Start the test intermediate HAProxy by running the provided script after setting all the required variables:

./nvcf/start_test_intermediate_haproxy.sh

Please note that the test HAProxy server should run on a machine other than the streaming server host, either the client machine or a separate one.

Note: If the test HAProxy is still running and you wish to test the executable or Docker container again, you must first stop it:

./nvcf/stop_test_intermediate_haproxy.sh

Supported Platforms#

  • Linux x86_64
  • NVCF Cloud instances

For more information on NVCF cloud functions, please refer to the NVIDIA Cloud Functions documentation.