Holoscan-Yolo¶
Authors: Holoscan Team (NVIDIA)
Supported platforms: x86_64, aarch64
Last modified: March 18, 2025
Language: Python
Latest version: 1.0
Minimum Holoscan SDK version: 1.0.3
Tested Holoscan SDK versions: 1.0.3
Contribution metric: Level 2 - Trusted
This project aims to provide basic guidance for deploying a YOLO-based model to the Holoscan SDK as a "Bring Your Own Model" workflow.

Model¶
- YOLOv8 model: https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
- YOLOv8 export repository: https://github.com/triple-Mu/YOLOv8-TensorRT
In this application example, we use the YOLOv8s model, converted to ONNX format using the repository mentioned above. If you provide your own ONNX model, ensure it includes the EfficientNMS_TRT layer; you can verify this using Netron. Additionally, we use the graph_surgeon.py script to modify the input shape. For more details, refer to graph_surgeon.py. The detailed conversion process is documented in the CMakeLists.txt file.
Input Source¶
This app currently supports two input options:
- V4L2-compatible input device
- Pre-recorded video
Data¶
This application downloads a pre-recorded video from Pexels when the application is built. Please review the license terms from Pexels.
Run¶
Build and launch the container. Note that this uses a V4L2 input source by default.
./dev_container build_and_run yolo_model_deployment
Video Replayer Support¶
If you don't have a V4L2-compatible device plugged in, you can run this application on a pre-recorded video instead. To launch the application using the Video Stream Replayer as the input source, run:
./dev_container build_and_run yolo_model_deployment --run_args "--source replayer"
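The `--run_args "--source replayer"` flag ultimately selects which input path the application builds. A minimal sketch of that flag handling, with `v4l2` as the default to match the behavior above (the operator wiring itself is omitted, and the function name is an assumption):

```python
# Sketch: parse the --source flag used to choose the input path.
# "v4l2" is the default, matching the app's default input source.
import argparse


def select_source(argv):
    """Return the requested input source name, defaulting to 'v4l2'."""
    parser = argparse.ArgumentParser(description="YOLO detection app")
    parser.add_argument(
        "--source",
        choices=["v4l2", "replayer"],
        default="v4l2",
        help="input source: V4L2 capture device or video stream replayer",
    )
    return parser.parse_args(argv).source
```

Calling `select_source([])` yields `"v4l2"`, while `select_source(["--source", "replayer"])` yields `"replayer"`.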
Configuration¶
For application configuration, please refer to yolo_detection.yaml.
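As an illustration only, a replayer input block in a Holoscan YAML configuration typically looks something like the following. The exact keys and values in yolo_detection.yaml may differ, so treat this as a hypothetical sketch:

```yaml
# Hypothetical sketch of a replayer configuration block;
# consult the shipped yolo_detection.yaml for the actual keys.
replayer:
  basename: "yolo_input"   # assumed video file basename
  frame_rate: 0            # 0 = use the recorded frame rate
  repeat: true             # loop the video when it ends
  realtime: true           # play back at the recorded speed
```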