vpipeline_onboard_web_logo

VPipelineOnboard application

An example of VPipeline C++ library usage onboard v1.0.4

Overview

The VPipelineOnboard application is an example of VPipeline library usage onboard, intended for Intel-based platforms only. The repository includes the source code of the application itself and of all the libraries it requires. The application demonstrates how to initialize and use the VPipeline library. The application (repository) is a CMake project and can be used as a template for your own project. It performs VPipeline initialization and streams all data through a UDP port. Upon starting, the application initializes VPipeline with the following libraries: ViscaCamera (camera and lens modules), VSourceOpenCv (video source module), ImageFlip (first video filter), DigitalZoom (second video filter), VStabiliserOpenCv (video stabilization module), Dehazer (fourth video filter), MotionMagnificator (fifth video filter), CvTracker (video tracker module), Gmd (motion detector library, first object detector), Ged (video changes detector library, second object detector) and DnnOpenVinoDetector (neural network object detector library, third object detector). The application does not initialize the third video filter or the fourth object detector (dummy modules are initialized in their place). The VCodecOneVpl library is used for transcoding frames that come from the video source.

NOTE: Since the VCodecOneVpl library is compatible only with Intel processors, the VPipelineOnboard example can be used only on Intel hardware. See the How to create your own example depending on platform chapter to learn how to create a VPipeline instance on other platforms.

The application uses the C++17 standard. The image below shows the processing pipeline structure:

vpipeline_onboard_structure

The application reads configuration parameters from the VPipelineOnboard.json file and initializes VPipeline. After initialization of VPipeline and the UDPDataChannel server, the application runs an internal processing loop which includes: camera and lens control, video capture, image flip, digital zoom, video stabilization, defog / dehaze, motion magnification, video tracking (separate thread), motion detection (separate thread), video changes detection (separate thread) and DNN (neural network) detection (separate thread). Note that the parameters of each module can be controlled through the UDP port (the best example of controlling VPipeline is presented in the VPipelineControl library). VPipelineOnboard also provides control of a PanTilt unit if a received command contains such directions. If the video source from a camera is not available, the user can open video files through a file dialog provided by the SimpleFileDialog library. The main application loop waits for a video frame from the VPipeline library, converts it from the YUV24 format (native format of the video processing pipeline) to NV12 (native format for transcoding image data), obtains current parameters from the VPipeline library (parameters of all modules and object detection results) and sends the combined data through the UDP port.
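The YUV24 to NV12 conversion performed in the main loop can be sketched as follows. This is only an illustration of the format change (assuming a packed Y,U,V byte order for YUV24 and even frame dimensions), not the code VPipeline uses internally:

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch: convert a packed YUV24 frame (3 bytes Y,U,V per pixel)
// to NV12 (full-resolution Y plane followed by interleaved, 2x2-subsampled
// U/V plane). Width and height are assumed to be even.
std::vector<uint8_t> yuv24ToNv12(const uint8_t* src, int width, int height)
{
    std::vector<uint8_t> dst(width * height * 3 / 2);
    uint8_t* yPlane = dst.data();
    uint8_t* uvPlane = dst.data() + width * height;

    // Copy the Y component of every pixel.
    for (int i = 0; i < width * height; ++i)
        yPlane[i] = src[i * 3];

    // Average U and V over each 2x2 block and interleave them as U,V pairs.
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sumU = 0, sumV = 0;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const uint8_t* p = src + ((y + dy) * width + (x + dx)) * 3;
                    sumU += p[1];
                    sumV += p[2];
                }
            uvPlane[(y / 2) * width + x] = static_cast<uint8_t>(sumU / 4);
            uvPlane[(y / 2) * width + x + 1] = static_cast<uint8_t>(sumV / 4);
        }
    }
    return dst;
}
```

The NV12 buffer produced this way is 1.5 bytes per pixel, which is what hardware transcoders such as oneVPL expect as input.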

Application versions

Table 1 - Application versions.

Version Release date What’s new
1.0.0 06.06.2024 First version.
1.0.1 06.08.2024 Submodules updated.
1.0.2 18.09.2024 Submodules updated.
1.0.3 05.10.2024 Submodules updated.
1.0.4 04.12.2024 Submodules updated.

Application files

The VPipelineOnboard is provided as source code. The user receives a set of files in the form of a CMake project (repository). The repository structure is outlined below:

CMakeLists.txt ----------- Main CMake file of the application.
3rdparty ----------------- Folder with third-party libraries.
    CMakeLists.txt ------- CMake file which includes third-party libraries.
    CvTracker ------------ Folder with CvTracker library source code.
    Dehazer -------------- Folder with Dehazer library source code.
    DigitalZoom ---------- Folder with DigitalZoom library source code.
    DnnOpenVinoDetector -- Folder with DnnOpenVinoDetector library source code.
    Ged ------------------ Folder with Ged library source code.
    Gmd ------------------ Folder with Gmd library source code.
    ImageFlip ------------ Folder with ImageFlip library source code.
    MotionMagnificator --- Folder with MotionMagnificator library source code.
    SimpleFileDialog ----- Folder with SimpleFileDialog library source code.
    UdpDataChannel ------- Folder with UdpDataChannel library source code.
    VCodecOneVpl --------- Folder with VCodecOneVpl library source code.
    ViscaCamera ---------- Folder with ViscaCamera library source code.
    VPipeline ------------ Folder with VPipeline library source code.
    VSourceOpenCv -------- Folder with VSourceOpenCv library source code.
    VStabiliserOpenCv ---- Folder with VStabiliserOpenCv library source code.
src ---------------------- Folder with the source code of the application.
    CMakeLists.txt ------- CMake file of the application.
    main.cpp ------------- Source code file of the application.

Build application

The VPipelineOnboard is a complete repository in the form of a CMake project. The application itself has no external dependencies; however, the included libraries depend on OpenCV, OpenVino and OneVpl. Before compiling, you have to install them in your OS. To compile the application it is necessary to have CMake installed.

On Linux

Below are the steps to configure on Linux operating system:

  1. First, install the OneVpl codec for Intel platforms (required by the VCodecOneVpl library). To install OneVpl follow the instruction: OneVpl for Linux.

  2. To install CMake and OpenCV use the following commands:

  sudo apt-get install cmake libopencv-dev

  3. To install OpenVino follow the guide: OpenVino for Linux.

  4. Commands to build the VPipelineOnboard application (Release mode):

cd VPipelineOnboard
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make

On Windows

Below are the steps to configure on Windows operating system:

  1. First, install the OneVpl codec for Intel platforms (required by the VCodecOneVpl library). To install OneVpl follow the instruction: OneVpl for Windows.

  2. Install CMake: Open the link, download the installer and follow the instructions: CMake install. You can use a different version if needed; all versions above 3.15 are supported.

  3. Install OpenCV: Open the link, download the installer and follow the instructions: OpenCV 4.9. Install to “C:/OpenCV”. You can use a different version if needed; all versions above 4.5 are supported. Locate the OpenCVConfig.cmake file and add to the cmake command an OpenCV_DIR option that points to the location of this file, as shown below.

  4. To install OpenVino follow the guide: OpenVino for Windows.

  5. Commands to build VPipelineOnboard application (Release mode):

cd VPipelineOnboard
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -D OpenCV_DIR="C:/OpenCV/opencv/build" -D OpenVINO_DIR="C:/Program Files (x86)/Intel/openvino_2022/runtime/cmake"
cmake --build . --config Release

How to create your own example depending on platform

The VPipelineOnboard is an example of how to use VPipeline on your platform, but it utilizes the VCodecOneVpl library, which is suited only for Intel-based platforms. If you want to create your own VPipelineOnboard example you can copy the whole structure, but you have to replace the codec used for transcoding frames. Table 2 shows which codec is recommended for different platforms:

Table 2 - What codec to use depending on platform.

Device Codec to use
Raspberry Pi 4, Raspberry Pi 2 zero VCodecV4L2
Raspberry Pi 5 VCodecLibav
NVidia Jetson devices VCodecJetPack
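The codec substitution described above can be organized with conditional compilation. The codec class names below come from Table 2, but the VCodec header names, the platform macros and the selection helper are assumptions for illustration; adapt them to your build system:

```cpp
#include <string>

// Sketch of a compile-time codec choice per platform. Define one of the
// PLATFORM_* macros in your CMake configuration; the real includes are
// commented out because the codec libraries are part of your project,
// not of this sketch.
#if defined(PLATFORM_JETSON)
    // #include "VCodecJetPack.h"
    inline std::string selectedCodec() { return "VCodecJetPack"; }
#elif defined(PLATFORM_RPI5)
    // #include "VCodecLibav.h"
    inline std::string selectedCodec() { return "VCodecLibav"; }
#elif defined(PLATFORM_RPI4) || defined(PLATFORM_RPI2_ZERO)
    // #include "VCodecV4L2.h"
    inline std::string selectedCodec() { return "VCodecV4L2"; }
#else
    // Default: Intel platforms use VCodecOneVpl, as in this example.
    // #include "VCodecOneVpl.h"
    inline std::string selectedCodec() { return "VCodecOneVpl"; }
#endif
```

In CMake the macro can be set with, for example, `add_compile_definitions(PLATFORM_RPI5)` for a Raspberry Pi 5 build.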

Launch

The result of compiling will be the VPipelineOnboard.exe executable file on Windows or the VPipelineOnboard executable file on Linux. To launch the application on Linux run the command:

./VPipelineOnboard

Note: on Windows you may need to copy the OpenCV dll files to the application’s executable file folder.

After the application starts, it will create the VPipelineOnboard.json configuration file (if it doesn’t exist already) with default VPipeline parameters. The configuration file allows you to modify all of the VPipeline parameters. You can also copy the configuration file (VPipelineOnboard.json) and the neural network file for the DNN object detector (yolov7-tiny_640x640.onnx) from the static folder of the VPipelineOnboard repository. Default contents of the configuration file:

{
    "Params": {
        "camera": {
            "agcMode": 0,
            "alcGate": 0,
            "autoNucIntervalMsec": 0,
            "blackAndWhiteFilterMode": 0,
            "brightness": 0,
            "brightnessMode": 0,
            "changingLevel": 0.0,
            "changingMode": 0,
            "chromaLevel": 0,
            "contrast": 0,
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "ddeLevel": 0.0,
            "ddeMode": 0,
            "defogMode": 0,
            "dehazeMode": 0,
            "detail": 0,
            "digitalZoom": 0.0,
            "digitalZoomMode": 0,
            "displayMode": 0,
            "exposureCompensationMode": 0,
            "exposureCompensationPosition": 0,
            "exposureMode": 0,
            "exposureTime": 0,
            "filterMode": 0,
            "fps": 0.0,
            "gain": 0,
            "gainMode": 0,
            "height": 0,
            "imageFlip": 0,
            "initString": "",
            "isoSensitivity": 0,
            "logMode": 0,
            "noiseReductionMode": 0,
            "nucMode": 0,
            "palette": 0,
            "profile": 0,
            "roiX0": 0,
            "roiX1": 0,
            "roiY0": 0,
            "roiY1": 0,
            "sceneMode": 0,
            "sensitivity": 0.0,
            "sharpening": 0,
            "sharpeningMode": 0,
            "shutterMode": 0,
            "shutterPos": 0,
            "shutterSpeed": 0,
            "stabilizationMode": 0,
            "type": 0,
            "videoOutput": 0,
            "whiteBalanceArea": 0,
            "whiteBalanceMode": 0,
            "wideDynamicRangeMode": 0,
            "width": 0
        },
        "general": {
            "enable": true,
            "logMode": 2
        },
        "lens": {
            "afHwSpeed": 0,
            "afRange": 0,
            "afRoiMode": 0,
            "afRoiX0": 0,
            "afRoiX1": 0,
            "afRoiY0": 0,
            "afRoiY1": 0,
            "autoAfRoiBorder": 0,
            "autoAfRoiHeight": 0,
            "autoAfRoiWidth": 0,
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "extenderMode": 0,
            "filterMode": 0,
            "focusFactorThreshold": 0.0,
            "focusHwFarLimit": 0,
            "focusHwMaxSpeed": 0,
            "focusHwNearLimit": 0,
            "focusMode": 0,
            "fovPoints": [],
            "initString": "",
            "irisHwCloseLimit": 0,
            "irisHwMaxSpeed": 0,
            "irisHwOpenLimit": 0,
            "irisMode": 0,
            "logMode": 0,
            "refocusTimeoutSec": 0,
            "stabiliserMode": 0,
            "type": 0,
            "zoomHwMaxSpeed": 0,
            "zoomHwTeleLimit": 0,
            "zoomHwWideLimit": 0
        },
        "objectDetector1": {
            "classNames": [
                ""
            ],
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "enable": true,
            "frameBufferSize": 5,
            "initString": "",
            "logMode": 2,
            "maxObjectHeight": 128,
            "maxObjectWidth": 128,
            "maxXSpeed": 30.0,
            "maxYSpeed": 30.0,
            "minDetectionProbability": 0.5,
            "minObjectHeight": 4,
            "minObjectWidth": 4,
            "minXSpeed": 0.009999999776482582,
            "minYSpeed": 0.009999999776482582,
            "numThreads": 1,
            "resetCriteria": 5,
            "scaleFactor": 1,
            "sensitivity": 10.0,
            "type": 0,
            "xDetectionCriteria": 10,
            "yDetectionCriteria": 10
        },
        "objectDetector2": {
            "classNames": [],
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "enable": false,
            "frameBufferSize": 30,
            "initString": "",
            "logMode": 2,
            "maxObjectHeight": 128,
            "maxObjectWidth": 128,
            "maxXSpeed": 30.0,
            "maxYSpeed": 30.0,
            "minDetectionProbability": 0.5,
            "minObjectHeight": 4,
            "minObjectWidth": 4,
            "minXSpeed": 0.009999999776482582,
            "minYSpeed": 0.009999999776482582,
            "numThreads": 1,
            "resetCriteria": 1,
            "scaleFactor": 1,
            "sensitivity": 10.0,
            "type": 0,
            "xDetectionCriteria": 2,
            "yDetectionCriteria": 2
        },
        "objectDetector3": {
            "classNames": [],
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "enable": true,
            "frameBufferSize": 0,
            "initString": "yolov7-tiny_640x640.onnx;640;640",
            "logMode": 2,
            "maxObjectHeight": 256,
            "maxObjectWidth": 256,
            "maxXSpeed": 0.0,
            "maxYSpeed": 0.0,
            "minDetectionProbability": 0.5,
            "minObjectHeight": 4,
            "minObjectWidth": 4,
            "minXSpeed": 0.0,
            "minYSpeed": 0.0,
            "numThreads": 0,
            "resetCriteria": 0,
            "scaleFactor": 0,
            "sensitivity": 0.0,
            "type": 0,
            "xDetectionCriteria": 0,
            "yDetectionCriteria": 0
        },
        "objectDetector4": {
            "classNames": [],
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "enable": false,
            "frameBufferSize": 0,
            "initString": "",
            "logMode": 0,
            "maxObjectHeight": 0,
            "maxObjectWidth": 0,
            "maxXSpeed": 0.0,
            "maxYSpeed": 0.0,
            "minDetectionProbability": 0.0,
            "minObjectHeight": 0,
            "minObjectWidth": 0,
            "minXSpeed": 0.0,
            "minYSpeed": 0.0,
            "numThreads": 0,
            "resetCriteria": 0,
            "scaleFactor": 0,
            "sensitivity": 0.0,
            "type": 0,
            "xDetectionCriteria": 0,
            "yDetectionCriteria": 0
        },
        "videoFilter1": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "level": -1.0,
            "mode": 0,
            "type": -1
        },
        "videoFilter2": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "level": 1.0,
            "mode": 0,
            "type": 0
        },
        "videoFilter3": {
            "custom1": 0.0,
            "custom2": 0.0,
            "custom3": 0.0,
            "level": 0.0,
            "mode": 0,
            "type": 0
        },
        "videoFilter4": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "level": 0.0,
            "mode": 0,
            "type": 0
        },
        "videoFilter5": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "level": 62.5,
            "mode": 0,
            "type": -1
        },
        "videoSource": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "exposureMode": 0,
            "focusMode": 0,
            "fourcc": "BGR24",
            "fps": 30.0,
            "gainMode": -1,
            "height": 720,
            "logLevel": 0,
            "roiHeight": 0,
            "roiWidth": 0,
            "roiX": 0,
            "roiY": 0,
            "source": "test.mp4",
            "width": 1280
        },
        "videoStabiliser": {
            "aFilterCoeff": 0.8999999761581421,
            "aOffsetLimit": 10.0,
            "constAOffset": 0.0,
            "constXOffset": 0,
            "constYOffset": 0,
            "cutFrequencyHz": 0.0,
            "enable": false,
            "fps": 30.0,
            "logMod": 0,
            "scaleFactor": 1,
            "transparentBorder": true,
            "type": 2,
            "xFilterCoeff": 0.8999999761581421,
            "xOffsetLimit": 150,
            "yFilterCoeff": 0.8999999761581421,
            "yOffsetLimit": 150
        },
        "videoTracker": {
            "custom1": -1.0,
            "custom2": -1.0,
            "custom3": -1.0,
            "frameBufferSize": 256,
            "lostModeOption": 0,
            "maxFramesInLostMode": 128,
            "multipleThreads": false,
            "numChannels": 3,
            "rectAutoPosition": false,
            "rectAutoSize": false,
            "rectHeight": 72,
            "rectWidth": 72,
            "searchWindowHeight": 256,
            "searchWindowWidth": 256,
            "type": -1
        }
    }
}
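To inspect a value from the configuration shown above, the application uses the bundled nlohmann/json library; a dependency-free sketch that pulls one scalar out of the JSON text by key (sufficient here because every key in the file is unique) could look like this:

```cpp
#include <string>

// Minimal sketch: find a scalar parameter in the configuration text by its
// quoted key and parse the number after the colon. For real use, parse the
// file with the nlohmann/json library bundled in the repository.
double readScalarParam(const std::string& jsonText, const std::string& key)
{
    const std::string quoted = "\"" + key + "\"";
    std::string::size_type pos = jsonText.find(quoted);
    if (pos == std::string::npos)
        return -1.0; // Key not found.
    pos = jsonText.find(':', pos);
    return std::stod(jsonText.substr(pos + 1));
}
```

Load the file contents with, for example, `std::ifstream` into a `std::string` and call `readScalarParam(text, "fps")` to read the video source frame rate.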

The application parameters consist of only VPipeline parameters. Table 3 gives a short description of the parameters.

Table 3 - Application parameters.

Parameter Description
camera Camera controller parameters. ViscaCamera object will be initialized.
lens Lens controller parameters. ViscaCamera object will be initialized.
general General video processing pipeline parameters.
objectDetector1 First object detector parameters. Gmd object will be initialized. Motion detector parameters.
objectDetector2 Second object detector parameters. Ged object will be initialized. Video changes detector parameters.
objectDetector3 Third object detector parameters. DnnOpenVinoDetector object will be initialized. DNN object detector parameters.
objectDetector4 Fourth object detector parameters. Not used in the application. Dummy object detector will be initialized.
videoFilter1 First video filter parameters. Image flip module. ImageFlip object will be initialized.
videoFilter2 Second video filter parameters. DigitalZoom object will be initialized. Digital zoom parameters.
videoFilter3 Third video filter parameters. Not used in the application. Dummy video filter will be initialized.
videoFilter4 Fourth video filter parameters. Dehazer object will be initialized. Defog / dehaze module parameters.
videoFilter5 Fifth video filter parameters. MotionMagnificator object will be initialized. Motion magnification module parameters.
videoSource Video source parameters. Video capture module. VSourceOpenCv object will be initialized. Note: by default the “source” field has the value “test.mp4”, which means to open a video file (a test file located in the static folder). If you change the value to “file dialog”, the application will open a file dialog to choose a video. You can also open a camera by its number (“0”, “1”, etc.) or an RTSP stream (“rtsp://192.168.1.100:7000/live”). Note: in the case of an RTSP stream, video capture latency can be high.
videoStabiliser Video stabilizer parameters. VStabiliserOpenCv object will be initialized.
videoTracker Video tracker parameters. CvTracker object will be initialized.
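The accepted forms of the “source” field described in the table can be summarized as follows (the camera number and RTSP address are examples from this document):

```
"source": "test.mp4"                         open a video file
"source": "file dialog"                      show a file dialog to choose a video
"source": "0"                                open camera number 0
"source": "rtsp://192.168.1.100:7000/live"   open an RTSP stream
```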

After starting the application (running the executable file), the user should select a video file in the dialog box (if the “videoSource” parameter in the config file is set to “file dialog”). After that the UDP channel will start working, sending data from the video source along with current parameters, and also reading control instructions from VPipelineControl or any UDP client connected to VPipelineOnboard.
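A client that listens to the data stream can be sketched with plain POSIX sockets as below. This is not the UdpDataChannel protocol implementation (the message format and port are defined by the VPipeline libraries); the port passed to `bindUdpSocket` is whatever your UDPDataChannel configuration uses:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Create a UDP socket and bind it to the given local port.
// Returns the socket descriptor, or -1 on failure.
int bindUdpSocket(uint16_t port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}

// Block until one datagram arrives; returns the number of bytes received.
ssize_t receiveDatagram(int sock, std::vector<uint8_t>& buffer)
{
    return recv(sock, buffer.data(), buffer.size(), 0);
}
```

A client would call `bindUdpSocket` once and then call `receiveDatagram` in a loop with a buffer large enough for one datagram (up to 65535 bytes), decoding each packet according to the UdpDataChannel format.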

Prepare compiled library files

If you want to compile and collect all libraries from the VPipelineOnboard repository together with their header files on Linux, you can create a bash script in the repository root folder which will collect all the necessary files in one place. To do this, compile VPipelineOnboard and then follow these steps:

  1. Create a bash script (you have to have the nano editor installed):
cd VPipelineOnboard
nano prepareCompiled
  2. Copy the following text there:
#!/bin/bash

# Define the directory where you want to copy all .h files.
# Make sure to replace /path/to/destination with your actual destination directory path.
HEADERS_DESTINATION_DIR="./include"
LIB_DESTINATION_DIR="./lib"

# Check if the destination directory exists. If not, create it.
if [ ! -d "$HEADERS_DESTINATION_DIR" ]; then
    mkdir -p "$HEADERS_DESTINATION_DIR"
fi
# Find and copy all .h files from the current directory to the destination directory.
# The "." specifies the current directory. Adjust it if you want to run the script from a different location.
find . -type f -name '*.h' -exec cp {} "$HEADERS_DESTINATION_DIR" \;


found_dir=$(find . -type d -name "nlohmann" -print -quit)
if [ -n "$found_dir" ]; then
    cp -r "$found_dir" "$HEADERS_DESTINATION_DIR"
    echo "Directory nlohmann has been copied to $HEADERS_DESTINATION_DIR."
fi

# Check if the destination directory exists. If not, create it.
if [ ! -d "$LIB_DESTINATION_DIR" ]; then
    mkdir -p "$LIB_DESTINATION_DIR"
fi

# Find and copy all .a files from the current directory to the destination directory.
# The "." specifies the current directory. Adjust it if you want to run the script from a different location.
find . -type f -name '*.a' -exec cp {} "$LIB_DESTINATION_DIR" \;
  3. Save (“Ctrl + S”) and close (“Ctrl + X”).
  4. Make the script executable and run it:
sudo chmod +x prepareCompiled
./prepareCompiled