RpiTrackerOnboard application
v1.0.4
Table of contents
- Overview
- Versions
- Source code files
- Config file
- How to copy RpiTrackerOnboard to RPI
- Run application
- Build application
- Raspberry PI configuration
- Source code
Overview
RpiTrackerOnboard application implements a video processing pipeline: video capture > tracking > streaming. The application combines the following libraries: VSourceLibCamera (video capture from Libcamera API compatible devices), VCodecLibav (H264, H265 and JPEG encoders for the RPI5, since it does not have a hardware codec), CvTracker (video tracker library) and UdpDataChannel. The application shows how to use a combination of the libraries listed above. It is built for the Raspberry PI5 (with Debian Bookworm 12 x64 OS support) and provides real-time video processing. Structure of the video processing pipeline:
After start, the application reads a JSON config file which includes video capture parameters, tracker parameters and the communication port. If there is no config file, the application creates a new one with default parameters. Once the config file is read, the application initializes the video source. Next, it creates and initializes a CvTracker class object with parameters from the JSON config file. Then it creates the codec and initializes it with parameters from the JSON config file. When all modules are initialized, the application executes the video processing pipeline: capture video frame from video source - tracking - stream. Additionally, the application creates a "Log" folder to write log information. To prepare tracker commands from a remote connection, refer to the VTracker interface library. The interface includes static methods such as encodeCommand(…) and encodeSetParamCommand(…) for constructing a buffer that holds the required command.
The application reads the input buffer for incoming commands through UdpDataServer before processing each frame for tracking. The UdpDataServer provides a thread-safe get(…) method, which returns a buffer of commands if any are available.
Versions
Table 1 - Application versions.
Version | Release date | What’s new |
---|---|---|
1.0.0 | 31.05.2024 | First version. |
1.0.1 | 06.08.2024 | - Submodules updated. |
1.0.2 | 18.09.2024 | - CvTracker submodule updated. - VCodecLibav submodule updated. |
1.0.3 | 04.12.2024 | - CvTracker submodule updated. - VCodecLibav submodule updated. |
1.0.4 | 15.12.2024 | - Submodules updated. |
Source code files
The application is supplied as source code only. The user is given a set of files in the form of a CMake project (repository). The repository structure is shown below:
CMakeLists.txt --------------------- Main CMake file.
3rdparty --------------------------- Folder with third-party libraries.
CMakeLists.txt ----------------- CMake file to include third-party libraries.
UdpDataChannel ----------------- Folder with UdpDataChannel library source code.
VCodecLibav -------------------- Folder with VCodecLibav library source code.
VSourceLibCamera --------------- Folder with VSourceLibCamera library source code.
CvTracker ---------------------- Folder with CvTracker library source code.
FormatConverterOpenCv ---------- Folder with FormatConverterOpenCv library source code.
src -------------------------------- Folder with application source code.
CMakeLists.txt ----------------- CMake file.
RpiTrackerOnboardVersion.h ----- Header file with application version.
RpiTrackerOnboardVersion.h.in -- File for CMake to generate version header.
main.cpp ----------------------- Application source code file.
Config file
RpiTrackerOnboard application reads the config file RpiTrackerOnboard.json located in the same folder as the application executable. Config file content:
{
"Params":
{
"codec":
{
"bitrateKbps": 3000,
"fps": 30,
"gopSize": 30
},
"communication":
{
"dataUdpPort": 50021
},
"videoSource":
{
"cameraIndex": 0,
"fps": 30,
"height": 720,
"pixelFormat": "YUYV",
"width": 1280
},
"videoTrackerParams":
{
"custom1": 0.0,
"custom2": 0.0,
"custom3": 0.0,
"frameBufferSize": 128,
"lostModeOption": 0,
"maxFramesInLostMode": 128,
"multipleThreads": false,
"numChannels": 3,
"rectAutoPosition": false,
"rectAutoSize": false,
"rectHeight": 72,
"rectWidth": 72,
"searchWindowHeight": 256,
"searchWindowWidth": 256,
"type": 0
}
}
}
Table 2 - Config file parameters description.
Parameter | Type | Description |
---|---|---|
Codec: | ||
bitrateKbps | int | H264 encoding bitrate of codec. |
fps | int | Encoding fps. |
gopSize | int | GOP size of the encoder. |
Communication: | ||
dataUdpPort | int | UDP port number to get commands for tracker and send video stream. |
Video source parameters: | ||
cameraIndex | int | Camera index. If only one camera is connected to the system, it will be 0. |
fps | int | Fps of video source. |
width | int | Video source width. |
height | int | Video source height. |
pixelFormat | string | Video source pixel format (for example: “YUYV”, “NV12”, “BGR24”, “RGB24”). |
Tracker parameters: | ||
rectWidth | int | Tracking rectangle width, pixels. Set by user or can be changed by tracking algorithm if rectAutoSize == 1. |
rectHeight | int | Tracking rectangle height, pixels. Set by user or can be changed by tracking algorithm if rectAutoSize == 1. |
searchWindowWidth | int | Width of search window, pixels. Set by user. |
searchWindowHeight | int | Height of search window, pixels. Set by user. |
lostModeOption | int | Option for LOST mode. Parameter that defines the behavior of the tracking algorithm in LOST mode. Default is 0. Possible values: 0. In LOST mode, the coordinates of the center of the tracking rectangle are not updated and remain the same as before entering LOST mode. 1. The coordinates of the center of the tracking rectangle are updated based on the components of the object’s speed calculated before going into LOST mode. If the tracking rectangle “touches” the edge of the video frame, the coordinate updating for this component (horizontal or vertical) will stop. 2. The coordinates of the center of the tracking rectangle are updated based on the components of the object’s speed calculated before going into LOST mode. The tracking is reset if the center of the tracking rectangle touches any of the edges of the video frame. |
rectAutoSize | bool | Use tracking rectangle auto size flag: false - no use, true - use. Set by user. |
rectAutoPosition | bool | Use tracking rectangle auto position: false - no use, true - use. Set by user. |
numChannels | int | Number of channels for processing. E.g., number of color channels. Set by user. |
multipleThreads | bool | Use multiple threads for calculations. false - one thread, true - multiple. Set by user. |
custom1 | float | Custom parameter for tracker. Not used. |
custom2 | float | Custom parameter for tracker. Not used. |
custom3 | float | Custom parameter for tracker. Not used. |
frameBufferSize | int | Size of frame buffer (number of frames to store). Set by user. |
maxFramesInLostMode | int | Maximum number of frames in LOST mode before auto reset of the algorithm. Set by user. |
type | int | Not supported by CvTracker. |
How to copy RpiTrackerOnboard Executable to Raspberry PI
To copy a ready application to the Raspberry PI, follow the steps below. Run the commands on your Windows computer where you have the RpiTrackerOnboard file.
cd <directory where you have RpiTrackerOnboard>
For example, if you downloaded the RpiTrackerOnboard application, you can copy the file to the Raspberry PI:
scp ./RpiTrackerOnboard pi@192.168.1.100:/home/pi
Enter the password when prompted. The file will be copied to the Raspberry PI's home directory.
Note: the steps up to this point can also be done without the command line using the WinSCP application, which is often easier.
Now make the file executable (so far it is just a raw file for the Raspberry PI). Run the following commands on the Raspberry PI.
Go to the home directory containing RpiTrackerOnboard.
Make the file executable:
chmod +x ./RpiTrackerOnboard
Now the application is ready to run.
Run application
Copy the application (RpiTrackerOnboard executable and RpiTrackerOnboard.json) to any folder and run:
./RpiTrackerOnboard -vv
The -v or -vv argument enables the logger to print logs to the console and to a file.
When the application is started for the first time, it creates a configuration file named RpiTrackerOnboard.json if the file does not exist (refer to the Config file section). If the application is run as a superuser using sudo, the file will be owned by the root user, so superuser privileges will be necessary to modify it. You can run the application manually or add it to auto start as a systemd service. To add the application as a systemd service, create a service file:
sudo nano /etc/systemd/system/rpitrackeronboard.service
and add content:
[Unit]
Description=RpiTrackerOnboard
Wants=network.target
After=syslog.target network-online.target
[Service]
Type=simple
ExecStart=/home/pi/App/RpiTrackerOnboard
Restart=on-failure
RestartSec=10
KillMode=control-group
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
Save (ctrl + s) and close (ctrl + x).
Reload systemd services:
sudo systemctl daemon-reload
Commands to control the service:
Start service:
sudo systemctl start rpitrackeronboard
Enable for auto start:
sudo systemctl enable rpitrackeronboard
Stop service:
sudo systemctl stop rpitrackeronboard
Check status:
sudo systemctl status rpitrackeronboard
Build application
Before building, the user should configure the Raspberry PI (see the Raspberry PI configuration section). Typical build commands:
cd RpiTrackerOnboard
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
Raspberry PI configuration
OS install
- Download Raspberry PI Imager.
- Choose Operating System: Raspberry Pi OS (other) -> Raspberry Pi OS Lite (64-bit) Debian Bookworm 12.
- Choose the storage (MicroSD) card. If possible, use a 128 GB MicroSD.
- Set additional options: do not set hostname “raspberrypi.local”; enable SSH (use password authentication); set username “pi” and password “pi”; configure wireless LAN according to your settings (you will need Wi-Fi for software installation); set the appropriate time zone and wireless zone.
- Save changes and push the “Write” button. Then push “Yes” to rewrite the data on the MicroSD. At the end, remove the MicroSD.
- Insert the MicroSD into the Raspberry PI and power it up.
LAN configuration
- Configure the LAN IP address on your PC. For Windows 11, go to Settings -> Network & Internet -> Advanced Network Settings -> More network adapter options. Right-click the LAN connection used to connect to the Raspberry PI and choose “Properties”. Double-click “Internet Protocol Version 4 (TCP/IPv4)”. Set static IP address 192.168.0.1 and mask 255.255.255.0.
- Connect the Raspberry PI via LAN cable. After power up you don't know the IP address of the Raspberry board, but you can connect via SSH using the “raspberrypi” name. In Windows 11, open Windows Terminal or a PowerShell terminal and type the command ssh pi@raspberrypi. After connection, type yes to establish authenticity. WARNING: if you work with Windows, we recommend deleting information about previous connections. Go to the folder C:/Users/[your user name]/.ssh, open the known_hosts file (if it exists) in a text editor and delete all “raspberrypi” lines.
- You have to set a static IP address on the Raspberry PI, but not yet: after setting a static IP you will lose the Wi-Fi connection, which you need for software installation.
Install dependencies
- Connect to the Raspberry PI via SSH: ssh pi@raspberrypi. WARNING: if you work with Windows, we recommend deleting information about previous connections. Go to the folder C:/Users/[your user name]/.ssh, open the known_hosts file (if it exists) in a text editor and delete all “raspberrypi” lines.
- Install libraries:
sudo apt-get update
sudo apt install cmake build-essential libopencv-dev
sudo apt-get install -y cmake meson ninja-build python3 python3-pip python3-setuptools python3-wheel git pkg-config libgnutls28-dev openssl libjpeg-dev libyuv-dev libv4l-dev libudev-dev libexpat1-dev libssl-dev libdw-dev libunwind-dev doxygen graphviz
pip3 install ply PyYAML
- If you get an error for ply and PyYAML, run the following command:
sudo apt-get install -y python3-ply python3-yaml
- RpiTrackerOnboard application uses a specific libcamera version. To get the proper libcamera, run the provided commands one by one:
- Remove everything related to libcamera:
sudo apt-get remove --auto-remove libcamera* libcamera-apps*
- Get libcamera:
git clone https://github.com/raspberrypi/libcamera.git
cd libcamera
- Checkout the v0.1.0 version:
git checkout v0.1.0+rpt20231122
- Build libcamera:
meson build
ninja -C build
- Install libcamera:
sudo ninja -C build install
- Update the linker cache:
sudo ldconfig
- Remove the source code:
cd ../
sudo rm -rf libcamera
- Install the required libraries for the libav codec:
sudo apt-get install -y ffmpeg libavcodec-dev libavutil-dev libavformat-dev libavdevice-dev libavfilter-dev libcurl4
- Reboot the system:
sudo reboot
Setup high performance mode (overclock) if necessary
- Open the config file:
sudo nano /boot/firmware/config.txt
- Add the following lines at the end of the file:
force_turbo=1
over_voltage_delta=500000
arm_freq=3000
gpu_freq=800
- Save changes (ctrl + s) and close (ctrl + x). Reboot the Raspberry PI: sudo reboot.
Static IP configuration on Raspberry PI
- Connect to the Raspberry PI via SSH after reboot: ssh pi@raspberrypi. WARNING: if you work with Windows, we recommend deleting information about previous connections. Go to the folder C:/Users/[your user name]/.ssh, open the known_hosts file (if it exists) in a text editor and delete all “raspberrypi” lines.
- Open the config file:
sudo nano /etc/dhcpcd.conf
- Go down in the file, uncomment the following lines and set IP 192.168.0.2:
interface eth0
static ip_address=192.168.0.2/24
static ip6_address=fd51:42f8:caae:d92e::ff/64
static routers=192.168.0.2
static domain_name_servers=192.168.0.2 8.8.8.8 fd51:42f8:caae:d92e::1
- Save changes (ctrl + s) and exit (ctrl + x). Reboot after: sudo reboot.
- After configuring the static IP you can connect to the Raspberry PI by IP:
ssh pi@192.168.0.2
Source code
Below is the source code of the main.cpp file:
#include <iostream>
#include "CvTracker.h"
#include "VSourceLibCamera.h"
#include "FormatConverterOpenCv.h"
#include "VCodecLibav.h"
#include "Logger.h"
#include "UdpDataServer.h"
#include "RpiTrackerOnboardVersion.h"
/// Max command size, bytes.
constexpr int MAX_COMMAND_SIZE = 128;
/// Log folder.
#define LOG_FOLDER "Log"
/// Config file name.
#define CONFIG_FILE_NAME "RpiTrackerOnboard.json"
/// Application parameters.
class Params
{
public:
/// Video source params.
class VideoSource
{
public:
/// Camera index.
int cameraIndex{0};
/// Frame width.
int width{1280};
/// Frame height.
int height{720};
/// Frame rate.
int fps{30};
/// Pixel format.
std::string pixelFormat{"YUYV"};
JSON_READABLE(VideoSource, cameraIndex, width, height, fps, pixelFormat)
};
/// Codec params.
class Codec
{
public:
/// Bitrate.
int bitrateKbps{3000};
/// GOP size.
int gopSize{30};
/// Frame rate.
int fps{30};
JSON_READABLE(Codec, bitrateKbps, gopSize, fps)
};
/// Communication.
class Communication
{
public:
/// Output UDP port.
int dataUdpPort{50021};
JSON_READABLE(Communication, dataUdpPort)
};
/// Video source.
VideoSource videoSource;
/// Control channel.
Communication communication;
/// Video tracker.
cr::vtracker::VTrackerParams videoTrackerParams;
/// Codec params.
Codec codec;
JSON_READABLE(Params, communication, videoSource, videoTrackerParams, codec)
};
/// Application params.
Params g_params;
/// Log flag.
cr::utils::PrintFlag g_logFlag{cr::utils::PrintFlag::DISABLE};
/// Telemetry data buffer size.
const int g_trackerResultsMaxSize = sizeof(cr::vtracker::VTrackerParams) + 64;
/// Init buffer for telemetry data.
uint8_t g_trackerResults[g_trackerResultsMaxSize];
/// Telemetry data size.
int g_trackerResultsSize{0};
/// Shared frame.
cr::video::Frame g_sharedFrame;
/// Logger.
cr::utils::Logger g_log;
/// Shared frame sync mutex.
std::mutex g_sharedFrameMutex;
/// Shared frame sync condition.
std::condition_variable g_sharedFrameCond;
/// Shared frame condition mutex.
std::mutex g_sharedFrameCondMutex;
/// Shared frame condition flag.
std::atomic<bool> g_sharedFrameCondFlag = false;
/// Shared telemetry data sync mutex.
std::mutex g_trackerResultsMutex;
/// Shared telemetry data sync condition.
std::condition_variable g_trackerResultsCond;
/// Shared telemetry data condition mutex.
std::mutex g_tackerResultsCondMutex;
/// Shared telemetry data condition flag.
std::atomic<bool> g_trackerResultsCondFlag = false;
/// Udp server for sending data and receiving commands.
cr::clib::UdpDataServer g_server;
/**
* @brief Load configuration params from JSON file.
* @return TRUE if parameters loaded or FALSE if not.
*/
bool loadConfig();
/**
* @brief Encoding thread function.
*/
void encodingThreadFunc();
int main(int argc, char **argv)
{
// Configure logger.
cr::utils::Logger::setSaveLogParams(LOG_FOLDER, "log", 20, 1);
// Welcome message.
g_log.print(cr::utils::PrintColor::YELLOW, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] " <<
"RpiTrackerOnboard application v" << RPI_TRACKER_ONBOARD_VERSION << std::endl;
// Check arguments and set console output if necessary.
if (argc > 1)
{
std::string str = std::string(argv[1]);
if (str == "-v" || str == "-vv")
{
g_logFlag = cr::utils::PrintFlag::CONSOLE_AND_FILE;
}
}
// Load config file.
if (!loadConfig())
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't load config file." << std::endl;
return -1;
}
g_log.print(cr::utils::PrintColor::CYAN, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] " <<
"Config file loaded." << std::endl;
// Init video source.
cr::video::VSource* videoSource = new cr::video::VSourceLibCamera();
std::string initSource = std::to_string(g_params.videoSource.cameraIndex) + ";"
+ std::to_string(g_params.videoSource.width) + ";"
+ std::to_string(g_params.videoSource.height) + ";"
+ std::to_string(g_params.videoSource.fps) + ";"
+ g_params.videoSource.pixelFormat;
if (!videoSource->openVSource(initSource))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't initialise video source." << std::endl;
return -1;
}
g_log.print(cr::utils::PrintColor::CYAN, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] INFO: " <<
"Video source init." << std::endl;
// Init command UDP server.
if (!g_server.init(g_params.communication.dataUdpPort, 2000000))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't init command UDP server." << std::endl;
return -1;
}
// Init video tracker.
cr::vtracker::CvTracker tracker;
if (!tracker.initVTracker(g_params.videoTrackerParams))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't init video tracker." << std::endl;
return -1;
}
// Init frame.
cr::video::Frame sourceFrame;
// Get one frame to obtain proper frame info from source.
if (!videoSource->getFrame(sourceFrame, 1000))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"No input video frame" << std::endl;
}
// Initialize shared frame with source frame parameters.
g_sharedFrame = cr::video::Frame(sourceFrame.width, sourceFrame.height, sourceFrame.fourcc);
// Start encoding thread.
std::thread encodingThread(encodingThreadFunc);
// Command data buffer.
uint8_t commandData[MAX_COMMAND_SIZE];
cr::vtracker::VTrackerParams trackerParams;
while (true)
{
// Wait new video frame.
if (!videoSource->getFrame(sourceFrame, 1000))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"No input video frame" << std::endl;
continue;
}
g_sharedFrameMutex.lock();
g_sharedFrame = sourceFrame; // copy frame to shared frame.
g_sharedFrameMutex.unlock();
// Notify encoding thread about new frame.
std::unique_lock<std::mutex> lock(g_sharedFrameCondMutex);
g_sharedFrameCondFlag.store(true);
g_sharedFrameCond.notify_one();
lock.unlock();
// Execute tracker commands.
int commandSize = 0;
if (g_server.get(commandData, MAX_COMMAND_SIZE, commandSize))
{
// Execute command.
if (!tracker.decodeAndExecuteCommand(commandData, commandSize))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't execute command" << std::endl;
}
}
// Video tracking.
tracker.processFrame(sourceFrame);
// Get tracker results.
tracker.getParams(trackerParams);
// Encode tracker data.
g_trackerResultsMutex.lock();
trackerParams.encode(g_trackerResults, g_trackerResultsMaxSize, g_trackerResultsSize);
g_trackerResultsMutex.unlock();
// Notify encoding thread about new telemetry data.
std::unique_lock<std::mutex> lockTelemetry(g_tackerResultsCondMutex);
g_trackerResultsCondFlag.store(true);
g_trackerResultsCond.notify_one();
lockTelemetry.unlock();
}
// Wait for encoding thread.
encodingThread.join();
return 1;
}
void encodingThreadFunc()
{
cr::video::VCodecLibav codec;
codec.setParam(cr::video::VCodecParam::TYPE, 1); // Raspberry Pi 5 doesn't support hardware codec.
codec.setParam(cr::video::VCodecParam::BITRATE_KBPS, g_params.codec.bitrateKbps);
codec.setParam(cr::video::VCodecParam::GOP, g_params.codec.gopSize);
codec.setParam(cr::video::VCodecParam::FPS, g_params.codec.fps);
// Init frames.
cr::video::Frame h264Frame(g_sharedFrame.width, g_sharedFrame.height, cr::video::Fourcc::H264);
cr::video::Frame nv12Frame(g_sharedFrame.width, g_sharedFrame.height, cr::video::Fourcc::NV12);
// Output data buffer.
uint8_t* outputData = new uint8_t[(g_sharedFrame.size + g_trackerResultsMaxSize) * 2]; // Big enough.
// Format converter.
cr::video::FormatConverterOpenCv converter;
while (1)
{
// Wait frame from main thread.
if(!g_sharedFrameCondFlag.load())
{
std::unique_lock<std::mutex> lock(g_sharedFrameCondMutex);
while(!g_sharedFrameCondFlag.load())
g_sharedFrameCond.wait(lock);
lock.unlock();
}
// Encode frame.
g_sharedFrameMutex.lock();
converter.convert(g_sharedFrame, nv12Frame);
g_sharedFrameCondFlag.store(false); // reset flag for next frame.
g_sharedFrameMutex.unlock();
if(!codec.transcode(nv12Frame, h264Frame))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't encode frame." << std::endl;
continue;
}
// Wait tracker results from main thread.
if(!g_trackerResultsCondFlag.load())
{
std::unique_lock<std::mutex> lock(g_tackerResultsCondMutex);
while(!g_trackerResultsCondFlag.load())
g_trackerResultsCond.wait(lock);
lock.unlock();
}
// Prepare output data by using encoded frame and tracker results.
int outDataSize = 0;
g_trackerResultsMutex.lock();
memcpy(outputData, &g_trackerResultsSize, sizeof(int));
memcpy(&outputData[sizeof(int)], g_trackerResults, g_trackerResultsSize);
g_trackerResultsCondFlag.store(false); // reset flag for next telemetry data.
g_trackerResultsMutex.unlock();
// Copy encoded frame to output data.
int frameSerializedSize;
h264Frame.serialize(&outputData[sizeof(int) + g_trackerResultsSize + sizeof(int)], frameSerializedSize);
memcpy(&outputData[sizeof(int) + g_trackerResultsSize], &frameSerializedSize, sizeof(int));
outDataSize = (sizeof(int) + g_trackerResultsSize + sizeof(int) + frameSerializedSize );
// Send data to client.
if (!g_server.send(outputData, outDataSize))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Can't send data." << std::endl;
continue;
}
}
}
bool loadConfig()
{
// Init variables.
cr::utils::ConfigReader config;
// Open config json file (if not exist - create new and exit).
if(config.readFromFile(CONFIG_FILE_NAME))
{
// Read values and set to params
if(!config.get(g_params, "Params"))
{
g_log.print(cr::utils::PrintColor::RED, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "] ERROR: " <<
"Params were not read" << std::endl;
return false;
}
}
else
{
// Set default params.
config.set(g_params, "Params");
// Save config file.
if (!config.writeToFile(CONFIG_FILE_NAME))
{
g_log.print(cr::utils::PrintColor::CYAN, g_logFlag) <<
"[" << __LOGFILENAME__ << "][" << __LINE__ << "]: " <<
"Config file created." << std::endl;
return false;
}
}
return true;
}