Fast and accurate multiple-object tracking and segmentation on edge devices



An object tracking algorithm based on point clouds sampled from segmentation masks. Its speed was improved using GPU-based preprocessing and a Numba-accelerated assignment algorithm.
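The assignment step between existing tracks and new detections can be sketched as a greedy matching over a cost matrix. Below is a minimal illustration, not the package's actual code: the function name `greedy_assign` and the cost values are hypothetical, and the real project applies Numba's `@njit` to a hot loop like this one.

```python
import numpy as np

try:
    from numba import njit  # used in the project for speed; optional here
except ImportError:          # fall back to a no-op decorator if Numba is absent
    def njit(f):
        return f

@njit
def greedy_assign(cost):
    """Greedily match rows (tracks) to columns (detections) by lowest cost.

    Returns an array where result[i] is the detection index assigned to
    track i, or -1 if no detection was left for it.
    """
    n_tracks, n_dets = cost.shape
    assigned = np.full(n_tracks, -1, dtype=np.int64)
    taken = np.zeros(n_dets, dtype=np.bool_)
    # Repeatedly pick the globally cheapest unmatched (track, detection) pair.
    for _ in range(min(n_tracks, n_dets)):
        best = np.inf
        bi = -1
        bj = -1
        for i in range(n_tracks):
            if assigned[i] != -1:
                continue
            for j in range(n_dets):
                if taken[j]:
                    continue
                if cost[i, j] < best:
                    best = cost[i, j]
                    bi = i
                    bj = j
        assigned[bi] = bj
        taken[bj] = True
    return assigned

cost = np.array([[0.1, 0.9], [0.8, 0.2]])
print(greedy_assign(cost))  # track 0 -> detection 0, track 1 -> detection 1
```

Compiling such a pure-loop routine with `@njit` avoids Python-interpreter overhead per pair, which is where the speedup comes from.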

This branch works with PyTorch 1.10.0.


πŸ’ͺIntroduction | πŸ› οΈInstallation | πŸƒRun | πŸ‘€Contents | πŸ”–Docs



nvidia-jetpack-4.5 (L4T R32.5.0)  # if you are using NVIDIA edge devices

Environment installation

git clone --branch indexing_fast
cd pointtrack
git lfs fetch && git lfs pull
source docker/
sh docker/
sh docker/

See additional information in docs/

After that you will be in the project directory. You will need to register the package manually using the scripts below.

ROS package installation

cd ..
git clone
cd ..
source /opt/ros/noetic/setup.bash
source devel/setup.bash

Train environment installation

pip install -r requirements/train.txt

Run the project

ROS node

roslaunch pointtrack main.launch \
    camera_ns:=/stereo/left \            # camera namespace
    image_topic:=image_rect \            # color image topic
    objects_topic:=objects \             # topic with detection results (see camera object msgs)
    objects_track_ids_topic:=track_ids \ # output topic name
    print_stats:=1 \                     # whether to print stats
    stats_rate:=20                       # how often the stats are printed

To train/test a model

See additional information in docs/


A lot of research went into the trade-off between speed and tracking quality. The original network structure was changed and accelerated using torch2trt, and the dependence of inference speed on the size of segmentation masks was investigated. The model was also accelerated using the Numba library. In addition, you can set the maximum number of objects during tracking in order to reliably meet the allocated inference time. All of these improvements are explained in docs/
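Capping the number of tracked objects to meet a time budget can be sketched as keeping only the top-K detections by confidence. This is a simplified illustration, not the package's actual logic; the function name `cap_detections` and the parameter `max_objects` are assumptions for the example.

```python
def cap_detections(detections, max_objects):
    """Keep at most `max_objects` detections, preferring the highest scores.

    `detections` is a list of (score, label) pairs; dropping low-score
    objects bounds the per-frame cost of mask embedding and assignment,
    which is how a fixed inference-time budget can be met.
    """
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    return ranked[:max_objects]

dets = [(0.9, "car"), (0.4, "bike"), (0.7, "person"), (0.2, "dog")]
print(cap_detections(dets, max_objects=2))  # [(0.9, 'car'), (0.7, 'person')]
```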


β”œβ”€β”€ docker                          <- Docker scripts and env setup.
β”œβ”€β”€ docs                            <- Markdown files providing additional information about the package.
β”œβ”€β”€ launch                          <- Launch file for package params in the ROS namespace.
β”œβ”€β”€ requirements                    <- Main requirements for the train/infer stages.
β”œβ”€β”€ scripts                         <- Scripts for configuration, downloading weights.
β”‚    β”‚
β”‚    ...
β”‚    β”œβ”€β”€                <- Entrypoint ros node file (inference).
β”‚    β”œβ”€β”€  <- Entrypoint model file (training).
β”‚    └──           <- Entrypoint model file (testing).
β”œβ”€β”€ weights                         <- Model weight (.pth) files stored in Git LFS.
β”œβ”€β”€                       <- You are here.
β”œβ”€β”€ package.xml                     <- Main info about the package for ROS.
β”œβ”€β”€ config.yaml                     <- Main config for running the ROS node.
└── requirements.txt                <- Required libraries.