Online Multi-Object Tracking Via
Robust Collaborative Model and Sample Selection

Mohamed A. Naiel1, M. Omair Ahmad1, M.N.S. Swamy1, Jongwoo Lim2, and Ming-Hsuan Yang3 

1Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada H3G 1M8
2Department of Computer Science and Engineering, Hanyang University, Seoul 113-791, Republic of Korea
3School of Engineering, University of California, Merced, CA 95344 USA


The past decade has witnessed significant progress in object detection and tracking in videos. In this paper, we present a collaborative model between a pre-trained object detector and a number of single-object online trackers within the particle filtering framework. For each frame, we construct an association between detections and trackers, and treat each detected image region that is associated with a tracker as a key sample for online update. We present a motion model that incorporates the associated detections with object dynamics. Furthermore, we propose an effective sample selection scheme to update the appearance model of each tracker. We use discriminative and generative appearance models for the likelihood function and data association, respectively. Experimental results show that the proposed scheme generally outperforms state-of-the-art methods.
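The detection-to-tracker association step above can be sketched as follows. This is a minimal, illustrative example that matches each tracker's predicted box to at most one detection by greedy intersection-over-union (IoU) overlap; the paper's actual association relies on generative appearance models, and all function names and the threshold value here are assumptions for illustration only. Matched detections would then serve as key samples for the online update.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def associate(detections, tracker_boxes, iou_thresh=0.5):
    """Greedily match detections to tracker predictions by IoU overlap.

    Returns a dict {tracker_index: detection_index} for matched pairs.
    A detection matched to a tracker is treated as that tracker's
    key sample for updating its appearance model (illustrative only).
    """
    matches = {}
    used = set()  # detections already assigned to a tracker
    for t, tb in enumerate(tracker_boxes):
        best, best_iou = None, iou_thresh
        for d, db in enumerate(detections):
            if d in used:
                continue
            overlap = iou(tb, db)
            if overlap >= best_iou:
                best, best_iou = d, overlap
        if best is not None:
            matches[t] = best
            used.add(best)
    return matches
```

Unmatched trackers would fall back on the motion model alone for that frame, while unmatched detections could initialize new trackers.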

Keywords: Multi-object tracking, particle filter, collaborative model, sample selection, sparse representation.

Qualitative Results

Result sequences: PETS-S2.L1, PETS-S2.L2, Soccer, UCF-PL, Town Center (Body), Town Center (Head), LISA 2010 (Urban), LISA 2010 (Sunny)


- Mohamed A. Naiel, M. Omair Ahmad, M.N.S. Swamy, Jongwoo Lim, and Ming-Hsuan Yang, "Online multi-object tracking via robust collaborative model and sample selection", Computer Vision and Image Understanding, Aug. 2016, in press. [PDF] [Video] [BibTex] [Code]

- Mohamed A. Naiel, M. Omair Ahmad, M.N.S. Swamy, Yi Wu, and Ming-Hsuan Yang, "Online multi-person tracking via robust collaborative model", in Proc. 21st IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 431–435, Oct. 2014. [PDF] [Video] [BibTex]