Vitaly Ablavsky
Principal Research Scientist
Affiliate Assistant Professor, Electrical and Computer Engineering
vablavsky@apl.washington.edu
Phone: 206-616-0380
Research Interests
Ablavsky's research has focused on machine learning, computer vision, and autonomous systems. His broader interests include the application of artificial intelligence to problems in diverse domains and the role of AI in our society.
Education
B.A. Mathematics, Brandeis University, 1992
M.S. Computer Science, University of Massachusetts at Amherst, 1996
Ph.D. Computer Science, Boston University, 2011
Publications
2000-present and while at APL-UW
SSP-GNN: Learning to track via bilevel optimization
Golias, G., M. Nakura-Fan, and V. Ablavsky, "SSP-GNN: Learning to track via bilevel optimization," in Proc., 27th International Conference on Information Fusion, 8-11 July 2024, Venice, Italy, doi:10.23919/FUSION59988.2024.10706332 (IEEE, 2024).
11 Oct 2024
We propose a graph-based tracking formulation for multi-object tracking (MOT) where target detections contain kinematic information and re-identification features (attributes). Our method applies a successive shortest paths (SSP) algorithm to a tracking graph defined over a batch of frames. The edge costs in this tracking graph are computed via a message-passing network, a graph neural network (GNN) variant. The parameters of the GNN, and hence the tracker, are learned end-to-end on a training set of example ground-truth tracks and detections. Specifically, learning takes the form of bilevel optimization guided by our novel loss function. We evaluate our algorithm on simulated scenarios to understand its sensitivity to scenario aspects and model hyperparameters. Across varied scenario complexities, our method compares favorably to a strong baseline.
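The shortest-path tracking formulation in the abstract can be illustrated with a toy sketch. This is a simplified stand-in, not the paper's method: edge costs are hand-set here rather than produced by a message-passing GNN, and tracks are extracted as greedy node-disjoint shortest paths instead of the full residual-graph SSP; node names (S, T, a1, b1, ...) and the cost values are illustrative assumptions.

```python
# Toy sketch of shortest-path tracking on a detection graph.
# Assumptions (not from the paper): hand-set edge costs instead of
# GNN-computed ones, and greedy node-disjoint path extraction
# instead of the full residual-graph SSP algorithm.

def bellman_ford(nodes, edges, src, dst):
    """Shortest src->dst path; handles negative edge costs (graph is acyclic)."""
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v], prev[v] = dist[u] + c, u
    if dst not in prev:
        return None  # sink unreachable
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

def extract_tracks(edges, k):
    """Pull up to k node-disjoint tracks (S -> detections -> T) off the graph."""
    nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    tracks = []
    for _ in range(k):
        path = bellman_ford(nodes, edges, "S", "T")
        if path is None:
            break
        tracks.append(path)
        used = set(path) - {"S", "T"}  # claim this path's detections
        edges = [(u, v, c) for u, v, c in edges if u not in used and v not in used]
        nodes -= used
    return tracks

# Two frames with two detections each; negative costs reward likely matches.
edges = [
    ("S", "a1", 1.0), ("S", "a2", 1.0),      # track-entry costs
    ("a1", "b1", -3.0), ("a1", "b2", 0.5),   # frame-1 -> frame-2 association costs
    ("a2", "b1", 0.5), ("a2", "b2", -2.0),
    ("b1", "T", 1.0), ("b2", "T", 1.0),      # track-exit costs
]
tracks = extract_tracks(edges, k=2)
print(tracks)  # [['S', 'a1', 'b1', 'T'], ['S', 'a2', 'b2', 'T']]
```

In the paper's end-to-end setting, the association costs above would be the GNN's learned outputs, so improving them via the bilevel training loss directly improves which paths the solver selects.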
ZeroWaste dataset: Towards deformable object segmentation in cluttered scenes
Bashkirova, D., and 9 others including V. Ablavsky, "ZeroWaste dataset: Towards deformable object segmentation in cluttered scenes," Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 18-24 June 2022, New Orleans, LA, doi:10.1109/CVPR52688.2022.02047 (IEEE, 2022).
27 Sep 2022
Less than 35% of recyclable waste is actually recycled in the US, which leads to increased soil and sea pollution and is a major concern for environmental researchers and the general public alike. At the heart of the problem are the inefficiencies of the waste sorting process (separating paper, plastic, metal, glass, etc.) due to the extremely complex and cluttered nature of the waste stream. Recyclable waste detection poses a unique computer vision challenge, as it requires detection of highly deformable and often translucent objects in cluttered scenes without the kind of context information usually present in human-centric datasets. This challenging computer vision task currently lacks suitable datasets or methods in the available literature. In this paper, we take a step towards computer-aided waste detection and present the first in-the-wild, industrial-grade waste detection and segmentation dataset, ZeroWaste. We believe that ZeroWaste will catalyze research in object detection and semantic segmentation in extreme clutter, as well as applications in the recycling domain.
The 6th AI City Challenge
Naphade, M., and 16 others including V. Ablavsky, "The 6th AI City Challenge," Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 19-20 June 2022, New Orleans, LA, doi:10.1109/CVPRW56347.2022.00378 (IEEE, 2022).
23 Aug 2022
The 6th edition of the AI City Challenge focuses on problems in two domains where there is tremendous unlocked potential at the intersection of computer vision and artificial intelligence: intelligent traffic systems (ITS) and brick-and-mortar retail businesses. The four challenge tracks of the 2022 AI City Challenge received participation requests from 254 teams across 27 countries. Track 1 addressed city-scale multi-target multi-camera (MTMC) vehicle tracking. Track 2 addressed natural-language-based vehicle track retrieval. Track 3 was a brand-new track for naturalistic driving analysis, where the data were captured by several cameras mounted inside the vehicle focusing on driver safety, and the task was to classify driver actions. Track 4 was another new track, aiming to achieve automated retail checkout using only a single-view camera. We released two leaderboards for submissions based on different methods: a public leaderboard for the contest, where no use of external data is allowed, and a general leaderboard for all submitted results. The top performances of participating teams established strong baselines and even outperformed the state of the art in the proposed challenge tracks.