Figure 1. An overview of LaserMix. (a) LiDAR scans contain a strong spatial prior: objects and backgrounds around the ego-vehicle follow patterned distributions across different laser beams (e.g., lower, middle, upper). (b) Following this scene structure, the proposed LaserMix blends beams from different LiDAR scans and is compatible with various popular LiDAR representations, such as range view and voxel. (c) LaserMix achieves superior results over SoTA methods in both low-data (10%, 20%, and 50% labels) and high-data (full labels) regimes on nuScenes.
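In the range-view representation, image rows correspond to laser beams, so mixing two scans reduces to swapping horizontal bands between their range images. The following is a minimal, hypothetical NumPy sketch of that idea; the band count, array shapes, and function name are assumptions for illustration and not taken from the released code.

import numpy as np

def lasermix_range_view(range_img_a, label_a, range_img_b, label_b, num_areas=3):
    # Hedged sketch: split the rows (beams) into `num_areas` horizontal bands
    # and alternate bands between scan A and scan B.
    h = range_img_a.shape[0]
    edges = np.linspace(0, h, num_areas + 1).astype(int)
    mixed_img, mixed_lbl = range_img_a.copy(), label_a.copy()
    for i in range(num_areas):
        if i % 2 == 1:  # take odd-indexed bands from scan B
            rows = slice(edges[i], edges[i + 1])
            mixed_img[rows] = range_img_b[rows]
            mixed_lbl[rows] = label_b[rows]
    return mixed_img, mixed_lbl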
Figure 2. Laser partition example. We group points whose inclination ϕ falls within the same inclination range into the same area.
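On raw point clouds, the same partition can be computed from each point's inclination ϕ = arctan(z / sqrt(x² + y²)). The sketch below is a hedged illustration of grouping points into inclination areas and alternating areas between two scans; the inclination bounds, area count, and function names are illustrative assumptions, not values from the paper.

import numpy as np

def lasermix_points(points_a, labels_a, points_b, labels_b,
                    num_areas=4, phi_min=-25.0, phi_max=3.0):
    # points_*: (N, 3+) arrays with x, y, z in the ego frame.
    # labels_*: (N,) per-point labels (ground truth or pseudo-labels).
    # phi_min / phi_max (degrees) are assumed sensor-specific bounds.
    def inclination(pts):
        # phi = arctan(z / sqrt(x^2 + y^2)), the angle above the horizon.
        return np.degrees(np.arctan2(pts[:, 2], np.linalg.norm(pts[:, :2], axis=1)))

    edges = np.linspace(phi_min, phi_max, num_areas + 1)
    area_a = np.clip(np.digitize(inclination(points_a), edges) - 1, 0, num_areas - 1)
    area_b = np.clip(np.digitize(inclination(points_b), edges) - 1, 0, num_areas - 1)

    # Alternate inclination areas between the two scans:
    # even areas come from scan A, odd areas from scan B.
    keep_a = area_a % 2 == 0
    keep_b = area_b % 2 == 1
    mixed_points = np.concatenate([points_a[keep_a], points_b[keep_b]], axis=0)
    mixed_labels = np.concatenate([labels_a[keep_a], labels_b[keep_b]], axis=0)
    return mixed_points, mixed_labels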
Figure 4. Framework overview. The labeled scan is fed into the Student net to compute the supervised loss (w/ ground truth). The unlabeled scan and its generated pseudo-label are mixed with the labeled scan and its labels via LaserMix to produce mixed data, which is then fed into the Student net to compute the mix loss. Additionally, we adopt an EMA update for the Teacher net and compute the mean teacher loss over the Student net's and Teacher net's predictions.
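The training step described above can be summarized as the hedged PyTorch-style sketch below. The loss weights, the MSE form of the mean-teacher consistency term, and all function names are illustrative assumptions; `lasermix` stands for a mixing routine such as the one sketched earlier, and the caller is assumed to perform the optimizer step.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # Teacher weights follow an exponential moving average of the Student's.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def train_step(student, teacher, x_l, y_l, x_u, lasermix,
               lambda_mix=1.0, lambda_mt=1.0):
    # Supervised loss on the labeled scan (Student net).
    loss_sup = F.cross_entropy(student(x_l), y_l)

    # Pseudo-labels for the unlabeled scan come from the Teacher net.
    with torch.no_grad():
        t_logits = teacher(x_u)
        pseudo = t_logits.argmax(dim=1)

    # Mix labeled and unlabeled scans (and their labels) via LaserMix,
    # then compute the mix loss on the Student's predictions.
    x_mix, y_mix = lasermix(x_l, y_l, x_u, pseudo)
    loss_mix = F.cross_entropy(student(x_mix), y_mix)

    # Mean-teacher consistency between Student and Teacher predictions
    # (an MSE over class probabilities is assumed here for illustration).
    loss_mt = F.mse_loss(student(x_u).softmax(dim=1), t_logits.softmax(dim=1))

    loss = loss_sup + lambda_mix * loss_mix + lambda_mt * loss_mt
    loss.backward()
    ema_update(teacher, student)
    return loss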
Figure 5. Qualitative results from the LiDAR top view and range view. Correct and incorrect predictions are painted in green and red, respectively, to highlight the differences. Best viewed in color.
@article{kong2022lasermix,
  title={LaserMix for Semi-Supervised LiDAR Semantic Segmentation},
  author={Kong, Lingdong and Ren, Jiawei and Pan, Liang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2207.00026},
  year={2022}
}