SwiftMOS: A Fast and Lightweight Moving Object Segmentation via Feature Flowing Direct View Transformation (Open Access)
- Authors
- Lee, Minjae; Kim, Ungsik; Kim, Gun-Woo; Lee, Suwon
- Issue Date
- Mar-2026
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Keywords
- Computer Vision for Transportation; Deep Learning for Visual Perception; Object Detection, Segmentation and Categorization
- Citation
- IEEE Robotics and Automation Letters, v.11, no.3, pp. 2873-2880
- Pages
- 8
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE Robotics and Automation Letters
- Volume
- 11
- Number
- 3
- Start Page
- 2873
- End Page
- 2880
- URI
- https://scholarworks.gnu.ac.kr/handle/sw.gnu/82335
- DOI
- 10.1109/LRA.2026.3655199
- ISSN
- 2377-3766
- Abstract
- Autonomous vehicles must recognize their surroundings and distinguish between dynamic and static objects to avoid collisions. Most recent moving object segmentation (MOS) studies project LiDAR point-cloud streams into multiple views to capture spatio-temporal cues. When a single view proves insufficient, the common strategy is to combine or convert views. However, existing methods rebuild features in full 3D space during conversion, significantly increasing time and memory costs. They also depend on high-end graphics processing units, which limits their on-board deployment. We introduce SwiftMOS, a lightweight framework centered on a direct view transformation module that rapidly converts bird's-eye view to range view without restoring 3D features. Direct view transformation leverages precomputed grids and view-specific hint maps to efficiently map coordinates, and an addition-fusion feature flow quickly merges past and current frames to capture the spatio-temporal context. Extensive quantitative and qualitative experiments on the SemanticKITTI dataset confirm the validity and real-time capability of the proposed method. SwiftMOS achieves a substantial latency reduction while maintaining competitive performance. It operates within the typical LiDAR scan rate on on-board hardware and also achieves robust performance on the Sipailou Campus Dataset, underscoring its practicality for real-world autonomous driving.
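The abstract describes mapping bird's-eye-view (BEV) features to the range view through precomputed index grids rather than re-voxelizing into 3D, followed by an addition-based fusion of past and current frame features. A minimal sketch of that idea is shown below; all function names, shapes, and ranges are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def precompute_bev_indices(points, bev_shape, bev_range):
    """For each range-view pixel's point (x, y), precompute its BEV cell index.

    This lookup is built once per scan geometry, so the later view
    transformation is a single gather with no 3D reconstruction.
    (Hypothetical helper; shapes and ranges are assumptions.)
    """
    h_bev, w_bev = bev_shape
    (x_min, x_max), (y_min, y_max) = bev_range
    xs = np.clip(((points[..., 0] - x_min) / (x_max - x_min) * h_bev).astype(int),
                 0, h_bev - 1)
    ys = np.clip(((points[..., 1] - y_min) / (y_max - y_min) * w_bev).astype(int),
                 0, w_bev - 1)
    return xs, ys  # each of shape (H_rv, W_rv)

def bev_to_range_view(bev_feat, xs, ys):
    """Gather BEV features into range-view layout with one indexing op."""
    # bev_feat: (C, H_bev, W_bev) -> (C, H_rv, W_rv)
    return bev_feat[:, xs, ys]

def addition_fusion(prev_feat, curr_feat):
    """Merge past and current frame features by element-wise addition."""
    return prev_feat + curr_feat

# Toy usage with random data standing in for a LiDAR scan.
rng = np.random.default_rng(0)
points = rng.uniform(-50, 50, size=(64, 2048, 2))   # (H_rv, W_rv, xy)
bev_feat = rng.standard_normal((32, 256, 256))      # (C, H_bev, W_bev)
xs, ys = precompute_bev_indices(points, (256, 256), ((-50, 50), (-50, 50)))
rv_feat = bev_to_range_view(bev_feat, xs, ys)       # (32, 64, 2048)
fused = addition_fusion(rv_feat, rv_feat)
```

Because the index grid is precomputed, the per-frame cost of the view change is one advanced-indexing gather and one element-wise addition, which is consistent with the latency argument made in the abstract.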
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- ETC > Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.