SwiftMOS: A Fast and Lightweight Moving Object Segmentation via Feature Flowing Direct View Transformation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Minjae | - |
| dc.contributor.author | Kim, Ungsik | - |
| dc.contributor.author | Kim, Gun-Woo | - |
| dc.contributor.author | Lee, Suwon | - |
| dc.date.accessioned | 2026-02-09T04:30:13Z | - |
| dc.date.available | 2026-02-09T04:30:13Z | - |
| dc.date.issued | 2026-03 | - |
| dc.identifier.issn | 2377-3766 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/82335 | - |
| dc.description.abstract | Autonomous vehicles must recognize their surroundings and distinguish between dynamic and static objects to avoid collisions. Most recent moving object segmentation (MOS) studies project LiDAR point-cloud streams into multiple views to capture spatio-temporal cues. When a single view proves insufficient, the common strategy is to combine or convert views. However, existing methods rebuild features in full 3D space during conversion, significantly increasing time and memory costs. They also depend on high-end graphics processing units, which limits their on-board deployment. We introduce SwiftMOS, a lightweight framework centered on a direct view transformation module that rapidly converts bird's-eye view to range view without restoring 3D features. Direct view transformation leverages precomputed grids and view-specific hint maps to efficiently map coordinates, and an addition-fusion feature flow quickly merges past and current frames to capture the spatio-temporal context. Extensive quantitative and qualitative experiments on the SemanticKITTI dataset confirm the validity and real-time capability of the proposed method. SwiftMOS achieves a substantial latency reduction while maintaining competitive performance. It operates within the typical LiDAR scan rate on on-board hardware and also achieves robust performance on the Sipailou Campus Dataset, underscoring its practicality for real-world autonomous driving. | - |
| dc.format.extent | 8 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | SwiftMOS: A Fast and Lightweight Moving Object Segmentation via Feature Flowing Direct View Transformation | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/LRA.2026.3655199 | - |
| dc.identifier.scopusid | 2-s2.0-105028441526 | - |
| dc.identifier.wosid | 001673823600045 | - |
| dc.identifier.bibliographicCitation | IEEE Robotics and Automation Letters, v.11, no.3, pp 2873 - 2880 | - |
| dc.citation.title | IEEE Robotics and Automation Letters | - |
| dc.citation.volume | 11 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 2873 | - |
| dc.citation.endPage | 2880 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Robotics | - |
| dc.relation.journalWebOfScienceCategory | Robotics | - |
| dc.subject.keywordAuthor | Computer Vision for Transportation | - |
| dc.subject.keywordAuthor | Deep Learning for Visual Perception | - |
| dc.subject.keywordAuthor | Object Detection, Segmentation and Categorization | - |
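The abstract describes a direct view transformation that maps bird's-eye-view (BEV) features to range view (RV) through precomputed coordinate grids rather than rebuilding features in 3D, and an addition-fusion feature flow that merges past and current frames. A minimal NumPy sketch of that idea follows; every function name, grid resolution, and shape here is an illustrative assumption, not the paper's actual implementation or API.

```python
import numpy as np

# Hypothetical sketch: each LiDAR point is assigned a BEV cell and an RV pixel
# once, offline. At inference, an RV feature map is filled by a single gather
# from the BEV feature map via those precomputed indices (no 3D restoration),
# and consecutive frames are fused by plain addition. All sizes are assumptions.

def precompute_grid(points, bev_size=32, rv_size=(8, 64), r_max=50.0):
    """Map each point to flat (BEV cell, RV pixel) indices, computed once."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # BEV cell index from the x/y position on a square grid of side 2*r_max.
    bx = np.clip(((x + r_max) / (2 * r_max) * bev_size).astype(int), 0, bev_size - 1)
    by = np.clip(((y + r_max) / (2 * r_max) * bev_size).astype(int), 0, bev_size - 1)
    # RV pixel index from azimuth (column) and elevation (row).
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-9
    az = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    el = np.arcsin(z / r)                      # elevation angle
    u = np.clip(((az + np.pi) / (2 * np.pi) * rv_size[1]).astype(int), 0, rv_size[1] - 1)
    v = np.clip(((el + 0.5) * rv_size[0]).astype(int), 0, rv_size[0] - 1)
    return bx * bev_size + by, v * rv_size[1] + u  # flat BEV and RV indices

def direct_view_transform(bev_feat, bev_idx, rv_idx, rv_pixels):
    """Fill RV features by directly indexing BEV features (one gather)."""
    rv_feat = np.zeros((rv_pixels, bev_feat.shape[1]), dtype=bev_feat.dtype)
    rv_feat[rv_idx] = bev_feat[bev_idx]        # direct index mapping, no 3D step
    return rv_feat

def addition_fusion(prev_feat, curr_feat):
    """Merge past- and current-frame features by element-wise addition."""
    return prev_feat + curr_feat
```

The point of the sketch is the cost model: the per-pixel mapping is a pure index lookup, so the view conversion is O(number of points) gathers with no intermediate 3D feature volume, which is consistent with the latency reduction the abstract claims.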
