A Video Frame Extrapolation Scheme Using Deep Learning-Based Uni-Directional Flow Estimation and Pixel Warping
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ban, Tae-Won | - |
| dc.date.accessioned | 2023-10-25T02:40:50Z | - |
| dc.date.available | 2023-10-25T02:40:50Z | - |
| dc.date.issued | 2023-09 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/68225 | - |
| dc.description.abstract | This paper investigates video frame extrapolation, which predicts future frames from current and past frames. Although video frame extrapolation has been studied extensively in recent years, most existing schemes yield predicted frames of unsatisfactory image quality, such as severe blurring, because the motion of future pixels is difficult to predict for multi-modal video, especially when frames change rapidly. Additional processing such as frame alignment or recurrent prediction can improve the quality of the predicted frames, but it hinders real-time extrapolation. Motivated by the significant progress in video frame interpolation based on deep learning-based flow estimation, a simplified video frame extrapolation scheme using deep learning-based uni-directional flow estimation is proposed to reduce processing time relative to conventional extrapolation schemes without compromising the image quality of the predicted frames. In the proposed scheme, the uni-directional flow is first estimated from the current and past frames by a flow network consisting of four flow blocks, and the current frame is then forward-warped along the estimated flow to predict a future frame (an illustrative sketch of this warping step is given after the table). The flow network is trained and evaluated on the Vimeo-90K triplet dataset. The performance of the proposed scheme is analyzed in terms of prediction time and the similarity between predicted and ground-truth frames, measured by the structural similarity index measure (SSIM) and the mean absolute error (MAE) of pixels, and compared with state-of-the-art schemes such as the Iterative and cycleGAN schemes. Extensive experiments show that the proposed scheme improves prediction quality by 2.1% and reduces prediction time by 99.7% compared to the state-of-the-art scheme. | - |
| dc.format.extent | 1 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | A Video Frame Extrapolation Scheme Using Deep Learning-Based Uni-Directional Flow Estimation and Pixel Warping | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2023.3319660 | - |
| dc.identifier.scopusid | 2-s2.0-85172984218 | - |
| dc.identifier.wosid | 001080672700001 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.11, pp 1 - 1 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 11 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 1 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | Bidirectional control | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | Estimation | - |
| dc.subject.keywordAuthor | Extrapolation | - |
| dc.subject.keywordAuthor | flow estimation | - |
| dc.subject.keywordAuthor | flow network | - |
| dc.subject.keywordAuthor | Generative adversarial networks | - |
| dc.subject.keywordAuthor | Real-time systems | - |
| dc.subject.keywordAuthor | Streaming media | - |
| dc.subject.keywordAuthor | Training | - |
| dc.subject.keywordAuthor | Video frame extrapolation | - |
| dc.subject.keywordAuthor | video frame prediction | - |
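
To make the pixel-warping step from the abstract concrete, the sketch below forward-warps a current frame along a given uni-directional flow field. This is a hedged illustration only, not the paper's implementation: PyTorch is assumed, the function name `forward_warp` and all tensor shapes are hypothetical, the flow network with four flow blocks is omitted entirely (the flow is treated as given), and the nearest-pixel splatting shown here is a simple stand-in for whatever warping the paper actually uses.

```python
# Minimal sketch (not the paper's code): forward-warp the current frame I_t
# along an estimated uni-directional flow F_{t->t+1} to predict I_{t+1}.
import torch


def forward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Splat frame (B, C, H, W) to the next time step using flow (B, 2, H, W).

    Nearest-pixel splatting only: practical schemes typically use bilinear or
    softmax splatting, and unfilled (disoccluded) pixels here simply stay zero.
    """
    B, C, H, W = frame.shape
    device = frame.device

    # Source pixel grid of (x, y) coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device),
        torch.arange(W, device=device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).float()            # (2, H, W)

    # Destination coordinates = source + flow, rounded and clamped to the image.
    dst = (grid.unsqueeze(0) + flow).round().long()        # (B, 2, H, W)
    dst_x = dst[:, 0].clamp(0, W - 1)
    dst_y = dst[:, 1].clamp(0, H - 1)

    pred = torch.zeros_like(frame)
    for b in range(B):
        idx = (dst_y[b] * W + dst_x[b]).view(-1)           # flat destination indices
        src = frame[b].view(C, -1)                         # (C, H*W) source pixels
        # Scatter source pixels to their destinations; if several pixels land on
        # the same location, whichever is written last wins (nondeterministic).
        pred[b].view(C, -1).scatter_(1, idx.unsqueeze(0).expand(C, -1), src)
    return pred


# Hypothetical usage: predict frame t+1 from frame t and an estimated flow.
if __name__ == "__main__":
    frame_t = torch.rand(1, 3, 64, 64)                     # current frame I_t
    flow_t = torch.randn(1, 2, 64, 64) * 2.0               # stand-in for flow-network output
    frame_t1 = forward_warp(frame_t, flow_t)
    print(frame_t1.shape)                                  # torch.Size([1, 3, 64, 64])
```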
