Cited 6 times
Anchor-Net: Distance-Based Self-Supervised Learning Model for Facial Beauty Prediction
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Bae, Jiho | - |
| dc.contributor.author | Buu, Seok-Jun | - |
| dc.contributor.author | Lee, Suwon | - |
| dc.date.accessioned | 2024-05-16T01:30:26Z | - |
| dc.date.available | 2024-05-16T01:30:26Z | - |
| dc.date.issued | 2024-04 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/70569 | - |
| dc.description.abstract | In today’s society, beauty is more than just aesthetics; it has a profound impact on many aspects of life, including social interactions, self-confidence, and job opportunities. To quantify this beauty, the field of Facial Beauty Prediction (FBP) is gaining traction. In the field of FBP, traditional methods often fall short due to their reliance on absolute beauty scores, which do not fully capture the subjective nature of human aesthetic perception. This study presents a novel approach to address this gap through the development of Anchor-Net, a self-supervised learning model that predicts differences in relative beauty scores by comparing images. The objective of this research is to offer a more nuanced understanding of facial beauty by employing a reference image (anchor) alongside a prediction image, thereby aligning closer with how humans perceive aesthetic differences. To construct Anchor-Net, we first developed a Base model that predicts beauty scores using a model pre-trained with VGGFace2. This Base model was then adapted into Anchor-Net, which is designed to train on the difference in beauty scores between a reference image and a prediction image. Our methodology involved two transfer learning steps to leverage the strengths of pre-existing models while tailoring them to our specific research problem. The experimental validation of Anchor-Net was conducted on the SCUT-FBP5500 benchmark dataset, utilizing a 6:4 training-testing split and 5-fold cross-validation to ensure robust testing of the model’s predictive capabilities. The results demonstrate that Anchor-Net outperforms other state-of-the-art deep learning algorithms on all metrics: Pearson Correlation (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). Anchor-Net outperformed other models with a PC of 0.0021, MAE of 0.0055, and RMSE of 0.0065 on the 6:4 training-testing split. On average, it achieved a PC of 0.0034, MAE of 0.0155, and RMSE of 0.0135 on 5-fold cross-validation. This research proposes a novel approach to FBP and suggests a broader application of relative comparison methodologies in fields where absolute measurements fall short. | - |
| dc.format.extent | 1 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | Anchor-Net: Distance-Based Self-Supervised Learning Model for Facial Beauty Prediction | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2024.3394870 | - |
| dc.identifier.scopusid | 2-s2.0-85192216993 | - |
| dc.identifier.wosid | 001215240600001 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.12, pp 1 - 1 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 12 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 1 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | ATTRACTIVENESS | - |
| dc.subject.keywordPlus | COMPUTATION | - |
| dc.subject.keywordPlus | BENCHMARK | - |
| dc.subject.keywordAuthor | Analytical models | - |
| dc.subject.keywordAuthor | Anchor | - |
| dc.subject.keywordAuthor | Computational modeling | - |
| dc.subject.keywordAuthor | convolutional neural network | - |
| dc.subject.keywordAuthor | Convolutional neural networks | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | facial beauty prediction | - |
| dc.subject.keywordAuthor | Facial features | - |
| dc.subject.keywordAuthor | Feature extraction | - |
| dc.subject.keywordAuthor | Predictive models | - |
| dc.subject.keywordAuthor | SCUT-FBP | - |
| dc.subject.keywordAuthor | Self-supervised learning | - |
| dc.subject.keywordAuthor | self-supervised learning | - |
| dc.subject.keywordAuthor | Training | - |
| dc.subject.keywordAuthor | Transfer learning | - |
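The abstract describes evaluating predictions with Pearson Correlation (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), and an anchor-based scheme in which the model predicts the *difference* in beauty score between a reference (anchor) image and a prediction image. The sketch below illustrates both ideas in plain Python; the function names and the anchor-averaging inference step are illustrative assumptions, not the paper's actual implementation.

```python
import math

def evaluate(preds, targets):
    """Compute the three metrics named in the abstract: PC, MAE, RMSE."""
    n = len(preds)
    mean_p = sum(preds) / n
    mean_t = sum(targets) / n
    cov = sum((p - mean_p) * (t - mean_t) for p, t in zip(preds, targets))
    var_p = sum((p - mean_p) ** 2 for p in preds)
    var_t = sum((t - mean_t) ** 2 for t in targets)
    pc = cov / math.sqrt(var_p * var_t)                                 # Pearson Correlation
    mae = sum(abs(p - t) for p, t in zip(preds, targets)) / n           # Mean Absolute Error
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / n)  # Root Mean Square Error
    return pc, mae, rmse

def score_from_anchors(anchor_scores, predicted_diffs):
    """Hypothetical anchor-based inference: if the network outputs the score
    difference between each anchor and the query image, averaging
    (anchor_score + predicted_diff) over all anchors gives one plausible way
    to recover an absolute score estimate."""
    return sum(a + d for a, d in zip(anchor_scores, predicted_diffs)) / len(anchor_scores)
```

This shows why relative prediction can still yield absolute scores at test time: each known anchor score plus the predicted offset is an independent estimate, and averaging them reduces per-anchor noise.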
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
