Fuzzy (Ordered) Filters of Ordered BCI-Algebras
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Eunsuk Yang | - |
| dc.contributor.author | Eun Hwan Roh | - |
| dc.contributor.author | Young Bae Jun | - |
| dc.date.accessioned | 2025-11-04T06:00:10Z | - |
| dc.date.available | 2025-11-04T06:00:10Z | - |
| dc.date.issued | 2025-09 | - |
| dc.identifier.issn | 1598-2645 | - |
| dc.identifier.issn | 2093-744X | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/80561 | - |
| dc.description.abstract | Text summarization and content tagging are pivotal Natural Language Processing (NLP) tasks that enhance information accessibility, organization, and decision-making, and optimize the utilization of data for diverse applications. This paper addresses the problem of automating these tasks by evaluating and comparing general-purpose and specialized models. For text summarization, a general-purpose large language model (LLM) is compared against specialized models such as the Bidirectional and Auto-Regressive Transformer (BART) and Pegasus, with a focus on accuracy, coherence, and relevance. BART achieved the highest ROUGE-1 score of 44.2553, highlighting its strong performance in abstractive summarization. For content tagging, the Bidirectional Encoder Representations from Transformers (BERT) model is evaluated on a classification dataset and benchmarked against other state-of-the-art models, including the Robustly Optimized BERT approach (RoBERTa), DistilBERT, and XLNet, to assess accuracy, speed, and applicability. BERT outperformed the other models for content tagging, achieving an accuracy of 0.9315, a precision of 0.9393, and a recall of 0.9263. The major contributions include identifying trade-offs between general-purpose and specialized models and providing recommendations for real-world applications such as news aggregation and content management systems. | - |
| dc.format.extent | 12 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Korean Institute of Intelligent Systems | - |
| dc.title | Fuzzy (Ordered) Filters of Ordered BCI-Algebras | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.5391/IJFIS.2025.25.3.272 | - |
| dc.identifier.scopusid | 2-s2.0-105017786371 | - |
| dc.identifier.wosid | 001580736700004 | - |
| dc.identifier.bibliographicCitation | International Journal of Fuzzy Logic and Intelligent Systems, v.25, no.3, pp. 272-283 | - |
| dc.citation.title | International Journal of Fuzzy Logic and Intelligent Systems | - |
| dc.citation.volume | 25 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 272 | - |
| dc.citation.endPage | 283 | - |
| dc.type.docType | Article | - |
| dc.identifier.kciid | ART003244953 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | esci | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.subject.keywordPlus | IDEALS | - |
| dc.subject.keywordPlus | IS-AN-ELEMENT-OF | - |
| dc.subject.keywordPlus | SUBALGEBRAS | - |
| dc.subject.keywordAuthor | Natural Language Processing (NLP) | - |
| dc.subject.keywordAuthor | Text summarization | - |
| dc.subject.keywordAuthor | Content Tagging | - |
| dc.subject.keywordAuthor | Large Language Model (LLM) | - |
| dc.subject.keywordAuthor | Bidirectional Encoder Representations from Transformers (BERT) | - |
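The abstract reports a ROUGE-1 score for abstractive summarization. As a minimal sketch (not the authors' evaluation pipeline, which is not described in this record), ROUGE-1 F1 can be computed from the unigram overlap between a candidate summary and a reference summary:

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Precision = overlapping unigrams / candidate unigrams;
    recall uses the reference count. Whitespace tokenization is a
    simplification; standard ROUGE tooling applies its own tokenizer.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))
```

Reported ROUGE-1 values such as 44.2553 are typically this F1 scaled to a percentage and averaged over a test set.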
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
