
Fuzzy (Ordered) Filters of Ordered BCI-Algebras

Full metadata record
DC Field: Value
dc.contributor.author: Eunsuk Yang
dc.contributor.author: Eun Hwan Roh
dc.contributor.author: Young Bae Jun
dc.date.accessioned: 2025-11-04T06:00:10Z
dc.date.available: 2025-11-04T06:00:10Z
dc.date.issued: 2025-09
dc.identifier.issn: 1598-2645
dc.identifier.issn: 2093-744X
dc.identifier.uri: https://scholarworks.gnu.ac.kr/handle/sw.gnu/80561
dc.description.abstract: Text summarization and content tagging are pivotal Natural Language Processing (NLP) tasks, enhancing information accessibility, organization, and decision-making, and optimizing the use of data for diverse applications. This paper addresses the problem of automating these tasks by evaluating and comparing general-purpose and specialized models. For text summarization, a general-purpose large language model (LLM) is compared against specialized models such as the Bidirectional and Auto-Regressive Transformer (BART) and Pegasus, focusing on accuracy, coherence, and relevance. BART achieved the highest ROUGE-1 score of 44.2553, highlighting its strong performance in abstractive summarization. For content tagging, the Bidirectional Encoder Representations from Transformers (BERT) model is evaluated on a classification dataset and benchmarked against other state-of-the-art models, including the Robustly Optimized BERT approach (RoBERTa), DistilBERT, and XLNet, to assess accuracy, speed, and applicability. BERT outperformed the other models for content tagging, achieving an accuracy of 0.9315, precision of 0.9393, and recall of 0.9263. The major contributions include identifying trade-offs between general-purpose and specialized models and providing recommendations for real-world applications such as news aggregation and content management systems.
dc.format.extent: 12
dc.language: English
dc.language.iso: ENG
dc.publisher: Korean Institute of Intelligent Systems (한국지능시스템학회)
dc.title: Fuzzy (Ordered) Filters of Ordered BCI-Algebras
dc.type: Article
dc.publisher.location: Republic of Korea
dc.identifier.doi: 10.5391/IJFIS.2025.25.3.272
dc.identifier.scopusid: 2-s2.0-105017786371
dc.identifier.wosid: 001580736700004
dc.identifier.bibliographicCitation: International Journal of Fuzzy Logic and Intelligent Systems, v.25, no.3, pp. 272-283
dc.citation.title: International Journal of Fuzzy Logic and Intelligent Systems
dc.citation.volume: 25
dc.citation.number: 3
dc.citation.startPage: 272
dc.citation.endPage: 283
dc.type.docType: Article
dc.identifier.kciid: ART003244953
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.description.journalRegisteredClass: esci
dc.description.journalRegisteredClass: kci
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.subject.keywordPlus: IDEALS
dc.subject.keywordPlus: (IS-AN-ELEMENT-OF
dc.subject.keywordPlus: SUBALGEBRAS
dc.subject.keywordAuthor: Natural Language Processing (NLP)
dc.subject.keywordAuthor: Text summarization
dc.subject.keywordAuthor: Content Tagging
dc.subject.keywordAuthor: Large Language Model (LLM)
dc.subject.keywordAuthor: Bidirectional Encoder Representations from Transformers (BERT)
Files in This Item:
There are no files associated with this item.

Appears in Collections:
College of Education (사범대학) > Department of Mathematics Education (수학교육과) > Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Jun, Young Bae
College of Education (Department of Mathematics Education)
