Research on Improving Response Reliability and Consistency in LLM Using Prompt Engineering Techniques
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 장민규 | - |
| dc.contributor.author | 최상민 | - |
| dc.date.accessioned | 2025-11-17T05:00:15Z | - |
| dc.date.available | 2025-11-17T05:00:15Z | - |
| dc.date.issued | 2025-10 | - |
| dc.identifier.issn | 1226-833X | - |
| dc.identifier.issn | 2765-5415 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/80843 | - |
| dc.description.abstract | Large language models (LLMs) can exhibit reliability issues—most notably hallucinations—that undermine their suitability for high-stakes applications. We investigate a simple yet effective prompt-engineering strategy to improve response consistency and stability without modifying model internals. Our central technique prompts models to provide the reasoning evidence for their answers, aiming for task-agnostic, broadly applicable gains in consistency. We compare two engineered prompts against one non-engineered baseline across three models (GPT-4o-mini, Llama-3.1-8B, and Gemini-2.0-Flash-Lite) and four datasets (BoolQ, QNLI, MRPC, SST-2). For every model–dataset–prompt combination, we run 10 trials and evaluate Accuracy, Precision, Recall, F1 score, and the standard deviation of F1. The engineered prompt yields the lowest F1 standard deviation across the full experimental suite, indicating markedly improved response stability; on several datasets, it also achieves substantial F1 gains over non-engineered prompts. These results suggest that explicitly requesting post-answer reasoning is a practical, cost-efficient, and broadly applicable method for reducing output variability and enhancing overall reliability in LLMs. Code: https://anonymous.4open.science/r/LLM_consitency-B0ED/README.md | - |
| dc.format.extent | 11 | - |
| dc.language | Korean | - |
| dc.language.iso | KOR | - |
| dc.publisher | 한국산업융합학회 | - |
| dc.title | 프롬프트 엔지니어링 기법을 활용한 LLM의 응답 안정성 및 일관성 향상에 관한 연구 | - |
| dc.title.alternative | Research on Improving Response Reliability and Consistency in LLM Using Prompt Engineering Techniques | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.bibliographicCitation | 한국산업융합학회논문집, v.28, no.5, pp 1517 - 1527 | - |
| dc.citation.title | 한국산업융합학회논문집 | - |
| dc.citation.volume | 28 | - |
| dc.citation.number | 5 | - |
| dc.citation.startPage | 1517 | - |
| dc.citation.endPage | 1527 | - |
| dc.type.docType | Y | - |
| dc.identifier.kciid | ART003259726 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | Large Language Models | - |
| dc.subject.keywordAuthor | Prompt Engineering | - |
| dc.subject.keywordAuthor | Response Consistency | - |
| dc.subject.keywordAuthor | Reliability | - |
| dc.subject.keywordAuthor | Reasoning | - |
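The abstract's evaluation protocol (repeated trials per model–dataset–prompt combination, scored by F1 and its standard deviation as a stability measure) can be sketched as follows. This is a minimal illustration, not the authors' released code: the prompt template and function names are hypothetical, and the binary F1 computation assumes simple yes/no labels as in BoolQ-style tasks.

```python
import statistics

# Hypothetical prompt in the spirit of the paper's engineered prompt:
# the model answers first, then supplies reasoning evidence for that answer.
ENGINEERED_PROMPT = (
    "Answer the question with yes or no. "
    "After your answer, provide the reasoning evidence that supports it.\n"
    "Question: {question}"
)

def f1_score(preds, golds, positive="yes"):
    """Binary F1 for one trial's predictions against gold labels."""
    tp = sum(p == positive and g == positive for p, g in zip(preds, golds))
    fp = sum(p == positive and g != positive for p, g in zip(preds, golds))
    fn = sum(p != positive and g == positive for p, g in zip(preds, golds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def stability(trial_predictions, golds):
    """Mean F1 and F1 standard deviation across repeated trials.

    A lower standard deviation indicates more stable (consistent) responses,
    which is the paper's headline metric for the engineered prompt.
    """
    scores = [f1_score(preds, golds) for preds in trial_predictions]
    return statistics.mean(scores), statistics.stdev(scores)
```

In this framing, each element of `trial_predictions` holds one trial's model outputs for the same inputs; running 10 trials per combination and comparing the resulting F1 standard deviations across prompts reproduces the comparison the abstract describes.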
