Publications
You can also find my articles on my Google Scholar profile.
- 9. Position: The Term “Machine Unlearning” Is Overused in LLMs
Sangyeon Yoon*, Yeachan Jun*, and Albert No
arXiv, under review, 2026
- 8. K-EXAONE Technical Report: Journey to Frontier-Level Performance of Foundation Models
LG AI Research
Technical Report, 2026 [pdf]
- 7. Rethinking Benign Relearning: Syntax as the Hidden Driver of Unlearning Failures
Sangyeon Yoon, Hyesoo Hong, Wonje Jeung, and Albert No
ICLR, 2026 [pdf]
- 6. A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models
Wonje Jeung*, Sangyeon Yoon*, Yoonjun Cho, Dongjae Jeon, Sangwoo Shin, Hyesoo Hong, and Albert No
ICLR, 2026 [pdf]
- 5. R-TOFU: Unlearning in Large Reasoning Models
Sangyeon Yoon, Wonje Jeung, and Albert No
EMNLP, Main, 2025 [pdf]
- 4. SEPS: A Separability Measure for Robust Unlearning in LLMs
Wonje Jeung*, Sangyeon Yoon*, and Albert No
EMNLP, Main, 2025 [pdf]
- 3. DUSK: Do Not Unlearn Shared Knowledge
Wonje Jeung*, Sangyeon Yoon*, Hyesoo Hong*, Soeun Kim, Seungju Han, Youngjae Yu, and Albert No
arXiv, under review, 2025 [pdf]
- 2. SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early Alignment
Wonje Jeung, Sangyeon Yoon, Minsuk Kang, and Albert No
NeurIPS, 2025 [pdf]
- 1. Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon*, Wonje Jeung*, and Albert No
NeurIPS Workshop (SFLLM), 2024 [pdf]
