I’m a fourth-year Ph.D. student in the Computer Science department at Yale University, working on Natural Language Processing. My advisor is Professor Arman Cohan, and I was very fortunate to have Professor Dragomir Radev as my advisor during my first two years at Yale. Before coming to Yale, I received my master’s degree from the Language Technologies Institute at Carnegie Mellon University, where I worked with Professor Graham Neubig, Professor Pengfei Liu, and other amazing faculty members and students on various projects related to text generation and summarization. I started my NLP research during my undergraduate studies at Fudan University, where I worked with Professor Xipeng Qiu on my bachelor’s thesis on text style transfer. Beyond academic research, I have also worked as a research intern at Google DeepMind and Microsoft Research.

My research interests are centered on developing robust evaluation methodologies for language models (arXiv 2024a, NAACL 2024 Findings, EMNLP 2023, ACL 2023), and on designing training algorithms that explicitly incorporate both human and automatic evaluations to improve language model performance (NeurIPS 2024 FITML Workshop, arXiv 2024b, NAACL 2024, ACL 2023, ACL 2022, ACL 2021, NAACL 2021). A particular area of interest is natural language generation, with a focus on text summarization.

[GitHub] [Google Scholar]

SELECTED PAPERS

ReIFE: Re-evaluating Instruction-Following Evaluation
Yixin Liu*, Kejian Shi*, Alexander R. Fabbri, Yilun Zhao, Peifeng Wang, Chien-Sheng Wu, Shafiq Joty, Arman Cohan
*Equal Contribution
[arXiv][Code]

A Meta-Algorithm for Aligning LLMs with General Preferences
Yixin Liu*, Argyris Oikonomou*, Weiqiang Zheng*, Yang Cai, Arman Cohan
*Equal Contribution
NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability [Paper]

Understanding Reference Policies in Direct Preference Optimization
Yixin Liu, Pengfei Liu, Arman Cohan
[arXiv][Code]

Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization
Yixin Liu*, Alexander R. Fabbri*, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan
*Equal Contribution
NAACL 2024 Findings [Paper][Code]

On Learning to Summarize with Large Language Models as References
Yixin Liu, Kejian Shi, Katherine S He, Longtian Ye, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, Arman Cohan
NAACL 2024 [Paper][Code]

Improving Large Language Model Fine-tuning for Solving Math Problems
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, Peter J. Liu
[arXiv]

Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation
Yixin Liu*, Alexander R. Fabbri*, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
*Equal Contribution
ACL 2023 [Paper][Code]

On Improving Summarization Factual Consistency from Natural Language Feedback
Yixin Liu, Budhaditya Deb, Milagro Teruel, Aaron Halfaker, Dragomir Radev, Ahmed H. Awadallah
ACL 2023 [Paper][Code]

BRIO: Bringing Order to Abstractive Summarization
Yixin Liu, Pengfei Liu, Dragomir Radev, Graham Neubig
ACL 2022 [Paper][Code]

SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization
Yixin Liu, Pengfei Liu
ACL 2021 [Paper][Code]

ExplainaBoard: An Explainable Leaderboard for NLP
Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Graham Neubig
ACL 2021 System Demonstrations (Best Demo Paper Award) [Paper][Code]

On Learning Text Style Transfer with Direct Rewards
Yixin Liu, Graham Neubig, John Wieting
NAACL 2021 [Paper][Code]