I’m a third-year Ph.D. student in the Computer Science department at Yale University working on Natural Language Processing. My advisor is Professor Arman Cohan, and I was very fortunate to have Professor Dragomir Radev as my advisor during my first two years at Yale. Before coming to Yale, I received my master’s degree from the Language Technologies Institute at Carnegie Mellon University, where I worked with Professor Graham Neubig, Professor Pengfei Liu, and other amazing faculty members and students on various projects related to text generation and summarization. I started my NLP research during my undergraduate studies at Fudan University, where I worked with Professor Xipeng Qiu on my bachelor’s thesis on text style transfer. Beyond academic research, I have also worked as a research intern at Google DeepMind (Summer 2023) and Microsoft Research (Summer 2022).

My research interests center on developing robust evaluation methodologies for language models and on designing training algorithms that explicitly incorporate both human and automatic evaluations to improve language model performance. A particular focus of mine is text summarization, which I view as an important testbed for studying natural language generation.

Please visit here for a selected list of my publications.

[GitHub] [Google Scholar] [Semantic Scholar]

LATEST PREPRINTS

Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization
Yixin Liu*, Alexander R. Fabbri*, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan
*Equal Contribution
[arXiv][Code]

On Learning to Summarize with Large Language Models as References
Yixin Liu, Kejian Shi, Katherine S. He, Longtian Ye, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, Arman Cohan
[arXiv][Code]

Improving Large Language Model Fine-tuning for Solving Math Problems
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, Peter J. Liu
[arXiv]

SELECTED PUBLICATIONS

Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation
Yixin Liu, Alexander R. Fabbri, Yilun Zhao, Pengfei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
EMNLP 2023 [Paper][Code]

Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation
Yixin Liu*, Alexander R. Fabbri*, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
*Equal Contribution
ACL 2023 [Paper][Code]

On Improving Summarization Factual Consistency from Natural Language Feedback
Yixin Liu, Budhaditya Deb, Milagro Teruel, Aaron Halfaker, Dragomir Radev, Ahmed H. Awadallah
ACL 2023 [Paper][Code]

Leveraging Locality in Abstractive Text Summarization
Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev
EMNLP 2022 [Paper][Code]

BRIO: Bringing Order to Abstractive Summarization
Yixin Liu, Pengfei Liu, Dragomir Radev, Graham Neubig
ACL 2022 [Paper][Code]

DataLab: A Platform for Data Analysis and Intervention
Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, Pengfei Liu
Outstanding Demo Paper! System Demonstrations @ACL 2022 [Paper][Code][Demo]

SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization
Yixin Liu, Pengfei Liu
ACL 2021 [Paper][Code]

ExplainaBoard: An Explainable Leaderboard for NLP
Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Graham Neubig
Best Demo Paper! System Demonstrations @ACL 2021 [Paper][Code][Demo]

RefSum: Refactoring Neural Summarization
Yixin Liu, Zi-Yi Dou, Pengfei Liu
NAACL 2021 [Paper][Code]

On Learning Text Style Transfer with Direct Rewards
Yixin Liu, Graham Neubig, John Wieting
NAACL 2021 [Paper][Code]