In 2020, I completed my bachelor's degree and happily spent my summer with Professor Xuezhe Ma (Max). In 2021, I was a FAIR AI resident with Dr. Xian Li and Dr. Veselin Stoyanov, working on the FAIR Accelerator Project.
I am broadly interested in Natural Language Processing (NLP) and Machine Learning (ML). In particular, I am interested in the interpretability and modeling techniques of language models, as well as in Causal Inference and Representation Learning.
Thanks to the support of my research mentors, I was fortunate to conduct research during college. I'm happy to help ambitious undergraduate students interested in natural language processing or machine learning get started with research — please feel free to email me!
Email: zeyuliu2 [strudel] cs.washington.edu
Links: [Full CV]
Apr 15, 2022: Our paper Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models was selected as the runner-up best paper at the ICLR EmeCom workshop!

Aug 26, 2021: Our paper Probing Across Time: What Does RoBERTa Know and When? will appear in Findings of EMNLP 2021!

Oct 20, 2020: Our paper Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets will appear at BlackboxNLP 2020!