Cheng-Han Chiang (姜成翰)

PhD student at National Taiwan University
Machine Learning and Speech Processing Lab

Hi! I am a second-year PhD student at National Taiwan University (NTU) in Taipei, Taiwan. I am a member of the Speech Processing and Machine Learning (SPML) Lab, advised by Prof. Hung-yi Lee.

My main research interest is natural language processing, especially self-supervised learning and pre-trained language models. I started my research in the BERT era, investigating why BERT works so well on downstream tasks. In the LLM era, I still focus on pre-trained language models, including how to use LLMs in diverse scenarios and how to augment LLMs with retrieval. I am also interested in the evaluation of diverse tasks and how to reliably assess an ML system.

Latest News

  • (10.14.2023) Excited to share that I am a recipient of the Google PhD Fellowship 2023!
  • (10.10.2023) One paper accepted to EMNLP 2023 Findings! See you in Singapore!
  • (06.10.2023) Our paper Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS has been accepted as a poster paper at the RepL4NLP 2023 workshop! Check out the paper and dataset!
  • (06.04.2023) I am attending ICASSP in Rhodes, and I am one of the tutorial speakers for the tutorial "Parameter-Efficient Learning for Speech and Language Processing: Adapters, Prompts, and Reprogramming".
  • (05.18.2023) One paper accepted to INTERSPEECH 2023!
  • (05.02.2023) Two papers accepted to ACL 2023 (one main conference paper and one in Findings)! See you in Toronto!
  • (03.23.2022) I will be giving a tutorial at AACL-IJCNLP 2022 with Yung-Sung Chuang and Hung-yi Lee.

Publications

A Closer Look into Automatic Evaluation Using Large Language Models
Cheng-Han Chiang, Hung-yi Lee
(To appear) In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
Findings paper (short paper)

Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
Cheng-Han Chiang, Yung-Sung Chuang, James Glass, Hung-yi Lee
In the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023), co-located with ACL 2023
Workshop poster paper

Why We Should Report the Details in Subjective Evaluation of TTS More Rigorously
Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee
In INTERSPEECH 2023
Main conference paper

Can Large Language Models Be an Alternative to Human Evaluations in NLP?
Cheng-Han Chiang, Hung-yi Lee
In the 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Main conference paper

Are Synonym Substitution Attacks Really Synonym Substitution Attacks?
Cheng-Han Chiang, Hung-yi Lee
In Findings of the 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Findings paper

On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
Cheng-Han Chiang, Hung-yi Lee
In the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022)
Oral paper

Re-Examining Human Annotations for Interpretable NLP
Cheng-Han Chiang, Hung-yi Lee
In the Explainable Agency in Artificial Intelligence Workshop (EAAI) at the Thirty-Sixth AAAI Conference on Artificial Intelligence

Pretrained Language Model Embryology: The Birth of ALBERT
Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee
In The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)

Talks

Here are some talks I presented, pre-recorded videos for virtual conferences, and lecture videos on Machine Learning.

Contact

Email: dcml0714 AT gmail DOT com
