Cheng-Han Chiang (姜成翰)

PhD student at National Taiwan University
Speech Processing and Machine Learning Lab

Hi! I am a second-year PhD student at National Taiwan University (NTU) in Taipei, Taiwan. I am a member of the Speech Processing and Machine Learning (SPML) Lab, advised by Prof. Hung-yi Lee.

My main research interest is natural language processing, especially self-supervised learning and pre-trained language models. I started my research in the BERT era, investigating why BERT works so well on downstream tasks. In the LLM era, I still focus on pre-trained language models, including how to use LLMs in diverse scenarios and how to augment them with retrieval. I am also interested in the evaluation of diverse tasks and how to reliably assess an ML system.

Latest News

  • (01.23.2024) One paper accepted to EACL 2024! See you in Malta🇲🇹!
  • (10.14.2023) Excited to share that I am a recipient of the 2023 Google PhD Fellowship!
  • (10.10.2023) One paper accepted to EMNLP 2023 Findings! See you in Singapore!

Publications

Over-Reasoning and Redundant Calculation of Large Language Models
Cheng-Han Chiang, Hung-yi Lee
(To appear) In The 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2024)
Main conference paper

A Closer Look into Automatic Evaluation Using Large Language Models
Cheng-Han Chiang, Hung-yi Lee
In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
Findings paper (short paper)

Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
Cheng-Han Chiang, Yung-Sung Chuang, James Glass, Hung-yi Lee
In the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023), co-located with ACL 2023
Workshop poster paper

Why We Should Report the Details in Subjective Evaluation of TTS More Rigorously
Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee
In INTERSPEECH 2023
Main conference paper

Can Large Language Models Be an Alternative to Human Evaluations in NLP?
Cheng-Han Chiang, Hung-yi Lee
In The 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Main conference paper

Are Synonym Substitution Attacks Really Synonym Substitution Attacks?
Cheng-Han Chiang, Hung-yi Lee
In Findings of the 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Findings paper

On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
Cheng-Han Chiang, Hung-yi Lee
In the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022)
Oral paper

Re-Examining Human Annotations for Interpretable NLP
Cheng-Han Chiang, Hung-yi Lee
In the Explainable Agency in Artificial Intelligence Workshop (EAAI) at the Thirty-Sixth AAAI Conference on Artificial Intelligence

Pretrained Language Model Embryology: The Birth of ALBERT
Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee
In The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)

Talks

Here are some talks I have presented, including pre-recorded videos for virtual conferences and lecture videos on Machine Learning.

Contact

Email: dcml0714 AT gmail DOT com
