Cheng-Han Chiang (姜成翰)

PhD student at National Taiwan University
Machine Learning and Speech Processing Lab

Hi! I am a third-year PhD student at National Taiwan University (NTU) in Taipei, Taiwan. I am a member of the Speech Processing and Machine Learning (SPML) Lab, advised by Prof. Hung-yi Lee.

My main research interest is natural language processing, especially self-supervised learning and pre-trained language models. I started my research in the BERT era, investigating why BERT works so well on downstream tasks. In the LLM era, I still focus on pre-trained language models, including how to use LLMs in diverse scenarios and how to augment LLMs with retrieval. I am also interested in the evaluation of diverse tasks and how to reliably assess an ML system.

Latest News

  • (08.16.2024) Our paper, Merging Facts, Crafting Fallacies: Evaluating the Contradictory Nature of Aggregated Factual Claims in Long-Form Generations, is awarded the Best Paper Award at Towards Knowledgeable Language Models @ ACL 2024 Workshop!
  • (05.17.2024) One paper accepted to Findings of ACL'24! See you in Bangkok!
  • (01.23.2024) One paper accepted to EACL 2024! See you in Malta🇲🇹!
  • (10.14.2023) Excited to share that I am a recipient of the Google PhD Fellowship 2023!

Selected Publications

Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi Lee
Preprint, 2024
 

Merging Facts, Crafting Fallacies: Evaluating the Contradictory Nature of Aggregated Factual Claims in Long-Form Generations
Cheng-Han Chiang, Hung-yi Lee
In Findings of The 2024 Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Findings paper; also presented at KnowledgeableLMs workshop and Knowledge-Augmented NLP workshop at ACL 2024
🏆 Best paper award at KnowledgeableLMs workshop
     

Over-Reasoning and Redundant Calculation of Large Language Models
Cheng-Han Chiang, Hung-yi Lee
In The 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2024)
Main conference paper
 

A Closer Look into Automatic Evaluation Using Large Language Models
Cheng-Han Chiang, Hung-yi Lee
In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
Findings paper (short paper)
 

Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
Cheng-Han Chiang, Yung-Sung Chuang, James Glass, Hung-yi Lee
In the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023), co-located with ACL 2023
Workshop poster paper

Why We Should Report the Details in Subjective Evaluation of TTS More Rigorously
Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee
In INTERSPEECH 2023
Main conference paper
 

Can Large Language Models Be an Alternative to Human Evaluations in NLP?
Cheng-Han Chiang, Hung-yi Lee
In The 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Main conference paper
 

Are Synonym Substitution Attacks Really Synonym Substitution Attacks?
Cheng-Han Chiang, Hung-yi Lee
In Findings of The 2023 Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Findings

On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
Cheng-Han Chiang, Hung-yi Lee
In the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022)
Oral paper
 

Re-Examining Human Annotations for Interpretable NLP
Cheng-Han Chiang, Hung-yi Lee
In the Explainable Agency in Artificial Intelligence Workshop (EAAI) at the Thirty-Sixth AAAI Conference on Artificial Intelligence
 

Pretrained Language Model Embryology: The Birth of ALBERT
Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee
In The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
 

Talks

Here are some talks I have presented, pre-recorded videos for virtual conferences, and lecture videos on Machine Learning.

Contact

Email: dcml0714 AT gmail DOT com
