Ke-Han Lu

I am Ke-Han Lu, a first-year Ph.D. student at National Taiwan University, advised by Prof. Hung-Yi Lee. My research interests lie in multimodal language models, with a particular focus on cross-modal alignment and on leveraging powerful language models to enhance multimodal systems.

Research Experience

Instruction-following Speech Language Models I am currently developing instruction-following speech language models in collaboration with NVIDIA. We have proposed DeSTA [1, 4], a scalable and robust framework for training these general-purpose speech systems, and I have co-authored papers on evaluation benchmarks [6] and systems [2, 3] in this research direction. I have experience fine-tuning large-scale language models with NeMo and Megatron-LM.

Automatic Speech Recognition I have focused on improving the recognition accuracy of non-autoregressive ASR systems by injecting linguistic knowledge from pre-trained language models through cross-modal alignment and knowledge distillation [8, 9]. I have experience training ASR systems with ESPnet and pre-training a Mandarin wav2vec 2.0 model with fairseq.

Publications

  1. Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data
    Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, Chao-Han Huck Yang, Jagadeesh Balam, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee
    arXiv preprint · Paper · GitHub
  2. SpeechCaps: Advancing Instruction-Based Universal Speech Models with Multi-Talker Speaking Style Captioning
    Chien-yu Huang, Min-Han Shih, Ke-Han Lu, Chi-Yuan Hsiao, Hung-yi Lee
    arXiv preprint · Paper
  3. Speech-Copilot: Leveraging Large Language Models for Speech Processing via Task Decomposition, Modularization, and Program Generation
    Chun-Yi Kuan, Chih-Kai Yang, Wei-Ping Huang, Ke-Han Lu, Hung-yi Lee
    IEEE SLT 2024 · Paper
  4. DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment
    Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, He Huang, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee
    InterSpeech 2024 · Paper
  5. HypR: A comprehensive study for ASR hypothesis revising with a reference corpus
    Yi-Wei Wang, Ke-Han Lu, Kuan-Yu Chen
    InterSpeech 2024 · Paper · GitHub
  6. Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech
    Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chi-Yuan Hsiao, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee
    ICASSP 2024 · Paper · GitHub
  7. Investigating Zero-Shot Generalizability on Mandarin-English Code-Switched ASR and Speech-to-Text Translation of Recent Foundation Models with Self-Supervision and Weak Supervision
    Chih-Kai Yang, Kuan-Po Huang, Ke-Han Lu, Chun-Yi Kuan, Chi-Yuan Hsiao, Hung-yi Lee
    ICASSP 2024 Workshop · Paper
  8. A Context-aware Knowledge Transferring Strategy for CTC-based ASR
    Ke-Han Lu, Kuan-Yu Chen
    IEEE SLT 2022 · Paper · GitHub
  9. Non-autoregressive ASR Modeling using Pre-trained Language Models for Chinese Speech Recognition
    Fu-Hao Yu, Kuan-Yu Chen, Ke-Han Lu
    IEEE/ACM Transactions on Audio, Speech, and Language Processing · Paper
  10. A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021
    Ke-Han Lu, Bo-Han Fang, Kuan-Yu Chen
    Poster spotlight, VQA Workshop, CVPR 2021 · Paper · Video · Leaderboard
  11. ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-based semantic analyzer with word-level information
    Ke-Han Lu, Kuan-Yu Chen
    ROCLING 2021: Conference on Computational Linguistics and Speech Processing · Paper
  12. A Preliminary Study of Formosa Speech Recognition Challenge 2020 – Taiwanese ASR
    Fu-Hao Yu, Ke-Han Lu, Yi-Wei Wang, Wei-Zhe Chang, Wei-Kai Huang, Kuan-Yu Chen
    International Journal of Computational Linguistics and Chinese Language Processing · Paper

Education

  • National Taiwan University
    • Ph.D. in Communication Engineering
      • Feb 2024 - Present
  • National Taiwan University of Science and Technology
    • M.S. in Computer Science and Information Engineering
      • Sep 2020 - Feb 2023
  • National Taiwan University of Science and Technology
    • B.S. in Computer Science and Information Engineering
      • Sep 2016 - Jun 2020

Awards

  • NSTC Graduate Research Fellowship (NSTC-GRF)
  • 16th TaiwanTech Outstanding Youth Award

Skills

  • Programming: Python, PyTorch, JavaScript, LaTeX
  • Software and tools: Linux, Docker, Git, NeMo, Megatron-LM, ESPnet, Hugging Face Transformers, fairseq
  • Languages: Mandarin (native), English (fluent)