
👋 Hi, I'm Liang Chen (陈亮)

I'm a third-year Ph.D. student in the Department of Systems Engineering and Engineering Management at The Chinese University of Hong Kong (CUHK), advised by Prof. Kam-Fai Wong.

My research focuses on trustworthy large language models (LLMs). Recent work includes:

  • VAA (ICML 2025): A safety alignment method that identifies and upweights vulnerable examples to improve safety retention.
  • PEARL (ICLR 2025): An instruction tuning method to enhance LLM robustness in in-context learning (ICL) and retrieval-augmented generation (RAG) scenarios.
  • WatME (ACL 2024): A lossless text watermarking approach leveraging lexical redundancy during decoding.
  • CONNER (EMNLP 2023): An automatic evaluation framework to assess LLMs as generative search engines.

🧠 I'm currently exploring large reasoning models (LRMs) and am open to research discussions or collaboration opportunities.

📬 Feel free to reach out via: lchen [at] se [dot] cuhk [dot] edu [dot] hk

🌐 Homepage: https://chanliang.github.io

Pinned

  1. VAA

    [ICML 2025] Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning

    Python · 9 stars

  2. PEARL

    [ICLR 2025] PEARL: Towards Permutation-Resilient LLMs

    Python · 7 stars

  3. WatME

    [ACL 2024] WatME: Towards Lossless Watermarking Through Lexical Redundancy

    Python · 9 stars

  4. CONNER

    [EMNLP 2023] Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators

    Python · 32 stars · 2 forks

  5. ORIG

    [ACL 2023 findings] Towards Robust Personalized Dialogue Generation via Order-Insensitive Representation Regularization

    Python · 17 stars

  6. MLGroupJLU/LLM-eval-survey

    The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".

    1.5k stars · 98 forks