About me


I’m a research fellow at the College of Computing and Data Science (CCDS), Nanyang Technological University, working under the supervision of Prof. Liu Yang. I have been awarded the AISG PhD Fellowship and the DAAD AInet Fellowship, and I won third place in the AISG Trusted Media Challenge, which carried a cash prize of 25,000 SGD. I also feel fortunate to work with Tianyu Pang, Chao Du, Qian Liu, and Min Lin at Sea AI Lab.

My research focuses on developing trustworthy AI software, a direction at the intersection of AI and Software Engineering. It spans three levels of the AI stack: AI infrastructure, AI models, and AI agents, across applications such as chatbots, code generation, and medical scenarios.

News


  •   June 2025: I have been awarded the Chinese Government Award for Outstanding Self-Financed Students.
  •   May 2025: Our paper “Defending LVLMs Against Vision Attacks through Partial-Perception Supervision” is accepted by ICML 2025.
  •   April 2025: I have been selected as one of the best reviewers for AISTATS 2025.
  •   April 2025: Our paper “Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?” is accepted by FSE 2025.
  •   March 2025: Our paper “Compromising Embodied Agents with Contextual Backdoor Attacks” is accepted by TIFS 2025.
  •   March 2025: Our paper “NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing” is accepted by TOSEM 2025.
  •   March 2025: Our paper “JailGuard: A Universal Detection Framework for Prompt-based Attacks on LLM Systems” is accepted by TOSEM 2025.
  •   January 2025: Our paper “Dormant: Defending against Pose-driven Human Image Animation” is accepted by USENIX Security 2025.
  •   January 2025: Our paper “Speculative Coreset Selection for Task-Specific Fine-tuning” is accepted by ICLR 2025.
  •   January 2025: Our paper “Understanding the Effectiveness of Coverage Criteria for Large Language Models: A Special Angle from Jailbreak Attacks” is accepted by ICSE 2025.
  •   January 2025: Our paper “Dissecting Global Search: A Simple yet Effective Method to Boost Individual Discrimination Testing and Repair” is accepted by ICSE 2025.
  •   January 2025: Our paper “Perception-Guided Jailbreak Against Text-to-Image Models” has been selected for an oral presentation at AAAI 2025.
  •   January 2025: Our paper “Teaching Code LLMs to Use Autocompletion Tools in Repository-Level Code Generation” is accepted by TOSEM 2025.
  •   December 2024: Our paper “Towards Trustworthy LLMs for Code: A Data-Centric Synergistic Auditing Framework” is accepted by the ICSE 2025 NIER track.
  •   December 2024: Our paper “Perception-Guided Jailbreak Against Text-to-Image Models” is accepted by AAAI 2025.
  •   November 2024: I have been selected as one of the top reviewers for NeurIPS 2024 (1304/15160, 8.6%).
  •   November 2024: We won the championship in the NTU 2024 Staff 3x3 Basketball Tournament and achieved 1st runner-up in the 2024 Sports Challenge Basketball Event.
  •   October 2024: Our paper “BDefects4NN: A Backdoor Defect Database for Controlled Localization Studies in Neural Networks” is accepted by ICSE 2025. Congrats to Yisong!
  •   September 2024: Our paper “SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification” is accepted by NeurIPS 2024.
  •   August 2024: Our paper “VulAdvisor: Natural Language Suggestion Generation for Software Vulnerability Repair” is accepted by ASE 2024.
  •   July 2024: Our paper “CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance” is accepted by EMSOFT 2024 and TCAD.
  •   May 2024: Our paper “Improving Neural Logic Machines via Failure Reflection” is accepted by ICML 2024. Congrats to Zhiming!
  •   April 2024: I received the DAAD AInet Fellowship.
  •   February 2024: Our paper “Unveiling Project-Specific Bias in Neural Code Models” is accepted by COLING 2024.
  •   February 2024: Our paper “BadEdit: Backdooring Large Language Models by Model Editing” is accepted by ICLR 2024.
  •   February 2024: Our paper “IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks” is accepted by ICLR 2024.
  •   December 2023: Our paper “FedMut: Generalized Federated Learning via Stochastic Mutation” is accepted by AAAI 2024 (oral).
  •   December 2023: Our paper “Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Model” is accepted by AAAI 2024.