Publications
You can also find my articles on my Google Scholar profile. (* for co-first author, # for corresponding author)
AI Security:
[Preprint] Tianlin Li, Qian Liu, Tianyu Pang, Chao Du, Qing Guo, Yang Liu, Min Lin. Purifying Large Language Models by Ensembling a Small Language Model.
[Preprint] Yihao Huang, Yue Cao, Tianlin Li#, Felix Juefei-Xu, Di Lin, Ivor W Tsang, Yang Liu, Qing Guo#. On the robustness of segment anything.
[Neurocomputing 2025] Mingsi Wang, Jiachen Zhou, Tianlin Li, Guozhu Meng, Kai Chen. A survey on physical adversarial attacks against face recognition systems.
[TDSC 2025] Qi Zhou, Dongxia Wang, Tianlin Li, Zhihong Xu, Yang Liu, Kui Ren, Wenhai Wang, Qing Guo. FoolSDEdit: Deceptively steering your edits towards targeted attribute-aware distribution.
[ICECCS 2025 Position] Tianlin Li, Qiang Hu#, Chong Wang, Jian Zhang, Wei Ma, Aishan Liu, Jingyi Wang, Yang Liu. An Analytical Perspective on Software Engineering for Large Language Models.
[ICML 2025 Workshop] Zeming Wei, Tianlin Li, Xiaojun Jia, Yihao Zhang, Yang Liu, Meng Sun. Position: Agent-specific trustworthiness risk as a research priority.
[ICML 2025] Qi Zhou, Dongxia Wang#, Tianlin Li#, Yun Lin, Yang Liu, Jin Song Dong, Qing Guo. Defending LVLMs Against Vision Attacks through Partial-Perception Supervision.
[TIFS 2025] Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, Jiakai Wang, Yanjun Pu, Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing Guo, Dacheng Tao. Compromising Embodied Agents with Contextual Backdoor Attacks.
[TOSEM 2025] Xiaoyu Zhang, Cen Zhang, Tianlin Li, Yihao Huang, Xiaojun Jia, Xiaofei Xie, Yang Liu, Chao Shen. A Mutation-Based Method for Multi-Modal Jailbreaking Attack Detection.
[USENIX Security 2025] Jiachen Zhou, Mingsi Wang, Tianlin Li, Guozhu Meng, Kai Chen. Dormant: Defending against Pose-driven Human Image Animation.
[ICSE 2025] Shide Zhou, Tianlin Li#, Kailong Wang#, Yihao Huang, Ling Shi, Yang Liu, Haoyu Wang. Understanding the Effectiveness of Coverage Criteria for Large Language Models: A Special Angle from Jailbreak Attacks.
[ICSE 2025] Yisong Xiao, Aishan Liu, Xinwei Zhang, Tianyuan Zhang, Tianlin Li, Siyuan Liang, Xianglong Liu, Yang Liu, Dacheng Tao. BDefects4NN: A Backdoor Defect Database for Controlled Localization Studies in Neural Networks.
[AAAI 2025 oral] Yihao Huang, Le Liang, Tianlin Li, Xiaojun Jia, Run Wang, Weikai Miao, Geguang Pu, Yang Liu. Perception-guided jailbreak against text-to-image models.
[NeurIPS 2024] Yanxin Yang, Chentao Jia, Dengke Yan, Ming Hu, Tianlin Li, Xiaofei Xie, Xian Wei, Mingsong Chen. SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification.
[FORGE 2024] Guanyu Wang, Yuekang Li, Yi Liu, Gelei Deng, Tianlin Li, Guosheng Xu, Yang Liu, Haoyu Wang, Kailong Wang. MeTMaP: Metamorphic testing for detecting false vector matching problems in LLM augmented generation.
[ICLR 2024] Yanzhou Li, Tianlin Li#, Kangjie Chen#, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu. BadEdit: Backdooring Large Language Models by Model Editing.
[ICLR 2024] Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo. IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks.
[TMM 2024] Zixin Yin, Jiakai Wang, Yisong Xiao, Hanqing Zhao, Tianlin Li, Wenbo Zhou, Aishan Liu, Xianglong Liu. Improving Deepfake Detection Generalization by Invariant Risk Minimization.
[AAAI 2024] Yihao Huang, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, Yang Liu. Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models.
[TOSEM 2021] Xiaofei Xie*, Tianlin Li*, Jian Wang, Lei Ma, Qing Guo, Felix Juefei-Xu, Yang Liu. NPC: Neuron path coverage via characterizing decision logic of deep neural networks.
[TIP 2020] Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li. Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity.
[Information Sciences 2020] Tianlin Li*, Aishan Liu*, Xianglong Liu, Yitao Xu, Chongzhi Zhang, Xiaofei Xie. Understanding adversarial robustness via critical attacking route.
AI Fairness:
[Preprint] Tianlin Li*, Xiaoyu Zhang*, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu. Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One.
[FSE 2025] Zhenpeng Chen, Xinyue Li, Jie M. Zhang, Weisong Sun, Ying Xiao, Tianlin Li, Yiling Lou, Yang Liu. Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
[ICSE 2025] Lili Quan, Tianlin Li#, Xiaofei Xie, Zhenpeng Chen, Sen Chen, Lingxiao Jiang, Xiaohong Li#. Dissecting Global Search: A Simple Yet Effective Method to Boost Individual Discrimination Testing and Repair.
[ICSE 2024] Tianlin Li*, Yue Cao*, Jian Zhang, Shiqian Zhao, Yihao Huang, Aishan Liu, Qing Guo, Yang Liu. RUNNER: Responsible UNfair NEuron Repair for Enhancing Deep Neural Network Fairness.
[TOSEM 2023] Tianlin Li, Xiaofei Xie, Jian Wang, Qing Guo, Aishan Liu, Lei Ma, Yang Liu. Faire: Repairing Fairness of Neural Networks via Neuron Condition Synthesis.
[ICML 2023] Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu. FAIRER: Fairness as Decision Rationale Alignment.
[IJCAI 2023] Tianlin Li, Zhiming Li, Anran Li, Mengnan Du, Aishan Liu, Qing Guo, Guozhu Meng, Yang Liu. Fairness via Group Contribution Matching.
[ISSTA 2023] Yisong Xiao, Aishan Liu, Tianlin Li, Xianglong Liu. Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing.
Trustworthy Code Intelligence:
[ICSE NIER 2026] Jingyao Zhang, Tianlin Li#, Xiaoyu Zhang, Qiang Hu, Bin Shi#. Unveiling the Potential of Diffusion Large Language Models in Software Engineering Tasks: An Empirical Study.
[TOSEM 2025] Chong Wang, Jian Zhang, Yebo Feng, Tianlin Li, Weisong Sun, Yang Liu, Xin Peng. Teaching Code LLMs to Use Autocompletion Tools in Repository-Level Code Generation.
[ICSE NIER 2025] Chong Wang, Zhenpeng Chen, Tianlin Li, Yilun Zhao, Yang Liu. Towards Trustworthy LLMs for Code: A Data-Centric Synergistic Auditing Framework.
[ASE 2024] Jian Zhang, Chong Wang, Anran Li, Wenhan Wang, Tianlin Li, Yang Liu. VulAdvisor: Natural Language Suggestion Generation for Software Vulnerability Repair.
[LREC-Coling 2024] Zhiming Li, Yanzhou Li, Tianlin Li#, Mengnan Du, Bozhi Wu, Yushi Cao, Xiaofei Xie, Yi Li, Yang Liu. Unveiling Project-Specific Bias in Neural Code Models.
[ASE 2023] Jian Zhang, Shangqing Liu, Xu Wang, Tianlin Li, Yang Liu. Learning to Locate and Describe Vulnerabilities.
Interpretability and Its Applications:
[TOSEM 2025] Shide Zhou, Tianlin Li#, Yihao Huang, Ling Shi, Kailong Wang#, Yang Liu, Haoyu Wang. NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing.
[ICLR 2025] Xiaoyu Zhang, Juan Zhai, Shiqing Ma, Chao Shen, Tianlin Li, Weipeng Jiang, Yang Liu. Speculative Coreset Selection for Task-Specific Fine-tuning.
[EMSOFT 2024, TCAD] Zeke Xia, Ming Hu, Dengke Yan, Xiaofei Xie, Tianlin Li, Anran Li, Junlong Zhou, Mingsong Chen. CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance.
[ICML 2024] Zhiming Li, Yushi Cao, Yan Zheng#, Xu Liu, Bozhi Wu, Tianlin Li#, Xiufeng Xu, Junzhe Jiang, Yon Shin Teo, Shang-Wei Lin, Yang Liu. Improving Neural Logic Machines via Failure Reflection.
[AAAI 2024 oral] Ming Hu, Yue Cao, Anran Li, Zhiming Li, Chengwei Liu, Tianlin Li, Mingsong Chen, Yang Liu. FedMut: Generalized Federated Learning via Stochastic Mutation.
[ICLR 2020] Ruofan Liang*, Tianlin Li*, Longfei Li, Jing Wang, Quanshi Zhang. Knowledge consistency between neural networks.
Others:
[Best Paper, 28th National Annual Conference on Computers for Harsh Environments] Tianlin Li, Hongwei Yang, Dianfu Ma. Formal Verification Method and Implementation of CPU Pipeline Datapath Structures.
