Tianci Liu


I am a Ph.D. Candidate in ECE at Purdue University, advised by Prof. Jing Gao. Before coming to Purdue, I spent two wonderful years at the University of Michigan, where I earned my M.S. degree in Statistics. Prior to that, I received my B.S. degree from Xiamen University.

My research goal is to develop principled methods for building knowledgeable and efficient machine learning models. My work centers on the following pillars:

  • Knowledgeable & Efficient LLMs: I design scalable methods for knowledge editing, retrieval-augmented generation (RAG), and efficient fine-tuning to build precise, adaptable, and resource-efficient (M)LLMs, enabling seamless integration of diverse knowledge sources in real-world deployments.

  • Trustworthy AI/ML: I create principled methods to understand and improve fairness and integrity in AI systems, mitigating risks and delivering reliable outcomes with minimal data requirements.

I am on the job market and open to both academic positions and industrial research roles. If you believe I might be a good fit for your institution or organization, I’d love to chat! Please feel free to reach out at liu3351[AT]purdue.edu.

news

Nov 23, 2025 Our paper “PEANuT: Parameter-Efficient Adaptation with Weight-aware Neural Tweakers” was accepted at KDD’26 Research Track.
Sep 21, 2025 Our paper “Toward Multimodal, General-Purpose, and Generalizable Knowledge Editing for Foundation Models” was accepted at ICDM’25 BlueSky Track.
Aug 20, 2025 Our papers “Towards Universal Debiasing for Language Models-based Tabular Data Generation” and “Learning to Instruct: Fine-Tuning a Task-Aware Instruction Optimizer for Black-Box LLMs” were accepted at EMNLP’25 Findings.
May 15, 2025 Our paper “RoseRAG: Robust Retrieval-augmented Generation with Small-scale LLMs via Margin-aware Preference Optimization” was accepted at ACL’25 Findings.
May 08, 2025 Our paper “RAM-Hand: Robust Acoustic Multi-Hand Pose Reconstruction Using a Microphone Array” won the Best Paper Award at SenSys’25.

selected publications

  1. Preprint
    Alternating Reinforcement Learning for Rubric-Based Reward Modeling in Non-Verifiable LLM Post-Training
    Ran Xu*, Tianci Liu*, Zihan Dong, and 6 more authors
    In arXiv preprint arXiv:2602.01511, 2026
  2. ACL’26
    OpenRubrics: Towards Scalable Synthetic Rubric Generation for Reward Modeling and LLM Alignment
    Tianci Liu*, Ran Xu*, Tony Yu, and 4 more authors
    In Main Conference of the Association for Computational Linguistics: ACL 2026, 2026
  3. ACL’25 Findings
    RoseRAG: Robust Retrieval-augmented Generation with Small-scale LLMs via Margin-aware Preference Optimization
    Tianci Liu*, Haoxiang Jiang*, Tianze Wang, and 5 more authors
    In Findings of the Association for Computational Linguistics: ACL 2025, 2025
  4. ICML’25
    Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing
    Tianci Liu, Ruirui Li, Zihan Dong, and 6 more authors
    In The Forty-Second International Conference on Machine Learning, 2025
  5. ICLR’25
    Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning
    Tianci Liu, Ruirui Li, Yunzhe Qi, and 8 more authors
    In The Thirteenth International Conference on Learning Representations, 2025
  6. ICML’24
    LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models
    Tianci Liu, Haoyu Wang, Shiyang Wang, and 2 more authors
    In The Forty-First International Conference on Machine Learning, 2024
  7. AAAI’23
    SimFair: A Unified Framework for Fairness-aware Multi-label Classification
    Tianci Liu, Haoyu Wang, Yaqing Wang, and 3 more authors
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2023
  8. EMNLP’24
    RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning
    Haoyu Wang, Tianci Liu, Ruirui Li, and 3 more authors
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024