MS candidate specializing in AI safety, robustness, and trustworthy machine learning, with a focus on Multimodal Large Language Models (MLLMs). Experienced in developing adversarial attacks, improving model generalization, and advancing data valuation methods. Eager to apply deep technical skills and research experience to cutting-edge AI systems.
Grader
University of California, Irvine
Mar 2024 - May 2024
Graded and evaluated student work for undergraduate courses in Databases and Data Management at the School of Information and Computer Sciences.
Research Intern
University of California, Santa Barbara
Feb 2023 - May 2024
Conducted research on AI safety and robustness in Multimodal Large Language Models (MLLMs), developing novel adversarial attacks and techniques for improving model generalization.
Part-time Open Source Contributor
Cleanlab Inc.
Nov 2023 - Apr 2024
Contributed to Cleanlab's open-source library, developing Out-of-Distribution (OOD) detection methods and integrating data valuation modules.
Research Intern
Virginia Tech
May 2022 - Oct 2022
Investigated representation linearity in backdoored models, focusing on data poisoning and stealthy trigger generation.
Computer Science (Networked Systems)
University of California, Irvine
3.83/4.00
Sep 2023 - Jun 2025
Computer Science
Southeast University
3.78/4.00 (88.18/100)
Sep 2019 - Jun 2023
Student Scholarship
Southeast University
Jun 2023
Awarded a student scholarship by Southeast University in recognition of academic excellence and research potential.
Student Travel Support Award
IEEE Conference on Secure and Trustworthy ML (SaTML)
Jan 2023
Received travel support to attend the inaugural IEEE Conference on Secure and Trustworthy ML (SaTML) in 2023, in recognition of research contributions to the field.
Prompt-insensitive evaluation: Generalizing LLM evaluation across prompts through fine-tuning
Not specified
Jan 2025
Co-authored a forthcoming paper on prompt-insensitive evaluation for LLMs, focusing on generalizing evaluation across prompts via fine-tuning.
Improving adversarial transferability in MLLMs via dynamic vision-language alignment attack
Under Review
Jan 2024
Co-authored a paper proposing the Dynamic Vision-Language Alignment (DynVLA) Attack, a novel method for improving adversarial transferability in Multimodal Large Language Models; currently under review.
Frameworks & Libraries
Machine Learning & AI
Programming Languages
Research Interests
Prof. Yao Qin
Assistant Professor and Senior Research Scientist, University of California, Santa Barbara; Google DeepMind
Dr. Jindong Gu
Senior Research Fellow and Faculty Researcher, University of Oxford; Google DeepMind