Portfolio item number 1
Short description of portfolio item number 1
University of California, Riverside
Ph.D. in Computer Science (2022 – present)
- arXiv preprint, 2024
We survey and systematize vulnerabilities in large language models exposed by adversarial attacks, including jailbreaks, prompt injection, privacy leakage, and multimodal threats.
- ACL Tutorial, 2024
A comprehensive taxonomy of adversarial attacks on large language models, covering jailbreaks, prompt injection, and multimodal threats.
- USENIX Security, 2024
We present the first end-to-end attacks on shared state in commercial multi-user augmented reality systems, enabling cross-user data poisoning and exfiltration.
- EMNLP, 2025
We introduce the first attention-manipulation jailbreak, demonstrating how targeted attention perturbations can bypass safety alignment in large language models with high success and low cost.
- IEEE Symposium on Security and Privacy (S&P) — under submission, 2026
We present AttenMIA, the first membership inference attack on large language models that exploits attention signals to infer training data membership.
- The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) — under submission, 2026
We show that over-parameterized machine learning models can act as covert, high-capacity storage channels, enabling hidden data injection and exfiltration without degrading task performance.
Published:
I presented our paper “That Doesn’t Go There: Attacks on Shared State in Multi-User Augmented Reality Applications” at the USENIX Security Symposium 2024.
Undergraduate Course, UC Riverside, Computer Science, 2026
I have served as a TA for this course in the following quarters: Fall 2023, Winter 2024, Fall 2024, Winter 2025, Spring 2025, Fall 2025, and Winter 2026.