Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
Published:
This post is dated in the future but will show up by default. To hide future-dated posts, edit _config.yml and set future: false.
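As a quick illustration, this is what that setting looks like in a typical Jekyll _config.yml (a minimal sketch; the rest of the file is omitted):

    # _config.yml
    # When future is false, Jekyll excludes posts whose date is in the
    # future from the built site. Set it back to true, or run
    # `jekyll build --future`, to include them.
    future: false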
Blog Post number 4
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 3
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 2
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 1
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Education
Ph.D. in Computer Science (2022 – present)
University of California, Riverside
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Publications
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
- arXiv, 2024
We survey and systematize vulnerabilities in large language models exposed by adversarial attacks, including jailbreaks, prompt injection, privacy leakage, and multimodal threats.
Vulnerabilities of Large Language Models to Adversarial Attacks
- ACL Tutorial, 2024
A comprehensive taxonomy of adversarial attacks on large language models, covering jailbreaks, prompt injection, and multimodal threats.
That Doesn’t Go There: Attacks on Shared State in Multi-User Augmented Reality Applications
- USENIX Security, 2024
We present the first end-to-end attacks on shared state in commercial multi-user augmented reality systems, enabling cross-user data poisoning and exfiltration.
Attention Eclipse: Manipulating Attention to Bypass LLM Safety-Alignment
- EMNLP, 2025
We introduce the first attention-manipulation jailbreak, demonstrating how targeted attention perturbations can bypass safety alignment in large language models with high success rates at low cost.
AttenMIA: LLM Membership Inference Attack through Attention Signals
- IEEE Symposium on Security and Privacy (S&P), under submission, 2026
We present AttenMIA, the first membership inference attack on large language models that exploits attention signals to infer training data membership.
Co(ve)rtex: Machine Learning Models as Storage Channels and Their (Mis-)Applications
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), under submission, 2026
We show that over-parameterized machine learning models can act as covert, high-capacity storage channels, enabling hidden data injection and exfiltration without degrading task performance.
Talks
That Doesn’t Go There: Attacks on Shared State in Multi-User Augmented Reality Applications
Published:
I presented our paper “That Doesn’t Go There: Attacks on Shared State in Multi-User Augmented Reality Applications” at the USENIX Security Symposium 2024.
Teaching
Introduction to Computing (CS008)
Undergraduate Course, UC Riverside, Computer Science, 2023 – 2026
I have been a teaching assistant for this course in the following quarters: Fall 2023, Winter 2024, Fall 2024, Winter 2025, Spring 2025, Fall 2025, and Winter 2026.
