About

I lead research on AI security and trustworthy machine learning, focusing on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation across vision, autonomous, embedded, and multimodal AI systems.


🔥 News

  • 2026.01: 🎉 2 papers accepted at ICLR 2026
  • 2025.11: 1 paper accepted at DATE 2026
  • 2025.10: Selected as Top Reviewer at NeurIPS 2025
  • 2025.06: 1 paper accepted at ICCV 2025
  • 2024.06: 1 paper accepted at IROS 2024
  • 2024.06: 3 papers accepted at ICIP 2024
  • 2024.02: 1 paper accepted at CVPR 2024
  • 2024.02: 1 paper accepted at DAC 2024

Research Overview

My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignment (semantic, gradient, or representational) to improve robustness while preserving efficiency and deployability.


Selected Research Projects

Below are representative research projects spanning adversarial machine learning, robustness, and secure AI systems.
Click on each project to view paper links, figures, and technical details.


TESSER
Transfer-Enhancing Adversarial Attacks from Vision Transformers (arXiv 2025)

Improves black-box adversarial transferability from Vision Transformers via spectral and semantic regularization.

Project Page →
---
DRIFT
Divergent Response Defense (ICLR 2026)

A stochastic transformation defense that disrupts gradient consensus to reduce adversarial transferability.

Project Page →
---
TriQDef
Quantized Patch Defense (ICLR 2026)

Disrupts semantic and gradient alignment across quantized models to prevent adversarial patch transferability.

Project Page →
---
DAP
Dynamic Adversarial Patch (CVPR 2024)

Adaptive adversarial patch that evades person detectors under pose, scale, and environmental variation.

Project Page →

🏆 Awards & Honors

  • Top Reviewer, NeurIPS 2025
  • Best Senior Researcher Award, eBRAIN Lab, NYU Abu Dhabi (2023)
  • Erasmus+ Scholarship (2019)
  • DAAD Grants: ATIoT (2018), Young ESEM Program (2016)

🧑‍🏫 Academic Service & Community

  • Reviewer: ICLR, NeurIPS, ICCV, CVPR, DAC, DATE, ICIP; IEEE TIFS, TCAD, TCSVT
  • Tutorial Organizer & Speaker: ML Security in Autonomous Systems, IROS 2024

📫 Contact

  • Email: ag9321@nyu.edu

I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.