I lead research on AI security and trustworthy machine learning, with a focus on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation, targeting vision, autonomous, embedded, and multimodal AI systems.


πŸ”₯ News

  • 2026.01: πŸŽ‰ 2 papers accepted at ICLR 2026
  • 2025.11: πŸŽ‰ 1 paper accepted at DATE 2026
  • 2025.10: πŸŽ‰ Selected as Top Reviewer at NeurIPS 2025
  • 2025.06: πŸŽ‰ 1 paper accepted at ICCV 2025
  • 2024.06: πŸŽ‰ 1 paper accepted at IROS 2024
  • 2024.06: πŸŽ‰ 3 papers accepted at ICIP 2024
  • 2024.02: πŸŽ‰ 1 paper accepted at CVPR 2024
  • 2024.02: πŸŽ‰ 1 paper accepted at DAC 2024

Research Overview

My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignmentβ€”semantic, gradient, or representationalβ€”to improve robustness while preserving efficiency and deployability.


Selected Research Projects

Below are representative research projects spanning adversarial machine learning, robustness, and secure AI systems.


DRIFT is a stochastic ensemble defense that disrupts gradient consensus across transformations, significantly reducing adversarial transferability while preserving clean accuracy.
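
To make the idea concrete, here is a minimal sketch of a stochastic transformation ensemble, not the paper's implementation; the transformation family, sampling scheme, and all names and parameters below are illustrative assumptions:

    import torch

    def stochastic_ensemble_logits(model, x, n_samples=8, max_shift=4, noise_std=0.03):
        """Average logits over independently transformed copies of the input.

        Each forward pass sees a fresh random shift and noise draw, so
        gradients estimated through different draws point in different
        directions, breaking the gradient consensus an attacker relies on.
        """
        logits = []
        for _ in range(n_samples):
            # Random spatial shift (hypothetical transformation family).
            dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
            dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
            xt = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
            # Small Gaussian noise further decorrelates per-draw gradients.
            xt = xt + noise_std * torch.randn_like(xt)
            logits.append(model(xt))
        return torch.stack(logits).mean(dim=0)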

A related defense disrupts semantic and gradient alignment across quantized models to prevent adversarial patch transferability.
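As a rough illustration of why quantization diversity helps (the project's actual mechanism may differ), fake-quantizing a model's weights at several bit-widths yields ensemble members whose gradients and intermediate features naturally diverge; all names below are hypothetical:

    import copy
    import torch

    def fake_quantize_weights(model, bits):
        """Return a copy of `model` with weights rounded to a `bits`-bit grid."""
        q = copy.deepcopy(model)
        with torch.no_grad():
            for p in q.parameters():
                scale = p.abs().max() / (2 ** (bits - 1) - 1)
                if scale > 0:
                    p.copy_((p / scale).round().clamp(-2 ** (bits - 1),
                                                      2 ** (bits - 1) - 1) * scale)
        return q

    def quantized_ensemble(model, bit_widths=(4, 6, 8)):
        # Members at different precisions break the alignment a patch
        # crafted on one model needs in order to transfer to the others.
        return [fake_quantize_weights(model, b) for b in bit_widths]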

TESSER introduces spectral and semantic regularization to enhance adversarial transferability from Vision Transformers to CNNs and hybrid architectures, achieving state-of-the-art black-box attack performance across diverse model families.
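A hedged sketch of the flavor of this idea (TESSER's actual regularizers and attack loop differ): adding a frequency-domain penalty to a PGD-style attack steers perturbations toward smoother, lower-frequency structure, which tends to transfer better across architectures. Function names and hyperparameters are assumptions.

    import torch
    import torch.nn.functional as F

    def high_freq_energy(delta):
        """Mean energy in the second half of the row-frequency axis
        (a crude proxy for high-frequency perturbation content)."""
        spec = torch.fft.rfft2(delta, norm="ortho").abs()
        return spec[..., spec.shape[-2] // 2:, :].pow(2).mean()

    def pgd_with_spectral_reg(model, x, y, eps=8/255, alpha=2/255, steps=10, lam=0.1):
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            # Maximize task loss while penalizing high-frequency energy.
            loss = F.cross_entropy(model(x + delta), y) - lam * high_freq_energy(delta)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        return (x + delta).clamp(0, 1).detach()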

DAP proposes an adaptive adversarial patch that dynamically adjusts spatial placement and appearance to evade person detectors under real-world transformations.
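For intuition, a minimal Expectation-over-Transformation style loop (an assumption; DAP's placement and appearance modeling is more sophisticated) optimizes the patch under randomly sampled locations and brightness so it survives real-world variation. Here `detector_loss` is a hypothetical callable returning the detection confidence the attack minimizes.

    import torch

    def optimize_patch(detector_loss, images, patch_size=64, steps=200, lr=0.05):
        patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        B, C, H, W = images.shape
        for _ in range(steps):
            x = images.clone()
            # Randomly sampled placement per step (EOT over location).
            top = torch.randint(0, H - patch_size + 1, (1,)).item()
            left = torch.randint(0, W - patch_size + 1, (1,)).item()
            # Random brightness jitter (EOT over appearance).
            p = (patch * torch.empty(1).uniform_(0.8, 1.2)).clamp(0, 1)
            x[:, :, top:top + patch_size, left:left + patch_size] = p
            loss = detector_loss(x)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)
        return patch.detach()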

πŸ† Awards & Honors

  • Top Reviewer, NeurIPS 2025
  • Best Senior Researcher Award, eBRAIN Lab, NYU Abu Dhabi (2023)
  • Erasmus+ Scholarship (2019)
  • DAAD Scholarships: ATIoT (2018), Young ESEM Program (2016)

πŸ§‘β€πŸ« Academic Service & Community

  • Reviewer: ICLR, NeurIPS, ICCV, CVPR, AAAI, ECCV, DAC, DATE, ICIP; IEEE TIFS, TCAD, TCSVT
  • Tutorial Organizer & Speaker: ML Security in Autonomous Systems, IROS 2024

πŸ“¬ Contact

  • Email: ag9321@nyu.edu

I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.