I lead research on AI security and trustworthy machine learning, with a focus on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation, targeting vision, autonomous, embedded, and multimodal AI systems.


🔥 News

  • 2026.01: 🎉 2 papers accepted at ICLR 2026
  • 2025.11: 🎉 1 paper accepted at DATE 2026
  • 2025.10: 🎉 Selected as Top Reviewer at NeurIPS 2025
  • 2025.06: 🎉 1 paper accepted at ICCV 2025
  • 2024.06: 🎉 1 paper accepted at IROS 2024
  • 2024.06: 🎉 3 papers accepted at ICIP 2024
  • 2024.02: 🎉 1 paper accepted at CVPR 2024
  • 2024.02: 🎉 1 paper accepted at DAC 2024

Research Overview

My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignment (semantic, gradient, or representational) to improve robustness while preserving efficiency and deployability.


Selected Research Projects

Below are representative research projects spanning adversarial machine learning, robustness, and secure AI systems.


DRIFT is a stochastic ensemble defense that disrupts gradient consensus across transformations, significantly reducing adversarial transferability while preserving clean accuracy.
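To give a flavor of the idea, here is a minimal, illustrative sketch of a stochastic transformation-ensemble defense in PyTorch. It is not the DRIFT implementation; the class name, the transformation pool, and the averaging scheme are assumptions chosen only to show how per-member random transformations can break gradient consensus across an ensemble.

```python
# Illustrative sketch of a stochastic transformation-ensemble defense (not the DRIFT code).
# Each forward pass samples a fresh random transformation per ensemble member, so the
# gradients an attacker estimates disagree across members while clean inputs are barely changed.
import torch
import torchvision.transforms as T

class StochasticEnsembleDefense(torch.nn.Module):
    def __init__(self, models, transforms=None):
        super().__init__()
        self.models = torch.nn.ModuleList(models)
        # Hypothetical pool of mild input transformations (placeholder choices).
        self.transforms = transforms or [
            T.RandomResizedCrop(224, scale=(0.85, 1.0)),
            T.ColorJitter(brightness=0.2, contrast=0.2),
            T.GaussianBlur(kernel_size=3),
        ]

    def forward(self, x):
        logits = []
        for model in self.models:
            # A different transformation per member and per call disrupts gradient consensus.
            t = self.transforms[torch.randint(len(self.transforms), (1,)).item()]
            logits.append(model(t(x)))
        # Average member predictions to preserve clean accuracy.
        return torch.stack(logits).mean(dim=0)
```

The key design point sketched here is that the randomness is resampled on every call, so an adversary cannot lock onto a single, consistent gradient direction shared by the ensemble.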

This defense disrupts semantic and gradient alignment across quantized models to prevent adversarial patch transferability.

TESSER introduces spectral and semantic regularization to enhance adversarial transferability from Vision Transformers to CNNs and hybrid architectures, achieving state-of-the-art black-box attack performance across diverse model families.
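As a rough illustration of how spectral regularization can be folded into a transfer attack, the sketch below adds a frequency-domain penalty to a standard iterative attack on a surrogate model. This is a generic example under assumed names and parameters (surrogate, eps, lam, the low-frequency mask), not the TESSER method; it only shows the mechanics of penalizing high-frequency perturbation energy, which tends to transfer poorly across architectures.

```python
# Generic transfer-attack sketch with a spectral regularizer on the perturbation
# (illustrative only, not the TESSER algorithm). Inputs are assumed to lie in [0, 1].
import torch

def transfer_attack(surrogate, x, y, eps=8/255, alpha=2/255, steps=10, lam=0.1):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        ce = torch.nn.functional.cross_entropy(surrogate(x + delta), y)
        # Hypothetical spectral penalty: perturbation energy outside a central low-frequency band.
        freq = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
        h, w = freq.shape[-2:]
        mask = torch.ones_like(freq.real)
        mask[..., h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 0.0  # keep low frequencies unpenalized
        spectral_penalty = (freq.abs() * mask).mean()
        # Ascend on classification loss while suppressing high-frequency perturbation energy.
        loss = ce - lam * spectral_penalty
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()
```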

DAP proposes an adaptive adversarial patch that dynamically adjusts spatial placement and appearance to evade person detectors under real-world transformations.
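For intuition, the following is a minimal expectation-over-transformation-style patch optimization loop, written against an assumed differentiable classifier surrogate (the paper targets person detectors) with images in [0, 1]. It is a generic sketch of optimizing a patch under random placement and appearance jitter, not the DAP method itself.

```python
# Illustrative EOT-style adversarial patch optimization (not the DAP implementation).
# The patch is optimized under random placement and brightness jitter so it stays
# effective under physical-world-style variations.
import torch

def optimize_patch(model, images, labels, patch_size=64, steps=200, lr=0.01):
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        # Random placement simulates varying patch position across frames.
        top = torch.randint(0, x.shape[2] - patch_size, (1,)).item()
        left = torch.randint(0, x.shape[3] - patch_size, (1,)).item()
        # Random brightness jitter stands in for appearance changes (lighting, printing).
        jitter = 1.0 + 0.2 * (torch.rand(1) - 0.5)
        x[:, :, top:top + patch_size, left:left + patch_size] = (patch * jitter).clamp(0, 1)
        # Maximize the model's loss on the true labels (minimize its negative).
        loss = -torch.nn.functional.cross_entropy(model(x), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)
    return patch.detach()
```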

๐Ÿ† Awards & Honors

  • Top Reviewer, NeurIPS 2025.
  • Best Senior Researcher Award, eBRAIN Lab, NYUAD, 2023.
  • Erasmus+ Scholarship, France, 2019.
  • DAAD Scholarship: Advanced Technologies based on IoT (ATIoT), Germany, 2018.
  • DAAD Scholarship: Young ESEM Program (Embedded Systems for Energy Management), Germany, 2016.

🧑‍🏫 Academic Service & Community

  • Reviewer: ICLR, NeurIPS, ICCV, CVPR, AAAI, ECCV, DAC, DATE, ICIP; IEEE TIFS, TCSVT, TCAD
  • Organizer & Speaker: Tutorial: ML Security in Autonomous Systems, IROS 2024

📧 Contact

  • Email: ag9321@nyu.edu

I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.