I lead research on AI security and trustworthy machine learning, with a focus on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation, targeting vision, autonomous, embedded, and multimodal AI systems.
🔥 News
- 2026.01: 🎉 2 papers accepted at ICLR 2026
- 2025.11: 🎉 1 paper accepted at DATE 2026
- 2025.10: 🎉 Selected as Top Reviewer at NeurIPS 2025
- 2025.06: 🎉 1 paper accepted at ICCV 2025
- 2024.06: 🎉 1 paper accepted at IROS 2024
- 2024.06: 🎉 3 papers accepted at ICIP 2024
- 2024.02: 🎉 1 paper accepted at CVPR 2024
- 2024.02: 🎉 1 paper accepted at DAC 2024
Research Overview
My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignment (semantic, gradient, or representational) to improve robustness while preserving efficiency and deployability.
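To make the threat model concrete, here is a minimal sketch of a single-step, L∞-bounded adversarial perturbation (FGSM-style) in PyTorch. It is purely illustrative: `model`, `x`, `y`, and the budget `epsilon` are placeholders, not artifacts of any specific paper above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step L-infinity attack: perturb x so the loss on label y increases.

    Illustrative sketch only; `model`, `x`, `y`, and `epsilon` are placeholders.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # classification loss at the clean input
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()    # step along the sign of the gradient
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```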
Selected Research Projects
Below are representative research projects spanning adversarial machine learning, robustness, and secure AI systems.
🏆 Awards & Honors
- Outstanding Reviewer, NeurIPS 2025
- Best Senior Researcher Award, eBRAIN Lab, NYU Abu Dhabi (2023)
- Erasmus+ Scholarship (2019)
- DAAD Scholarships: ATIoT (2018), Young ESEM Program (2016)
🧑‍🏫 Academic Service & Community
- Reviewer: ICLR, NeurIPS, ICCV, CVPR, AAAI, ECCV, DAC, DATE, ICIP; IEEE TIFS, TCAD, TCSVT
- Tutorial Organizer & Speaker: ML Security in Autonomous Systems, IROS 2024
📬 Contact & Links
- Email: ag9321@nyu.edu
I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.
