About
Dr. Amira Guesmi
Research Team Leader, Electrical and Computer Engineering
New York University Abu Dhabi
I lead research on AI security and trustworthy machine learning, with a focus on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation across vision, autonomous, embedded, and multimodal AI systems.
🔥 News
- 2026.01: 🎉 2 papers accepted at ICLR 2026
- 2025.11: 1 paper accepted at DATE 2026
- 2025.10: Selected as Top Reviewer at NeurIPS 2025
- 2025.06: 1 paper accepted at ICCV 2025
- 2024.06: 1 paper accepted at IROS 2024
- 2024.06: 3 papers accepted at ICIP 2024
- 2024.02: 1 paper accepted at CVPR 2024
- 2024.02: 1 paper accepted at DAC 2024
🔬 Research Overview
My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignment (semantic, gradient, or representational) to improve robustness while preserving efficiency and deployability.
📚 Recent Publications by Research Theme
1. Robustness and Security of Quantized & Approximate Neural Networks
- TriQDef: Disrupting Semantic and Gradient Alignment to Prevent Adversarial Patch Transferability in Quantized Neural Networks (ICLR 2026)
- Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks (CoRR 2022)
- Defensive Approximation: Securing CNNs Using Approximate Computing (ASPLOS 2021)
2. Adversarial Machine Learning — Foundations
- DRIFT: Divergent Response in Filtered Transformations for Robust Adversarial Defense (ICLR 2026)
- TESSER: Transfer-Enhancing Adversarial Attacks from Vision Transformers via Spectral and Semantic Regularization (CoRR 2025)
- Anomaly Unveiled: Securing Image Classification Against Adversarial Patch Attacks (ICIP 2024)
- Defending Against Adversarial Patches Using Dimensionality Reduction (DAC 2024)
- ROOM: Adversarial Machine Learning Attacks under Real-Time Constraints (IJCNN 2022)
- SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks (IEEE Design & Test 2021)
3. Adversarial Machine Learning — Vision & Autonomous Systems
- DAP: A Dynamic Adversarial Patch for Evading Person Detectors (CVPR 2024)
- SSAP: Shape-Sensitive Adversarial Patch for Monocular Depth Estimation (IROS 2024)
- SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation (IEEE Access 2024)
- AdvART: Adversarial Art for Camouflaged Object Detection Attacks (ICIP 2024)
- PatchBlock: A Lightweight Defense Against Adversarial Patches for Embedded EdgeAI Devices (DATE 2026)
- AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems (Information 2023)
- Adversarial Attack on Radar-Based Environment Perception Systems (CoRR 2022)
4. Physical-World and Multi-Modal Adversarial Threats: Surveys
- Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles (CoRR 2024)
- Physical Adversarial Attacks for Camera-Based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook (IEEE Access 2023)
5. Privacy-Preserving and Trustworthy Machine Learning
- Exploring Machine Learning Privacy/Utility Trade-Off from a Hyperparameters Lens (IJCNN 2023)
6. Interpretability and Robustness
- Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-Guided Approach (ICIP 2024)
7. Continual Learning and Representation Dynamics
- Examining Changes in Internal Representations of Continual Learning Models Through Tensor Decomposition (Unconference 2024)
8. Vision–Language Model (VLM) Security, Hallucination, and Privacy
- Do Not Leave a Gap (under review)
(See the Publications page for full lists and links.)
🏆 Awards & Honors
- Outstanding Reviewer, NeurIPS 2025
- Best Senior Researcher Award, eBRAIN Lab, NYU Abu Dhabi (2023)
- Erasmus+ Scholarship (2019)
- DAAD Grants: ATIoT (2018), Young ESEM Program (2016)
🧑‍🏫 Academic Service & Community
- Reviewer: ICLR, NeurIPS, ICCV, CVPR, DAC, DATE, ICIP; IEEE TIFS, TCAD, TCSVT
- Tutorial Organizer & Speaker: ML Security in Autonomous Systems, IROS 2024
📬 Contact & Links
- Email: ag9321@nyu.edu
I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.
