Dr. Amira Guesmi

Research Team Leader, Electrical and Computer Engineering
New York University Abu Dhabi

I lead research on AI security and trustworthy machine learning, with a focus on adversarial attacks and defenses, robustness under deployment constraints, and secure perception systems. My work bridges theory, system-level design, and real-world evaluation across vision, autonomous, embedded, and multimodal AI systems.


🔥 News

  • 2026.01: 🎉 2 papers accepted at ICLR 2026
  • 2025.11: 🎉 1 paper accepted at DATE 2026
  • 2025.10: 🎉 Selected as Top Reviewer at NeurIPS 2025
  • 2025.06: 🎉 1 paper accepted at ICCV 2025
  • 2024.06: 🎉 1 paper accepted at IROS 2024
  • 2024.06: 🎉 3 papers accepted at ICIP 2024
  • 2024.02: 🎉 1 paper accepted at CVPR 2024
  • 2024.02: 🎉 1 paper accepted at DAC 2024

🔬 Research Overview

My research aims to advance the security, robustness, and trustworthiness of machine learning systems under adversarial threats and realistic deployment constraints. I study how architecture choices (e.g., Vision Transformers), quantization and approximation, physical-world effects, and multimodal interactions shape both vulnerabilities and defenses. A recurring theme in my work is breaking brittle alignment—semantic, gradient, or representational—to improve robustness while preserving efficiency and deployability.
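
To make the threat model behind these themes concrete, here is a minimal single-step FGSM sketch in PyTorch. This is a standard textbook attack (Goodfellow et al., 2015), not code from any paper listed below; the `model`, inputs, labels, and `epsilon` budget are illustrative placeholders.

```python
# Minimal FGSM sketch: a standard one-step attack, shown only to illustrate
# the adversarial threat model. `model`, `x`, `y`, `epsilon` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an L-infinity-bounded adversarial example for images x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to increase
    loss.backward()
    # One signed-gradient ascent step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

The defenses studied in the themes below typically aim to make this gradient signal unreliable or non-transferable (for example, through quantization, approximation, or input transformations) without sacrificing clean accuracy or efficiency.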


📚 Recent Publications by Research Theme

1. Robustness and Security of Quantized & Approximate Neural Networks

  • TriQDef: Disrupting Semantic and Gradient Alignment to Prevent Adversarial Patch Transferability in Quantized Neural Networks, Amira Guesmi, Bassem Ouni, Muhammad Shafique (ICLR 2026)
  • Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks, Amira Guesmi, Ihsen Alouani, Khaled N. Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh (CoRR 2022)
  • Defensive Approximation: Securing CNNs Using Approximate Computing, Amira Guesmi, Ihsen Alouani, Khaled N. Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh (ASPLOS 2021)

2. Adversarial Machine Learning — Foundations

  • DRIFT: Divergent Response in Filtered Transformations for Robust Adversarial Defense, Amira Guesmi, Muhammad Shafique (ICLR 2026)
  • TESSER: Transfer-Enhancing Adversarial Attacks from Vision Transformers via Spectral and Semantic Regularization, Amira Guesmi, Bassem Ouni, Muhammad Shafique (CoRR 2025)
  • Anomaly Unveiled: Securing Image Classification Against Adversarial Patch Attacks, Nandish Chattopadhyay, Amira Guesmi, Muhammad Shafique (ICIP 2024)
  • Defending Against Adversarial Patches Using Dimensionality Reduction, Nandish Chattopadhyay, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique (DAC 2024)
  • ROOM: Adversarial Machine Learning Attacks under Real-Time Constraints, Amira Guesmi, Khaled N. Khasawneh, Nael Abu-Ghazaleh, Ihsen Alouani (IJCNN 2022)
  • SIT: Stochastic Input Transformation to Defend Against Adversarial Attacks, Amira Guesmi, Ihsen Alouani, Mouna Baklouti, Tarek Frikha, Mohamed Abid (IEEE Design & Test 2021)

3. Adversarial Machine Learning — Vision & Autonomous Systems

  • PatchBlock: A Lightweight Defense Against Adversarial Patches for Embedded EdgeAI Devices, Nandish Chattopadhyay, Abdul Basit, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique (DATE 2026)
  • DAP: A Dynamic Adversarial Patch for Evading Person Detectors, Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique (CVPR 2024)
  • SSAP: Shape-Sensitive Adversarial Patch for Monocular Depth Estimation, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique (IROS 2024)
  • SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique (IEEE Access 2024)
  • AdvART: Adversarial Art for Camouflaged Object Detection Attacks, Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, Ihsen Alouani (ICIP 2024)
  • AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems, Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique (Information 2023)
  • Adversarial Attack on Radar-Based Environment Perception Systems, Amira Guesmi, Ihsen Alouani (CoRR 2022)

4. Physical-World and Multi-Modal Adversarial Threats: Surveys

  • Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles, Amira Guesmi, Muhammad Shafique (CoRR 2024)
  • Physical Adversarial Attacks for Camera-Based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique (IEEE Access 2023)

5. Privacy-Preserving and Trustworthy Machine Learning

  • Exploring Machine Learning Privacy/Utility Trade-Off from a Hyperparameters Lens, Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique (IJCNN 2023)

6. Interpretability and Robustness

  • Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-Guided Approach, Amira Guesmi, Nishant Suresh Aswani, Muhammad Shafique (ICIP 2024)

7. Continual Learning and Representation Dynamics

  • Examining Changes in Internal Representations of Continual Learning Models Through Tensor Decomposition, Nishant Suresh Aswani, Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique (Unconference 2024)

8. Vision–Language Model (VLM) Security, Hallucination, and Privacy

  • Do Not Leave a Gap, Amira Guesmi, Muhammad Shafique (under review)

(See the Publications page for full lists and links.)


🏆 Awards & Honors

  • Outstanding Reviewer, NeurIPS 2025
  • Best Senior Researcher Award, eBRAIN Lab, NYU Abu Dhabi (2023)
  • Erasmus+ Scholarship (2019)
  • DAAD Grants: ATIoT (2018), Young ESEM Program (2016)

🧑‍🏫 Academic Service & Community

  • Reviewer: ICLR, NeurIPS, ICCV, CVPR, DAC, DATE, ICIP; IEEE TIFS, TCAD, TCSVT
  • Tutorial Organizer & Speaker: ML Security in Autonomous Systems, IROS 2024

📫 Contact

  • Email: ag9321@nyu.edu

I am always open to collaborations on AI security, adversarial robustness, and trustworthy ML systems.