
Learning to Defend AI Deployments Using an Exploit Simulation Environment

MintNV, an AI/ML educational exercise that showcases how an adversary can bypass defensive ML mechanisms to compromise a host, is now on the NVIDIA NGC catalog.

Machine learning (ML) comes in many forms, and many of them evade the standard tools and techniques of cybersecurity professionals. Attacking ML requires knowledge at the intersection of data science and offensive security to answer the question, “How can this be attacked?” Cybersecurity professionals and data scientists alike need to hone new skills to answer this difficult question, and NVIDIA wants to inspire the ecosystem to close this gap.
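To make that question concrete, here is a minimal evasion sketch in Python. Nothing in it comes from MintNV; it is a generic FGSM-style example against a toy linear detector, showing how a model’s own gradient tells an attacker which way to perturb an input.

```python
import numpy as np

# Toy linear "malicious/benign" detector: score = w @ x + b, flag if score > 0.
# The weights are illustrative only; a real detector would be learned from data.
rng = np.random.default_rng(0)
w = rng.normal(size=16)            # detector weights
b = -0.5                           # detector bias

def score(x):
    """Detection score: positive means the sample is flagged."""
    return float(w @ x + b)

x = w / np.linalg.norm(w)          # craft a sample the detector flags
print("detected before:", score(x) > 0)      # True

# FGSM-style step: for a linear model, d(score)/dx = w, so stepping
# against sign(w) lowers the score while changing each feature by <= eps.
eps = 0.5
x_adv = x - eps * np.sign(w)
print("detected after: ", score(x_adv) > 0)  # False: the sample now evades
```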

MintNV is now live on the NVIDIA NGC catalog, NVIDIA’s hub of GPU-optimized HPC and AI applications. The MintNV Docker container challenges the user to apply an adversarial thought process to ML. Releasing MintNV as a deliberately vulnerable environment is a step in the right direction for ML security, aligning closely with other NVIDIA contributions such as the Adversarial ML Threat Matrix.
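Pulling and starting the container works like any other NGC image. The sketch below uses the Docker SDK for Python; the repository path, tag, and port mapping are placeholders, so check MintNV’s NGC page for the actual values.

```python
import docker  # Docker SDK for Python: pip install docker

# Placeholder repository and tag -- consult MintNV's NGC page for the real
# ones. Depending on access settings, `docker login nvcr.io` may be required.
REPO, TAG = "nvcr.io/nvidia/mintnv", "latest"

client = docker.from_env()            # connect to the local Docker daemon
client.images.pull(REPO, tag=TAG)     # same as `docker pull REPO:TAG`

# Run detached so the vulnerable services stay up while you probe them.
# The port mapping (container 80 -> host 8080) is also an assumption.
container = client.containers.run(
    f"{REPO}:{TAG}",
    detach=True,
    ports={"80/tcp": 8080},
)
print(container.short_id, container.status)
```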

MintNV bridges AI/ML researchers and cybersecurity professionals across the ML landscape, enabling the offensive security community to practice adversarial ML techniques. We will continue contributing research, tools, and training to promote community growth and to inspire more work of this kind.

Share this exercise and enjoy learning about offensive security concepts such as enumeration, networking protocols, and administrative functions as you compromise MintNV. Learning about the potential vulnerabilities of an ML system through the MintNV simulation helps ML developers understand how to build more secure solutions.
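As a taste of the enumeration step, the sketch below probes a range of TCP ports on a target host. The address and port range are placeholders; point it only at hosts you are authorized to test, such as your own local MintNV container.

```python
import socket

TARGET = "127.0.0.1"        # placeholder: e.g., your local MintNV container
PORTS = range(1, 1025)      # well-known ports; widen as needed

open_ports = []
for port in PORTS:
    # connect_ex returns 0 when the TCP handshake succeeds (port open).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)   # short timeout keeps the scan fast
        if s.connect_ex((TARGET, port)) == 0:
            open_ports.append(port)

print("open ports:", open_ports)
```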

For more information, please visit MintNV’s NGC page.

NVIDIA would like to thank Will Pearce from Microsoft for providing the guidance necessary to incorporate machine learning elements into this educational exercise.

Happy Hacking!

NVIDIA Product Security Team

About the NVIDIA Product Security Team:

NVIDIA takes security seriously and values contributions to the secure, safe, and unbiased use of Artificial Intelligence and Machine Learning. We will continue to create additional educational opportunities for the community. If you have any questions or feedback, please contact psirt@nvidia.com or tweet us at @NVIDIAPSIRT. See NVIDIA’s Corporate Social Responsibility website and NVIDIA’s 2020 CSR Report for more information.
