Adversarial Examples

Inputs deliberately crafted, often by adding small perturbations that are imperceptible to humans, to cause an AI model to make a mistake. Studying them is important for evaluating model robustness and security.
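A common way to craft such inputs is the fast gradient sign method (FGSM): nudge the input in the direction that most increases the model's loss. The sketch below (a minimal illustration, not from this glossary) applies the idea to a toy linear classifier with hypothetical weights, where the loss gradient with respect to the input is proportional to the weight vector:

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predicted label = 1 if score > 0.
# The weights and input below are made-up values for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, true_label, epsilon):
    # FGSM step: move x by epsilon in the sign of the loss gradient.
    # For a linear model the gradient w.r.t. x is proportional to -w
    # when the true label is 1 and to +w when it is 0.
    grad_sign = -np.sign(w) if true_label == 1 else np.sign(w)
    return x + epsilon * grad_sign

x = np.array([0.5, -0.2, 0.3])          # score = 1.15, classified as 1
x_adv = fgsm_perturb(x, true_label=1, epsilon=0.5)
print(predict(x), predict(x_adv))        # the small perturbation flips the prediction
```

Each coordinate moves by at most epsilon, yet the prediction flips, which is exactly the robustness concern the definition describes.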
