Red Teaming (AI) Adversarial testing of AI systems to find vulnerabilities, biases, and safety issues before deployment. Related Terms ai safety jailbreaking testing