What is Red Teaming?

Testing AI system security by simulating attacker perspectives. Professional red teams try prompt injection, jailbreaking, data exfiltration, and other attack techniques. 360 uses AI multi-agent systems for automated red teaming — ‘using AI to find AI vulnerabilities’.