What is an Adversarial Attack?

An adversarial attack makes small but carefully crafted modifications to an AI model's inputs so that the model produces a completely different output. In image recognition, adding noise that is imperceptible to humans can make a model misclassify a panda as a gibbon. In text models, synonym substitution can bypass safety filters.
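
The panda-to-gibbon example comes from the Fast Gradient Sign Method (FGSM), which perturbs each input pixel by a small step in the direction that most increases the model's loss. Below is a minimal sketch, assuming a pretrained PyTorch image classifier; the model choice, the epsilon value, and the placeholder input are illustrative, not part of the original example.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any differentiable classifier works; ResNet-18 is used here for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """FGSM: nudge each pixel by +/- epsilon along the sign of the
    loss gradient, so the perturbation stays small per pixel but
    maximally increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range

# Hypothetical usage: x stands in for a real (1, 3, 224, 224) image tensor.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)            # the model's original prediction
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))     # often differs from y
```

The key design point is that the attacker only needs the gradient of the loss with respect to the input, not any change to the model itself, which is why such perturbations can remain invisible while flipping the prediction.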