Abstract: Adversarial attacks pose significant threats to machine learning models, with white-box attacks such as Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Basic ...
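For context on the attacks named in the abstract, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The function name, the epsilon value, and the assumption that inputs live in the [0, 1] range are illustrative choices, not details taken from the source.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient.

    x_adv = clamp(x + epsilon * sign(grad_x L(model(x), y)), 0, 1)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Take a single step of size epsilon in the gradient-sign direction,
    # then clamp back to the assumed valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

PGD, also mentioned above, can be viewed as the iterative version of this step: the same sign-of-gradient update is applied repeatedly with a smaller step size, projecting back into an epsilon-ball around the original input after each iteration.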