##### 🌏 Source

Adversarial Camouflage (AdvCam) transfers large adversarial perturbations into customized styles, which are then “hidden” on a target object or in the off-target background. It focuses on physical-world scenarios: the resulting attacks are well camouflaged and highly stealthy, while remaining effective at fooling state-of-the-art DNN image classifiers.

## Main contributions

• AdvCam allows the generation of large perturbations, with customizable attack regions and camouflage styles.
• It is flexible and useful for evaluating the vulnerability of DNNs to large perturbations in physical-world attacks.

## Background

• Digital settings: perturbations are applied directly to the pixels of the input image.
• Physical-world settings: perturbations are placed on real objects and must survive camera capture, viewpoint, and lighting changes.

Three properties characterize an adversarial attack:

• Adversarial strength: the ability to fool DNNs.
• Adversarial stealthiness: whether the adversarial perturbations can be detected by human observers.
• Camouflage flexibility: the degree to which the attacker can control the appearance of the adversarial image.

### Existing attacks

Existing attacks typically find an adversarial example $x'$ for a given image $x$ by solving the following optimization problem:

$\textrm{minimize}\ \mathcal{D}(x,x')+\lambda\cdot\mathcal{L}_{adv}(x')\ \textrm{such that}\ x'\in[0,255]^m$

where $\mathcal{D}(x,x')$ measures the “stealthiness” of the adversarial example (its perceptual distance from the original image $x$) and $\mathcal{L}_{adv}$ is the adversarial loss. The weight $\lambda$ balances the two terms, so there is a trade-off between stealthiness and adversarial strength.
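A minimal sketch of this optimization in PyTorch, assuming a hypothetical pretrained classifier `model` and pixel values normalized to $[0,1]$ (the constraint above uses raw $[0,255]$ pixels); here $\mathcal{D}$ is taken to be the $L_2$ distance and $\mathcal{L}_{adv}$ a targeted cross-entropy:

```python
import torch
import torch.nn.functional as F

def attack(model, x, target_class, lam=1.0, steps=200, lr=0.01):
    """Minimize D(x, x') + lam * L_adv(x') over the adversarial image x'.

    `target_class` is a LongTensor of target label indices (targeted attack).
    """
    x_adv = x.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        d = torch.norm(x_adv - x)                            # stealthiness term D(x, x')
        l_adv = F.cross_entropy(model(x_adv), target_class)  # adversarial loss
        loss = d + lam * l_adv
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)                           # keep x' a valid image
    return x_adv.detach()
```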

AdvCam uses style transfer techniques to achieve camouflage and adversarial attack techniques to achieve adversarial strength. To combine the two, the loss function, called the adversarial camouflage loss, becomes:

$\mathcal{L}=(\mathcal{L}_{s}+\mathcal{L}_{c}+\mathcal{L}_{m})+\lambda\cdot\mathcal{L}_{adv}$

where $\mathcal{L}_{s}$ is the style loss, $\mathcal{L}_{c}$ the content loss, $\mathcal{L}_{m}$ a smoothness loss on the adversarial image, and $\mathcal{L}_{adv}$ the adversarial loss weighted by $\lambda$.
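A minimal sketch of how these four terms could be combined, assuming precomputed VGG-style feature maps; the names here (`advcam_loss`, `feats_adv`, etc.) are hypothetical and not the paper's actual code:

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # Gram matrix used by the style loss: channel-wise feature correlations.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def advcam_loss(feats_adv, feats_style, feats_content,
                x_adv, logits, target_class, lam=1e4):
    # Style loss L_s: match Gram matrices of the style reference features.
    l_s = sum(F.mse_loss(gram(fa), gram(fs))
              for fa, fs in zip(feats_adv, feats_style))
    # Content loss L_c: stay close to the original image's features.
    l_c = sum(F.mse_loss(fa, fc)
              for fa, fc in zip(feats_adv, feats_content))
    # Smoothness loss L_m: total variation on the adversarial image.
    l_m = ((x_adv[..., 1:, :] - x_adv[..., :-1, :]).pow(2).mean()
           + (x_adv[..., :, 1:] - x_adv[..., :, :-1]).pow(2).mean())
    # Adversarial loss L_adv: targeted cross-entropy.
    l_adv = F.cross_entropy(logits, target_class)
    return (l_s + l_c + l_m) + lam * l_adv
```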

An overview of the full AdvCam approach: