Perceptual Similarity


Downloadable at: Open Access - CVPR 2018. Source code is available at: GitHub - richzhang/PerceptualSimilarity.

Brief Introduction

The paper argues that widely used image quality metrics like SSIM and PSNR are simple and shallow functions that may fail to account for many nuances of human perception. The paper introduces a new dataset of human perceptual similarity judgments to systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.

The findings of this paper suggest that perceptual similarity is an emergent property shared across deep visual representations.

Main contributions

In this paper, the authors hypothesize that perceptual similarity is not a special function all of its own, but rather a consequence of visual representations tuned to be predictive of important structure in the world.

  • To test this hypothesis, the paper introduces a large-scale, highly varied perceptual similarity dataset containing 484k human judgments.
  • The paper shows that deep features trained with supervised, self-supervised, and unsupervised objectives alike model low-level perceptual similarity surprisingly well, outperforming previous widely used metrics.
  • The paper also demonstrates that network architecture alone does not account for the performance: untrained networks achieve much lower scores.

The paper further suggests that this data can be used to improve performance by calibrating the feature responses of a pre-trained network.

Methodology

The perceptual similarity dataset

This content is less related to my interests, so I'll only cover it briefly.

  • Traditional distortions: photometric distortions, random noise, blurring, spatial shifts, corruptions (a small sketch follows this list).
  • CNN-based distortions: input corruptions (white noise, color removal, downsampling), generator networks, discriminators, loss/learning.
  • Distorted image patches.
  • Superresolution.
  • Frame interpolation.
  • Video deblurring.
  • Colorization.
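
For intuition, two of the traditional distortions might be produced as follows. This is a hypothetical sketch with illustrative parameters, not the paper's exact distortion pipeline:

```python
import torch
import torch.nn.functional as F

def traditional_distortions(patch, noise_sigma=0.1):
    # patch: [B, 3, H, W] image tensor with values in [0, 1].
    # Additive white Gaussian noise.
    noisy = (patch + noise_sigma * torch.randn_like(patch)).clamp(0, 1)
    # Simple 5x5 box blur applied independently per channel.
    kernel = torch.ones(3, 1, 5, 5) / 25.0
    blurred = F.conv2d(patch, kernel, padding=2, groups=3)
    return noisy, blurred
```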

Similarity measures

  • 2AFC similarity judgments. Two-alternative forced choice (2AFC) measures subjective experience through a pattern of choices: the subject is presented with two alternatives and is forced to pick one. Here, workers see a reference patch alongside two distorted versions and choose which looks more similar to the reference; the two options can be shown concurrently or sequentially in two intervals (also known as two-interval forced choice, 2IFC). A scoring sketch follows this list.
  • Just noticeable differences. The just-noticeable difference (JND) is the amount by which something must be changed for the difference to be noticeable, i.e., detectable at least half the time (absolute threshold). This threshold is also known as the difference limen or least perceptible difference.
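
To make the 2AFC evaluation concrete, here is a minimal sketch of the scoring rule, assuming (as in the paper) that an algorithm agreeing with a fraction $p$ of humans receives credit $p$; the function name and inputs are illustrative:

```python
import numpy as np

def two_afc_score(d0, d1, human_pref):
    # d0, d1: metric distances from the reference to patches x0 and x1.
    # human_pref: fraction of humans who judged x1 closer to the reference.
    # The metric "chooses" the patch with the smaller distance and receives
    # credit equal to the fraction of humans who made the same choice.
    d0, d1, human_pref = map(np.asarray, (d0, d1, human_pref))
    metric_prefers_x1 = d1 < d0
    credit = np.where(metric_prefers_x1, human_pref, 1.0 - human_pref)
    return credit.mean()
```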

Deep Feature Spaces

Network activations to distance

The distance between a reference patch $x$ and a distorted patch $x_0$ is computed with a network $\mathcal{F}$ using the equation below. The paper extracts a feature stack from $L$ layers and unit-normalizes it in the channel dimension, then scales the activations channel-wise by a vector $w_l$ and computes the $\ell_2$ distance, averaging spatially and summing over layers.

$$d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \left\| w_l \odot \left( \hat{y}^l_{hw} - \hat{y}^l_{0hw} \right) \right\|_2^2$$
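
A minimal PyTorch sketch of this computation, assuming the feature stacks $\hat{y}^l$ and $\hat{y}^l_0$ have already been extracted from a backbone (names are illustrative; the official implementation lives in the linked repository):

```python
import torch

def normalize_channels(feat, eps=1e-10):
    # Unit-normalize activations in the channel dimension (dim=1).
    return feat / (feat.norm(dim=1, keepdim=True) + eps)

def lpips_distance(feats_x, feats_x0, weights):
    # feats_x, feats_x0: lists of activations [B, C_l, H_l, W_l] from L layers
    # of a backbone network; weights: list of per-channel vectors w_l [C_l].
    # Setting all weights to 1 recovers an uncalibrated feature distance.
    d = 0.0
    for y, y0, w in zip(feats_x, feats_x0, weights):
        diff = normalize_channels(y) - normalize_channels(y0)
        scaled = w.view(1, -1, 1, 1) * diff
        # ||w_l ⊙ (ŷ - ŷ0)||² summed over channels, averaged over (h, w).
        d = d + (scaled ** 2).sum(dim=1).mean(dim=(1, 2))
    return d  # shape [B], summed over layers
```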

Training on this data

The paper considers the following variants:

  • lin: keeps the pre-trained network weights fixed and learns linear weights $w$ on top.
  • tune: initializes from a pre-trained classification model and allows all weights of the network $\mathcal{F}$ to be fine-tuned.
  • scratch: initializes the network with random Gaussian weights and trains it entirely on the human judgments.

Finally, the paper refers to these as variants of the proposed Learned Perceptual Image Patch Similarity (LPIPS) metric.
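
A rough sketch of how the lin variant could be trained on the 2AFC judgments follows. The paper maps the pair of distances through a small network $\mathcal{G}$ to predict human preference; the logistic on the distance difference below is a simplification, and the channel counts are illustrative:

```python
import torch

class LinearCalibration(torch.nn.Module):
    # Per-channel weights w_l learned on top of frozen features; these would
    # serve as the `weights` argument of a distance like lpips_distance above.
    def __init__(self, channels=(64, 128, 256, 512, 512)):  # illustrative
        super().__init__()
        self.w = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.ones(c)) for c in channels]
        )

def preference_loss(d0, d1, h):
    # d0, d1: calibrated distances d(x, x0) and d(x, x1) for a batch.
    # h in [0, 1]: fraction of humans who judged x1 closer to the reference.
    # A logistic on the distance difference stands in for the paper's small
    # network G mapping (d0, d1) to a preference probability.
    p1 = torch.sigmoid(d0 - d1)  # larger d0 -> x1 more likely preferred
    eps = 1e-10
    return -(h * torch.log(p1 + eps) + (1 - h) * torch.log(1 - p1 + eps)).mean()
```

One sensible design choice is to clamp the learned weights to be non-negative after each optimizer step, so the calibrated metric remains a proper distance.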

Experiments

Performance of low-level metrics and classification networks

Figure 4 shows the performance of various low-level metrics (in red), deep networks, and the human ceiling (in black).

Metrics correlate across different perceptual tasks

The 2AFC distortion-preference test correlates highly with JND: $\rho = 0.928$ when averaging results across distortion types. This indicates that 2AFC generalizes to another perceptual test and provides real signal about human judgments.
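
For reference, a rank correlation like this between per-metric scores on the two tests can be computed with SciPy's spearmanr; the numbers below are placeholders, not results from the paper:

```python
from scipy.stats import spearmanr

# Hypothetical per-metric scores on the 2AFC and JND tests (placeholders).
scores_2afc = [0.63, 0.68, 0.69, 0.70, 0.71]
scores_jnd = [0.58, 0.61, 0.64, 0.63, 0.66]

rho, _ = spearmanr(scores_2afc, scores_jnd)
print(f"Spearman rho = {rho:.3f}")
```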

Where do deep metrics and low-level metrics disagree?

Pairs that BiGAN perceives as far apart but SSIM perceives as close generally contain some blur. BiGAN also tends to treat correlated noise patterns as a smaller distortion than SSIM does.

Conclusions

The stronger a feature set is at classification and detection, the stronger it is as a model of perceptual similarity judgments.

Features that are good at semantic tasks are also good at self-supervised and unsupervised tasks, and they also provide good models of both human perceptual behavior and macaque neural activity.
