NeurIPS 2020: Key Research Papers in Computer Vision

Our team reviewed the papers accepted to NeurIPS 2020 and shortlisted the most interesting ones across different research areas.

If you’re interested in the keynote presentations, workshops, and tutorials presented at the conference, check out our guide to NeurIPS 2020.

Subscribe to our AI Research mailing list at the bottom of this article to be alerted when we release new summaries.

Top Computer Vision Research Papers at NeurIPS 2020

NeurIPS 2020 features a large number of interesting computer vision research papers. Leading research labs such as Google Brain, ETH Zurich, Adobe Research, MIT, UC Berkeley, and Nanjing University of Science and Technology:

  • introduce novel approaches to semantic segmentation, instance segmentation, and object detection;
  • investigate texture bias in CNNs and qualities of adversarially robust models;
  • explore image manipulation with autoencoders.

Here are the abstracts of several research papers that we find the most interesting.

Causal Intervention for Weakly-Supervised Semantic Segmentation

Dong Zhang (Nanjing University of Science and Technology), Hanwang Zhang (Nanyang Technological University), Jinhui Tang (Nanjing University of Science and Technology), Xian-Sheng Hua (Damo Academy, Alibaba Group), Qianru Sun (Singapore Management University)

We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS). Specifically, we aim to generate better pixel-level pseudo-masks by using only image-level labels — the most crucial step in WSSS. We attribute the cause of the ambiguous boundaries of pseudo-masks to the confounding context, e.g., the correct image-level classification of “horse” and “person” may be not only due to the recognition of each instance, but also their co-occurrence context, making the model inspection (e.g., CAM) hard to distinguish between the boundaries. Inspired by this, we propose a structural causal model to analyze the causalities among images, contexts, and class labels. Based on it, we develop a new method: Context Adjustment (CONTA), to remove the confounding bias in image-level classification and thus provide better pseudo-masks as ground-truth for the subsequent segmentation model. On PASCAL VOC 2012 and MS-COCO, we show that CONTA boosts various popular WSSS methods to new state-of-the-arts.

Code: official PyTorch implementation is available here.
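The "context adjustment" at the core of CONTA builds on the standard backdoor adjustment from causal inference. Below is a minimal sketch of that formula with placeholder probability tables (the paper's actual approximation of the context prior, via class-specific average masks, is more involved):

```python
def backdoor_adjustment(p_y_given_x_c, p_c):
    """Backdoor adjustment: P(Y | do(X)) = sum_c P(Y | X, c) * P(c).

    Instead of reading P(Y | X) straight off the data, where a
    co-occurring context c confounds the prediction, we average the
    class probability over the context prior.
    """
    return sum(p * pc for p, pc in zip(p_y_given_x_c, p_c))
```

For instance, if "horse" is recognized with probability 0.9 in "person" contexts but only 0.1 elsewhere, and both contexts are equally likely, the de-confounded score is 0.5 rather than the context-skewed estimate.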

SOLOv2: Dynamic and Fast Instance Segmentation

Xinlong Wang (University of Adelaide), Rufeng Zhang (Tongji University), Tao Kong (Bytedance), Lei Li (ByteDance AI Lab), Chunhua Shen (University of Adelaide)

In this work, we aim at building a simple, direct, and fast instance segmentation framework with strong performance. We follow the principle of the SOLO method of Wang et al. “SOLO: segmenting objects by locations”. Importantly, we take one step further by dynamically learning the mask head of the object segmenter such that the mask head is conditioned on the location. Specifically, the mask branch is decoupled into a mask kernel branch and mask feature branch, which are responsible for learning the convolution kernel and the convolved features respectively. Moreover, we propose Matrix NMS (non-maximum suppression) to significantly reduce the inference time overhead due to NMS of masks. Our Matrix NMS performs NMS with parallel matrix operations in one shot, and yields better results. We demonstrate a simple, direct instance segmentation system, outperforming a few state-of-the-art methods in both speed and accuracy. A light-weight version of SOLOv2 executes at 31.3 FPS and yields 37.1% AP. Moreover, our state-of-the-art results in object detection (from our mask byproduct) and panoptic segmentation show the potential to serve as a new strong baseline for many instance-level recognition tasks besides instance segmentation.

Code: official implementation is available here and also here.
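As a rough illustration of the Matrix NMS idea (every score is decayed in one shot as a function of its overlap with higher-scored masks, rather than masks being suppressed sequentially), here is a minimal pure-Python sketch of the Gaussian-decay variant. Masks are represented as pixel sets for clarity; the actual implementation performs this with parallel matrix operations on mask IoUs:

```python
import math

def mask_iou(a, b):
    """IoU of two binary masks, each given as a set of pixel indices."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def matrix_nms(masks, scores, sigma=0.5):
    """Decay all scores at once instead of suppressing iteratively.

    masks: list of pixel sets; scores: matching confidences.
    Each candidate's score is scaled by its worst-case Gaussian
    penalty over all higher-scored candidates.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    masks = [masks[i] for i in order]
    scores = [scores[i] for i in order]
    decayed = list(scores)
    for j in range(1, len(scores)):
        decay = 1.0
        for i in range(j):
            ov = mask_iou(masks[i], masks[j])
            # how suppressed mask i itself is (its max IoU with any
            # higher-scored mask); 0 for the top-scored mask
            comp = max((mask_iou(masks[k], masks[i]) for k in range(i)),
                       default=0.0)
            decay = min(decay, math.exp(-(ov ** 2 - comp ** 2) / sigma))
        decayed[j] = scores[j] * decay
    return decayed
```

A heavily overlapping duplicate gets its score pushed down, while a disjoint instance passes through essentially untouched; a final score threshold then replaces the usual sequential suppression loop.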

Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

Xiang Li (Nanjing University of Science and Technology), Wenhai Wang (Nanjing University), Lijun Wu (Sun Yat-sen University), Shuo Chen (Nanjing University of Science and Technology), Xiaolin Hu (Tsinghua University), Jun Li (Nanjing University of Science and Technology), Jinhui Tang (Nanjing University of Science and Technology), Jian Yang (Nanjing University of Science and Technology)

One-stage detectors basically formulate object detection as dense classification and localization. The classification is usually optimized by Focal Loss and the box location is commonly learned under Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification and localization. Two problems are discovered in existing practices, including (1) the inconsistent usage of the quality estimation and classification between training and inference and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty in complex scenes. To address the problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which is beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL) that generalizes Focal Loss from its discrete form to the continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0% AP using ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5%) and ATSS (43.6%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model can achieve a single-model single-scale AP of 48.2%, at 10 FPS on a single 2080Ti GPU.

Code: official PyTorch implementation is available here.
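The joint classification–quality representation is trained with the Quality Focal Loss component of GFL, which replaces the discrete 0/1 target of Focal Loss with a continuous quality score (e.g., the IoU of the predicted box). A minimal sketch for a single prediction, assuming the formulation described in the paper:

```python
import math

def quality_focal_loss(sigma, y, beta=2.0):
    """Quality Focal Loss for one prediction.

    sigma: predicted score in (0, 1); y: soft target in [0, 1]
    (e.g. the IoU of the predicted box, 0 for negatives).
    """
    eps = 1e-12  # guard against log(0)
    ce = -((1 - y) * math.log(1 - sigma + eps) + y * math.log(sigma + eps))
    # |y - sigma|^beta generalizes the (1 - p)^gamma focal modulation
    # to continuous targets: predictions near the target are down-weighted
    return abs(y - sigma) ** beta * ce
```

When y is exactly 0 or 1 this reduces to the usual focal-loss modulation, which is what lets the continuous quality target slot into the existing dense-classification training recipe.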

RandAugment: Practical Automated Data Augmentation With a Reduced Search Space

Ekin Dogus Cubuk (Google Brain), Barret Zoph (Google Brain), Jon Shlens (Google Research), Quoc V Le (Google)

Recent work has shown that data augmentation has the potential to significantly improve the generalization of deep learning models. Recently, automated augmentation strategies have led to state-of-the-art results in image classification and object detection. While these strategies were optimized for improving validation accuracy, they also led to state-of-the-art results in semi-supervised learning and improved robustness to common corruptions of images. An obstacle to a large-scale adoption of these methods is a separate search phase which increases the training complexity and may substantially increase the computational cost. Additionally, due to the separate search phase, these approaches are unable to adjust the regularization strength based on model or dataset size. Automated augmentation policies are often found by training small models on small datasets and subsequently applied to train larger models. In this work, we remove both of these obstacles. RandAugment has a significantly reduced search space which allows it to be trained on the target task with no need for a separate proxy task. Furthermore, due to the parameterization, the regularization strength may be tailored to different model and dataset sizes. RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous automated augmentation approaches on CIFAR-10/100, SVHN, and ImageNet. On the ImageNet dataset we achieve 85.0% accuracy, a 0.6% increase over the previous state-of-the-art and 1.0% increase over baseline augmentation. On object detection, RandAugment leads to 1.0-1.3% improvement over baseline augmentation, and is within 0.3% mAP of AutoAugment on COCO. Finally, due to its interpretable hyperparameter, RandAugment may be used to investigate the role of data augmentation with varying model and dataset size. 

Code: official TensorFlow implementation is available here and unofficial PyTorch implementation is available here.
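The appeal of RandAugment is that the entire policy collapses to two integers: how many transforms to apply per image (N) and a single shared magnitude (M). A minimal sketch, where `transforms` is a placeholder dictionary mapping op names to functions of (image, magnitude):

```python
import random

def randaugment(image, transforms, n=2, m=9):
    """Apply n transforms, sampled uniformly with replacement,
    all at the same magnitude m (the paper uses a 0-30 scale).
    No search phase: n and m can be tuned directly on the target task.
    """
    for name in random.choices(list(transforms), k=n):
        image = transforms[name](image, m)
    return image
```

Because the search space is just the grid over (N, M), the regularization strength can be tailored to the model and dataset size with a simple sweep instead of a separate proxy-task search.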

Evolving Normalization-Activation Layers

Hanxiao Liu (Google Brain), Andy Brock (DeepMind), Karen Simonyan (DeepMind), Quoc V Le (Google)

Normalization layers and activation functions are fundamental components in deep networks and typically co-locate with each other. Here we propose to design them using an automated approach. Instead of designing them separately, we unify them into a single tensor-to-tensor computation graph, and evolve its structure starting from basic mathematical functions. Examples of such mathematical functions are addition, multiplication and statistical moments. The use of low-level mathematical functions, in contrast to the use of high-level modules in mainstream NAS, leads to a highly sparse and large search space which can be challenging for search methods. To address the challenge, we develop efficient rejection protocols to quickly filter out candidate layers that do not work well. We also use multi-objective evolution to optimize each layer’s performance across many architectures to prevent overfitting. Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising structures that go beyond existing design patterns. For example, some EvoNorms do not assume that normalization and activation functions must be applied sequentially, nor need to center the feature maps, nor require explicit activation functions. Our experiments show that EvoNorms work well on image classification models including ResNets, MobileNets and EfficientNets but also transfer well to Mask R-CNN with FPN/SpineNet for instance segmentation and to BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers in many cases.

Code: official TensorFlow implementation of EvoNorms on ResNets is available here and unofficial PyTorch implementation is available here.
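As an example of the discovered layers, EvoNorm-S0 computes x * sigmoid(v * x) divided by the grouped standard deviation: activation and normalization fused into one expression, with no mean-centering of the features and no separate activation function. A toy single-group sketch on a flat list of activations, with the learnable scale and shift omitted for brevity:

```python
import math

def evonorm_s0(x, v=1.0, eps=1e-5):
    """EvoNorm-S0 over one group of activations.

    y_i = x_i * sigmoid(v * x_i) / group_std(x):
    a SiLU-style nonlinearity scaled by the group's standard
    deviation, so normalization and activation are inseparable.
    """
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    std = math.sqrt(var + eps)
    return [xi * (1.0 / (1.0 + math.exp(-v * xi))) / std for xi in x]
```

In a real network this is applied per group of channels (as in GroupNorm), with v, plus a scale and shift, learned per channel.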

Rethinking Pre-training and Self-training

Barret Zoph (Google Brain), Golnaz Ghiasi (Google), Tsung-Yi Lin (Google Brain), Yin Cui (Google), Hanxiao Liu (Google Brain), Ekin Dogus Cubuk (Google Brain), Quoc V Le (Google)

Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet pre-training is commonly used to initialize the backbones of object detection and segmentation models. He et al., however, show a surprising result that ImageNet pre-training has limited impact on COCO object detection. Here we investigate self-training as another method to utilize additional data on the same setup and contrast it against ImageNet pre-training. Our study reveals the generality and flexibility of self-training with three additional insights: 1) stronger data augmentation and more labeled data further diminish the value of pre-training, 2) unlike pre-training, self-training is always helpful when using stronger data augmentation, in both low-data and high-data regimes, and 3) in the case that pre-training is helpful, self-training improves upon pre-training. For example, on the COCO object detection dataset, pre-training benefits when we use one fifth of the labeled data, and hurts accuracy when we use all labeled data. Self-training, on the other hand, shows positive improvements from +1.3 to +3.4AP across all dataset sizes. In other words, self-training works well exactly on the same setup that pre-training does not work (using ImageNet to help COCO). On the PASCAL segmentation dataset, which is a much smaller dataset than COCO, though pre-training does help significantly, self-training improves upon the pre-trained model. On COCO object detection, we achieve 54.3AP, an improvement of +1.5AP over the strongest SpineNet model. On PASCAL segmentation, we achieve 90.5 mIOU, an improvement of +1.5% mIOU over the previous state-of-the-art result by DeepLabv3+.

Code: official TensorFlow implementation is available here.
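The self-training recipe contrasted with pre-training here follows the familiar teacher–student loop. A minimal sketch with placeholder callables standing in for the actual training and inference steps (the paper additionally applies strong data augmentation when training the student):

```python
def self_train(teacher_fit, student_fit, predict, labeled, unlabeled):
    """Teacher-student self-training loop.

    1. Train a teacher on the labeled data.
    2. Pseudo-label the unlabeled data with the teacher.
    3. Train a student on labeled + pseudo-labeled data.
    """
    teacher = teacher_fit(labeled)
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]
    return student_fit(labeled + pseudo)
```

Unlike pre-training, the extra data enters through task-specific pseudo-labels, which is why the abstract finds the gains persist (and even grow) under stronger augmentation and larger labeled sets.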

Convolutional Generation of Textured 3D Meshes

Dario Pavllo (ETH Zurich), Graham Spinks (KU Leuven), Thomas Hofmann (ETH Zurich), Marie-Francine Moens (KU Leuven), Aurelien Lucchi (ETH Zurich)

While recent generative models for 2D images achieve impressive visual results, they clearly lack the ability to perform 3D reasoning. This heavily restricts the degree of control over generated objects as well as the possible applications of such models. In this work, we bridge this gap by leveraging recent advances in differentiable rendering. We design a framework that can generate triangle meshes and associated high-resolution texture maps, using only 2D supervision from single-view natural images. A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN. We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text. Finally, we propose an evaluation methodology that assesses the mesh and texture quality separately.

Code: official PyTorch implementation is available here.

The Origins and Prevalence of Texture Bias in Convolutional Neural Networks

Katherine L. Hermann (Stanford University), Ting Chen (Google Research, Toronto), Simon Kornblith (Google Research, Toronto)

Recent work has indicated that, unlike humans, ImageNet-trained CNNs tend to classify images by texture rather than by shape. How pervasive is this bias, and where does it come from? We find that, when trained on datasets of images with conflicting shape and texture, CNNs learn to classify by shape at least as easily as by texture. What factors, then, produce the texture bias in CNNs trained on ImageNet? Different unsupervised training objectives and different architectures have small but significant and largely independent effects on the level of texture bias. However, all objectives and architectures still lead to models that make texture-based classification decisions a majority of the time, even if shape information is decodable from their hidden representations. The effect of data augmentation is much larger. By taking less aggressive random crops at training time and applying simple, naturalistic augmentation (color distortion, noise, and blur), we train models that classify ambiguous images by shape a majority of the time, and outperform baselines on out-of-distribution test sets. Our results indicate that apparent differences in the way humans and ImageNet-trained CNNs process images may arise not primarily from differences in their internal workings, but from differences in the data that they see.
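The augmentations credited with shifting models toward shape are deliberately simple: less aggressive random crops plus naturalistic photometric noise. A toy sketch on a flat list of pixel intensities in [0, 1], as a stand-in for the real color-distortion and blur ops applied to images:

```python
import random

def naturalistic_augment(pixels, crop_frac=0.9, noise_std=0.02):
    """A mild random crop followed by additive Gaussian noise.

    crop_frac close to 1.0 keeps most of the input intact (the
    "less aggressive crop"); noise is clipped back into [0, 1].
    """
    n = len(pixels)
    keep = max(1, int(n * crop_frac))
    start = random.randint(0, n - keep)  # random crop window
    cropped = pixels[start:start + keep]
    return [min(1.0, max(0.0, p + random.gauss(0, noise_std)))
            for p in cropped]
```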

Do Adversarially Robust ImageNet Models Transfer Better?

Hadi Salman (Microsoft Research), Andrew Ilyas (MIT), Logan Engstrom (MIT), Ashish Kapoor (Microsoft Research), Aleksander Madry (MIT)

Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on standard datasets can be efficiently adapted to downstream tasks. Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance. In this work, we identify another such aspect: we find that adversarially robust models, while less accurate, often perform better than their standard-trained counterparts when used for transfer learning. Specifically, we focus on adversarially robust ImageNet classifiers, and show that they yield improved accuracy on a standard suite of downstream classification tasks. Further analysis uncovers more differences between robust and standard models in the context of transfer learning. Our results are consistent with (and in fact, add to) recent hypotheses stating that robustness leads to improved feature representations.

Code and models: official PyTorch implementation is available here and here.

Swapping Autoencoder for Deep Image Manipulation

Taesung Park (UC Berkeley), Jun-Yan Zhu (Adobe, CMU), Oliver Wang (Adobe Research), Jingwan Lu (Adobe Research), Eli Shechtman (Adobe Research, US), Alexei Efros (UC Berkeley), Richard Zhang (Adobe)

Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging. We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation, rather than random sampling. The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image. In particular, we encourage the components to represent structure and texture, by enforcing one component to encode co-occurrent patch statistics across different parts of an image. As our method is trained with an encoder, finding the latent codes for a new input image becomes trivial, rather than cumbersome. As a result, it can be used to manipulate real input images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic. Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.

Code: unofficial PyTorch implementation is available here.
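The manipulation primitive is the swap itself: encode two images into (structure, texture) codes and decode a crossed combination. A minimal sketch with placeholder callables standing in for the trained encoder and generator:

```python
def swap_generate(encode, decode, img_a, img_b):
    """Render image A's layout in image B's texture.

    encode(img) -> (structure_code, texture_code)
    decode(structure_code, texture_code) -> image
    Training enforces that any swapped combination decodes
    to a realistic image.
    """
    struct_a, _tex_a = encode(img_a)
    _struct_b, tex_b = encode(img_b)
    return decode(struct_a, tex_b)
```

Because encoding a new image is a single forward pass, the same two codes also support local edits and latent-code vector arithmetic without any per-image optimization.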

Top Research Papers From 2020

To be prepared for NeurIPS, you should be aware of the major research papers published over the last year in popular areas such as computer vision, NLP, and general machine learning, even if they are not being presented at this particular event.

We’ve shortlisted the top research papers in these areas so you can review them quickly.

Enjoy this article? Sign up for more AI research updates.

We’ll let you know when we release more summary articles like this one.

The post NeurIPS 2020: Key Research Papers in Computer Vision appeared first on TOPBOTS.
