[R] NeurIPS 2020 | Teaching Transformers New Tricks

Transformers are a class of attention-based neural architectures that underpin advanced pretrained language models such as Google's BERT and OpenAI's GPT series, and since their debut in 2017 they have produced numerous breakthroughs in speech recognition and other natural language processing (NLP) tasks. Transformers excel on problems involving sequential data, and have more recently been extended to reinforcement learning, computer vision and symbolic mathematics.
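At the core of every Transformer layer is the scaled dot-product attention mechanism of Vaswani et al. (2017), in which each token's representation is updated as a weighted average of value vectors, with weights derived from query-key similarity. The snippet below is a minimal NumPy sketch of that mechanism for illustration only; it is not drawn from any of the accepted papers, and all names and sizes are hypothetical.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v).
    # Scale pairwise query-key similarities by sqrt(d_k) to keep the
    # softmax well-behaved, then blend value vectors by those weights.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Toy self-attention over 4 tokens with 8-dim embeddings (illustrative).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

In a full Transformer, Q, K and V come from learned linear projections of the input, and many such attention heads run in parallel.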

This year, 22 Transformer-related research papers were accepted at NeurIPS, one of the world's most prestigious machine learning conferences. Synced has selected ten of these works to showcase the latest Transformer trends, from new applications of the architecture to innovations in technique, architectural design changes and more.

Here is a quick read: NeurIPS 2020 | Teaching Transformers New Tricks

submitted by /u/Yuqing7
