This year, NeurIPS is hosting two workshops dedicated to self-supervised learning: “Self-Supervised Learning for Speech and Audio Processing” from 6:50 am to 4:25 pm PT (2:50 pm to 12:25 am UTC) on Friday, December 11; and “Self-Supervised Learning — Theory and Practice” from 8:50 am to 6:40 pm PT (4:50 pm to 2:40 am UTC) on Saturday, December 12.
Workshop organizers say the machine learning community is keen to adopt self-supervised approaches for pre-training deep networks, as this makes it possible to leverage the tremendous amount of unlabelled data available on the Internet to train large networks and tackle complicated tasks.
Among the most active SSL research directions is speech and audio processing, particularly automatic speech recognition, speaker identification and speech translation. Challenges in the field include modelling diverse speech and languages and improving audio processing. Moreover, most existing SSL research has been driven by empirical performance, proceeding rapidly but without a strong theoretical foundation. NeurIPS 2020 is offering these workshops to open and encourage discussion of such unexplored territories in SSL research.
LeCun will give a talk at the Self-Supervised Learning — Theory and Practice workshop, which will bring together researchers interested in SSL from various domains, including Google Brain Research Scientists Quoc V. Le and Chelsea Finn. The workshop will explore the theoretical foundations of empirically well-performing SSL approaches and how theoretical insights can further improve SSL's empirical performance.
Finn is also scheduled to give a talk at the Self-Supervised Learning for Speech and Audio Processing workshop, where she will be joined by Dong Yu from Tencent and Mirco Ravanelli from Mila, among other speakers.
Here is a quick read: NeurIPS 2020 | Conference Watch on Self-Supervised Learning