[R] ICLR 2021 | UT Austin Training-Free Framework Performs High-Quality NAS on ImageNet in Four GPU Hours

Researchers from the University of Texas at Austin have proposed TE-NAS, a framework for training-free neural architecture search. The method selects strong architectures without training any candidate network, sharply reducing the cost of NAS. It is introduced in the ICLR 2021 paper Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective.
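To give a flavour of what "training-free" scoring means here, the sketch below computes one indicator of the kind TE-NAS builds on: the condition number of the empirical Neural Tangent Kernel (NTK) of a randomly initialized network, which can be used to rank candidate architectures without any training. This is a minimal illustration assuming PyTorch; the function name `ntk_condition_number` and the toy MLP "candidates" are hypothetical and not the authors' released code.

```python
# Illustrative sketch (not the TE-NAS implementation): score randomly
# initialized candidate networks by the condition number of their
# empirical NTK on a small batch of inputs. A lower condition number
# is commonly read as better trainability at initialization.

import torch
import torch.nn as nn


def ntk_condition_number(model: nn.Module, inputs: torch.Tensor) -> float:
    """Condition number of the empirical NTK at initialization."""
    grads = []
    outputs = model(inputs)          # (batch, num_outputs)
    logits = outputs.sum(dim=1)      # one scalar per example
    for i in range(logits.shape[0]):
        # Gradient of each example's output w.r.t. all parameters
        grad = torch.autograd.grad(logits[i], model.parameters(), retain_graph=True)
        grads.append(torch.cat([g.reshape(-1) for g in grad]))
    jacobian = torch.stack(grads)    # (batch, num_params)
    ntk = jacobian @ jacobian.t()    # empirical NTK on this batch
    eigvals = torch.linalg.eigvalsh(ntk)   # ascending order
    return (eigvals[-1] / eigvals[0]).item()


if __name__ == "__main__":
    # Two toy "candidate architectures"; in actual NAS these would be cells
    # sampled from a search space rather than hand-written MLPs.
    candidates = {
        "narrow": nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10)),
        "wide": nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10)),
    }
    x = torch.randn(8, 16)
    for name, net in candidates.items():
        print(name, ntk_condition_number(net, x))
```

Because the score is evaluated on an untrained network, ranking a large pool of candidates costs only forward and backward passes at initialization, which is what lets the full search finish within a few GPU hours.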


