Shrinking massive neural networks used to model language
Deep learning neural networks can be massive, demanding major computing power. In a test of the ‘lottery ticket hypothesis,’ researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
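The "winning ticket" subnetworks referenced by the lottery ticket hypothesis are typically uncovered by magnitude pruning: removing the weights with the smallest absolute values and keeping the rest. A minimal one-shot sketch in NumPy (the function name, matrix size, and sparsity level are illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))  # stand-in for one layer's weight matrix

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights.

    Returns the pruned weights and the binary mask that defines
    the surviving subnetwork.
    """
    k = int(round(sparsity * w.size))  # number of weights to remove
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask, mask

pruned, mask = magnitude_prune(weights, 0.7)
print(f"kept {mask.sum()} of {mask.size} weights")
```

In the actual lottery-ticket procedure this step is applied iteratively, with the surviving weights rewound to their early-training values and retrained between rounds; the sketch above shows only the pruning criterion itself.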
