Machine learning (ML) models are now applied across many fields, and many ML systems achieve remarkable accuracy. Yet their answers can be inconsistent, varying with the training dataset and the task at hand, which calls into question whether a model's reasoning or judgment is actually sound. Understanding human intelligence would help in building intelligent machines. But as in physics, principles alone are not enough to predict the behavior of complex systems like brains; substantial computation is needed to simulate human-like intelligence.
Anirudh Goyal and Yoshua Bengio of Mila, University of Montreal, suggest that deep learning can be extended qualitatively rather than simply by adding more data and computing resources. In their new paper, Inductive Biases for Deep Learning of Higher-Level Cognition, they explore how inductive biases can bridge the gap between current deep learning and human cognitive abilities, bringing deep learning closer to human-level AI.
Deep learning (DL) already incorporates several fundamental inductive biases found in humans and other animals, and the team proposes that augmenting these biases can advance the field. In particular, focusing on biases involved in higher-level, sequential conscious processing could move DL beyond its current successes at in-distribution generalization on heavily supervised tasks toward more robust, human-like out-of-distribution generalization and transfer learning.