Accepted Papers

Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks

Surgan Jandial: IIT Hyderabad; Ayush Chopra: Media and Data Science Research Lab, Adobe; Mausoom Sarkar: Media and Data Science Research Lab, Adobe; Piyush Gupta: Media and Data Science Research Lab, Adobe; Balaji Krishnamurthy: Media and Data Science Research Lab, Adobe; Vineeth Balasubramanian: IIT Hyderabad



Deep neural networks (DNNs) are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce a new retrospective loss to improve the training of deep neural network models by utilizing the prior experience available in past model states during training. Minimizing the retrospective loss, along with the task-specific loss, pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. Although the idea is simple, we analyze the method and conduct comprehensive sets of experiments across domains (images, speech, text, and graphs) to show that the proposed loss results in improved performance across input domains, tasks, and architectures.
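To make the push/pull mechanism in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes the ground-truth target serves as a proxy for the optimal model's output, that the past state is a periodically refreshed snapshot of the model, and that the L1 distance, the weight `kappa`, and the refresh interval are illustrative hyperparameter choices.

```python
import copy
import torch
import torch.nn.functional as F

def retrospective_loss(current_out, past_out, target, kappa=2.0):
    """Illustrative retrospective term (sketch, not the paper's exact loss).

    Pulls the current prediction toward the ground-truth target (a proxy
    for the optimal model's output) and pushes it away from the prediction
    of a frozen past snapshot of the same model.
    """
    pull = F.l1_loss(current_out, target)             # toward the optimum proxy
    push = F.l1_loss(current_out, past_out.detach())  # away from the past state
    return kappa * pull - push                        # kappa: assumed weighting

# Hypothetical usage on a toy regression task.
model = torch.nn.Linear(16, 1)
past_model = copy.deepcopy(model)                     # frozen earlier parameter state
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(100):
    x = torch.randn(32, 16)
    y = torch.randn(32, 1)                            # synthetic data
    out = model(x)
    with torch.no_grad():
        past_out = past_model(x)                      # no gradients into the snapshot
    # Retrospective term is added alongside the ordinary task-specific loss.
    loss = F.mse_loss(out, y) + retrospective_loss(out, past_out, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (step + 1) % 20 == 0:                          # assumed refresh schedule
        past_model.load_state_dict(model.state_dict())
```

The snapshot refresh interval trades off how "retrospective" the repulsive term is: refreshing too often makes the past state nearly identical to the current one, while never refreshing anchors the push to the initialization.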
