On the Transferability of Representations in Neural Networks Between Datasets and Tasks

Fayek, H., Cavedon, L. and Wu, H. 2018, 'On the Transferability of Representations in Neural Networks Between Datasets and Tasks', in Continual Learning Workshop, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, 2-8 December 2018, pp. 1-7.


Document type: Conference Paper
Collection: Conference Papers

Title: On the Transferability of Representations in Neural Networks Between Datasets and Tasks
Author(s): Fayek, H.; Cavedon, L.; Wu, H.
Year: 2018
Conference name: Continual Learning Workshop, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
Conference location: Montreal, Canada
Conference dates: 2-8 December 2018
Proceedings title: Continual Learning Workshop, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
Publisher: Curran Associates
Place of publication: Montreal, Canada
Start page: 1
End page: 7
Total pages: 7
Abstract: Deep networks, composed of multiple layers of hierarchical distributed representations, tend to learn low-level features in initial layers and transition to high-level features towards final layers. Paradigms such as transfer learning, multi-task learning, and continual learning leverage this notion of generic hierarchical distributed representations to share knowledge across datasets and tasks. Herein, we study the layer-wise transferability of representations in deep networks across a few datasets and tasks and note some interesting empirical observations.
Subjects: Artificial Intelligence and Image Processing not elsewhere classified; Knowledge Representation and Machine Learning; Neurocognitive Patterns and Neural Networks
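
The layer-wise transfer described in the abstract is typically realised by copying the first k layers of a network trained on a source task and training a fresh output head on the target task. Below is a minimal PyTorch sketch of that idea; it is not code from the paper, and the architecture, layer widths, choice of k, and class counts are all hypothetical.

    import copy
    import torch
    import torch.nn as nn

    # Hypothetical source network: early layers capture generic low-level
    # features, the final layer is specific to the source task (10 classes).
    source_net = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    def transfer_first_k(source, k, target_classes):
        """Deep-copy the first k modules of `source`, freeze them, and
        attach a freshly initialised head for the target task."""
        transferred = copy.deepcopy(source[:k])  # slicing a Sequential yields a Sequential
        for p in transferred.parameters():
            p.requires_grad = False  # transferred representations stay fixed
        # 128 matches the output width of the transferred stack for k=4.
        head = nn.Linear(128, target_classes)  # new task-specific output layer
        return nn.Sequential(transferred, head)

    # Reuse the first four modules (two Linear+ReLU blocks) for a
    # hypothetical 5-class target task; only the new head is trainable.
    target_net = transfer_first_k(source_net, k=4, target_classes=5)
    print(target_net(torch.randn(1, 784)).shape)  # torch.Size([1, 5])

Varying k probes how transferable the representations are as they shift from generic low-level features in early layers to task-specific features in later layers, which is the axis of variation the paper studies empirically.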