Seminar View

TITLE Information Flows of Diverse Autoencoders
KIAS AUTHORS Jo, Junghyo
JOURNAL ENTROPY, 2021
ABSTRACT Deep learning methods have achieved outstanding performance in various fields. A fundamental question is why they are so effective. Information theory provides a potential answer by interpreting the learning process as information transmission and compression of data. The information flows can be visualized on the information plane of the mutual information among the input, hidden, and output layers. In this study, we examine how the information flows are shaped by the network parameters, such as depth, sparsity, weight constraints, and hidden representations. Here, we adopt autoencoders as models of deep learning, because (i) they have clear guidelines for their information flows, and (ii) they come in various species, such as vanilla, sparse, tied, variational, and label autoencoders. We measured their information flows using Rényi's matrix-based α-order entropy functional. As learning progresses, they show a typical fitting phase where the amounts of input-to-hidden and hidden-to-output mutual information both increase. In the last stage of learning, however, some autoencoders show a simplifying phase, previously called the "compression phase", where input-to-hidden mutual information diminishes. In particular, the sparsity regularization of hidden activities amplifies the simplifying phase. However, tied, variational, and label autoencoders do not have a simplifying phase. Nevertheless, all autoencoders have similar reconstruction errors for training and test data. Thus, the simplifying phase does not seem to be necessary for the generalization of learning.
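The matrix-based Rényi α-order entropy mentioned in the abstract can be estimated directly from a kernel Gram matrix over a batch of samples, without density estimation. The following is a minimal sketch of that estimator and the derived mutual information, assuming a Gaussian kernel and following the common Hadamard-product construction of the joint entropy; the function names, the kernel bandwidth, and α = 1.01 (a value near the Shannon limit α → 1) are illustrative choices, not details taken from the paper.

```python
import numpy as np

def gram_matrix(x, sigma=1.0):
    # Gaussian (RBF) kernel Gram matrix over N samples (rows of x).
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def renyi_entropy(K, alpha=1.01):
    # Matrix-based Renyi alpha-entropy:
    #   S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha ),
    # where A is the trace-normalized Gram matrix.
    A = K / np.trace(K)
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard tiny negatives
    return np.log2(np.sum(lam**alpha)) / (1.0 - alpha)

def mutual_information(Kx, Ky, alpha=1.01):
    # I(X;Y) = S(X) + S(Y) - S(X,Y); the joint is represented by the
    # (trace-normalized) Hadamard product of the two Gram matrices.
    Kxy = Kx * Ky
    return (renyi_entropy(Kx, alpha) + renyi_entropy(Ky, alpha)
            - renyi_entropy(Kxy, alpha))
```

On the information plane, one would compute `mutual_information(K_input, K_hidden)` and `mutual_information(K_hidden, K_output)` at each training epoch and plot the trajectory; a shrinking input-to-hidden value in late training is the "simplifying phase" described above.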