Infomax
Infomax is an optimization principle for artificial neural networks and other information processing systems. It prescribes that a function mapping a set of input values I to a set of output values O should be chosen or learned so as to maximize the average Shannon mutual information between I and O, subject to a set of specified constraints and/or noise processes. Infomax algorithms are learning algorithms that perform this optimization. The principle was described by Linsker in 1988.[1]
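As a concrete illustration, consider a linear-Gaussian channel y = Wx + n with Gaussian input covariance Σ and isotropic output noise of variance σ²; the mutual information then has the closed form I(x; y) = ½ log det(I + WΣWᵀ/σ²), and infomax amounts to gradient ascent on this quantity under some constraint, such as unit-norm rows of W. The sketch below shows that optimization in NumPy; the dimensions, noise variance, and norm constraint are illustrative assumptions, not anything prescribed by the sources above.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 4, 2                    # illustrative dimensions
sigma2 = 0.1                          # assumed output noise variance
A = rng.normal(size=(d_in, d_in))
Sigma = A @ A.T + np.eye(d_in)        # an arbitrary input covariance

def mutual_info(W):
    """I(x; y) in nats for y = W x + n with Gaussian x and isotropic noise."""
    M = np.eye(d_out) + (W @ Sigma @ W.T) / sigma2
    return 0.5 * np.linalg.slogdet(M)[1]

# Start from random unit-norm rows; the norm constraint keeps the objective
# bounded (otherwise scaling W up would increase the information without limit).
W = rng.normal(size=(d_out, d_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

lr = 0.05
for _ in range(500):
    C = sigma2 * np.eye(d_out) + W @ Sigma @ W.T
    grad = np.linalg.solve(C, W @ Sigma)              # gradient of 0.5 * log det C w.r.t. W
    W += lr * grad
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # project back onto the constraint

print("mutual information after optimization (nats):", mutual_info(W))
```

For Gaussian statistics such as these, the constrained optimum is reached when the rows of W span the leading principal subspace of the input, which is the connection Linsker drew between infomax and PCA-like receptive fields.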
Infomax, in its zero-noise limit, is related to the principle of redundancy reduction proposed for biological sensory processing by Horace Barlow in 1961,[2] and applied quantitatively to retinal processing by Atick and Redlich.[3]
One notable application of infomax is to independent component analysis (ICA), in which independent source signals are recovered by maximizing the joint entropy of nonlinearly transformed output signals. Infomax-based ICA was described by Bell and Sejnowski in 1995.[4]
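Below is a minimal sketch of an infomax ICA update of the kind described by Bell and Sejnowski, written in its natural-gradient form with a logistic nonlinearity; the toy Laplacian sources, mixing matrix, batch size, and learning rate are illustrative choices rather than parameters taken from the original paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two super-Gaussian (Laplacian) sources, linearly mixed.
n_samples = 5000
S = rng.laplace(size=(2, n_samples))          # independent sources
A_mix = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
X = A_mix @ S                                 # observed mixtures
X -= X.mean(axis=1, keepdims=True)            # center the data

# Infomax ICA with a logistic nonlinearity, natural-gradient form.
W = np.eye(2)
lr = 0.01
batch = 200
for epoch in range(50):
    for i in range(0, n_samples, batch):
        Xb = X[:, i:i + batch]
        U = W @ Xb                             # candidate source estimates
        Y = 1.0 / (1.0 + np.exp(-U))           # logistic squashing
        # Natural-gradient update that ascends the joint entropy of Y.
        dW = (batch * np.eye(2) + (1.0 - 2.0 * Y) @ U.T) @ W / batch
        W += lr * dW

print("W @ A_mix (ideally close to a scaled permutation matrix):")
print(W @ A_mix)
```

The logistic nonlinearity implicitly assumes super-Gaussian sources; the later "extended infomax" variant adapts the sign of the nonlinear term so that sub-Gaussian sources can be separated as well.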
References
1. Linsker R (1988). "Self-organization in a perceptual network" (PDF). IEEE Computer. 21 (3): 105–17. doi:10.1109/2.36.
2. Barlow, H. (1961). "Possible principles underlying the transformations of sensory messages". In Rosenblith, W. (ed.). Sensory Communication. Cambridge MA: MIT Press. pp. 217–234.
3. Atick JJ, Redlich AN (1992). "What does the retina know about natural scenes?". Neural Computation. 4 (2): 196–210. doi:10.1162/neco.1992.4.2.196.
4. Bell AJ, Sejnowski TJ (November 1995). "An information-maximization approach to blind separation and blind deconvolution". Neural Computation. 7 (6): 1129–59. doi:10.1162/neco.1995.7.6.1129. PMID 7584893.
- Bell AJ, Sejnowski TJ (December 1997). "The "Independent Components" of Natural Scenes are Edge Filters". Vision Research. 37 (23): 3327–38. doi:10.1016/S0042-6989(97)00121-1. PMC 2882863. PMID 9425547.
- Linsker R (1997). "A local learning rule that enables information maximization for arbitrary input distributions". Neural Computation. 9 (8): 1661–65. doi:10.1162/neco.1997.9.8.1661.
- Stone, J. V. (2004). Independent Component Analysis: A tutorial introduction. Cambridge MA: MIT Press. ISBN 0-262-69315-1.