
      Geoffrey E. Hinton

      Canada – 2018
      Short Annotated Bibliography
      1. Ackley, D. H., G. E. Hinton, and T. J. Sejnowski (1985) “A learning algorithm for Boltzmann machines,” Cognitive Science, 9, pp. 147–169.
        An early and highly influential description of the Boltzmann machine, a class of neural networks inspired by statistical physics. This innovation underpinned much of Hinton’s later work (a minimal sketch of the Boltzmann energy function appears after this list).
      2. Rumelhart, D. E., G. E. Hinton, and R. J. Williams (1986) “Learning representations by back-propagating errors,” Nature, 323, pp. 533–536; and Rumelhart, D. E., G. E. Hinton, and R. J. Williams (1986) “Learning internal representations by error propagation,” in Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, MIT Press, Cambridge, MA, pp. 318–362.
        Two descriptions of the new approach to training neural networks, which Hinton and his collaborators termed “back-propagation.” The Nature paper is concise and clearly argued, while the longer paper provides compelling detail. Together they helped to revive interest in connectionist approaches to machine learning (see the back-propagation sketch after this list).
      3. Hinton, G. E., S. Osindero, and Y. Teh (2006) “A Fast Learning Algorithm for Deep Belief Nets,” Neural Computation, 18, pp. 1527–1554.
        Returning to Boltzmann machines, Hinton and his collaborators introduced a new and efficient unsupervised learning algorithm for a restricted subclass of these networks. It demonstrated unexpected gains from introducing pre-trained “hidden” layers of neurons between input and output (see the contrastive-divergence sketch after this list).
      4. Hinton, G. et al. (2012) “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Processing Magazine, 29, pp. 82–97.
        In this paper, Hinton partnered with co-authors from the groups working on speech recognition at Microsoft Research, Google and IBM Research to document the success they were achieving by applying deep learning to phonetic classification. This was the application that moved deep learning from experimental technique to industrial practice.
      5. Krizhevsky, A., I. Sutskever, and G. Hinton (2012) “ImageNet Classification with Deep Convolutional Neural Networks,” Proc. Advances in Neural Information Processing Systems 25, pp. 1090–1098.
        This report described the design of the SuperVision program, which won the 2012 ImageNet classification competition with a spectacular improvement over the performance of existing methods. Following its publication, the designers of computer vision systems shifted rapidly towards deep learning methods (a one-stage convolutional sketch follows this list).
      6. LeCun, Y., Y. Bengio, and G. E. Hinton (2015) “Deep Learning,” Nature, 521, pp. 436–444.
        A recent and accessible summary of the methods that Hinton and his co-winners termed “deep learning,” because of their reliance on neural networks with multiple specialized layers of neurons between input and output nodes. It addressed a surge of interest in their work following the successful demonstration of these methods for object categorization, face identification, and speech recognition.
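
      To make the link to statistical physics in entry 1 concrete, the sketch below (Python with NumPy) computes the energy a Boltzmann machine assigns to a binary state vector. The network size, weights, and state are illustrative values chosen here, not anything taken from the paper.

          import numpy as np

          def boltzmann_energy(s, W, b):
              """Energy of binary state s under symmetric weights W and biases b.

              Lower-energy states are exponentially more probable:
              P(s) is proportional to exp(-E(s)).
              """
              # The factor 1/2 counts each symmetric pair (i, j) once.
              return -0.5 * s @ W @ s - b @ s

          # Illustrative 3-unit network: symmetric weights, zero diagonal.
          W = np.array([[ 0.0,  1.0, -0.5],
                        [ 1.0,  0.0,  0.3],
                        [-0.5,  0.3,  0.0]])
          b = np.array([0.1, -0.2, 0.0])
          s = np.array([1.0, 0.0, 1.0])
          print(boltzmann_energy(s, W, b))  # 0.4 for this configuration

      Learning in a Boltzmann machine adjusts W and b so that low-energy (high-probability) states correspond to the training data.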
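
      Back-propagation (entry 2) is, at its core, the chain rule applied layer by layer, followed by a gradient-descent step. Below is a minimal NumPy sketch training a two-layer sigmoid network on XOR; the layer sizes, learning rate, and step count are arbitrary illustrative choices, not values from the papers.

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          # XOR training set: inputs and targets.
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
          y = np.array([[0], [1], [1], [0]], dtype=float)

          rng = np.random.default_rng(0)
          W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
          W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

          lr = 1.0
          for _ in range(5000):
              # Forward pass.
              h = sigmoid(X @ W1 + b1)
              out = sigmoid(h @ W2 + b2)
              # Backward pass: chain rule, using sigmoid'(z) = s * (1 - s).
              d_out = (out - y) * out * (1 - out)
              d_h = (d_out @ W2.T) * h * (1 - h)
              # Gradient-descent updates.
              W2 -= lr * h.T @ d_out
              b2 -= lr * d_out.sum(axis=0)
              W1 -= lr * X.T @ d_h
              b1 -= lr * d_h.sum(axis=0)

          print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]

      The step the papers established is that the hidden layer’s error signal (d_h above) can be computed from the output layer’s, so the same recipe extends to arbitrarily many layers.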
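
      The fast algorithm of entry 3 rests on training one layer at a time as a restricted Boltzmann machine (RBM), typically with the contrastive-divergence rule. The sketch below shows a single CD-1 update; the layer sizes, learning rate, and data are illustrative placeholders, not details from the paper.

          import numpy as np

          rng = np.random.default_rng(0)

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          def cd1_update(v0, W, a, b, lr=0.1):
              # Up: hidden probabilities and a binary sample, given the data.
              ph0 = sigmoid(v0 @ W + b)
              h0 = (rng.random(ph0.shape) < ph0).astype(float)
              # Down and up again: a one-step reconstruction of the data.
              pv1 = sigmoid(h0 @ W.T + a)
              ph1 = sigmoid(pv1 @ W + b)
              # Move toward the data statistics, away from the model's.
              W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
              a += lr * (v0 - pv1).mean(axis=0)
              b += lr * (ph0 - ph1).mean(axis=0)

          # Illustrative sizes: 6 visible units, 3 hidden units, 8 samples.
          W = 0.01 * rng.normal(size=(6, 3))
          a, b = np.zeros(6), np.zeros(3)
          data = rng.integers(0, 2, size=(8, 6)).astype(float)
          for _ in range(100):
              cd1_update(data, W, a, b)

      Stacking RBMs, each trained on the hidden activities of the one below, yields the pre-trained hidden layers of a deep belief net.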
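
      The network of entry 5 is assembled from repeated convolution, nonlinearity, and pooling stages. The sketch below runs one such stage on a single-channel image; it illustrates the building blocks only, not the actual SuperVision architecture.

          import numpy as np

          def conv2d(img, kernel):
              # "Valid" 2-D convolution (cross-correlation, as in most
              # deep learning libraries) of one channel with one kernel.
              kh, kw = kernel.shape
              H, W = img.shape
              out = np.empty((H - kh + 1, W - kw + 1))
              for i in range(out.shape[0]):
                  for j in range(out.shape[1]):
                      out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
              return out

          def relu(x):
              return np.maximum(x, 0.0)

          def max_pool(x, size=2):
              H, W = x.shape
              H, W = H - H % size, W - W % size  # drop ragged edges
              x = x[:H, :W].reshape(H // size, size, W // size, size)
              return x.max(axis=(1, 3))

          # One convolution -> ReLU -> pooling stage on a random 8x8 image.
          rng = np.random.default_rng(0)
          feature_map = max_pool(relu(conv2d(rng.normal(size=(8, 8)),
                                             rng.normal(size=(3, 3)))))
          print(feature_map.shape)  # (3, 3)

      In a full network, many such kernels are learned by back-propagation and the stages are stacked, with fully connected layers at the top producing the class scores.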
