МАШИННЫЙ ИНТЕЛЛЕКТ. ОЧЕРКИ ПО ТЕОРИИ МАШИННОГО ОБУЧЕНИЯ И ИСКУССТВЕННОГО ИНТЕЛЛЕКТА
Аннотация и ключевые слова
Аннотация (русский):
Эта книга о природе разума, человеческого и искусственного, с позиций теории машинного обучения. В ее фокусе – проблема создания сильного искусственного интеллекта. Автор показывает, как можно использовать принципы работы нашего мозга для создания искусственной психики роботов. Как впишется в нашу жизнь этот все более сильный искусственный интеллект? Что ожидает нас в ближайшие 10-15 лет? Чем надо заниматься тому, кто хочет принять участие в новой научной революции – создании науки о разуме?

Ключевые слова:
машинное обучение, искусственный интеллект
Текст
Текст произведения (PDF): Читать Скачать
Список литературы

1. A. А. Ежов and С.А. Шумский. Нейрокомпьютинг и его приложения в экономике и бизнесе. МИФИ, 1998. ISBN 5-722-0252-Х.

2. Д.А. Ковалевич and П.Г. Щедровицкий. Конвейер инноваций. 2015. https://asi.ru/conveyor-of-iimovations/.

3. B. Завадовская and К. Карпов. Рейтинг компаний по производительности труда сотрудников. 2017. https://bcs-express.ru/novosti-i-analitika/reiting-kompanii-po-proizvoditel-nosti-truda-sotrudnikov.

4. Александр Марков and Михаил Марков. Многоуровневый отбор и проблема роста мозга у плейстоценовых homo. Опыт компьютерного моделирования сопряженной эволюции генов и мемов, 2019. URL https://www.youtube.com/watch? v=AERQrIyk7og&t=5192s.

5. Владимир Иванович Вернадский. Труды, по всеобщей истории науки. Рипол Классик, 1988.

6. Лев Семенович Выготский. Мышление и речь. Directmedia, 2014.

7. И. Р. Агамирзян. Технологическое лидерство: воспользоваться шансом. In Вызов 2035, pages 8-15. Олимп-Бизнес, 2016.

8. Лиза Фельдман Барретт. Как рождаются эмоции. Революция в понимании мозга и управлении эмоциями. Манн, Иванов и Фабер, 2018.

9. Томас Кун. Структура научных революций. М.: Прогресс, 1977.

10. С.П. Капица. Общая теория роста человечества: сколько людей жило, живет, и будет жить на Земле. М.: Наука, 1999.

11. Станислав Лем. Golem, XIV. Библиотек XXI века. ACT, 2002.

12. И.М. Ножов. Морфологическая и синтаксическая обработка текста (модели и программы). Канд. диссертация,, 2003.

13. Марк Бейкер. Атомы языка: Грамматика в темном поле сознания. ЛКИ, 2008. ISBN 9785382004303.

14. Саймон Хайкин. Нейронные сет,и: полный курс, 2-е издание. Издательский дом Вильяме, 2008.

15. Карлота Перес. Технологические революции и финансовый капитал. Дело, 2011.

16. Крис Андерсон. Длинный хвост. Эффективная модель бизнеса, в Интернете. МИФ, 2012.

17. С.А. Шумский. Язык и мозг: как человек понимает речь. In Сборник научных т,рудое XV Всероссийской научной конференции Нейроинформатика-2013. Лекции по нейроинфор- матике, pages 72-105, 2013.

18. К.В. Анохин. Когнитом: в поисках общей теории когнитивной науки. In Шестая международная конференция, по когнитивной науке: тез. докл. Калининград, pages 26-28, 2014.

19. С.А. Шумский. Реинжиниринг архитектуры мозга: роль и взаимодействие основных подсистем. In Сборник научных трудов XVII Всероссийской научной конференции Нейроинформатика-2015. Лекции по пейроипформатике, pages 13-45, 2015.

20. Евгений Кузнецов. Россия и мир технологического диктата: 3 сценария будущего, 2016. URL https://www.youtube.com/ watch?v=9GtG_kczrFE.

21. Михаил Никитин. Происхождение жизни. От туманности до клетки. Альпина нон-фикшн, 2016.

22. П.Г. Щедровицкий. История промышленных революций и вызовы III промышленной революции. 2016. https://youtu. be/_cpWkGwZMSI.

23. Джордж Лакофф. Женщины, огонь и опасные вещи. Что категории языка говорят нам о мышлении. Litres, 2017.

24. Александр Марков. Эволюция разума и сопротивление науке, 2017. URL https://www.youtube.com/watch?v= qTOyKOryWQY.

25. К.В. Анохин. Когнитом - гиперсетевая модель мозга, 2018а. URL https: //youtu.Ье/tDalzRYEhss.

26. К.В. Анохин. Мозг, как сеть, и разум, как сеть - вызовы математике, 2018b. URL https://youtu.be/tDalzRYEhss.

27. Светлана Бурлак. Происхождение языка: Факты, исследования, гипотезы. Альпина Паблишер, 2018.

28. С.В. Карелов. Впереди ИИ-национализм и ИИ-национализация. 2018. http://russiancouncil.ru/activity/digest/longreads/ vperedi-ii-natsionalizm-i-ii-natsionalizatsiya/.

29. Эрвин Шредингер. Что такое жизнь? Litres, 2018.

30. С.А. Шумский. Глубокое структурное обучение: Новый взгляд на обучение с подкреплением. In Сборник научных т,рудое XX Всероссийской научной конференции Нейроинформатика-2018. Лекции по пейроипформатике, pages 11-43, 2018.

31. С.А. Терехов. Тензорные декомпозиции в статистическом принятии решений. In Сборник научных трудов XX Всероссийской научной конференции Нейроинформатика-2018. Лекции по нейроинформатике, pages 53-58, 2018. URL http://raai.org/library/books/Konf_II_problem-2018/ bookl_intellect.pdf.

32. ДХ Медоуз, И Рандерс, and ДЛ Медоуз. Пределы роста: 30 лет спустя пер. с англ. ЕС Оганесян; под ред. НП Тарасовой. 2012.

33. Ян Гудфеллоу, Бенджио Иошуа, and Аарон Курвилль. Глубокое обучение. Litres, 2018.

34. А.В. Коротаев, С.Ю. Малков, and Л.Е. Гринина. Анализ и моделирование глобальной динамики. Ленанд, 2018.

35. С. Николенко, А. Кадурин, and Е. Архангельская. Глубокое обучение. Погружение в мир нейронных сетей. Питер, 2018. ISBN 978-5-496-02536-2.

36. Alphastar: Mastering the real-time strategy game starcraft ii, 2018. URL https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/.

37. Michal Aharon, Michael Elad, and Alfred Bruckstein. rmk-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing, 54(11): 4311-4322, 2006.

38. R Aharonov and N Slonim. Watch ibm's ai system debate a human champion live at think 2019. IBM Research blog, 2019. URL https://www.ibm.com/blogs/research/2019/02/ ai-debate-think-2019/.

39. Dario Amodei, Sundaram Ananthanaravanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In International Conference on Machine Learning, pages 173-182, 2016.

40. Relja Arandjelovic and Andrew Zisserman. Look, listen and learn. arXiv preprint arXiv:1705.08168, 2017.

41. Martin Arjovskv, Soumith Chintala, and Leon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.

42. F Gregory Ashbv, Shawn W Ell, Vivian V Valentin, and Michael В Casale. Frost: a distributed neurocomputational model of working memory maintenance. Journal of cognitive neuroscience, 17( 11): 172S 1713. 2005.

43. Bernard J Baars, Stan Franklin, and Thomas Zoiga Rams0v. Global workspace dynamics: cortical "binding and propagation" enables conscious contents. Frontiers in psychology, 4:200, 2013.

44. Joscha Bach. The cortical conductor theory: Towards addressing consciousness in ai models. In Biologically Inspired Cognitive Architectures Meeting, pages 16-26. Springer, 2018.

45. Dzmitrv Bahdanau, Kvunghvun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv: Ц09.04-73, 2014.

46. Lisa Feldman Barrett. How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt, 2017.

47. Lisa Feldman Barrett and W Kyle Simmons. Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16(7): 419, 2015.

48. Andre М Bastos, W Martin Usrev, Rick A Adams, George R Mangun, Pascal Fries, and Karl J Friston. Canonical microcircuits for predictive coding. Neuron, 76(4):695-711, 2012.

49. Francesco P Battaglia, Karim Benchenane, Anton Sirota, Cvriel MA Pennartz, and Sidney I Wiener. The hippocampus: hub of brain network communication for memory. Trends in cognitive sciences, 15(7):310-318, 2011.

50. James A Bednar and Stuart P WTilson. Cortical maps. The Neuroscientist, 22(6):604-617, 2016.

51. Eric D Beinhocker. The origin of wealth: Evolution, complexity, and the radical remaking of economics. Harvard Business Press, 2006.

52. Timothy С Bell, John G Clearv, and Ian H WTitten. Text compression, volume 348. Prentice Hall Englewood Cliffs, 1990.

53. Yoshua Bengio. Deep learning of representations: Looking forward. In International Conference on Statistical Language and Speech Processing, pages 1-37. Springer, 2013.

54. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Advances in neural information processing system,s, pages 153-160, 2007.

55. Yoshua Bengio, Ian J Goodfellow, and Aaron Courville. Deep learning. Nature, 521:436-444, 2015.

56. Yoshua Bengio et al. Learning deep architectures for ai. Foundations and trends® in Machine Learning, 2(1):1-127, 2009.

57. Charles H Bennett. The thermodynamics of computation^a review. International Journal of Theoretical Physics, 21(12): 905-940, 1982.

58. David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.

59. Christopher M Bishop. Pattern recognition and machine learning. springer, 2006.

60. David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3 (Jan):993-1022, 2003.

61. Matthew Michael Botvinick. Hierarchical reinforcement learning and decision making. Current opinion in neurobiology, 22(6): 956-962, 2012.

62. Clemens Boucsein, Martin Nawrot, Philipp Schnepel, and Ad Aertsen. Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround. Frontiers in neuroscience, 5:32, 2011.

63. Alan J Bray and David S Dean. Statistics of critical points of gaussian fields on large-dimensional spaces. Physical review letters, 98(15):150201, 2007.

64. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Delia Pietra, and Jenifer С Lai. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-479, 1992a.

65. Peter F Brown, Vincent J Delia Pietra, Robert L Mercer, Stephen A Delia Pietra, and Jennifer С Lai. An estimate of an upper bound for the entropy of english. Computational Linguistics, 18(l):31-40, 1992b.

66. J Bughin, J Seong, J Manvika, M Chui, and R Joshi. Notes from the ai frontier: Modeling the impact of ai on the world economy. McKinsey Global Institute, 2018.

67. Jacques Bughin, E Hazan, S Ramaswamv, M Chui, T Alias, P Dahlstrom, N Henke, and M Trench. Artificial intelligence- the next digital frontier. McKinsey Global Institute, 2017. URL https://www.mckinsey.de/files/170620_studie_ai.pdf.

68. Gvorgv Buzsaki. Rhythms of the Brain. Oxford University Press, 2006.

69. Gvorgv Buzsaki and Edvard I Moser. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature neuroscience, 16(2):130, 2013.

70. Bradley P Carlin and Thomas A Louis. Bayes and empirical Bayes methods for data analysis. Chapman and Hall/CRC, 2010.

71. Chung-Cheng Chiu, Тага N Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J Weiss, Kanishka Rao, Ekaterina Gonina, et al. State-of- the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4774-4778. IEEE, 2018.

72. Noam Chomsky. Knowledge of language: Its nature, origin, and use. Greenwood Publishing Group, 1986.

73. Noam Chomsky. The minimalist program. MIT press, 2014.

74. Dan Cire§an, Alessandro Giusti, Luca M Gambardella, and Jiirgen Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In Advances in neural information processing system,s, pages 2843-2851, 2012a.

75. Dan Cire§an, Ueli Meier, Jonathan Masci, and Jiirgen Schmidhuber. Multi-column deep neural network for traffic sign classification. Neural Networks, 32:333-338, 2012b.

76. Dan С Cire§an, Alessandro Giusti, Luca M Gambardella, and Jiirgen Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In International Conference on Medical Image Computing and Computer- assisted Intervention, pages 411-418. Springer, 2013.

77. Andy Clark. Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press, 2015.

78. Michael W Cole, Jeremy R Reynolds, Jonathan D Power, Grega Repovs, Alan Anticevic, and Todd S Braver. Multi-task connectivity reveals flexible hubs for adaptive task control. Nature neuroscience, 16(9):1348, 2013.

79. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Korav Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493-2537, 2011.

80. Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denover, and Herve Jegou. Word translation without parallel data. arXiv preprint arXiv:1710.04087, 2017.

81. Gavin E Crooks. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E, 60(3):2721, 1999.

82. Egidio D'Angelo. Neural circuits of the cerebellum: hypothesis for function. Journal of integrative neuroscience, 10(03):317-352, 2011.

83. Egidio D'Angelo and CAM WTheeler-Kingshott. Modelling the brain: elementary components to explain ensemble functions. Riv. del nuovo Cim, 40:297-333, 2017.

84. Terrence WT Deacon. The symbolic species: The co-evolution of language and the brain. WW Norton & Company, 1998.

85. Jeffrey Dean, Greg Corrado, Raj at Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in neural information processing system,s, pages 1223-1231, 2012.

86. Paul Dean, John Porrill, Carl-Fredrik Ekerot, and Henrik Jorntell. The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nature Reviews Neuroscience, 11 (1):30, 2010.

87. Thomas Dean. A computational model of the cerebral cortex. In Proceedings of the National Conference on Artificial Intelligence, volume 20, page 938. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.

88. Stanislas Dehaene. Consciousness and the brain: Deciphering how the brain codes our thoughts. Penguin, 2014.

89. Stanislas Dehaene, Hakwan Lau, and Sid Kouider. What is consciousness, and could machines have it? Science, 358(6362): 486-492, 2017.

90. Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1 2): 1 1 12. 2013.

91. Dori Derdikman, Rina Hildesheim, Ehud Ahissar, Amos Arieli, and Amiram Grinvald. Imaging spatiotemporal dynamics of surround inhibition in the barrels somatosensory cortex. Journal of Neuroscience, 23(8):3100-3105, 2003.

92. Jeff Desjardins. The 8 major forces shaping the future of the global economy. 2018. URL https://worldview.stratfor.com/article/ 8-major-forces-shaping-future-global-economy.

93. Alexev Dosovitskiv and Vladlen Koltun. Learning to act by predicting the future. arXiv preprint arXiv:1611.01779, 2016.

94. Rodney J Douglas and Kevan AC Martin. Recurrent neuronal circuits in the neocortex. Current biology, 17(13) :R496-R500, 2007.

95. Kenji Dova. Complementary roles of basal ganglia and cerebellum in learning and motor control. Current opinion in neurobiology, 10(6):732-739, 2000.

96. Robin IM Dunbar. Neocortex size as a constraint on group size in primates. Journal of human evolution, 22(6):469-493, 1992.

97. David Eagleman. Incognito: The Secret Lives of the Brain. New York City: Pantheon, 2011.

98. Chris Eliasmith, Terrence С Stewart, Xuan Choo, Trevor Bekolav, Travis DeWolf, Yichuan Tang, and Daniel Rasmussen. A large- scale model of the functioning brain, science, 338(6111):1202- 1205, 2012.

99. Daniel Everett. How language began: the story of humanity's greatest invention. Profile Books, 2017.

100. Aldo Faisal, Dietrich Stout, Jan Apel, and Bruce Bradley. The manipulative complexity of lower paleolithic stone toolmaking. PloS one, 5(ll):el3718, 2010.

101. Bo Fan, Lijuan Wang, Frank К Soong, and Lei Xie. Photo-real talking head with deep bidirectional lstm. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4884-4888. IEEE, 2015.

102. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004, 2015.

103. Michael J Frank and David Badre. Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: computational analysis. Cerebral cortex, 22(3):509-526, 2011.

104. Michael J Frank, Bryan Loughrv, and Randall С O'Reilly. Interactions between frontal cortex and basal ganglia in working memory: a computational model. Cognitive, Affective, & Behavioral Neuroscience, 1(2):137-160, 2001.

105. Stan Franklin, Tamas Madl, Sidney D'mello, and Javier Snaider. Lida: A systems-level architecture for cognition, emotion, and learning. IEEE Transactions on Autonomous Mental Development, 6(1):19-41, 2014.

106. Karl Friston. A theory of cortical responses. Philosophical transactions of the Royal Society B: Biological sciences, 360 (1456):815-836, 2005.

107. Karl Friston, Francesco Rigoli, Dimitri Ognibene, Christoph Mathvs, Thomas Fitzgerald, and Giovanni Pezzulo. Active inference and epistemic value. Cognitive neuroscience, 6(4): 187-214, 2015.

108. Kunihiko Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position- neocognitron. Electron. & Commun. Japan, 62(10) :11-18, 1979.

109. Joaquin M Fuster. Cortex and mind: Unifying cognition. Oxford university press, 2003.

110. Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, and Dmitry Vetrov. Ultimate tensorization: compressing convolutional and fc layers alike. arXiv preprint arXiv:1611.032Ц, 2016.

111. Leon A Gatvs, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.

112. Sergey Gavrilets and Aaron Vose. The dynamics of machiavellian intelligence. Proceedings of the National Academy of Sciences, 103(45):16823-16828, 2006.

113. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.

114. Dileep George and Jeff Hawkins. Towards a mathematical theory of cortical micro-circuits. PLoS computational biology, 5(10): el000532, 2009.

115. Avniel Singh Ghuman, Nicolas M Brunet, Yuanning Li, Roma О Koneckv, John A Pvles, Shawn A Walls, Vincent Destefino, Wei Wang, and R Mark Richardson. Dynamic encoding of face information in the human fusiform gyrus. Nature communications, 5:5672, 2014.

116. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680, 2014.

117. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook. org.

118. Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

119. Alex Graves, Santiago Fernandez, Faustino Gomez, and Jiirgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369-376. ACM, 2006.

120. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645-6649. IEEE, 2013.

121. Kevin Gurnev, Tony J Prescott, and Peter Redgrave. A computational model of action selection in the basal ganglia. i. a new functional anatomy. Biological cybernetics, 84(6) :401- 410, 2001.

122. Habr. Нейросеть Яндекса стала соавтором пьесы для альта с оркестром, 2019. URL https://habr.com/ru/post/441286/.

123. Patric Hagmann, Leila Cammoun, Xavier Gigandet, Reto Meuli, Christopher J Honey, Van J Wedeen, and Olaf Sporns. Mapping the structural core of human cerebral cortex. PLoS biology, 6 (7):el59, 2008.

124. Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00Ц9, 2015a.

125. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing system,s, pages 1135-1143, 2015b.

126. Marc D Hauser, Noam Chomsky, and W Tecumseh Fitch. The faculty of language: what is it, who has it, and how did it evolve? science, 298(5598) :1569-1579, 2002.

127. Jeff Hawkins and Subutai Ahmad. WThv neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in neural circuits, 10:23, 2016.

128. Jeff Hawkins, Dileep George, and Jamie Niemasik. Sequence memory for prediction, inference and behaviour. Philosophical Transactions of the Royal Society B: Biological, Sciences, 364 (1521):1203-1209, 2009.

129. Jeff Hawkins, Subutai Ahmad, and Yuwei Cui. A theory of how columns in the neocortex enable learning the structure of the world. Frontiers in neural circuits, 11:81, 2017.

130. Kaiming Не, Xiangvu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034, 2015.

131. Kaiming He, Xiangvu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.

132. Donald Olding Hebb. The organization of behavior: A neuropsychological theory. Psychology Press, 2005.

133. Suzana Herculano-Houzel. The human advantage: a new understanding of how our brain became remarkable. MIT Press, 2016.

134. M Hilbert and P Lopez. The world's technological capacity to store, communicate, and compute information. Science (New York, NY), 332(6025) :60-65, 2011.

135. G Hinton, N Srivastava, and К Swerskv. Rmsprop: Divide the gradient by a running average of its recent magnitude. Neural networks for machine learning, Coursera lecture 6e, 2012a.

136. Geoffrey E Hinton, Simon Osindero, and Yee-Whve Teh. A fast learning algorithm for deep belief nets. Neural computation, 18 (7): 1527 155 i. 2006.

137. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevskv, Ilva Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012b.

138. Sepp Hochreiter and Jiirgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

139. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jiirgen Schmidhuber, et al. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.

140. Thomas Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine learning, 42(1):177-196, 2001.

141. Fu Jie Huang, Y-Lan Boureau, Yann LeCun, et al. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. С VPR'07. IEEE Conference on, pages 1-8. IEEE, 2007.

142. Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

143. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, pages 646-661. Springer, 2016b.

144. Alexander G Huth, Wendy A de Heer, Thomas L Griffiths, Frederic E Theunissen, and Jack L Gallant. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600) :453-458, 2016.

145. IFPMA. The pharmaceutical industry and global health. facts and figures, 2017. URL https: //www.ifpma.org/wp-content/uploads/2017/02/ IFPMA-Facts-And-Figures-2017.pdf.

146. Sergey Ioffe and Christian Szegedv. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456, 2015.

147. Makoto Ito and Kenji Dova. Multiple representations and algorithms for reinforcement learning in the cortico-basal ganglia circuit. Current opinion in neurobiology, 21(3):368- 373, 2011.

148. Eugene M Izhikevich and Gerald M Edelman. Large-scale model of mammalian thalamocortical systems. Proceedings of the national academy of sciences, 105(9):3593-3598, 2008.

149. Ray Jackendoff. Language, consciousness, culture: Essays on mental structure, volume 2007. MIT Press, 2007.

150. Max Jaderberg, Volodymvr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Korav Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.

151. Rafal Jozefowicz, Wojciech Zaremba, and Ilva Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342-2350, 2015.

152. Dan Jurafskv and James H Martin. Speech and language processing, volume 3. Pearson London, 2014.

153. Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.

154. Pentti Kanerva. Hvperdimensional computing: An introduction to computing in distributed representation with high- dimensional random vectors. Cognitive Commutation, 1(2) :139- 159, 2009.

155. M Kawato. Cerebellum: models. Encyclopedia of neuroscience, 2007.

156. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelvanskiy, and Ping Так Peter Tang. On large- batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.

157. Raymond Р Kesner and Edmund T Rolls. A computational theory of hippocampal function, and tests of the theory: new developments. Neuroscience & Biobehavioral Reviews, 48:92147, 2015.

158. Mehdi Khamassi and Mark D Humphries. Integrating cortico- limbic-basal ganglia architectures for learning model-based and model-free navigation strategies. Frontiers in behavioral neuroscience, 6:79, 2012.

159. Hvunjik Kim, Andriv Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinvals, and Yee Whve Teh. Attentive neural processes. arXiv preprint arXiv:1901.05761, 2019.

160. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:Ц12.6980, 2014.

161. Dan Klein and Christopher D Manning. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the Annual Meeting on Association for Computational Linguistics, page 478. Association for Computational Linguistics, 2004.

162. Teuvo Kohonen. Self-organized formation of topologicallv correct feature maps. Biological cybernetics, 43(l):59-69, 1982.

163. Teuvo Kohonen. Self-Organizing Maps. Springer-Verlag New York, 2001.

164. Augustine Kong, Michael L Frigge, Gudmar Thorleifsson, Hreinn Stefansson, Alexander I Young, Florian Zink, Gudrun A Jonsdottir, Avsu Okbav, Patrick Sulem, Gisli Masson, et al. Selection against variants in the genome associated with educational attainment. Proceedings of the National Academy of Sciences, 11 !(5):K727 K732. 2017.

165. Jonathan Koomev and Samuel Naffziger. Moore's law might be slowing down, but not energy efficiency. IEEE Spectrum, 2015.

166. Eugene V Koonin. The logic of chance: the nature and origin of biological evolution. FT press, 2011.

167. Leonard F Koziol and Deborah Ely Budding. Subcortical structures and cognition: Implications for neuropsychological assessment. Springer Science k, Business Media, 2009.

168. Leonard F Koziol, Lauren A Barker, Arthur W Joyce, and Skip Hrin. Structure and function of large-scale brain systems. Applied Neuropsychology: Child, 3(4):236-244, 2014a.

169. Leonard F Koziol, Deborah Budding, Nancy Andreasen, Stefano D'Arrigo, Sara Bulgheroni, Hiroshi Imamizu, Masao Ito, Mario Manto, Cherie Marvel, Krvstal Parker, et al. Consensus paper: the cerebellum's role in movement and cognition. The Cerebellum, 13(1):151-177, 2014b.

170. Michael Kremer. Population growth and technological change: One million be to 1990. The Quarterly Journal of Economics, 108(3) :681-716, 1993.

171. Alex Krizhevskv, Ilva Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012.

172. John E Laird, Christian Lebiere, and Paul S Rosenbloom. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. AI Magazine, 38(4), 2017.

173. Guillaume Lample, Alexis Conneau, Ludovic Denover, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043, 2017.

174. Nick Lane. Life ascending: the ten great inventions of evolution. Profile books, 2010.

175. Nick Lane. The vital question: energy, evolution, and the origins of complex life. WW Norton k, Company, 2015.

176. Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Neural Networks (IJ CNN), The 2010 International Joint Conference on, pages 1-8. IEEE, 2010.

177. Eric Laukien, Richard Crowder, and Fergal Byrne. Fevnman machine: The universal dynamical systems computer. arXiv preprint arXiv:1609.03971, 2016.

178. Quoc V Le. Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8595-8598. IEEE, 2013.

179. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541-551, 1989.

180. Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(ll):2278-2324, 1998.

181. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521 (7553): 136 i i i. 2015. *

182. Kai-Fu Lee. Al Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin, 2018.

183. Tao Lei, Yu Zhang, Sida I Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470-4481, 2018.

184. Ed S Lein, Michael J Hawrvlvcz, Nancy Ao, Mikael Avres, Amy Bensinger, Amy Bernard, Andrew F Вое, Mark S Boguski,

185. Kevin S Brockwav, Emi J Byrnes, et al. Genome-wide atlas of gene expression in the adult mouse brain. Nature, 445(7124): 168, 2007.

186. Peter Lennie. The cost of cortical computation. Current biology, 13(6):493-497, 2003.

187. Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in neural information processing system,s, pages 2177-2185, 2014.

188. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

189. James Manvika, Jaana Remes, Jan Mischke, and Mekala Krishnan. The productivity puzzle: a closer look at the United States. McKinsev Global Institute, 2017.

190. James G March. Exploration and exploitation in organizational learning. Organization science, 2(l):71-87, 1991.

191. Henry Markram, Eilif Muller, Srikanth Ramaswamv, Michael W Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, et al. Reconstruction and simulation of neocortical microcircuitrv. Cell, 163(2):456-492, 2015.

192. DM Mateos, R Wennberg, R Guevara, and JL Perez Velazquez. Consciousness as a global property of brain dynamic activity. Physical, Review E, 96(6):062410, 2017.

193. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

194. Melanie Mitchell. An introduction to genetic algorithms. 1998.

195. Volodymyr Mnih, Korav Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

196. Volodymyr Mnih, Korav Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas К Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.

197. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harlev, David Silver, and Korav Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.

198. Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369, 2017.

199. Edvard I Moser, Emilio Kropff, and Mav-Britt Moser. Place cells, grid cells, and the brain's spatial representation system. Annual review of neuroscience, 31, 2008.

200. Vernon В Mountcastle. Introduction. Cerebral cortex, 13(1):2 I. 2003.

201. Urs Muller, Jan Ben, Eric Cosatto, Beat Flepp, and Yann L Cun. Off-road obstacle avoidance through end-to-end learning. In Advances in neural information processing system,s, pages 739-746, 2006.

202. Vipul Naik. Distribution, 2014. URL https://intelligence.org/wp-content/uploads/2014/02/ Naik-Distribution-of-Computation.pdf.

203. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807-814, 2010.

204. Craig G Nevill-Manning and Ian H Witten. Identifying hierarchical structure in sequences: A linear-time algorithm. Journal of Artificial Intelligence Research, 7:67-82, 1997.

205. Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexev Dosovitskiv, and Jeff Clune. Plug k, play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.

206. Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in neural information processing system,s, pages 442-450, 2015.

207. Erkki Oja. Simplified neuron model as a principal component analyzer. Journal of mathematical biology, 15(3):267-273, 1982.

208. Erkki Oja and Juha Karhunen. Signal separation by nonlinear hebbian learning. In Computational intelligence: A dynamic system perspective, pages 83-97. Citeseer, 1995.

209. Randall С O'Reilly and Michael J Frank. Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural computation, 18 (2):283-328, 2006.

210. Randall С O'Reilly, Dean Wvatte, and John Rohrlich. Learning through time in the thalamocortical loops. arXiv preprint arXiv:1407.3432, 2014.

211. Ivan V Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Commuting, 33(5):2295-2317, 2011.

212. Giinther Palm. Neural associative memories and sparse coding. Neural Networks, 37:165-171, 2013.

213. К Panetta. 5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies, 2018. 2018. https://www.gartner.com/smarterwithgartner/5-trends-emerge- in-gartner-hype- cycle-for-emerging-te chnologie

214. Judea Pearl and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.

215. Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543, 2014.

216. Nikolav Perunov, Robert A Marsland, and Jeremy L England. Statistical physics of adaptation. Physical Review X, 6(2): 021036, 2016.

217. Steven Pinker. The language instinct: How the mind creates language. Penguin UK, 2003.

218. Christopher Poultnev, Sumit Chopra, Yann L Cun, et al. Efficient learning of sparse representations with an energy-based model. In Advances in neural information processing systems, pages 1137-1144, 2007.

219. Jonathan D Power, Alexander L Cohen, Steven M Nelson, Gagan S Wig, Kelly Anne Barnes, Jessica A Church, Alecia С Vogel, Timothy О Laumann, Fran M Miezin, Bradley L Schlaggar, et al. Functional network organization of the human brain. Neuron, 72(4):665-678, 2011.

220. Gil Press. The thriving ai landscape in israel and what it means for global ai competition. Forbes, Sep 2018. https://www.forbes.com/sites/gilpress/2018/09/24/ the-thriving-ai-landscape-in-israel-and-what-it-means-for-glot

221. Friedemann Pulvermiiller. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in cognitive sciences, 17(9):458-470, 2013.

222. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv: 1511.06^34, 2015.

223. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilva Sutskever. Language models are unsupervised multitask learners. 2019. URL https://blog.openai.com/ better-language-models/.

224. Maithra Raghu, Ben Poole, Jon Kleinberg, Surva Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.

225. Maxwell JD Ramstead, Michael D Kirchhoff, Axel Constant, and Karl J Friston. Multiscale integration: Beyond internalism and externalism, 2019.

226. Scott Reed, Zevnep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

227. Anton Reiner, Loreta Medina, and С Leo Veenman. Structural and functional evolution of the basal ganglia in vertebrates. Brain Research Reviews, 28(3):235-285, 1998.

228. Jeremy R Reynolds and Randall С O'Reilly. Developing pfc representations using reinforcement learning. Cognition, 113 (3):281-292, 2009.

229. Urs Ribarv. Dynamics of thalamo-cortical network oscillations and human perception. Progress in brain research, 150:127142, 2005.

230. Gerard J Rinkus. A cortical sparse distributed coding model linking mini-and macrocolumn-scale functionality. Frontiers in neuroanatomy, 4:17, 2010.

231. Jorma Rissanen. Modeling by shortest data description. Automatica, 14(5):465-471, 1978.

232. Edmund Т Rolls. A computational theory of episodic memory formation in the hippocampus. Behavioural brain research, 215 (2):180-196, 2010.

233. Frank Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.

234. Daniel J Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, Zheng Wen, et al. A tutorial on thompson sampling. Foundations and Trends@ in Machine Learning, 11(1):1-96, 2018.

235. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pages 3859-3869, 2017.

236. Jenny R Saffian, Ann Senghas, and John С Trueswell. The acquisition of language by children. Proceedings of the National Academy of Sciences, 98(23):12874-12875, 2001.

237. Ruslan Salakhutdinov, Andriv Mnih, and Geoffrey Hinton. Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24-th international conference on Machine learning, pages 791-798. ACM, 2007.

238. Jared M Saletin and Matthew P Walker. Nocturnal mnemonics: sleep and hippocampal memory processing. Frontiers in neurology, 3:59, 2012.

239. Gerard Salton, Anita Wong, and Chung-Shu Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11) :613-620, 1975.

240. Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.

241. Lara Schlaflke, L Schweizer, NN Riither, R Luerding, Martin Tegenthoff, Christian Bellebaum, and Tobias Schmidt-Wilcke. Dynamic changes of resting state connectivity related to the acquisition of a lexico-semantic skill. Neurolmage, 146:429437, 2017.

242. Jurgen Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85-117, 2015.

243. Noam Shazeer, Azalia Mirhoseini, Krzvsztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparselv-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.

244. Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitlv, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Rvan, et al. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783. IEEE, 2018.

245. Stewart Shipp, Rick A Adams, and Karl J Friston. Reflections on agranular architecture: predictive coding in the motor cortex. Trends in neurosciences, 36(12):706-716, 2013.

246. Yoav Shoham, Raymond Perrault, Erik Brvnjolfsson, Jack Clark, James Manvika, Juan Carlos Niebles, Terah Lyons, John Etchemendv, Barbara Grosz, and Zoe Bauer. The Al Index 2018 Annual Report. Stanford University, 2018.

247. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

248. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-plav with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017.

249. Karen Simonvan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

250. Soren Van Hout Solari and Rich Martin Stoner. Cognitive consilience: primate non-primarv neuroanatomical circuits underlying cognition. Frontiers in neuroanatomy, 5:65, 2011.

251. Hagen Soltau, Hank Liao, and Hasim Sak. Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition. arXiv preprint arXiv:1610.09975, 2016.

252. Sho Sonoda and Noboru Murata. Transport analysis of infinitely deep neural network. Journal of Machine Learning Research, 20(2):1-52, 2019.

253. Eelke Spaak, Mathilde Bonnefond, Alexander Maier, David A Leopold, and Ole Jensen. Layer-specific entrainment of gamma- band neural activity by the alpha rhythm in monkey visual cortex. Current Biology, 22(24):2313-2318, 2012.

254. Michael W Spratling. A review of predictive coding algorithms. Brain and cognition, 112:92-97, 2017.

255. Pablo Sprechmann and Guillermo Sapiro. Dictionary learning and sparse coding for unsupervised clustering. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 2042-2045. IEEE, 2010.

256. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevskv, Ilva Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929-1958, 2014.

257. Kimberlv L Stachenfeld, Matthew M Botvinick, and Samuel J Gershman. The hippocampus as a predictive map. Nature neuroscience, 20(11):1643, 2017.

258. STATISTA. Number of apps available in leading app stores as of 3rd quarter 2018, 2018. URL https://www.statista.com/statistics/276623/ number-of-apps-available-in-leading-app-stores/.

259. Greg Ver Steeg. Unsupervised learning via total correlation explanation. arXiv preprint arXiv:1706.08984, 2017.

260. Greg Ver Steeg and Aram Galstvan. Low complexity gaussian latent factor models and a blessing of dimensionality. arXiv preprint arXiv:1706.03353, 2017.

261. Andreas Stolcke and Stephen Omohundro. Inducing probabilistic grammars by bavesian model merging. In International Colloquium on Grammatical Inference, pages 106-118. Springer, 1994.

262. Xu Sun, Xuancheng Ren, Shuming Ma, Bingzhen Wei, Wei Li, Jingjing Xu, Houfeng Wang, and Yi Zhang. Training simplification and model simplification for deep learning: A minimal effort back propagation method. IEEE Transactions on Knowledge and Data Engineering, 2018.

263. Ilva Sutskever and Geoffrey Hinton. Learning multilevel distributed representations for high-dimensional sequences. In Artificial Intelligence and Statistics, pages 548-555, 2007.

264. Ilva Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139-1147, 2013.

265. Ilva Sutskever, Oriol Vinvals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing system,s, pages 3104-3112, 2014.

266. Richard S Sutton. Dvna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160-163, 1991.

267. TASS.RU. Влияние экосистемы МСП на мировую экономику. 2017. https://tass.ru/pmef-2017/articles/4278934.

268. Emanuel Todorov. Parallels between sensory and motor information processing. The cognitive neurosciences, pages 613-24, 2009.

269. Michael Tomasello. Constructing a language. Harvard university press, 2009.

270. Giulio Tononi and Christof Koch. Consciousness: here, there and everywhere? Phil. Trans. R. Soc. B, 370(1668):20140167, 2015.

271. TVkultura. На аукционе Christies впервые продали написанную искусственным интеллектом картину, 2018. URL https://tvkultura.ru/article/show/article_id/ 302385/.

272. Naonori Ueda and Rvohei Nakano. Deterministic annealing em algorithm. Neural networks, 11(2) :271-282, 1998.

273. Marvlka Uusisaari and Erik De Schutter. The mysterious microcircuitrv of the cerebellar nuclei. The Journal of physiology, 589(14):3441-3457, 2011.

274. Kurt Vanlehn and William Ball. A version space approach to learning context-free grammars. Machine learning, 2(1):39 71. 1987.

275. Vladimir Naumovich Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998.

276. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017.

277. Paul FMJ Verschure. Distributed adaptive control: a theory of the mind, brain, body nexus. Biologically Inspired Cognitive Architectures, 1:55-72, 2012.

278. Paul FMJ Verschure, Cvriel MA Pennartz, and Giovanni Pezzulo. The why, what, where, when and how of goal-directed choice: neuronal and computational principles. Philosophical Transactions of the Royal Society B: Biological Sciences, 369 (1655) :20130483, 2014.

279. Oriol Vinvals, Alexander Toshev, Samv Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156-3164, 2015.

280. Alexander Volokh and Giinter Neumann. Task-oriented dependency parsing evaluation methodology. In 2012 IEEE 13th International Conference on Information Reuse & Integration (IRI), pages 132-137. IEEE, 2012.

281. Christoph Von der Malsburg. Binding in models of perception and brain function. Current opinion in neurobiology, 5(4):520- 526, 1995.

282. Heinz Von Foerster, Patricia M Mora, and Lawrence W Amiot. Doomsday: Friday, 13 november, ad 2026. Science, 132(3436): 1291-1295, 1960.

283. John Von Neumann, Arthur W Burks, et al. Theory of self-reproducing automata. IEEE Transactions on Neural Networks, 5(1):3-14, 1966.

284. Jian-Ping Wang, Sachin S Sapatnekar, Chris H Kim, Paul Crowell, Steve Koester, Suprivo Datta, Kaushik Roy, Anand Raghunathan, X Sharon Hu, Michael Niemier, et al. A pathway to enable exponential scaling for the bevond-cmos era. In Proceedings of the 54-th Annual Design Automation Conference 2017, page 16. ACM, 2017a.

285. Ruohan Wang, Antoine Cullv, Hvung Jin Chang, and Yiannis Demiris. Magan: Margin adaptation for generative adversarial networks. arXiv preprint arXiv:1704-03817, 2017b.

286. Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In European Conference on Computer Vision, pages 318-335. Springer, 2016.

287. Yisen Wang, Xuejiao Deng, Songbai Pu, and Zhiheng Huang. Residual convolutional ctc networks for automatic speech recognition. arXiv preprint arXiv:1702.07793, 2017c.

288. Lawrence M Ward. The thalamic dynamic core theory of conscious experience. Consciousness and Cognition, 20(2): 464-86, 2011.

289. Christopher JCH Watkins and Peter Davan. Q-learning. Machine learning, 8(3-4):279-292, 1992.

290. Nicholas Watters, Andrea Tacchetti, Theophane Weber, Razvan Pascanu, Peter Battaglia, and Daniel Zoran. Visual interaction networks. arXiv preprint arXiv:1706.01433, 2017.

291. Terry A Welch. Technique for high-performance data compression. Computer, 6(17):8-19, 1984.

292. Ashia С WTilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. arXiv preprint arXiv:1705.08292, 2017.

293. Edward О WTilson. The social conquest of earth. WW Norton k, Company, 2012.

294. J Gerard Wolff. An algorithm for the segmentation of an artificial language analogue. British journal of psychology, 66(1):79-90, 1975.

295. J Gerard Wolff. Language acquisition, data compression and generalization. Pergamon, 1982.

296. J Gerard Wolff. Learning syntax and meanings through optimization and distributional analysis. Categories and processes in language acquisition, 1(1), 1988.

297. Richard Wrangham. Catching fire: How cooking made us human. Basic Books, 2009.

298. Y. Wu, G. Wayne, A. Graves, and T. Lillicrap. The Kanerva Machine: A Generative Distributed Memory. ArXiv e-prints, April 2018.

299. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherev, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherev, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint агХю:1609.08Щ, 2016.

300. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kvunghvun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048-2057, 2015.

301. Tom Young, Devamanvu Hazarika, Soujanva Poria, and Erik Cambria. Recent trends in deep learning based natural language processing, ieee Computational intelligence magazine, 13(3):55 75. 2018.

302. Hujia Yu, Chang Yue, and Chao Wang. News article summarization with attention-based deep recurrent neural networks, 2016.

303. Yan M Yufik and Karl Friston. Life and understanding: the origins of "understanding" in self-organizing nervous systems. Frontiers in system,s neuroscience, 10:98, 2016.

304. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818-833. Springer, 2014.

305. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242, 2016.

306. Ying Zhang, Mohammad Pezeshki, Philemon Brakel, Saizheng Zhang, Cesar Laurent Yoshua Bengio, and Aaron Courville. Towards end-to-end speech recognition with deep convolutional neural networks. arXiv preprint arXiv:1701.021'20, 2017.

307. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.

Войти или Создать
* Забыли пароль?