We are pleased to announce that David Ha, a research scientist at Google Brain, will give a talk on deep learning, focusing on deep generative models and machine creativity. You are all invited; registration in advance is required. #codh8
|Title|Exploring Deep Learning for Classical Japanese Literature, Machine Creativity, and Recurrent World Models|
|Date|14:30-15:30, November 22 (Thu), 2018|
|Venue|1208/1210 Meeting Room (12F), National Institute of Informatics (Access to NII)|
|Abstract|Deep generative models are proving to be powerful methods for generating realistic media, such as images, speech, and even video. However, they are often seen as black boxes with little interpretability. My recent research interest has been to investigate the abstract representations created by deep generative models. Our group has shown that understanding the latent space of these models not only makes deep neural networks more interpretable, but also opens up a wide range of applications. In this talk, I will highlight some recent applications of generative models to the domain of classical Japanese literature. I will also talk about potential use cases of machine learning algorithms in creative applications, and discuss whether such algorithms are merely tools for an artist, or whether there is something inherently creative about an algorithm itself. Finally, I will discuss some interesting applications of generative models to generating reinforcement learning game environments.|
|Bio|David Ha is a staff research scientist at Google Brain. His research interests include Recurrent Neural Networks, Creative AI, and Evolutionary Computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.|
You are all invited; admission is free of charge.
The seminar has finished. Thank you for your participation.