JUNE 18–22, 2017
FRANKFURT AM MAIN, GERMANY

Presentation Details

 
Name: Generative Adversarial Networks Architecture for Image Synthesis from Text
 
Time: Wednesday, June 21, 2017
09:15 am - 10:00 am
 
Room:   Panorama 3 – DEEP LEARNING DAY
Messe Frankfurt
 
Breaks: 10:00 am - 11:00 am Coffee Break
 
Speaker:   Zeynep Akata, AMLab/University of Amsterdam
 
Abstract:  
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years, generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories such as faces, album covers, room interiors and flowers. 
In a research paper, we developed a novel deep architecture and GAN formulation that effectively bridges these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrated that our model can generate plausible images of birds and flowers from detailed text descriptions. In a follow-on paper, we present an improved architecture that synthesizes images from instructions describing what content to draw in which location, and in another paper we show how images of the same scene can be generated under different conditions by conditioning the GAN on transient attributes and semantic layouts.
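
To make the conditioning idea concrete, the following is a minimal, illustrative sketch (not the speaker's actual implementation) of a DCGAN-style generator and discriminator conditioned on a sentence embedding, written in PyTorch. The dimensions (a 100-dimensional noise vector, a 1024-dimensional text embedding, 64x64 output images) and the class names are assumptions chosen for illustration; the real architectures and training procedures are described in the papers.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Maps a noise vector plus a sentence embedding to a 64x64 RGB image."""
    def __init__(self, noise_dim=100, text_dim=1024, proj_dim=128):
        super().__init__()
        # Compress the sentence embedding before concatenating it with the noise
        self.project_text = nn.Sequential(
            nn.Linear(text_dim, proj_dim),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + proj_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, text_embedding):
        cond = self.project_text(text_embedding)
        z = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)


class TextConditionedDiscriminator(nn.Module):
    """Scores an image/text pair instead of an image alone."""
    def __init__(self, text_dim=1024, proj_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
        )
        self.project_text = nn.Linear(text_dim, proj_dim)
        # 1x1 score over image features concatenated with spatially replicated text features
        self.score = nn.Conv2d(512 + proj_dim, 1, 4, 1, 0)

    def forward(self, image, text_embedding):
        feat = self.conv(image)                   # (B, 512, 4, 4)
        cond = self.project_text(text_embedding)  # (B, proj_dim)
        cond = cond.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, feat.size(2), feat.size(3))
        return self.score(torch.cat([feat, cond], dim=1)).view(-1)


# Shape check with random tensors standing in for a learned sentence embedding
g = TextConditionedGenerator()
d = TextConditionedDiscriminator()
noise = torch.randn(4, 100)
text = torch.randn(4, 1024)
fake = g(noise, text)    # (4, 3, 64, 64)
logits = d(fake, text)   # (4,)
print(fake.shape, logits.shape)
```

In the text-to-image GAN literature, the discriminator is typically also shown real images paired with mismatched descriptions during training, so that it learns to judge image-text alignment as well as realism; the same conditioning mechanism extends to location instructions, transient attributes, and semantic layouts by changing what is fed in alongside the noise.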