Image Generation from Caption

Mahima Pandya and Sonal Rami, Charotar University of Science and Technology, India
ABSTRACT
Generating images from a text description is as challenging as it is interesting. An adversarial network trains in a competitive fashion, where the two networks are rivals of each other. Since the introduction of the Generative Adversarial Network (GAN), much development has happened in the field of Computer Vision. With generative adversarial networks as the baseline model, this paper studies StackGAN, which consists of two-stage GANs, in a step-by-step manner that can be easily understood. The paper presents a visual comparative study of other models that attempt to generate images conditioned on a text description. One sentence can be related to many images, and to achieve this multi-modal characteristic, conditioning augmentation is also performed. StackGAN performs better at generating images from captions due to its unique architecture: as it consists of two GANs instead of one, it first draws a rough sketch and then corrects the defects, yielding a high-resolution image.
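The conditioning augmentation mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the learned projection layers that produce the Gaussian parameters are replaced here by random matrices (an assumption for illustration), but the sampling step is the same reparameterization idea, which is what lets one sentence map to many latent conditioning vectors.

```python
import numpy as np

def conditioning_augmentation(text_embedding, out_dim, rng):
    """Sample a conditioning vector c ~ N(mu(e), sigma(e)^2) from a text embedding e.

    In StackGAN, mu and log_sigma come from trained fully connected layers;
    here they are stand-in random projections for illustration only.
    """
    d = text_embedding.shape[0]
    W_mu = rng.standard_normal((out_dim, d)) * 0.01      # stand-in for a learned layer
    W_log_sigma = rng.standard_normal((out_dim, d)) * 0.01
    mu = W_mu @ text_embedding
    log_sigma = W_log_sigma @ text_embedding
    # Reparameterization: c = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.standard_normal(out_dim)
    return mu + np.exp(log_sigma) * eps

rng = np.random.default_rng(0)
embedding = rng.standard_normal(1024)          # assumed pretrained sentence embedding
c1 = conditioning_augmentation(embedding, 128, rng)
c2 = conditioning_augmentation(embedding, 128, rng)
print(c1.shape)                                # (128,)
```

Because `eps` is resampled each call, `c1` and `c2` differ even though the sentence embedding is fixed, which is the multi-modal behavior the abstract refers to: the same caption can condition many distinct images.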
KEYWORDS
Adversarial Networks, Stack-GAN

Original Source URL: http://aircconline.com/ijscai/V7N2/7218ijscai01.pdf
http://airccse.org/journal/ijscai/current2018.html
