dc.contributor.author | Ngxande, Mkhuseli |
dc.contributor.author | Tapamo, J |
dc.contributor.author | Burke, Michael |
dc.date.accessioned | 2019-05-07T06:53:39Z |
dc.date.available | 2019-05-07T06:53:39Z |
dc.date.issued | 2019-01 |
dc.identifier.citation | Ngxande, M., Tapamo, J., and Burke, M. 2019. DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis. SAUPEC/RobMech/PRASA 2019, Bloemfontein, South Africa, 28-31 January 2019, 6pp. | en_US
dc.identifier.isbn | 978-1-7281-0369-3 |
dc.identifier.isbn | 978-1-7281-0370-9 |
dc.identifier.uri | https://arxiv.org/abs/1903.02225 |
dc.identifier.uri | https://ieeexplore.ieee.org/document/8704766 |
dc.identifier.uri | DOI: 10.1109/RoboMech.2019.8704766 |
dc.identifier.uri | http://hdl.handle.net/10204/10983 |
dc.description | Copyright: IEEE 2019. This is the accepted version of the published item. | en_US
dc.description.abstract | Recent work has shown significant progress in synthetic data generation using Generative Adversarial Networks (GANs). GANs have been applied in many areas of computer vision, including text-to-image conversion, domain transfer, super-resolution, and image-to-video applications. In computer vision, traditional GANs are based on deep convolutional neural networks. However, deep convolutional neural networks can require extensive computational resources because they rely on many operations performed by convolutional layers, which can contain millions of trainable parameters. Training a GAN model can be difficult, and it can take a significant amount of time to reach an equilibrium point. In this paper, we investigate the use of depthwise separable convolutions to reduce training time while maintaining data generation performance. Our results show that a DepthwiseGAN architecture can generate realistic images in shorter training periods than a StarGAN architecture, but that model capacity still plays a significant role in generative modelling. In addition, we show that depthwise separable convolutions perform best when applied only to the generator. To evaluate the quality of the generated images, we use the Frechet Inception Distance (FID), which measures the similarity between the distribution of generated images and that of the training dataset. | en_US
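The parameter-count argument in the abstract can be made concrete with a minimal sketch. The snippet below is illustrative only and is not taken from the paper: it assumes a PyTorch environment and hypothetical layer sizes, and simply contrasts the trainable parameters of a standard convolution with those of a depthwise separable convolution (a per-channel depthwise filter followed by a 1x1 pointwise convolution).

import torch.nn as nn

in_ch, out_ch, k = 256, 256, 3  # hypothetical layer sizes, not from the paper

# Standard convolution: one dense kernel spanning all input channels.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1)

# Depthwise separable convolution: per-channel spatial filtering (groups=in_ch)
# followed by a 1x1 pointwise convolution that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch),
    nn.Conv2d(in_ch, out_ch, kernel_size=1),
)

def n_params(module):
    # Total number of trainable parameters in a module.
    return sum(p.numel() for p in module.parameters())

print(n_params(standard))             # 590,080 parameters
print(n_params(depthwise_separable))  # 68,352 parameters

For reference, the FID mentioned in the abstract is commonly computed from the mean and covariance of Inception features of real (r) and generated (g) images as FID = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big); lower values indicate that the generated distribution is closer to the training distribution.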
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | Worklist;22307 |
dc.subject | Depthwise Separable Convolution | en_US
dc.subject | Frechet Inception Distance | en_US
dc.subject | FID | en_US
dc.subject | Generative Adversarial Networks | en_US
dc.subject | GANs | en_US
dc.subject | Synthetic Data | en_US
dc.title | DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis | en_US
dc.type | Conference Presentation | en_US
dc.identifier.apacitation | Ngxande, M., Tapamo, J., & Burke, M. (2019). DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis. IEEE. http://hdl.handle.net/10204/10983 | en_ZA
dc.identifier.chicagocitation | Ngxande, Mkhuseli, J Tapamo, and Michael Burke. "DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis." (2019): http://hdl.handle.net/10204/10983 | en_ZA
dc.identifier.vancouvercitation | Ngxande M, Tapamo J, Burke M. DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis; IEEE; 2019. http://hdl.handle.net/10204/10983. | en_ZA
dc.identifier.ris |
TY - Conference Presentation
AU - Ngxande, Mkhuseli
AU - Tapamo, J
AU - Burke, Michael
AB - Recent work has shown significant progress in synthetic data generation using Generative Adversarial Networks (GANs). GANs have been applied in many areas of computer vision, including text-to-image conversion, domain transfer, super-resolution, and image-to-video applications. In computer vision, traditional GANs are based on deep convolutional neural networks. However, deep convolutional neural networks can require extensive computational resources because they rely on many operations performed by convolutional layers, which can contain millions of trainable parameters. Training a GAN model can be difficult, and it can take a significant amount of time to reach an equilibrium point. In this paper, we investigate the use of depthwise separable convolutions to reduce training time while maintaining data generation performance. Our results show that a DepthwiseGAN architecture can generate realistic images in shorter training periods than a StarGAN architecture, but that model capacity still plays a significant role in generative modelling. In addition, we show that depthwise separable convolutions perform best when applied only to the generator. To evaluate the quality of the generated images, we use the Frechet Inception Distance (FID), which measures the similarity between the distribution of generated images and that of the training dataset.
DA - 2019-01
DB - ResearchSpace
DP - CSIR
KW - Depthwise Separable Convolution
KW - Frechet Inception Distance
KW - FID
KW - Generative Adversarial Networks
KW - GANs
KW - Synthetic Data
LK - https://researchspace.csir.co.za
PY - 2019
SM - 978-1-7281-0369-3
SM - 978-1-7281-0370-9
T1 - DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis
TI - DepthwiseGANs: Fast training generative adversarial networks for realistic image synthesis
UR - http://hdl.handle.net/10204/10983
ER -
| en_ZA