Glossary
Wasserstein GAN (WGAN)
Wasserstein GAN (WGAN) is a type of Generative Adversarial Network (GAN) designed to address some of the training limitations of traditional GANs.
In a traditional GAN, the generator is trained to produce fake samples that the discriminator cannot tell apart from real ones, while the discriminator is trained to distinguish fake from real. This adversarial setup can suffer from mode collapse, where the generator produces only a few distinct kinds of samples and ignores the rest of the data distribution.
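For reference, the objective of a traditional GAN can be written as a minimax game between the generator G and the discriminator D (this is the standard formulation, shown here only to contrast with the WGAN objective described below):

```latex
% Standard GAN minimax objective: D outputs a probability that x is real.
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\!\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\!\big[\log\!\big(1 - D(G(z))\big)\big]
```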
Wasserstein GAN addresses this issue by using a different loss based on the Wasserstein distance (also known as the earth mover's distance). Rather than scoring how well individual samples are classified as real or fake, the Wasserstein distance measures how far the distribution of generated samples is from the distribution of real samples. Because this distance still provides useful gradients even when the two distributions barely overlap, training is more stable, and the generator is incentivized to cover a larger part of the data distribution rather than just a few modes. To estimate this distance, WGAN replaces the usual discriminator with a "critic" that outputs an unbounded real-valued score instead of a probability.
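Concretely, the quantity the WGAN critic estimates is the Wasserstein-1 (earth mover's) distance between the real distribution and the generator's distribution, written here in its Kantorovich-Rubinstein dual form (the notation p_data and p_g for the two distributions is ours); the critic plays the role of the 1-Lipschitz function f:

```latex
% Wasserstein-1 distance in dual form; the supremum is over 1-Lipschitz functions f.
W(p_{\text{data}}, p_g) \;=\;
\sup_{\lVert f \rVert_L \le 1}
\Big( \mathbb{E}_{x \sim p_{\text{data}}}\!\big[f(x)\big]
    - \mathbb{E}_{\tilde{x} \sim p_g}\!\big[f(\tilde{x})\big] \Big)
```

In practice, the critic maximizes the difference between its average score on real samples and its average score on generated samples, while the generator tries to raise the critic's score on its own samples.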
In addition, WGAN introduces a technique called weight clipping, where the weights of the critic are clipped to a small fixed range after every update. This is a simple way of keeping the critic (approximately) Lipschitz-continuous, a condition the Wasserstein formulation requires; as a side effect, it also prevents the critic from assigning arbitrarily large or small scores to samples.
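As a concrete illustration, the sketch below shows one WGAN training step in PyTorch, including weight clipping. The network architectures, hyperparameters (clip value of 0.01, five critic updates per generator update, RMSprop with a small learning rate), and the train_step helper are illustrative assumptions loosely following the original paper, not a definitive implementation:

```python
import torch
import torch.nn as nn

# Toy architectures for flattened 28x28 images; any generator/critic pair
# with matching shapes would work. Note the critic has no sigmoid: it
# outputs an unbounded score, not a probability.
latent_dim = 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 784))
critic = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
clip_value = 0.01  # weight-clipping range
n_critic = 5       # critic updates per generator update

def train_step(real_batch):
    # Critic updates: maximize E[critic(real)] - E[critic(fake)],
    # i.e. minimize the negative of that difference.
    for _ in range(n_critic):
        z = torch.randn(real_batch.size(0), latent_dim)
        fake_batch = generator(z).detach()
        critic_loss = -(critic(real_batch).mean() - critic(fake_batch).mean())
        c_opt.zero_grad()
        critic_loss.backward()
        c_opt.step()
        # Weight clipping keeps the critic (approximately) Lipschitz.
        for p in critic.parameters():
            p.data.clamp_(-clip_value, clip_value)

    # Generator update: minimize -E[critic(fake)], i.e. raise the critic's
    # score on generated samples.
    z = torch.randn(real_batch.size(0), latent_dim)
    gen_loss = -critic(generator(z)).mean()
    g_opt.zero_grad()
    gen_loss.backward()
    g_opt.step()
    return critic_loss.item(), gen_loss.item()
```

In use, train_step would be called once per mini-batch of real data (here, tensors of shape (batch_size, 784)); for images, the linear layers would normally be replaced with convolutional networks.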
Overall, Wasserstein GAN is a powerful tool in the field of deep learning and has been applied to various domains such as image and speech generation. Its ability to generate diverse and high-quality samples has made it a popular choice among researchers and practitioners alike.