Stable Diffusion — The Invisible Watermark in Generated Images

Originally posted on My Medium. While everyone is using Stable Diffusion to generate artwork, have you ever noticed that there is a watermark in the generated images? Invisible Watermark The official Stable Diffusion code uses a Python library called invisible-watermark to embed an invisible watermark in the generated images. By “invisible”, I mean truly invisible — […]
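
A minimal sketch of embedding and decoding such a watermark with the invisible-watermark package; the "StableDiffusionV1" payload and the dwtDct frequency-domain method are my assumptions about what the official scripts use, and the file names are placeholders:

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed: hide a byte string in the image using DWT+DCT frequency-domain embedding.
bgr = cv2.imread("generated.png")  # placeholder path; OpenCV loads images as BGR
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", "StableDiffusionV1".encode("utf-8"))
bgr_wm = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_watermarked.png", bgr_wm)

# Decode: recover the hidden bytes; the payload length in bits must be known up front.
decoder = WatermarkDecoder("bytes", 8 * len("StableDiffusionV1"))
recovered = decoder.decode(cv2.imread("generated_watermarked.png"), "dwtDct")
print(recovered.decode("utf-8"))
```

The watermark survives normal viewing and saving because it lives in the frequency domain rather than in visible pixel patterns, which is why the images look unchanged to the eye.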

DiagonalGAN — Content-Style Disentanglement in StyleGAN Explained!

Paper Explained: Diagonal Attention and Style-based GAN for Content-Style Disentanglement in Image Generation and Translation Originally posted on My Medium. Introduction Content-style disentanglement has long been an important task in image generation. It aims to separate the content and the style of a generated image within the latent space learned by a GAN. This DiagonalGAN […]

JoJoGAN — Style Transfer on Faces Using StyleGAN — Create JoJo Faces (with codes)

Paper Explained: JoJoGAN — One-Shot Face Stylization Originally posted on My Medium. Introduction JoJoGAN is a one-shot style transfer procedure that applies the style of a reference face image to other faces. It accepts only one style reference image and quickly produces a style mapper that takes an input face and applies the style to the […]

StyleGAN vs StyleGAN2 vs StyleGAN2-ADA vs StyleGAN3

Originally posted on My Medium. In this article, I will compare StyleGAN, StyleGAN2, StyleGAN2-ADA, and StyleGAN3 and show you how the architecture has evolved. Note: some details are omitted to keep this short; I only cover the architectural changes and their purposes. StyleGAN The purpose of StyleGAN is to synthesize photorealistic/high-fidelity […]

SeFa — Finding Semantic Vectors in Latent Space for GANs

Paper Explained: SeFa — Closed-Form Factorization of Latent Semantics in GANs Originally posted on My Medium. Motivation The generator in GANs usually takes a randomly sampled latent vector z as the input and generates a high-fidelity image. By changing the latent vector z, we can change the output image. However, in order to change a specific attribute in the output […]
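
To make the idea concrete, here is a minimal sketch of latent editing in the spirit of SeFa, assuming a hypothetical 512-dimensional latent space and using a random matrix as a stand-in for the generator's first-layer weight A; the closed-form step takes the top eigenvectors of AᵀA as semantic directions:

```python
import torch

# Stand-in for the weight matrix of the generator's first affine layer,
# shape (out_dim, latent_dim). In practice this comes from a pretrained GAN.
A = torch.randn(1024, 512)

# SeFa-style factorization: semantic directions are the eigenvectors of A^T A
# associated with the largest eigenvalues.
eigenvalues, eigenvectors = torch.linalg.eigh(A.T @ A)  # ascending order
k = 5
directions = eigenvectors[:, -k:].flip(1).T             # (k, latent_dim), largest first

# Edit a latent code by moving it along one semantic direction.
z = torch.randn(1, 512)   # randomly sampled latent vector
alpha = 3.0               # step size; sign and magnitude control the attribute change
z_edited = z + alpha * directions[0]
# Feeding z_edited to the generator would change one attribute of the output image.
```

The appeal of this approach is that the directions come from a single eigendecomposition of the generator's own weights, with no attribute labels or extra training required.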