Recovering latent coordinates of an image in GAN

2 views (last 30 days)
Alexander Hagg
Alexander Hagg on 21 Jan 2021
Commented: Alexander Hagg on 2 Feb 2021
I have been working with MATLAB for years now. I have been enjoying working with VAEs in MATLAB, but with GANs I am missing a crucial piece of functionality: retrieving the latent coordinates of an image. I have used an evolutionary algorithm (CMA-ES, minimizing the MSE between the input image and reconstructions) to bluntly search through the latent space, but as my training data sets become more complex (20k samples based on details from 1000 paintings), this approach isn't working very well anymore. I configured my WGAN to have 100 latent dimensions, which is enough to get some pretty nice generated images. However, I really need to be able to find the best latent coordinates for, e.g., training images or any input image I give the WGAN. CMA-ES, which is already way past its limits with a 100-dimensional search space, is just not giving me good matches (between the original image and the image reconstructed from the parameters it finds).
I am trying to figure out how to use gradient descent to solve this problem. Commonly in the GAN literature you will find people starting with a random latent input x, forward-propagating it through the generator, calculating the MSE between the reconstruction and the target image, and then backpropagating that MSE through the generator network to determine the gradient of the error with respect to the latent coordinates. The dlfeval/dlgradient examples seem to only discuss gradients of a network's learnables, not gradients over the latent space.
Is there some way to use gradient descent as I described? It would make my day.
  1 comment
Alexander Hagg
Alexander Hagg on 2 Feb 2021
I have since found a solution through the MATLAB subreddit. Let me post it here. The solution is very(!) slow.
You can just call
gradients = dlgradient(loss,dlTransfer);
with dlTransfer being an image. So the dlfeval/dlgradient setup really can track any variable (with some exceptions that are described in the documentation).
For those who read this looking for an answer as well, the dlgradient call has to be made inside a function, in this case inside
function [gradients,losses] = imageGradients(dlnet,dlTransfer,contentFeatures,styleFeatures,params)
which is then called using
[grad,losses] = dlfeval(@imageGradients,dlnet,dlTransfer,contentFeatures,styleFeatures,styleTransferOptions);
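Adapted to the latent-recovery problem, a minimal sketch of the same pattern looks like the following. The names (latentGradients, dlnetGenerator, targetImage) are placeholders, and I am assuming a trained dlnetwork generator that takes a 100-dimensional 'CB'-formatted latent input, so adjust names and formats to your own network. Passing the latent dlarray to adamupdate is what makes the loop optimize the latent vector instead of the generator weights.
function [gradients,loss] = latentGradients(dlnetGenerator,dlZ,dlTarget)
% Forward the candidate latent vector through the generator.
dlXGenerated = forward(dlnetGenerator,dlZ);
% Half mean squared error between generated and target image.
loss = mse(dlXGenerated,dlTarget);
% Gradient of the loss with respect to the latent vector, not the learnables.
gradients = dlgradient(loss,dlZ);
end

% Gradient descent on the latent coordinates with Adam.
dlZ = dlarray(randn(100,1,'single'),'CB');
dlTarget = dlarray(single(targetImage),'SSCB');
trailingAvg = [];
trailingAvgSq = [];
for iteration = 1:2000
    [gradients,loss] = dlfeval(@latentGradients,dlnetGenerator,dlZ,dlTarget);
    % Update the latent vector itself; the generator weights stay fixed.
    [dlZ,trailingAvg,trailingAvgSq] = adamupdate(dlZ,gradients, ...
        trailingAvg,trailingAvgSq,iteration,0.01);
end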


Answers (1)

Shashank Gupta
Shashank Gupta on 2 Feb 2021
Hi Alexander,
This is a very interesting question. As far as I am aware, there is no example in MATLAB demonstrating what you want to achieve, but I can think of a way to implement it. It is a rough idea and I am not sure whether it will work. I think you need to define two sets of learnable parameters, collect them in a cell array, and compute gradients accordingly. One set is the typical GAN parameters, and the other is defined for the latent space. The gradient computation for the first set is straightforward (it is the usual gradient flow that happens in GAN training), and the gradient computation for the second set is defined with respect to the latent space. You can use dlgradient to perform the differentiation with respect to the latent space.
So the workflow goes like this: learnable1 gives gradient1 and learnable2 gives gradient2. Start by computing gradient1 and updating the GAN weights (the learnable1 parameters), then compute gradient2 and update learnable2, then use the updated learnable2 parameters to recompute gradient1, and continue the loop until saturation.
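To make the idea concrete, here is a rough, unverified sketch of how both gradient sets could come out of one dlfeval call inside a custom training loop. The names (jointGradients, dlnetGenerator, dlZ, dlTarget, the Adam state variables) are placeholders, not from any shipped example, so treat it only as a starting point.
function [gradNet,gradZ,loss] = jointGradients(dlnetGenerator,dlZ,dlTarget)
% Forward the latent vector through the generator and measure the error.
dlXGenerated = forward(dlnetGenerator,dlZ);
loss = mse(dlXGenerated,dlTarget);
% One dlgradient call returns gradients for both "learnable" sets:
% the generator weights (learnable1) and the latent coordinates (learnable2).
[gradNet,gradZ] = dlgradient(loss,dlnetGenerator.Learnables,dlZ);
end

% Inside the custom training loop (the Adam state variables start as []):
[gradNet,gradZ,loss] = dlfeval(@jointGradients,dlnetGenerator,dlZ,dlTarget);
[dlnetGenerator,avgG,avgGsq] = adamupdate(dlnetGenerator,gradNet,avgG,avgGsq,iteration);
[dlZ,avgZ,avgZsq] = adamupdate(dlZ,gradZ,avgZ,avgZsq,iteration);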
I hope my understanding of the topic works.
Cheers
