I have been working with Matlab for years now, and I have been enjoying working with VAEs in Matlab, but with GANs I am missing a crucial piece of functionality: retrieving the latent coordinates of a given image. I have used an evolutionary algorithm (CMA-ES, minimizing the MSE between the input image and the reconstructions) to bluntly search the latent space, but as my training data sets become more complex (20k samples based on details from 1000 paintings), this approach no longer works well. I configured my WGAN with 100 latent dimensions, which is enough to get some pretty nice generated images. However, I really need to be able to find the best latent coordinates for, e.g., training images or any other input image I give the WGAN. CMA-ES, which is already way past its limits in a 100-dimensional search space, is just not giving me good matches (between the original image and the image reconstructed from the parameters it finds).
I am trying to figure out how to use gradient descent to solve this problem. Commonly in the GAN literature you will find people starting with a random latent input z, forward-propagating it through the generator, calculating the MSE between the reconstruction and the target image, and then backpropagating that MSE through the generator network to obtain the gradient of the error with respect to the latent coordinates. The dlfeval/dlgradient examples seem to only discuss the gradients of a network's learnables, not gradients with respect to the latent input.
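Here is roughly what I have in mind, translated into Matlab terms. This is only an untested sketch: netG, targetImage, and latentGradients are placeholder names of my own, and I am assuming dlgradient can differentiate with respect to any dlarray traced through dlfeval, not only a network's learnables. The dlarray formats ('CB', 'SSCB') would of course depend on how the generator expects its input.

% Gradient descent over the latent space, using adamupdate on the
% latent vector itself instead of on the network learnables.
numLatents = 100;
dlZ = dlarray(randn(numLatents, 1, 'single'), 'CB');  % initial latent guess
dlTarget = dlarray(single(targetImage), 'SSCB');      % image to invert
avgG = []; avgSqG = [];                               % Adam state
learnRate = 0.01;

for iter = 1:500
    % Evaluate the loss and its gradient w.r.t. the latent coordinates.
    [loss, gradZ] = dlfeval(@latentGradients, netG, dlZ, dlTarget);
    % Update the latent vector, not the generator weights.
    [dlZ, avgG, avgSqG] = adamupdate(dlZ, gradZ, avgG, avgSqG, iter, learnRate);
end

% Local function (placed at the end of the script file).
function [loss, gradZ] = latentGradients(netG, dlZ, dlTarget)
    % Forward the latent vector through the (fixed) generator.
    dlY = forward(netG, dlZ);
    % MSE between reconstruction and target image.
    loss = mean((dlY - dlTarget).^2, 'all');
    % Gradient of the loss over the latent input, not the learnables.
    gradZ = dlgradient(loss, dlZ);
end

The key question is whether dlgradient will accept dlZ (a traced input) as its second argument the same way it accepts a network's learnables.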
Is there some way to use gradient descent as I described? Would make my day.