Training VAE model with multiple input images
I am using the MATLAB VAE model defined here: https://www.mathworks.com/help/deeplearning/ug/train-a-variational-autoencoder-vae-to-generate-images.html
The example there has just one input and one output. But my dataset has 810 images, with 2 input images for each output/predicted image. Could you please suggest something on this matter? I have tried many things, but the model does not train well.
I will add part of my code in later replies, in case someone faces the same issue in the future.
Thank You,
Rahul
1 Comment
Ben
on 8 Apr 2024
I would consider having an encoder for each input image, and then combining the encoded representations in some way, typically with an additionLayer or a concatenationLayer.
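A minimal sketch of this idea (layer names and sizes are assumptions, not from your code): two image inputs, each passing through its own small convolutional branch, merged along the channel dimension with a concatenationLayer. An additionLayer would be wired up the same way, provided both branches output the same size.

```matlab
% Sketch: two input branches merged with concatenationLayer.
% Input size [28 28 1] and filter counts are placeholder values.
layers1 = [
    imageInputLayer([28 28 1], Name="in1", Normalization="none")
    convolution2dLayer(3, 32, Padding="same", Stride=2, Name="conv1")
    reluLayer(Name="relu1")];
layers2 = [
    imageInputLayer([28 28 1], Name="in2", Normalization="none")
    convolution2dLayer(3, 32, Padding="same", Stride=2, Name="conv2")
    reluLayer(Name="relu2")];

lgraph = layerGraph(layers1);
lgraph = addLayers(lgraph, layers2);
% Dimension 3 is the channel dimension for 2-D image data.
lgraph = addLayers(lgraph, concatenationLayer(3, 2, Name="cat"));
lgraph = connectLayers(lgraph, "relu1", "cat/in1");
lgraph = connectLayers(lgraph, "relu2", "cat/in2");

net = dlnetwork(lgraph);   % a dlnetwork can have multiple inputs
```

The rest of the encoder (and the mean/log-variance heads from the VAE example) would then continue from the "cat" layer.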
If both input images are "similar" (i.e., they could come from the same distribution of images), then you could use the same encoder on both images, similar to this example: https://uk.mathworks.com/help/deeplearning/ug/train-twin-network-to-compare-images.html
If the input images are from distinct distributions, you may not want a shared encoder; instead, design an encoder network appropriate for each input image. Each encoder could be designed like the encoder from the VAE example, so the first maps images to d1-dimensional latent vectors z1, and similarly the second maps images to d2-dimensional vectors z2. The dimensions d1 and d2 are hyperparameters you choose; you can either concatenate z1 and z2 and feed the result into the decoder, or, if d1 = d2, add them as z1 + z2.
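The latent-combination step above can be sketched as follows. Here zMean1/zLogVar1 and zMean2/zLogVar2 are assumed to come from the two encoders' fully connected heads, as in the VAE example; d1 and d2 are the hypothetical latent sizes:

```matlab
% Sketch: reparameterization per encoder, then combine the latent codes.
% zMean*/zLogVar* are dlarrays of size d-by-miniBatchSize (assumed).
z1 = zMean1 + exp(0.5*zLogVar1) .* randn(size(zMean1), "like", zMean1);
z2 = zMean2 + exp(0.5*zLogVar2) .* randn(size(zMean2), "like", zMean2);

z = cat(1, z1, z2);   % concatenated latent of size (d1+d2), fed to the decoder
% z = z1 + z2;        % alternative: element-wise addition, only if d1 == d2
```

Concatenation is the safer default since it keeps the two latent codes distinguishable; addition forces the decoder to work from their sum, which only makes sense when the two representations live in the same space.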
Answers (0)