Why are Nvidia A100 GPUs slower than RTX 3090 GPUs?
52 views (last 30 days)
재호 곽
on 13 May 2022
Commented: Joss Knight
on 16 May 2022
Hello, we have an RTX 3090 GPU and an A100 GPU.
Using the MATLAB Deep Learning Toolbox Model for ResNet-50 Network, we found that the A100 was 20% slower than the RTX 3090 when training the ResNet-50 model.
Our questions are as follows.
1. I heard that the A100 and the 3090 differ in speed because they have different numbers of CUDA cores and Tensor Cores. Does MATLAB use only the CUDA cores? If the Tensor Cores can be used, I would appreciate a link to an example that uses them.
2. When training on a GPU, you can specify single-, double-, or half-precision arithmetic. I heard that MATLAB automatically uses double precision; please confirm whether that is correct.
Thank you.
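For reference, a timing run of the kind described might look like the sketch below. It trains the pretrained ResNet-50 on synthetic data for two epochs and reports wall-clock time; the class count, batch size, and training options are illustrative assumptions, not the poster's actual benchmark.
net = resnet50;  % requires the Deep Learning Toolbox Model for ResNet-50 Network
lgraph = layerGraph(net);
% Replace the 1000-class head with a small one so the synthetic labels fit.
lgraph = replaceLayer(lgraph, 'fc1000', fullyConnectedLayer(10, 'Name', 'fc10'));
lgraph = replaceLayer(lgraph, 'ClassificationLayer_fc1000', classificationLayer('Name', 'output'));
% Synthetic data at the 224x224x3 input size ResNet-50 expects.
X = rand(224, 224, 3, 256, 'single');
Y = categorical(randi(10, 256, 1), 1:10);
opts = trainingOptions('sgdm', ...
    'MaxEpochs', 2, ...
    'MiniBatchSize', 64, ...
    'ExecutionEnvironment', 'gpu', ...
    'Verbose', false);
tic
trainNetwork(X, Y, lgraph, opts);
toc  % compare this wall-clock time between the two GPUs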
0 comments
Accepted Answer
David Willingham
on 13 May 2022
See this answer for an explanation:
2 comments
Joss Knight
on 16 May 2022
It is possible to train models in double precision, either by using model functions or by using a dlnetwork and converting its weights to double precision before training.
However, I don't believe this is what you want. You won't get a speedup over the RTX 3090 training in single precision; it will still be considerably slower.
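A minimal sketch of the dlnetwork route described here, assuming your release supports casting learnables with dlupdate (layer names follow MATLAB's pretrained resnet50):
lgraph = layerGraph(resnet50);
% dlnetwork objects cannot contain output layers, so remove the classification layer.
lgraph = removeLayers(lgraph, 'ClassificationLayer_fc1000');
net = dlnetwork(lgraph);
% Cast every learnable parameter to double precision before training.
net = dlupdate(@double, net);
% Input batches must match, so cast them to double as well, e.g.
% dlX = dlarray(double(Xbatch), 'SSCB');
Training then proceeds through a custom training loop (dlfeval/dlgradient); this is why the double-precision route needs a dlnetwork rather than trainNetwork, which trains in single precision.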
More Answers (0)