Deep learning with a GPU that supports fp16
Hi.
NVIDIA has released the new RTX 2XXX and 3XXX series, which support fp16 to accelerate the training process.
Does Matlab support this?
Thank you
4 comments
Walter Roberson
28 Aug 2019
According to the release notes it does (https://www.mathworks.com/products/gpu-coder/whatsnew.html), but according to the Product Limitations it does not.
Joss Knight
29 Aug 2019
It is supported for deep learning code generation, but not for general code generation.
Walter Roberson
1 Sep 2019
An interesting article came through recently: https://www.linkedin.com/pulse/deep-learning-cant-progress-ieee-754-floating-point-heres-omtzigt/
Krishna Bindumadhavan
14 Sep 2019
There is support for half precision in MATLAB via the half-precision object, available in the Fixed-Point Designer toolbox: https://www.mathworks.com/help/fixedpoint/ref/half.html.
General code generation support for the half-precision data type via MATLAB Coder and GPU Coder is under active development and is expected in an upcoming release.
As mentioned above, there is currently no support for using half precision to train a deep learning network in MATLAB. This is expected in a future release.
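To illustrate the half-precision object mentioned above, here is a minimal sketch (assuming a MATLAB release with Fixed-Point Designer installed; exact display formatting may vary by release):

```matlab
% Create a half-precision (16-bit) value and inspect its precision.
x = half(pi);     % pi rounded to fp16; roughly 3.1406
double(x)         % convert back to double to see the stored value
eps(half(1))      % fp16 machine epsilon: 2^-10, about 9.77e-4
```

The large epsilon (about 1e-3, versus 2.2e-16 for double) is why fp16 speeds up deep learning workloads on supporting GPUs at the cost of numerical precision.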
Accepted Answer
More Answers (0)