Computer Vision Toolbox Model for OpenAI CLIP Network
The Contrastive Language-Image Pre-Training (CLIP) network is a vision-language model that can be used for joint image-text classification.
3 Downloads
Updated
15 Oct 2025
The CLIP network uses contrastive learning to encode image and text data into a shared feature space for joint classification. Image-text pairs with high semantic similarity lie close together in this feature space and receive a high CLIP score. This also enables image search from input text and text search from an input image.
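The listing does not show the functions exported by this support package, so the sketch below uses hypothetical loader and encoder names (clip, encodeImage, encodeText) purely to illustrate the zero-shot workflow implied above: embed an image and several candidate captions, then rank the captions by cosine similarity in the shared feature space. Check the support package documentation for the actual API before running it.

% Minimal sketch of zero-shot image-text matching with a CLIP model.
% The names clip, encodeImage, and encodeText are assumptions used only
% to illustrate the workflow described in this listing.
model = clip();                              % load the pretrained CLIP network (assumed loader)

I = imread("peppers.png");                   % example image shipped with MATLAB
labels = ["a photo of vegetables", ...
          "a photo of a dog", ...
          "a photo of a city skyline"];

imgEmbedding  = encodeImage(model, I);       % 1-by-D image feature vector (assumed)
txtEmbeddings = encodeText(model, labels);   % numLabels-by-D text feature matrix (assumed)

% Cosine similarity between the image and each candidate caption.
imgEmbedding  = imgEmbedding  ./ vecnorm(imgEmbedding, 2, 2);
txtEmbeddings = txtEmbeddings ./ vecnorm(txtEmbeddings, 2, 2);
scores = txtEmbeddings * imgEmbedding';

% The caption with the highest score is closest to the image in CLIP space.
[~, idx] = max(scores);
bestLabel = labels(idx)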
MATLAB Release Compatibility
Created with
R2026a
Compatible with R2026a
Platform Compatibility
Windows, macOS (Apple Silicon), macOS (Intel), Linux
