
Deep Network Customization for Images

Customize deep learning training loops and loss functions

If the trainingOptions function does not provide the training options that you need for your task, or if custom output layers do not support the loss functions that you need, then you can define a custom training loop. For networks that cannot be created using layer graphs, you can define custom networks as a function. For more information, see Define Custom Training Loops, Loss Functions, and Networks.
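The overall pattern is to create a dlnetwork, evaluate a model-loss function with dlfeval so that dlgradient can use automatic differentiation, and then apply an update function such as sgdmupdate. A minimal sketch follows; the layer sizes, learning rate, and the data-fetching helper nextBatch are placeholder assumptions, not part of the documented API:

```matlab
% Minimal custom training loop sketch (illustrative values throughout).
layers = [
    imageInputLayer([28 28 1], Normalization="none")
    convolution2dLayer(5, 20)
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];
net = dlnetwork(layers);

velocity = [];
learnRate = 0.01;
momentum = 0.9;
numIterations = 1000;

for iteration = 1:numIterations
    [X, T] = nextBatch();   % hypothetical helper returning dlarray data
    % Evaluate the model loss and gradients using automatic differentiation.
    [loss, gradients] = dlfeval(@modelLoss, net, X, T);
    % Update the learnable parameters with SGDM.
    [net, velocity] = sgdmupdate(net, gradients, velocity, learnRate, momentum);
end

function [loss, gradients] = modelLoss(net, X, T)
    Y = forward(net, X);
    loss = crossentropy(Y, T);
    gradients = dlgradient(loss, net.Learnables);
end
```

Note that dlgradient must be called inside a function that is evaluated by dlfeval; calling it directly at the command line does not trace the computation.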

Functions


dlnetwork - Deep learning network for custom training loops
forward - Compute deep learning network output for training
predict - Compute deep learning network output for inference
adamupdate - Update parameters using adaptive moment estimation (Adam)
rmspropupdate - Update parameters using root mean squared propagation (RMSProp)
sgdmupdate - Update parameters using stochastic gradient descent with momentum (SGDM)
dlupdate - Update parameters using custom function
minibatchqueue - Create mini-batches for deep learning
onehotencode - Encode data labels into one-hot vectors
onehotdecode - Decode probability vectors into class labels
initialize - Initialize learnable and state parameters of a dlnetwork
plot - Plot neural network architecture
addLayers - Add layers to layer graph or network
removeLayers - Remove layers from layer graph or network
connectLayers - Connect layers in layer graph or network
disconnectLayers - Disconnect layers in layer graph or network
replaceLayer - Replace layer in layer graph or network
summary - Print network summary
trainingProgressMonitor - Monitor and plot training progress for deep learning custom training loops
dlarray - Deep learning array for custom training loops
dlgradient - Compute gradients for custom training loops using automatic differentiation
dlfeval - Evaluate deep learning model for custom training loops
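The data-handling functions above are typically combined: minibatchqueue batches a datastore and a preprocessing function uses onehotencode to convert labels. A hedged sketch, where the datastore variable ds and the batch size are assumptions for illustration:

```matlab
% Sketch: mini-batches of dlarray data with one-hot encoded labels.
mbq = minibatchqueue(ds, ...
    MiniBatchSize=128, ...
    MiniBatchFcn=@preprocess, ...
    MiniBatchFormat=["SSCB" "CB"]);   % spatial, spatial, channel, batch

while hasdata(mbq)
    [X, T] = next(mbq);   % X: image batch, T: one-hot labels, both dlarray
    % ... evaluate the model loss and update parameters here ...
end

function [X, T] = preprocess(XCell, TCell)
    X = cat(4, XCell{:});                     % stack images along batch dim
    T = onehotencode(cat(2, TCell{:}), 1);    % categorical labels -> one-hot columns
end
```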

Input Layers

imageInputLayer - Image input layer
image3dInputLayer - 3-D image input layer

Convolution and Fully Connected Layers

convolution2dLayer - 2-D convolutional layer
convolution3dLayer - 3-D convolutional layer
groupedConvolution2dLayer - 2-D grouped convolutional layer
transposedConv2dLayer - Transposed 2-D convolution layer
transposedConv3dLayer - Transposed 3-D convolution layer
fullyConnectedLayer - Fully connected layer

Activation Layers

reluLayer - Rectified linear unit (ReLU) layer
leakyReluLayer - Leaky rectified linear unit (ReLU) layer
clippedReluLayer - Clipped rectified linear unit (ReLU) layer
eluLayer - Exponential linear unit (ELU) layer
tanhLayer - Hyperbolic tangent (tanh) layer
swishLayer - Swish layer
geluLayer - Gaussian error linear unit (GELU) layer
functionLayer - Function layer

Normalization, Dropout, and Cropping Layers

batchNormalizationLayer - Batch normalization layer
groupNormalizationLayer - Group normalization layer
instanceNormalizationLayer - Instance normalization layer
layerNormalizationLayer - Layer normalization layer
crossChannelNormalizationLayer - Channel-wise local response normalization layer
dropoutLayer - Dropout layer
crop2dLayer - 2-D crop layer
crop3dLayer - 3-D crop layer

Pooling and Unpooling Layers

averagePooling2dLayer - Average pooling layer
averagePooling3dLayer - 3-D average pooling layer
globalAveragePooling2dLayer - 2-D global average pooling layer
globalAveragePooling3dLayer - 3-D global average pooling layer
globalMaxPooling2dLayer - Global max pooling layer
globalMaxPooling3dLayer - 3-D global max pooling layer
maxPooling2dLayer - Max pooling layer
maxPooling3dLayer - 3-D max pooling layer
maxUnpooling2dLayer - Max unpooling layer

Combination Layers

additionLayer - Addition layer
multiplicationLayer - Multiplication layer
concatenationLayer - Concatenation layer
depthConcatenationLayer - Depth concatenation layer
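Combination layers such as additionLayer take multiple inputs, so they are wired up with a layer graph and connectLayers. A sketch of a residual-style skip connection (the layer names and sizes are illustrative assumptions):

```matlab
% Sketch: a skip connection built with layerGraph and additionLayer.
lgraph = layerGraph([
    imageInputLayer([32 32 3])
    convolution2dLayer(3, 16, Padding="same", Name="conv1")
    reluLayer(Name="relu1")
    convolution2dLayer(3, 16, Padding="same", Name="conv2")
    additionLayer(2, Name="add")
    reluLayer(Name="relu2")]);

% Connect the first activation to the second input of the addition layer.
lgraph = connectLayers(lgraph, "relu1", "add/in2");

net = dlnetwork(lgraph);
plot(lgraph)   % visualize the resulting architecture
```

The sequential chain already feeds conv2 into the addition layer's first input (add/in1); connectLayers supplies the second input to complete the skip path.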

Topics