How to adapt the output size of a given feature map in the Deep Learning Toolbox by using the "pool" operation?

6 views (last 30 days)
I understand that the Deep Learning Toolbox currently provides layers such as "averagePooling2dLayer", "maxPooling2dLayer", "globalAveragePooling2dLayer", and "globalMaxPooling2dLayer", as well as direct call functions such as "maxpool", for the "pool" operation, but nothing like PyTorch's "adaptive_max_pool2d". Is there a function that can directly specify the size of the output feature map for a pool operation?
The following simple example is PyTorch code; how can MATLAB achieve the same purpose?
import torch
import torch.nn.functional as F

input = torch.rand(8, 3, 224, 224)              # input tensor, NCHW
outSize = (20, 20)                              # specify output tensor size, H_out x W_out
output = F.adaptive_max_pool2d(input, outSize)  # adaptive output
print(output.shape)                             # result --> torch.Size([8, 3, 20, 20])

Accepted Answer

KaSyow Riyuu on 15 Apr 2022
Stride = floor( InputSize / OutputSize )
KernelSize = InputSize - ( OutputSize - 1 ) * Stride
Padding = 0
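As a worked check with the numbers from the question, InputSize = 224 and OutputSize = 20 give Stride = floor(224/20) = 11 and KernelSize = 224 - (20 - 1)*11 = 15, so a 15-by-15 window with stride 11 produces floor((224 - 15)/11) + 1 = 20 outputs along each spatial dimension.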
You can implement an adaptive max pool like this:
function Output = AdaptiveMaxPool(Input, OutputSize)
% Input: formatted dlarray (e.g. "SSCB"); OutputSize: desired spatial size [H_out W_out]
InputSize = size(Input, [1 2]);                        % spatial dimensions of the feature map
Stride = floor(InputSize ./ OutputSize);
KernelSize = InputSize - (OutputSize - 1) .* Stride;
Output = maxpool(Input, KernelSize, "Stride", Stride);
end
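For comparison with the PyTorch snippet in the question, here is a minimal usage sketch, assuming the input is a formatted dlarray (the variable names are illustrative):
X = dlarray(rand(224, 224, 3, 8, "single"), "SSCB");   % H-by-W-by-C-by-N, analogous to torch.rand(8,3,224,224)
Y = AdaptiveMaxPool(X, [20 20]);
size(Y)                                                 % --> 20 20 3 8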

More Answers (0)

Version

R2021b
