MPSCNNConvolutionTranspose(3)          MetalPerformanceShaders.framework          MPSCNNConvolutionTranspose(3)
MPSCNNConvolutionTranspose
#import <MPSCNNConvolution.h>
Inherits MPSCNNKernel.
(nonnull instancetype) - initWithDevice:weights:
(nonnull instancetype) - initWithDevice:
(nullable instancetype) - initWithCoder:device:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceImage:convolutionGradientState:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceImages:convolutionGradientStates:
(void) - encodeToCommandBuffer:sourceImage:convolutionGradientState:destinationImage:
(void) - encodeBatchToCommandBuffer:sourceImages:convolutionGradientStates:destinationImages:
NSUInteger inputFeatureChannels
NSUInteger outputFeatureChannels
NSInteger kernelOffsetX
NSInteger kernelOffsetY
NSUInteger groups
MPSNNConvolutionAccumulatorPrecisionOption accumulatorPrecisionOption
This depends on Metal.framework. The MPSCNNConvolutionTranspose kernel specifies a transposed convolution. It convolves the input image with a set of filters, each producing one feature map in the output image.
Some third-party frameworks rotate the weights spatially by 180 degrees for convolution transpose. MPS uses the weights exactly as the developer specifies them and performs no rotation. If such a rotation is needed, the developer must rotate the weights before the convolution transpose is applied.
When the stride in any dimension is greater than 1, the convolution transpose inserts (stride - 1) zeroes between the source image pixels to create an expanded image. A convolution is then done over the expanded image to generate the output of the convolution transpose.

Intermediate image size = (srcSize - 1) * stride + 1
Examples:
So in the case of stride == 2 (this behaves the same in both dimensions):

Source image:

     _______________
    |   |   |   |   |
    | 1 | 2 | 3 | 4 |
    |   |   |   |   |
     ---------------

Intermediate image:

     ___________________________
    |   |   |   |   |   |   |   |
    | 1 | 0 | 2 | 0 | 3 | 0 | 4 |
    |   |   |   |   |   |   |   |
     ---------------------------
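The size arithmetic above can be checked with a short helper (a hypothetical function written in the C subset of Objective-C; not part of the MPS API):

    #include <stddef.h>

    // Hypothetical helper illustrating the formula above; not an MPS API.
    static size_t IntermediateSize(size_t srcSize, size_t stride)
    {
        return (srcSize - 1) * stride + 1;
    }

    // For the stride == 2 example: IntermediateSize(4, 2) == (4 - 1) * 2 + 1
    // == 7, matching the 7-cell intermediate image above.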
NOTE on offsets: there are two types of offset defined:

1) The offset defined in MPSCNNKernel, from which MPSCNNConvolutionTranspose inherits. This offset determines the point on the source image from which the kernel is applied.

2) kernelOffsetX and kernelOffsetY, which are applied to the kernel when it is finally applied to the intermediate image.

So totalOffset = offset * stride + kernelOffset.

The offset defined by the user refers to the coordinate frame of the expanded image. (Only the X dimension is shown here; the same applies to the Y dimension.) X indicates where the convolution transpose begins.

Intermediate image: offset = 0, kernelOffset = 0

     ___________________________
    |   |   |   |   |   |   |   |
    | 1 | 0 | 2 | 0 | 3 | 0 | 4 |
    | X |   |   |   |   |   |   |
     ---------------------------

Intermediate image: offset = 0, kernelOffset = 1

     ___________________________
    |   |   |   |   |   |   |   |
    | 1 | 0 | 2 | 0 | 3 | 0 | 4 |
    |   | X |   |   |   |   |   |
     ---------------------------

Intermediate image: offset = 0, kernelOffset = -1

       ___________________________
      |   |   |   |   |   |   |   |
    X | 1 | 0 | 2 | 0 | 3 | 0 | 4 |
      |   |   |   |   |   |   |   |
       ---------------------------

So if the user wanted to apply an offset of 2 on the source image of the convolution transpose:

Source image:

     _______________
    |   |   |   |   |
    | 1 | 2 | 3 | 4 |
    |   |   | X |   |
     ---------------

offset = 2, kernelOffset = 0

Intermediate image:

     ___________________________
    |   |   |   |   |   |   |   |
    | 1 | 0 | 2 | 0 | 3 | 0 | 4 |
    |   |   |   |   | X |   |   |
     ---------------------------
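The combined offset can likewise be checked against the diagrams above (again a hypothetical helper, not an MPS API):

    #include <stddef.h>

    // Where the kernel begins on the intermediate image, per the note above:
    //     totalOffset = offset * stride + kernelOffset
    static ptrdiff_t TotalOffset(ptrdiff_t offset, size_t stride,
                                 ptrdiff_t kernelOffset)
    {
        return offset * (ptrdiff_t)stride + kernelOffset;
    }

    // offset = 0, kernelOffset =  0  ->  0   (X on the cell holding 1)
    // offset = 0, kernelOffset =  1  ->  1   (X on the first zero)
    // offset = 0, kernelOffset = -1  -> -1   (X one cell left of the image)
    // offset = 2, kernelOffset =  0  ->  4   (X on the cell holding 3)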
Encode a MPSCNNKernel into a command buffer, create an image to hold the result, and return it. In the first version of this method, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer: correctly setting the offset property and sizing the result image. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is returned (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h.
Note: the regular encodeToCommandBuffer:sourceImage: method may be used when no state is needed, such as when the convolution transpose operation is not balanced by a matching convolution object upstream.
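A minimal usage sketch, assuming a Metal device, a command buffer, a source MPSImage, a weights data source (see initWithDevice:weights: below), and a MPSCNNConvolutionGradientState from a matching upstream convolution are already in hand:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // `device', `commandBuffer', `sourceImage', `dataSource' and
    // `gradientState' are assumed to exist; only the encode is shown.
    MPSCNNConvolutionTranspose *transpose =
        [[MPSCNNConvolutionTranspose alloc] initWithDevice:device
                                                   weights:dataSource];
    transpose.accumulatorPrecisionOption =
        MPSNNConvolutionAccumulatorPrecisionOptionFloat;   // the default

    // The padding policy sizes and allocates the result image:
    MPSImage *result =
        [transpose encodeToCommandBuffer:commandBuffer
                             sourceImage:sourceImage
                convolutionGradientState:gradientState];

    // With no matching convolution object upstream, the plain
    // MPSCNNKernel variant may be used instead:
    //     [transpose encodeToCommandBuffer:commandBuffer
    //                          sourceImage:sourceImage];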
<NSSecureCoding> support
Reimplemented from MPSCNNKernel.
Standard init with default properties per filter type
Reimplemented from MPSCNNKernel.
Initializes a convolution transpose kernel
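The weights argument conforms to the MPSCNNConvolutionDataSource protocol. A minimal sketch of such a data source, assuming float32 weights; the class name, kernel size, channel counts, and stride are illustrative only, and storage and error handling are omitted:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Hypothetical data source; real code would own and fill `_weights'
    // in the layout described by MPSCNNConvolutionDataSource.
    @interface MyTransposeDataSource : NSObject <MPSCNNConvolutionDataSource>
    @end

    @implementation MyTransposeDataSource {
        float *_weights;
    }
    - (MPSDataType)dataType { return MPSDataTypeFloat32; }
    - (MPSCNNConvolutionDescriptor * __nonnull)descriptor
    {
        MPSCNNConvolutionDescriptor *d = [MPSCNNConvolutionDescriptor
            cnnConvolutionDescriptorWithKernelWidth:4
                                       kernelHeight:4
                               inputFeatureChannels:64
                              outputFeatureChannels:32];
        d.strideInPixelsX = 2;   // the stride discussed above
        d.strideInPixelsY = 2;
        return d;
    }
    - (void * __nonnull)weights { return _weights; }
    - (float * __nullable)biasTerms { return NULL; }   // no bias terms
    - (BOOL)load { /* allocate and fill _weights here */ return YES; }
    - (void)purge { /* free _weights here */ }
    - (NSString * __nullable)label { return @"transpose weights"; }
    @end

An instance of this class can then be passed as the weights: argument of initWithDevice:weights:.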
Precision of accumulator used in convolution. See MPSNeuralNetworkTypes.h for discussion. Default is MPSNNConvolutionAccumulatorPrecisionOptionFloat.
Number of groups input and output channels are divided into.
The number of feature channels per pixel in the input image.
Offset in X from which the kernel starts sliding
Offset in Y from which the kernel starts sliding
The number of feature channels per pixel in the output image.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018          Version MetalPerformanceShaders-119.3