MPSCNNConvolutionGradient(3) MetalPerformanceShaders.framework MPSCNNConvolutionGradient(3)

NAME

MPSCNNConvolutionGradient

SYNOPSIS

#import <MPSCNNConvolution.h>

Inherits MPSCNNGradientKernel.

Inherited by MPSCNNFullyConnectedGradient.


Instance Methods

(nonnull instancetype) - initWithDevice:weights:
(nullable instancetype) - initWithCoder:device:
(nonnull instancetype) - initWithDevice:
(void) - reloadWeightsAndBiasesFromDataSource
(void) - reloadWeightsAndBiasesWithCommandBuffer:state:


Properties

NSUInteger sourceGradientFeatureChannels
NSUInteger sourceImageFeatureChannels
NSUInteger groups
NSUInteger channelMultiplier
id< MPSCNNConvolutionDataSource > dataSource
MPSCNNConvolutionGradientOption gradientOption
BOOL serializeWeightsAndBiases

Detailed Description

This depends on Metal.framework. The MPSCNNConvolutionGradient kernel implements backward propagation of gradients, i.e. it computes the gradient of the loss function with respect to the input data of the corresponding forward convolution, and the gradient of the loss function with respect to the weights and bias of that convolution.

The gradient with respect to the input data of the corresponding forward convolution is written to the destination image passed to the encode call of MPSCNNConvolutionGradient. This step is similar to a convolution transpose in that the strided convolution of the forward pass becomes a zero-filled convolution in the backward propagation of gradients. The difference between MPSCNNConvolutionTranspose and the gradient with respect to data is how the weights provided by the data source are interpreted. MPSCNNConvolution and MPSCNNConvolutionTranspose interpret the weights provided by the data source as weights[outputFeatureChannels][kernelHeight][kernelWidth][inputFeatureChannels], whereas the convolution gradient with respect to data interprets the weights as weights[inputFeatureChannels][kernelHeight][kernelWidth][outputFeatureChannels], i.e. the weights are transposed in the inputFeatureChannels/outputFeatureChannels dimensions and also rotated 180 degrees in the spatial dimensions.

The user should use the same data source provider to initialize MPSCNNConvolutionGradient as was used to initialize the corresponding forward MPSCNNConvolution. The implementation performs the transposition/shuffling needed. Thus, while the forward MPSCNNConvolution takes a sourceImage of inputFeatureChannels and convolves it with Wt[outputFeatureChannels][kernelHeight][kernelWidth][inputFeatureChannels] to produce a destinationImage of outputFeatureChannels, MPSCNNConvolutionGradient takes a sourceGradient of outputFeatureChannels, which is the output of the previous layer (normally a neuron backward layer), convolves it with the transposed and rotated weights, and produces a destinationGradient of inputFeatureChannels. If the user decides to double-buffer the data source provider, i.e. different data source providers are passed to the forward MPSCNNConvolution object and the corresponding MPSCNNConvolutionGradient object, it is the user's responsibility to make sure both data source providers provide the same weights/bias data and have the same properties in the convolution descriptor; otherwise behavior is undefined.
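For example, a minimal sketch of creating both kernels from a single shared data source. MyMakeConvolutionPair and its out-parameters are illustrative names, not part of the API; dataSource is any application object conforming to MPSCNNConvolutionDataSource:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Build the forward convolution and its gradient kernel from one
    // shared data source, so that weights, bias and convolution
    // descriptor are guaranteed to match.
    static void
    MyMakeConvolutionPair(id<MTLDevice> device,
                          id<MPSCNNConvolutionDataSource> dataSource,
                          MPSCNNConvolution * __strong *forwardOut,
                          MPSCNNConvolutionGradient * __strong *gradientOut)
    {
        *forwardOut  = [[MPSCNNConvolution alloc] initWithDevice:device
                                                         weights:dataSource];
        // The gradient kernel performs the required transposition and
        // 180-degree rotation of the same weights internally.
        *gradientOut = [[MPSCNNConvolutionGradient alloc] initWithDevice:device
                                                                 weights:dataSource];
    }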

Gradients with respect to weights and bias are returned in the MPSCNNConvolutionGradientState object, to be used in the weight update functions. If I denotes the input image to the corresponding MPSCNNConvolution in the forward pass and E denotes the loss gradient from the previous layer (normally a neuron backward layer) in the backward pass, the gradient of E with respect to the weights is

delta_E/delta_Wkpqc = sum_i sum_j [ E(i - primaryOffset.x, j - primaryOffset.y, k) *
                                    I(secondaryStrideInPixelsX*i + secondaryOffset.x - secondaryDilationRateX*secondaryKernelWidth/2 + secondaryDilationRateX*p,
                                      secondaryStrideInPixelsY*j + secondaryOffset.y - secondaryDilationRateY*secondaryKernelHeight/2 + secondaryDilationRateY*q,
                                      c) ]

where i goes over 0..W-1 and j goes over 0..H-1, (W,H) being the width and height of E; p in [0, secondaryKernelWidth-1], q in [0, secondaryKernelHeight-1], c in [0, inputFeatureChannels/groups - 1], k in [0, outputFeatureChannels-1],

and the gradient with respect to the bias is

delta_E/delta_bk = sum_i sum_j [ E(i - primaryOffset.x, j - primaryOffset.y, k) ]

These gradients with respect to weights and bias are returned as buffers in the MPSCNNConvolutionGradientState object passed to the encode call. They are consumed by the MPSCNNConvolution object's -updateWeightsAndBias: method (taking the MPSCNNConvolutionGradientState*) for the CPU-side update, and by its encodeWeightsAndBiasUpdate:commandBuffer: method (taking the MPSCNNConvolutionGradientState*) for the GPU-side update. Updated weights and biases are computed as


Wkpqc_new = Wkpqc_old + delta_E/delta_Wkpqc
bk_new = bk_old + delta_E/delta_bk

Note that the MPSCNNConvolutionGradientState object's buffers that contain the gradients, for the CPU-side update, will only contain valid data after the command buffer is complete, so it only makes sense to call the -updateWeightsAndBias method on the MPSCNNConvolution object after the command buffer is complete. One can achieve this by enqueueing a command buffer completion handler block that makes this call. Since the MPSCNNConvolutionGradientState is used across command buffers, i.e. it is created in the forward pass, consumed by MPSCNNConvolutionGradient in the backward pass in the same command buffer, and passed on to the MPSCNNConvolution update method after completion of the command buffer, it cannot be a temporary state.
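A sketch of this flow, assuming gradientState was created non-temporary in the forward pass; MyApplyGradientsToDataSource is a hypothetical application function that folds the gradient buffers into the data source's backing arrays:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Hypothetical application function: adds the weight/bias gradients
    // into the data source's weight/bias arrays.
    extern void MyApplyGradientsToDataSource(id<MPSCNNConvolutionDataSource> ds,
                                             id<MTLBuffer> weightGradients,
                                             id<MTLBuffer> biasGradients);

    static void
    MyEncodeBackwardAndUpdate(id<MTLCommandBuffer> commandBuffer,
                              MPSCNNConvolution *forwardConv,
                              MPSCNNConvolutionGradient *convGrad,
                              MPSImage *lossGradientImage,      // E
                              MPSImage *forwardSourceImage,     // I
                              MPSCNNConvolutionGradientState *gradientState,
                              id<MPSCNNConvolutionDataSource> dataSource)
    {
        // Backward pass: consume the state recorded by the forward pass.
        MPSImage *dataGradient __unused =
            [convGrad encodeToCommandBuffer:commandBuffer
                             sourceGradient:lossGradientImage
                                sourceImage:forwardSourceImage
                              gradientState:gradientState];

        // The gradient buffers only contain valid data once the command
        // buffer has completed, so do the CPU-side update in a handler.
        [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
            MyApplyGradientsToDataSource(dataSource,
                                         gradientState.gradientForWeights,
                                         gradientState.gradientForBiases);
            // Push the updated values back into the forward kernel.
            [forwardConv reloadWeightsAndBiasesFromDataSource];
        }];
        [commandBuffer commit];
    }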

In order to guarantee consistency between the forward pass (MPSCNNConvolution) and the weights gradient computation in this filter, certain requirements must be met (see the sketch after this list):

1) The dimensions of the loss gradient E from the previous layer in the backward pass must be equal to clipRect.size of the corresponding MPSCNNConvolution in the forward pass. This guarantees that only those pixels to which the weights/bias contributed in the destination of the forward pass end up contributing to the weights/bias gradient update. If the dimensions of the loss gradient E from the previous layer are not equal to clipRect.size of the corresponding forward MPSCNNConvolution, one can either i) insert a slice operation to extract the region of size clipRect.size from the appropriate offset in E and set primaryOffset = 0, or ii) set primaryOffset to the offset in E at which valid data starts and make sure the data outside it is zeroed.

2) secondaryOffset should be set to what the offset property of MPSCNNConvolution was set to in the forward pass.
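For example, a sketch of option ii), assuming convGrad and forwardConv are the gradient and forward kernels and gradOrigin is where valid data starts in E:

    // Match the forward pass so weight/bias gradients stay consistent.
    MPSOffset gradOrigin = { 0, 0, 0 };            // assumed start of valid data in E
    convGrad.primaryOffset   = gradOrigin;         // offset into the loss gradient E
    convGrad.secondaryOffset = forwardConv.offset; // same offset as the forward pass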

Currently, backpropagation of gradients is only supported for regular convolution and depthwise convolution. Backpropagation for sub-pixel convolution is not supported, so channelMultiplier and subPixelScaleFactor must be one.

Method Documentation

- (nullable instancetype) initWithCoder: (NSCoder *__nonnull) aDecoder device: (nonnull id< MTLDevice >) device

NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, since the file can't know which device your data is allocated on, we have to guess and may guess incorrectly. To avoid that problem, use initWithCoder:device: instead.

Parameters:

aDecoder The NSCoder subclass with your serialized MPSKernel
device The MTLDevice on which to make the MPSKernel

Returns:

A new MPSKernel object, or nil on failure.

Reimplemented from MPSCNNGradientKernel.

Reimplemented in MPSCNNFullyConnectedGradient.

- (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device

Standard init with default properties per filter type

Parameters:

device The device that the filter will be used on. May not be NULL.

Returns:

A pointer to the newly initialized object. This will fail, returning nil if the device is not supported. Devices must be MTLFeatureSet_iOS_GPUFamily2_v1 or later.

Reimplemented from MPSCNNGradientKernel.

Reimplemented in MPSCNNFullyConnectedGradient.

- (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device weights: (nonnull id< MPSCNNConvolutionDataSource >) weights

Initializes a convolution gradient (with respect to weights and bias) object.

Parameters:

device The MTLDevice on which this MPSCNNConvolutionGradient filter will be used
weights A pointer to an object that conforms to the MPSCNNConvolutionDataSource protocol. Note that the same data source as provided to the forward convolution should be used.

Returns:

A valid MPSCNNConvolutionGradient object, or nil on failure.

Reimplemented in MPSCNNFullyConnectedGradient.

- (void) reloadWeightsAndBiasesFromDataSource

CPU-side reload. Reloads the updated weights and biases from the data provider into the internal weights and bias buffers. The weights and bias gradients needed for the update are obtained from the MPSCNNConvolutionGradientState object. The data provider passed in the init call is used for this purpose.

- (void) reloadWeightsAndBiasesWithCommandBuffer: (__nonnull id< MTLCommandBuffer >) commandBuffer state: (MPSCNNConvolutionWeightsAndBiasesState *__nonnull) state

GPU-side reload. Reloads the updated weights and biases from the update buffer produced by an application-enqueued Metal kernel into the internal weights and biases buffers. The weights and bias gradients needed for the update are obtained from the MPSCNNConvolutionGradientState object's gradientForWeights and gradientForBiases Metal buffers.

Parameters:

commandBuffer The Metal command buffer on which the application's update kernel was enqueued, consuming the MPSCNNConvolutionGradientState's gradientForWeights and gradientForBiases buffers and producing the updated weights and biases buffers.
state The MPSCNNConvolutionWeightsAndBiasesState containing the weights and biases buffers that hold the updated weights produced by the application's update kernel.
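A sketch of the GPU-side flow. MyEncodeGPUSideUpdate is illustrative; updateKernelPipeline, threadgroups and tgSize are assumed to come from the application's own update compute kernel, and the buffer indices are whatever that kernel expects:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    static void
    MyEncodeGPUSideUpdate(id<MTLCommandBuffer> commandBuffer,
                          MPSCNNConvolution *forwardConv,
                          MPSCNNConvolutionGradient *convGrad,
                          MPSCNNConvolutionGradientState *gradientState,
                          MPSCNNConvolutionWeightsAndBiasesState *updatedState,
                          id<MTLComputePipelineState> updateKernelPipeline,
                          MTLSize threadgroups, MTLSize tgSize)
    {
        // Enqueue the application's update kernel: it reads the gradient
        // buffers and writes updated values into updatedState's buffers.
        id<MTLComputeCommandEncoder> enc = [commandBuffer computeCommandEncoder];
        [enc setComputePipelineState:updateKernelPipeline];
        [enc setBuffer:gradientState.gradientForWeights offset:0 atIndex:0];
        [enc setBuffer:gradientState.gradientForBiases  offset:0 atIndex:1];
        [enc setBuffer:updatedState.weights             offset:0 atIndex:2];
        [enc setBuffer:updatedState.biases              offset:0 atIndex:3];
        [enc dispatchThreadgroups:threadgroups threadsPerThreadgroup:tgSize];
        [enc endEncoding];

        // Copy the updated values into the kernels' internal buffers.
        [convGrad reloadWeightsAndBiasesWithCommandBuffer:commandBuffer
                                                    state:updatedState];
        [forwardConv reloadWeightsAndBiasesWithCommandBuffer:commandBuffer
                                                       state:updatedState];
    }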

Property Documentation

- (NSUInteger) channelMultiplier [read], [nonatomic], [assign]

Channel multiplier. For a convolution created with MPSCNNDepthWiseConvolutionDescriptor, it is the number of output feature channels for each input channel. See MPSCNNDepthWiseConvolutionDescriptor for more details. Default is 0, which means a regular CNN convolution. Currently only a channelMultiplier of 1 is supported, i.e. inputChannels == outputChannels.

- (nonnull id< MPSCNNConvolutionDataSource >) dataSource [read], [nonatomic], [retain]

The dataSource with which the gradient object was created.

- (MPSCNNConvolutionGradientOption) gradientOption [read], [write], [nonatomic], [assign]

Option to control which gradients to compute. Default is MPSCNNConvolutionGradientOptionAll, which means both the gradient with respect to data and the gradient with respect to weights and bias are computed.
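For example, to compute only the gradient with respect to data (a sketch; convGrad is an MPSCNNConvolutionGradient instance):

    // Skip the weights/bias gradient computation entirely.
    convGrad.gradientOption = MPSCNNConvolutionGradientOptionGradientWithData;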

- (NSUInteger) groups [read], [nonatomic], [assign]

The number of groups the input and output channels are divided into.

- (BOOL) serializeWeightsAndBiases [read], [write], [nonatomic], [assign]

Property to control serialization of weights and bias. During serialization of the convolution object in the -encodeWithCoder call, weights and biases are saved so that the convolution object can be properly unserialized/restored in the -initWithCoder call. If the data source provided is NSSecureCoding compliant, the data source is serialized; else the weights and biases are serialized. As the weights/biases data may be several MB, is the same for both the gradient and the forward convolution objects, and may already be on disk through the convolution object, the application can save disk space by setting this property to NO so that the convolution gradient object does not end up storing another copy of the weights/biases. Default is NO. When the application decides to set it to NO, it MUST call -(void) reloadWeightsAndBiasesFromDataSource after initWithCoder has initialized the convolution object.
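A sketch of that pattern; restoredGradient stands for the object obtained from -initWithCoder:device: (for example via an unarchiver that carries the MTLDevice), and error handling is elided:

    // Archive without storing an extra copy of the weights/biases.
    convGrad.serializeWeightsAndBiases = NO;
    NSError *error = nil;
    NSData *blob = [NSKeyedArchiver archivedDataWithRootObject:convGrad
                                         requiringSecureCoding:YES
                                                         error:&error];

    // After unarchiving, the internal buffers hold no weights/biases:
    // they MUST be reloaded from the original data source.
    [restoredGradient reloadWeightsAndBiasesFromDataSource];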

- (NSUInteger) sourceGradientFeatureChannels [read], [nonatomic], [assign]

The number of feature channels per pixel in the gradient image (primarySource) of the encode call. This is the same as outputFeatureChannels, i.e. the feature channels of the destination image in the forward convolution (dataSource.descriptor.outputFeatureChannels).

- (NSUInteger) sourceImageFeatureChannels [read], [nonatomic], [assign]

The number of feature channels per pixel in the input image to the forward convolution, which is used here as the secondarySource. This is the same as dataSource.descriptor.inputFeatureChannels. This is also the number of feature channels in the destination image here, i.e. the gradient with respect to data.

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3