MPSCNNBatchNormalizationStatisticsGradient(3) MetalPerformanceShaders.framework MPSCNNBatchNormalizationStatisticsGradient(3)

MPSCNNBatchNormalizationStatisticsGradient

#import <MPSCNNBatchNormalization.h>

Inherits MPSCNNGradientKernel.


(nonnull instancetype) - initWithDevice:fusedNeuronDescriptor:
(nullable instancetype) - initWithCoder:device:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceGradient:sourceImage:gradientState:
(void) - encodeToCommandBuffer:sourceGradient:sourceImage:gradientState:destinationGradient:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:gradientStates:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:gradientStates:destinationGradients:

This depends on Metal.framework. MPSCNNBatchNormalizationStatisticsGradient updates an MPSCNNBatchNormalizationState with the gradient of the loss function with respect to the batch statistics and the batch normalization weights used to perform a batch normalization.

- (void) encodeBatchToCommandBuffer: (nonnull id<MTLCommandBuffer>) commandBuffer
                    sourceGradients: (MPSImageBatch * __nonnull) sourceGradients
                       sourceImages: (MPSImageBatch * __nonnull) sourceImages
            batchNormalizationState: (MPSCNNBatchNormalizationState * __nonnull) batchNormalizationState

Encode this operation to a command buffer.

Parameters:

commandBuffer The command buffer.
sourceGradients An MPSImageBatch containing the gradient of the loss function with respect to the results of batch normalization on the source images.
sourceImages An MPSImageBatch containing the source images for batch normalization.
batchNormalizationState A valid MPSCNNBatchNormalizationState object which has been previously updated using an MPSCNNBatchNormalizationStatistics kernel and the source images. Upon completion of the command buffer, it will contain the (possibly partially updated) gradients of the loss function with respect to the scale and bias parameters used to compute the batch normalization. The state is considered completely updated when all MPSImages in the training batch have been processed. If the state is temporary, its read count will be decremented.
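A minimal usage sketch follows. The variable names (device, commandBuffer, lossGradients, sourceImages, batchNormState) are hypothetical placeholders, and the state is assumed to have been updated by an MPSCNNBatchNormalizationStatistics kernel earlier in the same training pass:

```objc
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Sketch only: device, commandBuffer, lossGradients, sourceImages, and
// batchNormState are assumed to exist; batchNormState was previously
// updated by an MPSCNNBatchNormalizationStatistics kernel.
MPSCNNBatchNormalizationStatisticsGradient *statsGradient =
    [[MPSCNNBatchNormalizationStatisticsGradient alloc]
               initWithDevice:device
        fusedNeuronDescriptor:nil];   // no fused activation in this sketch

// Accumulates d(loss)/d(scale) and d(loss)/d(bias) into the state.
[statsGradient encodeBatchToCommandBuffer:commandBuffer
                          sourceGradients:lossGradients
                             sourceImages:sourceImages
                  batchNormalizationState:batchNormState];
```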

- (MPSImageBatch * __nonnull) encodeBatchToCommandBuffer: (__nonnull id<MTLCommandBuffer>) commandBuffer
                                          sourceGradients: (MPSImageBatch * __nonnull) sourceGradients
                                             sourceImages: (MPSImageBatch * __nonnull) sourceImages
                                           gradientStates: (MPSStateBatch * __nonnull) gradientStates

Encode a gradient filter and return a gradient. During training, gradient filters calculate the gradient of the loss with respect to each feature channel of the forward pass source image. For trainable nodes, these gradients are then used to refine the values of the trainable parameters. A gradient filter consumes a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculates the gradients corresponding to the forward pass source image.

A gradient filter consumes a MPSNNGradientState object which captured various forward pass properties such as offset and edgeMode at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when this method creates its destination image.

Parameters:

commandBuffer The MTLCommandBuffer on which to encode
sourceGradients The gradient images from the 'next' filter in the graph
sourceImages The images used as source image from the forward pass
gradientStates The MPSNNGradientState or MPSNNBinaryGradientState subclass produced by the forward pass

Reimplemented from MPSCNNGradientKernel.

- (void) encodeBatchToCommandBuffer: (__nonnull id<MTLCommandBuffer>) commandBuffer
                    sourceGradients: (MPSImageBatch * __nonnull) sourceGradients
                       sourceImages: (MPSImageBatch * __nonnull) sourceImages
                     gradientStates: (MPSStateBatch * __nonnull) gradientStates
               destinationGradients: (MPSImageBatch * __nonnull) destinationGradients

Encode a gradient filter, writing the results into the provided destination gradient images. During training, gradient filters calculate the gradient of the loss with respect to each feature channel of the forward pass source image. For trainable nodes, these gradients are then used to refine the values of the trainable parameters. A gradient filter consumes a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculates the gradients corresponding to the forward pass source image.

A gradient filter consumes a MPSNNGradientState object which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when you use -[MPSCNNGradientKernel destinationImageDescriptorForSourceImages:sourceStates:]. If you do not call this method, then you are responsible for configuring all of the primary and secondary image properties in MPSCNNBinaryKernel. Please see the class description for the expected ordering of operations.

Parameters:

commandBuffer The MTLCommandBuffer on which to encode
sourceGradients The gradient images from the 'next' filter in the graph
sourceImages The images used as source images in the forward pass
gradientStates An array of the MPSNNGradientState or MPSNNBinaryGradientState subclass produced by the forward pass
destinationGradients The MPSImages into which to write the filter result

Reimplemented from MPSCNNGradientKernel.

- (MPSImage * __nonnull) encodeToCommandBuffer: (__nonnull id<MTLCommandBuffer>) commandBuffer
                                sourceGradient: (MPSImage * __nonnull) sourceGradient
                                   sourceImage: (MPSImage * __nonnull) sourceImage
                                 gradientState: (MPSState * __nonnull) gradientState

Encode a gradient filter and return a gradient. During training, gradient filters calculate the gradient of the loss with respect to each feature channel of the forward pass source image. For trainable nodes, these gradients are then used to refine the values of the trainable parameters. A gradient filter consumes a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculates the gradients corresponding to the forward pass source image.

A gradient filter consumes a MPSNNGradientState object which captured various forward pass properties such as offset and edgeMode at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when this method creates its destination image.

Parameters:

commandBuffer The MTLCommandBuffer on which to encode
sourceGradient The gradient image from the 'next' filter in the graph (in the inference direction)
sourceImage The image used as source image by the forward inference pass
gradientState The MPSNNGradientState or MPSNNBinaryGradientState subclass produced by the forward inference pass

Returns:

The result gradient from the gradient filter

Reimplemented from MPSCNNGradientKernel.
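For the single-image variant, a hedged sketch; statsGradient is an already-initialized MPSCNNBatchNormalizationStatisticsGradient kernel, and commandBuffer, lossGradient, sourceImage, and gradState are hypothetical placeholders from a forward pass:

```objc
// Sketch only: the kernel allocates and returns the destination
// gradient image corresponding to the forward pass source image.
MPSImage *gradient =
    [statsGradient encodeToCommandBuffer:commandBuffer
                          sourceGradient:lossGradient
                             sourceImage:sourceImage
                           gradientState:gradState];
```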

- (void) encodeToCommandBuffer: (__nonnull id<MTLCommandBuffer>) commandBuffer
                sourceGradient: (MPSImage * __nonnull) sourceGradient
                   sourceImage: (MPSImage * __nonnull) sourceImage
                 gradientState: (MPSState * __nonnull) gradientState
           destinationGradient: (MPSImage * __nonnull) destinationGradient

Encode a gradient filter, writing the result into the provided destination gradient image. During training, gradient filters calculate the gradient of the loss with respect to each feature channel of the forward pass source image. For trainable nodes, these gradients are then used to refine the values of the trainable parameters. A gradient filter consumes a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculates the gradients corresponding to the forward pass source image.

A gradient filter consumes a MPSNNGradientState object which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when you use -[MPSCNNGradientKernel destinationImageDescriptorForSourceImages:sourceStates:]. If you do not call this method, then you are responsible for configuring all of the primary and secondary image properties in MPSCNNBinaryKernel. Please see the class description for the expected ordering of operations.

Parameters:

commandBuffer The MTLCommandBuffer on which to encode
sourceGradient The gradient image from the 'next' filter in the graph
sourceImage The image used as source image from the forward pass
gradientState The MPSNNGradientState or MPSNNBinaryGradientState subclass produced by the forward pass
destinationGradient The MPSImage into which to write the filter result

Reimplemented from MPSCNNGradientKernel.

- (nullable instancetype) initWithCoder: (NSCoder * __nonnull) aDecoder
                                 device: (nonnull id<MTLDevice>) device

NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the archive cannot know which device your data is allocated on, so MPS has to guess and may guess incorrectly. To avoid that problem, use a subclass of NSCoder that implements the <MPSDeviceProvider> protocol to tell MPS which MTLDevice to use.

Parameters:

aDecoder The NSCoder subclass with your serialized MPSKernel
device The MTLDevice on which to make the MPSKernel

Returns:

A new MPSCNNBatchNormalizationStatisticsGradient object, or nil on failure.

Reimplemented from MPSCNNGradientKernel.
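One way to supply the device during decoding is an NSCoder subclass that adopts the MPSDeviceProvider protocol. The following is a hedged sketch; the class name MyDeviceUnarchiver is hypothetical:

```objc
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Hypothetical unarchiver that tells MPS which MTLDevice to decode onto,
// instead of letting MPS guess a device.
@interface MyDeviceUnarchiver : NSKeyedUnarchiver <MPSDeviceProvider>
@property (nonatomic, strong) id<MTLDevice> targetDevice;
@end

@implementation MyDeviceUnarchiver
// MPS queries this MPSDeviceProvider method during -initWithCoder:device:.
- (nonnull id<MTLDevice>) mpsMTLDevice { return self.targetDevice; }
@end
```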

- (nonnull instancetype) initWithDevice: (nonnull id<MTLDevice>) device
                  fusedNeuronDescriptor: (MPSNNNeuronDescriptor * __nullable) fusedNeuronDescriptor

Initializes a batch normalization statistics gradient kernel using a device and neuron descriptor.

Parameters:

device The MTLDevice on which this filter will be used
fusedNeuronDescriptor An MPSNNNeuronDescriptor object which specifies a neuron activation function whose gradient should be applied prior to computing the statistics of the input gradient. This neuron descriptor should match the one used in the corresponding forward batch normalization kernel.

Returns:

A valid MPSCNNBatchNormalizationStatisticsGradient object, or nil on failure.
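A hedged construction sketch, assuming the forward batch normalization kernel was fused with a ReLU activation:

```objc
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Match the neuron descriptor used by the forward batch normalization kernel.
MPSNNNeuronDescriptor *relu =
    [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeReLU
                                                     a:0.0f];
MPSCNNBatchNormalizationStatisticsGradient *kernel =
    [[MPSCNNBatchNormalizationStatisticsGradient alloc]
               initWithDevice:MTLCreateSystemDefaultDevice()
        fusedNeuronDescriptor:relu];   // pass nil if no activation was fused
```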

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3