MPSCNNBatchNormalizationGradient(3) MetalPerformanceShaders.framework MPSCNNBatchNormalizationGradient(3)

MPSCNNBatchNormalizationGradient

#import <MPSCNNBatchNormalization.h>

Inherits MPSCNNGradientKernel.


(nonnull instancetype) - initWithDevice:fusedNeuronDescriptor:
(nullable instancetype) - initWithCoder:device:
(void) - encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:destinationGradient:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:destinationGradients:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:
(void) - encodeToCommandBuffer:primaryImage:secondaryImage:destinationImage:
(void) - encodeBatchToCommandBuffer:primaryImages:secondaryImages:destinationImages:
(MPSImage *__nonnull) - encodeToCommandBuffer:primaryImage:secondaryImage:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:primaryImages:secondaryImages:

This depends on Metal.framework

MPSCNNBatchNormalizationGradient computes the gradients of a loss function resulting from a network containing a corresponding MPSCNNBatchNormalization kernel.

Two sets of values are computed: the gradient of the loss function with respect to the batch normalization source images, and the gradient of the loss function with respect to the scale and bias terms used to compute the batch normalization result.
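Concretely, for a per-channel batch normalization y = gamma * x_hat + beta with x_hat = (x - mean)/sqrt(variance + epsilon) over a batch of m values, the standard gradient expressions are as follows (included here as a reminder of what the kernel computes; the exact formulation used internally by MPS is not documented):

```latex
\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial L}{\partial y_i}\,\hat{x}_i,
\qquad
\frac{\partial L}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial L}{\partial y_i},
\qquad
\frac{\partial L}{\partial x_i} =
  \frac{\gamma}{m\sqrt{\sigma^2 + \epsilon}}
  \left( m\,\frac{\partial L}{\partial y_i}
       - \sum_{j=1}^{m}\frac{\partial L}{\partial y_j}
       - \hat{x}_i \sum_{j=1}^{m}\frac{\partial L}{\partial y_j}\,\hat{x}_j \right)
```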

- (MPSImageBatch * __nonnull) encodeBatchToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer primaryImages:(MPSImageBatch *__nonnull)primaryImages secondaryImages:(MPSImageBatch *__nonnull)secondaryImages

Encode an MPSCNNKernel into a command buffer, creating textures to hold the results and returning them. In the first iteration of this API, encodeBatchToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer: correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.

This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h. All images in a batch must have MPSImage.numberOfImages = 1.

Parameters:

commandBuffer The command buffer
primaryImages An MPSImageBatch to use as the primary source images for the filter.
secondaryImages An MPSImageBatch to use as the secondary source images for the filter.

Returns:

An MPSImageBatch of MPSImage or MPSTemporaryImage objects allocated per the destinationImageAllocator, containing the output of the graph. The returned images will be automatically released when the command buffer completes. If you want to keep them around for longer, retain them. (ARC will do this for you if you use them later.)

Reimplemented from MPSCNNBinaryKernel.

- (void) encodeBatchToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer primaryImages:(MPSImageBatch *__nonnull)primaryImages secondaryImages:(MPSImageBatch *__nonnull)secondaryImages destinationImages:(MPSImageBatch *__nonnull)destinationImages

Encode an MPSCNNKernel into a command buffer. The operation proceeds out-of-place. This is the older style of encode, which reads the offset, does not change it, and ignores the padding method. Multiple images are processed concurrently. All images must have MPSImage.numberOfImages = 1.

Parameters:

commandBuffer A valid MTLCommandBuffer to receive the encoded filter
primaryImages An array of MPSImage objects containing the primary source images.
secondaryImages An array of MPSImage objects containing the secondary source images.
destinationImages An array of MPSImage objects to contain the result images. destinationImages may not alias primaryImages or secondaryImages in any manner.

Reimplemented from MPSCNNBinaryKernel.

- (MPSImageBatch * __nonnull) encodeBatchToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer sourceGradients:(MPSImageBatch *__nonnull)sourceGradients sourceImages:(MPSImageBatch *__nonnull)sourceImages batchNormalizationState:(MPSCNNBatchNormalizationState *__nonnull)batchNormalizationState

Encode this operation to a command buffer. Create an MPSImageBatch to contain the result and return it. See encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:destinationGradients: for further details.

- (void) encodeBatchToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer sourceGradients:(MPSImageBatch *__nonnull)sourceGradients sourceImages:(MPSImageBatch *__nonnull)sourceImages batchNormalizationState:(MPSCNNBatchNormalizationState *__nonnull)batchNormalizationState destinationGradients:(MPSImageBatch *__nonnull)destinationGradients

Encode this operation to a command buffer.

Parameters:

commandBuffer The command buffer.
sourceGradients An MPSImageBatch containing the gradient of the loss function with respect to the results of batch normalization on the source images.
sourceImages An MPSImageBatch containing the source images for batch normalization.
batchNormalizationState A valid MPSCNNBatchNormalizationState object which has been previously updated using a MPSCNNBatchNormalizationStatisticsGradient kernel and the source images. If the state is temporary its read count will be decremented.
destinationGradients An MPSImageBatch whose images will contain the gradient of the loss function with respect to the source images.
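A minimal sketch of a typical call sequence follows. The variable names (device, commandQueue, lossGradients, sourceImages, batchNormState, inputGradients) are assumed to exist from earlier training setup and are hypothetical; batchNormState is assumed to have been filled by a preceding MPSCNNBatchNormalizationStatisticsGradient encode, as this method requires.

```objc
// Sketch: create the gradient kernel (no fused neuron in this example).
MPSCNNBatchNormalizationGradient *gradientKernel =
    [[MPSCNNBatchNormalizationGradient alloc] initWithDevice:device
                                       fusedNeuronDescriptor:nil];

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

// Encode the gradient pass: dL/dy from the layer above, the forward-pass
// inputs, and the previously updated batch normalization state.
[gradientKernel encodeBatchToCommandBuffer:commandBuffer
                           sourceGradients:lossGradients
                              sourceImages:sourceImages
                   batchNormalizationState:batchNormState
                      destinationGradients:inputGradients];

[commandBuffer commit];
```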

- (MPSImage * __nonnull) encodeToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer primaryImage:(MPSImage *__nonnull)primaryImage secondaryImage:(MPSImage *__nonnull)secondaryImage

Encode an MPSCNNKernel into a command buffer, creating a texture to hold the result and returning it. In the first iteration of this API, encodeToCommandBuffer:sourceImage:destinationImage:, some work was left to the developer: correctly setting the offset property and sizing the result buffer. With the introduction of the padding policy (see the padding property) the filter can do this work itself. If you would like some input into what sort of MPSImage is used (e.g. temporary vs. regular), what size it is, or where it is allocated, you may set the destinationImageAllocator to allocate the image yourself.

This method uses the MPSNNPadding padding property to figure out how to size the result image and to set the offset property. See discussion in MPSNeuralNetworkTypes.h.

Parameters:

commandBuffer The command buffer
primaryImage An MPSImage to use as the primary source image for the filter.
secondaryImage An MPSImage to use as the secondary source image for the filter.

Returns:

An MPSImage or MPSTemporaryImage allocated per the destinationImageAllocator, containing the output of the graph. The returned image will be automatically released when the command buffer completes. If you want to keep it around for longer, retain the image. (ARC will do this for you if you use it later.)

Reimplemented from MPSCNNBinaryKernel.

- (void) encodeToCommandBuffer:(nonnull id<MTLCommandBuffer>)commandBuffer primaryImage:(MPSImage *__nonnull)primaryImage secondaryImage:(MPSImage *__nonnull)secondaryImage destinationImage:(MPSImage *__nonnull)destinationImage

Encode an MPSCNNKernel into a command buffer. The operation proceeds out-of-place. This is the older style of encode, which reads the offset, does not change it, and ignores the padding method.

Parameters:

commandBuffer A valid MTLCommandBuffer to receive the encoded filter
primaryImage A valid MPSImage object containing the primary source image.
secondaryImage A valid MPSImage object containing the secondary source image.
destinationImage A valid MPSImage to be overwritten by the result image. destinationImage may not alias primaryImage or secondaryImage.

Reimplemented from MPSCNNBinaryKernel.

- (MPSImage *__nonnull) encodeToCommandBuffer:(__nonnull id<MTLCommandBuffer>)commandBuffer sourceGradient:(MPSImage *__nonnull)sourceGradient sourceImage:(MPSImage *__nonnull)sourceImage batchNormalizationState:(MPSCNNBatchNormalizationState *__nonnull)batchNormalizationState

Encode this operation to a command buffer. Create an MPSImage to contain the result and return it. See encodeToCommandBuffer:sourceGradient:sourceImage:batchNormalizationState:destinationGradient: for further details.

- (void) encodeToCommandBuffer:(__nonnull id<MTLCommandBuffer>)commandBuffer sourceGradient:(MPSImage *__nonnull)sourceGradient sourceImage:(MPSImage *__nonnull)sourceImage batchNormalizationState:(MPSCNNBatchNormalizationState *__nonnull)batchNormalizationState destinationGradient:(MPSImage *__nonnull)destinationGradient

Encode this operation to a command buffer for a single image.

Parameters:

commandBuffer The command buffer.
sourceGradient An MPSImage containing the gradient of the loss function with respect to the results of batch normalization on the source image.
sourceImage An MPSImage containing the source image for batch normalization.
batchNormalizationState A valid MPSCNNBatchNormalizationState object which has been previously updated using a MPSCNNBatchNormalizationStatisticsGradient kernel and the source images. If the state is temporary its read count will be decremented.
destinationGradient An MPSImage which will contain the gradient of the loss function with respect to the source image.

- (nullable instancetype) initWithCoder:(NSCoder *__nonnull)aDecoder device:(nonnull id<MTLDevice>)device

NSSecureCoding compatibility: while the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file cannot know which device your data is allocated on, so MPS has to guess and may guess incorrectly. To avoid that problem, use a subclass of NSCoder that implements the <MPSDeviceProvider> protocol to tell MPS which MTLDevice to use.
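One way to do this, sketched below under the assumption that the MPSDeviceProvider protocol's single required method is -mpsMTLDevice (the class name MyDeviceCoder is hypothetical), is to subclass NSKeyedUnarchiver and carry the device alongside the coder:

```objc
// Hypothetical unarchiver that also names the MTLDevice, so that
// -initWithCoder:device: style guessing is unnecessary: MPS asks the
// coder itself which device to use via <MPSDeviceProvider>.
@interface MyDeviceCoder : NSKeyedUnarchiver <MPSDeviceProvider>
@property (nonatomic, strong) id<MTLDevice> device;
@end

@implementation MyDeviceCoder
// Called by MPS during decoding to pick the device for the new kernel.
- (id<MTLDevice>) mpsMTLDevice { return self.device; }
@end
```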

Parameters:

aDecoder The NSCoder subclass with your serialized MPSKernel
device The MTLDevice on which to make the MPSKernel

Returns:

A new MPSCNNBatchNormalizationGradient object, or nil on failure.

Reimplemented from MPSCNNGradientKernel.

- (nonnull instancetype) initWithDevice:(nonnull id<MTLDevice>)device fusedNeuronDescriptor:(MPSNNNeuronDescriptor *__nullable)fusedNeuronDescriptor

Initializes a batch normalization gradient kernel using a device and neuron descriptor.

Parameters:

device The MTLDevice on which this filter will be used
fusedNeuronDescriptor An MPSNNNeuronDescriptor object specifying a neuron activation function whose gradient should be applied prior to computing the resulting gradient. This descriptor should match the one used in the corresponding forward batch normalization kernel as well as in the preceding batch normalization statistics gradient kernel.

Returns:

A valid MPSCNNBatchNormalizationGradient object, or nil on failure.
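For example, a sketch of creating the kernel with a fused ReLU neuron, assumed here to match the neuron used by the forward batch normalization kernel (the variable names are illustrative only):

```objc
// Sketch: obtain a device and build a descriptor for the fused neuron.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
MPSNNNeuronDescriptor *relu =
    [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeReLU
                                                     a:0.0f];

// The same descriptor should have been used by the forward kernel and by
// the batch normalization statistics gradient kernel.
MPSCNNBatchNormalizationGradient *gradient =
    [[MPSCNNBatchNormalizationGradient alloc] initWithDevice:device
                                       fusedNeuronDescriptor:relu];
```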

Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Mon Jul 9 2018 Version MetalPerformanceShaders-119.3