MPSCNNBatchNormalizationStatisticsGradient(3) | MetalPerformanceShaders.framework | MPSCNNBatchNormalizationStatisticsGradient(3)
MPSCNNBatchNormalizationStatisticsGradient
#import <MPSCNNBatchNormalization.h>
Inherits MPSCNNGradientKernel.
(nonnull instancetype) - initWithDevice:fusedNeuronDescriptor:
(nullable instancetype) - initWithCoder:device:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:batchNormalizationState:
(MPSImage *__nonnull) - encodeToCommandBuffer:sourceGradient:sourceImage:gradientState:
(void) - encodeToCommandBuffer:sourceGradient:sourceImage:gradientState:destinationGradient:
(MPSImageBatch *__nonnull) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:gradientStates:
(void) - encodeBatchToCommandBuffer:sourceGradients:sourceImages:gradientStates:destinationGradients:
This depends on Metal.framework. MPSCNNBatchNormalizationStatisticsGradient updates an MPSCNNBatchNormalizationState with the gradient of the loss function with respect to the batch statistics and the batch normalization weights that were used to perform a batch normalization.
Encode this operation to a command buffer.
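As a rough illustration of the workflow, the following hedged sketch (all variable names are hypothetical placeholders) creates the kernel and encodes the statistics-gradient update into an existing MPSCNNBatchNormalizationState captured during the forward pass:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Hypothetical objects assumed to already exist:
    //   device              - the id<MTLDevice> used for training
    //   commandBuffer       - an id<MTLCommandBuffer> for this training step
    //   lossGradients       - MPSImageBatch of gradients arriving from the next layer
    //   forwardSourceImages - MPSImageBatch fed to the forward batch normalization
    //   bnState             - MPSCNNBatchNormalizationState captured in the forward pass
    MPSCNNBatchNormalizationStatisticsGradient *statsGradient =
        [[MPSCNNBatchNormalizationStatisticsGradient alloc] initWithDevice:device
                                                      fusedNeuronDescriptor:nil];

    // Accumulates the gradient of the loss with respect to the batch statistics
    // and batch normalization weights into bnState; nothing is returned.
    [statsGradient encodeBatchToCommandBuffer:commandBuffer
                              sourceGradients:lossGradients
                                 sourceImages:forwardSourceImages
                     batchNormalizationState:bnState];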
Encode a gradient filter and return a gradient. During training, gradient filters are used to calculate the gradient associated with the loss for each feature channel in the forward pass source image. For nodes that are trainable, these gradients are then used to refine the value of the trainable parameter. Gradient filters consume a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculate the gradients corresponding to the forward pass source image.
A gradient filter also consumes an MPSNNGradientState object, which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when this method creates its destination image.
Reimplemented from MPSCNNGradientKernel.
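For the encode variant that allocates and returns the destination gradient, a minimal hedged sketch might look like the following; it uses the -encodeToCommandBuffer:sourceGradient:sourceImage:gradientState: method from the member list above and reuses the hypothetical statsGradient kernel and command buffer from the earlier sketch:

    // Hypothetical forward-pass objects:
    //   dLossDOutput  - MPSImage gradient w.r.t. the forward pass destination image
    //   forwardSource - MPSImage that was the forward pass source image
    //   gradientState - the MPSState (here, the batch normalization state) saved
    //                   when the forward pass was encoded
    MPSImage *dLossDInput =
        [statsGradient encodeToCommandBuffer:commandBuffer
                              sourceGradient:dLossDOutput
                                 sourceImage:forwardSource
                               gradientState:gradientState];
    // dLossDInput now holds the gradients for the forward pass source image.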
Encode a gradient filter and return a gradient. During training, gradient filters are used to calculate the gradient associated with the loss for each feature channel in the forward pass source image. For nodes that are trainable, these gradients are then used to refine the value of the trainable parameter. Gradient filters consume a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculate the gradients corresponding to the forward pass source image.
A gradient filter also consumes an MPSNNGradientState object, which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when you use -[MPSCNNGradientKernel destinationImageDescriptorForSourceImages:sourceStates:]. If you do not call that method, you are responsible for configuring all of the primary and secondary image properties of the MPSCNNBinaryKernel. See the class description for the expected ordering of operations.
Reimplemented from MPSCNNGradientKernel.
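When you prefer to supply the destination gradient yourself, a hedged sketch of the descriptor-then-encode ordering described above (again with the hypothetical variables from the earlier sketches) is:

    // Ask the kernel for a correctly configured destination descriptor; per the
    // documentation above, this call also transfers the saved forward pass
    // properties onto the kernel.
    MPSImageDescriptor *gradDesc =
        [statsGradient destinationImageDescriptorForSourceImages:@[ dLossDOutput, forwardSource ]
                                                     sourceStates:@[ gradientState ]];
    MPSImage *dLossDInput = [[MPSImage alloc] initWithDevice:device
                                             imageDescriptor:gradDesc];

    [statsGradient encodeToCommandBuffer:commandBuffer
                          sourceGradient:dLossDOutput
                             sourceImage:forwardSource
                           gradientState:gradientState
                     destinationGradient:dLossDInput];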
Encode a gradient filter and return a gradient. During training, gradient filters are used to calculate the gradient associated with the loss for each feature channel in the forward pass source image. For nodes that are trainable, these gradients are then used to refine the value of the trainable parameter. Gradient filters consume a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculate the gradients corresponding to the forward pass source image.
A gradient filter also consumes an MPSNNGradientState object, which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when this method creates its destination image.
Reimplemented from MPSCNNGradientKernel.
Encode a gradient filter and return a gradient. During training, gradient filters are used to calculate the gradient associated with the loss for each feature channel in the forward pass source image. For nodes that are trainable, these gradients are then used to refine the value of the trainable parameter. Gradient filters consume a source gradient image, which contains the gradients corresponding to the forward pass destination image, and calculate the gradients corresponding to the forward pass source image.
A gradient filter also consumes an MPSNNGradientState object, which captured various forward pass properties, such as offset and edgeMode, at the time the forward pass was encoded. These are transferred to the MPSCNNBinaryKernel secondary image properties automatically when you use -[MPSCNNGradientKernel destinationImageDescriptorForSourceImages:sourceStates:]. If you do not call that method, you are responsible for configuring all of the primary and secondary image properties of the MPSCNNBinaryKernel. See the class description for the expected ordering of operations.
Reimplemented from MPSCNNGradientKernel.
NSSecureCoding compatibility. While the standard NSSecureCoding/NSCoding method -initWithCoder: should work, the file cannot know which device your data is allocated on, so MPS has to guess and may guess incorrectly. To avoid that problem, use a subclass of NSCoder that implements the <MPSDeviceProvider> protocol to tell MPS which MTLDevice to use.
Reimplemented from MPSCNNGradientKernel.
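A hedged sketch of that approach, using a made-up NSKeyedUnarchiver subclass that adopts <MPSDeviceProvider> (assumed here to require only -mpsMTLDevice); the archive data and device are hypothetical inputs:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Made-up NSCoder subclass that also reports the device to decode onto.
    @interface MyDeviceAwareUnarchiver : NSKeyedUnarchiver <MPSDeviceProvider>
    @property (nonatomic, strong) id<MTLDevice> decodeDevice;
    @end

    @implementation MyDeviceAwareUnarchiver
    // Assumed to be the protocol's only requirement.
    - (id<MTLDevice>)mpsMTLDevice { return self.decodeDevice; }
    @end

    static MPSCNNBatchNormalizationStatisticsGradient *
    DecodeStatsGradient(NSData *archivedKernelData, id<MTLDevice> gpu)
    {
        MyDeviceAwareUnarchiver *unarchiver =
            [[MyDeviceAwareUnarchiver alloc] initForReadingWithData:archivedKernelData];
        unarchiver.decodeDevice = gpu;
        MPSCNNBatchNormalizationStatisticsGradient *decoded =
            [unarchiver decodeObjectOfClass:[MPSCNNBatchNormalizationStatisticsGradient class]
                                     forKey:NSKeyedArchiveRootObjectKey];
        [unarchiver finishDecoding];
        return decoded;
    }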
Initializes a batch normalization statistics gradient kernel using a device and neuron descriptor.
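A hedged sketch of this initializer, assuming the forward batch normalization fused a standard ReLU neuron described with MPSNNNeuronDescriptor (variable names are placeholders):

    // device: an existing id<MTLDevice> (hypothetical).
    // Descriptor matching the neuron fused into the forward batch normalization
    // (here a standard ReLU, i.e. MPSCNNNeuronTypeReLU with a = 0).
    MPSNNNeuronDescriptor *neuronDesc =
        [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeReLU a:0.0f];

    MPSCNNBatchNormalizationStatisticsGradient *statsGradient =
        [[MPSCNNBatchNormalizationStatisticsGradient alloc] initWithDevice:device
                                                      fusedNeuronDescriptor:neuronDesc];
    // Pass nil for fusedNeuronDescriptor when no neuron was fused in the forward pass.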
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3