MPSNNFilterNode(3) | MetalPerformanceShaders.framework | MPSNNFilterNode(3) |
MPSNNFilterNode
#import <MPSNNGraphNodes.h>
Inherits NSObject.
Inherited by MPSCNNBatchNormalizationNode, MPSCNNConvolutionNode, MPSCNNDilatedPoolingMaxNode, MPSCNNDropoutNode, MPSCNNInstanceNormalizationNode, MPSCNNLogSoftMaxNode, MPSCNNLossNode, MPSCNNNeuronNode, MPSCNNNormalizationNode, MPSCNNPoolingNode, MPSCNNSoftMaxNode, MPSCNNUpsamplingBilinearNode, MPSCNNUpsamplingNearestNode, MPSCNNYOLOLossNode, MPSNNBinaryArithmeticNode, MPSNNConcatenationNode, MPSNNGradientFilterNode, and MPSNNScaleNode.
(nonnull instancetype) - init
(MPSNNGradientFilterNode *__nonnull) - gradientFilterWithSource:
(MPSNNGradientFilterNode *__nonnull) - gradientFilterWithSources:
(NSArray< MPSNNGradientFilterNode * > *__nonnull) - gradientFiltersWithSources:
(NSArray< MPSNNGradientFilterNode * > *__nonnull) - gradientFiltersWithSource:
(NSArray< MPSNNFilterNode * > *__nullable) - trainingGraphWithSourceGradient:nodeHandler:
MPSNNImageNode * resultImage
MPSNNStateNode * resultState
NSArray< MPSNNStateNode * > * resultStates
id< MPSNNPadding > paddingPolicy
NSString * label
A placeholder node denoting a neural network filter stage. There are as many MPSNNFilterNode subclasses as there are MPS neural network filter objects; make one of those. This class defines a polymorphic interface for them.
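As a minimal sketch of how these nodes are composed (the particular layers chosen here are illustrative, not prescribed by this class), an inference graph is described by chaining subclass nodes through their resultImage properties and then compiling the result into an MPSNNGraph:

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    id<MTLDevice> device = MTLCreateSystemDefaultDevice();

    // Placeholder for the network input; the real MPSImage is supplied when the graph is encoded.
    MPSNNImageNode *input = [MPSNNImageNode nodeWithHandle: nil];

    // Chain filter nodes by feeding each node's resultImage to the next node's source.
    MPSCNNNeuronReLUNode *relu = [MPSCNNNeuronReLUNode nodeWithSource: input a: 0.0f];
    MPSCNNPoolingMaxNode *pool = [MPSCNNPoolingMaxNode nodeWithSource: relu.resultImage
                                                           filterSize: 2];
    MPSCNNSoftMaxNode *softMax = [MPSCNNSoftMaxNode nodeWithSource: pool.resultImage];

    // Compile the node description into an executable graph.
    MPSNNGraph *graph = [[MPSNNGraph alloc] initWithDevice: device
                                               resultImage: softMax.resultImage
                                       resultImageIsNeeded: YES];

The compiled graph can then be run with -encodeToCommandBuffer:sourceImages:, passing the MPSImage that stands in for the input placeholder.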
Return multiple gradient versions of the filter. MPSNNFilters that consume multiple inputs generally result in multiple conjugate filters for the gradient computation at the end of training. For example, a single concatenation operation that concatenates multiple images will result in an array of slice operators that carve out subsections of the input gradient image.
Reimplemented in MPSNNGradientFilterNode.
Return multiple gradient versions of the filter. MPSNNFilters that consume multiple inputs generally result in multiple conjugate filters for the gradient computation at the end of training. For example, a single concatenation operation that concatenates multiple images will result in an array of slice operators that carve out subsections of the input gradient image.
Reimplemented in MPSNNBinaryArithmeticNode, and MPSNNGradientFilterNode.
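For illustration, a hedged sketch of the multi-input case (imageA, imageB, and incomingGradient are assumed placeholder MPSNNImageNode objects, not part of this API): a concatenation node with two inputs yields one gradient filter per input.

    MPSNNConcatenationNode *concat =
        [MPSNNConcatenationNode nodeWithSources: @[ imageA, imageB ]];

    // One conjugate gradient filter is returned per input image; for concatenation
    // each one effectively slices its portion out of the incoming gradient image.
    NSArray<MPSNNGradientFilterNode *> *gradientFilters =
        [concat gradientFiltersWithSources: @[ incomingGradient ]];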
Return the gradient (backwards) version of this filter. The backwards training version of the filter will be returned. The non-gradient image and state arguments for the filter are automatically obtained from the target.
Parameters:
Reimplemented in MPSNNGradientFilterNode.
Return the gradient (backwards) version of this filter. The backwards training version of the filter will be returned. The non-gradient image and state arguments for the filter are automatically obtained from the target.
Parameters:
Reimplemented in MPSCNNYOLOLossNode, MPSNNConcatenationNode, MPSCNNLossNode, MPSNNBinaryArithmeticNode, and MPSNNGradientFilterNode.
Reimplemented in MPSCNNNeuronGradientNode, and MPSCNNNeuronNode.
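As a sketch of building the backward pass by hand (relu and pool are the assumed forward nodes from the earlier example, and lossGradient an assumed incoming gradient image node), each inference node is asked for its conjugate gradient node, with the gradient images chained in reverse order:

    // Walk backwards: each gradient filter consumes the gradient image produced
    // by the gradient node of the layer that followed it in the inference pass.
    MPSNNGradientFilterNode *poolGrad = [pool gradientFilterWithSource: lossGradient];
    MPSNNGradientFilterNode *reluGrad = [relu gradientFilterWithSource: poolGrad.resultImage];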
Build training graph from inference graph. This method will iteratively build the training portion of a graph based on an inference graph. Self should be the last node in the inference graph. It is typically a loss layer, but can be anything. Typically, the 'inference graph' used here is the desired inference graph with a dropout node and a loss layer node appended.
BUG: This method cannot follow links to regions of the graph that are connected to the rest of the graph solely via MPSNNStateNodes. A gradient image input is required to construct an MPSNNGradientFilterNode from an inference filter node.
Parameters:
Returns:
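A minimal sketch of the automatic route, assuming the inference chain above ends at softMax: append a loss node, then ask it for the training graph. Passing nil for the source gradient and using the node handler only to widen intermediate precision follows the pattern in Apple's training samples; the specific loss type and handler body here are illustrative assumptions.

    MPSCNNLossDescriptor *lossDesc =
        [MPSCNNLossDescriptor cnnLossDescriptorWithType: MPSCNNLossTypeSoftMaxCrossEntropy
                                          reductionType: MPSCNNReductionTypeMean];
    MPSCNNLossNode *loss = [MPSCNNLossNode nodeWithSource: softMax.resultImage
                                           lossDescriptor: lossDesc];

    // Build the backward pass automatically. The handler is called once per new
    // gradient node; here it just forces intermediate gradients to 32-bit float.
    NSArray<MPSNNFilterNode *> *trainingNodes =
        [loss trainingGraphWithSourceGradient: nil
                                  nodeHandler: ^(MPSNNFilterNode *gradientNode,
                                                 MPSNNFilterNode *inferenceNode,
                                                 MPSNNImageNode *inferenceSource,
                                                 MPSNNImageNode *gradientSource) {
            gradientNode.resultImage.format = MPSImageFeatureChannelFormatFloat32;
        }];

    // One of the returned exit nodes becomes the result of the full training graph.
    MPSNNGraph *trainingGraph =
        [[MPSNNGraph alloc] initWithDevice: device
                               resultImage: trainingNodes[0].resultImage
                       resultImageIsNeeded: NO];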
A string to help identify this object.
The padding method used for the filter node. The padding policy configures how the filter centers the region of interest in the source image. It is principally responsible for setting the MPSCNNKernel.offset and the size of the image produced, and sometimes will also configure .sourceFeatureChannelOffset, .sourceFeatureChannelMaxCount, and .edgeMode. It is permitted to set any other filter properties as needed using a custom padding policy. The default padding policy varies per filter to conform to consensus expectations for the behavior of that filter. In some cases, pre-made padding policies are provided to match the behavior of common neural networking frameworks that have particularly complex or unexpected behavior for specific nodes. See the MPSNNDefaultPadding class methods in MPSNeuralNetworkTypes.h for more.
BUG: MPS doesn't provide a good way to reset the MPSKernel properties in the context of an MPSNNGraph after the kernel is finished encoding. These values carry over to the next time the graph is used. Consequently, if your custom padding policy modifies a property as a function of its previous value, e.g.:
kernel.someProperty += 2;
then the second time the graph runs, the property may have an inconsistent value, leading to unexpected behavior. The default padding computation runs before the custom padding method, to give it a sense of what is expected for the default configuration, and will reinitialize the value in the case of the .offset. However, that computation usually does not reset other properties. In such cases, the custom padding policy may need to keep a record of the original value to enable consistent behavior.
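A small hedged example of swapping in a pre-made policy (pool is the assumed pooling node from the earlier sketch; the exact flags to combine depend on which framework behavior is being matched):

    // Use a stock policy: "same" output size, with any odd remainder of the
    // padding added at the top-left of the image.
    pool.paddingPolicy =
        [MPSNNDefaultPadding paddingWithMethod: MPSNNPaddingMethodAddRemainderToTopLeft |
                                                MPSNNPaddingMethodSizeSame];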
Get the node representing the image result of the filter. Except where otherwise noted, the precision used for the result image (see the format property) is copied from the precision of the first input image node.
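For instance (a sketch; relu stands for any existing filter node), the inherited precision can be overridden per result through the image node's format property:

    // Keep this intermediate result in 32-bit floating point regardless of the
    // precision of the node's inputs.
    relu.resultImage.format = MPSImageFeatureChannelFormatFloat32;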
Convenience method for resultStates[0]. If resultStates is nil, this returns nil.
Get the node representing the state result of the filter. If there is more than one, see the description of the subclass for the ordering.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |