MPSRNNMatrixInferenceLayer(3) | MetalPerformanceShaders.framework | MPSRNNMatrixInferenceLayer(3) |
MPSRNNMatrixInferenceLayer
#import <MPSRNNLayer.h>
Inherits MPSKernel.
(nonnull instancetype) - initWithDevice:rnnDescriptor:
(nonnull instancetype) - initWithDevice:rnnDescriptors:
(nonnull instancetype) - initWithDevice:
(void) - encodeSequenceToCommandBuffer:sourceMatrices:sourceOffsets:destinationMatrices:destinationOffsets:recurrentInputState:recurrentOutputStates:
(void) - encodeSequenceToCommandBuffer:sourceMatrices:destinationMatrices:recurrentInputState:recurrentOutputStates:
(void) - encodeBidirectionalSequenceToCommandBuffer:sourceSequence:destinationForwardMatrices:destinationBackwardMatrices:
(nullable instancetype) - initWithCoder:device:
(nonnull instancetype) - copyWithZone:device:
NSUInteger inputFeatureChannels
NSUInteger outputFeatureChannels
NSUInteger numberOfLayers
BOOL recurrentOutputIsTemporary
BOOL storeAllIntermediateStates
MPSRNNBidirectionalCombineMode bidirectionalCombineMode
This depends on Metal.framework.

MPSRNNMatrixInferenceLayer specifies a recurrent neural network layer for inference on MPSMatrices. Currently two types of recurrent layers are supported: one that operates with convolutions on images (MPSRNNImageInferenceLayer) and one that operates on matrices (MPSRNNMatrixInferenceLayer). The former can often be used to implement the latter by using 1x1 matrices, but due to image size restrictions and performance, it is advisable to use MPSRNNMatrixInferenceLayer for linear recurrent layers.

A MPSRNNMatrixInferenceLayer is initialized using either a single MPSRNNLayerDescriptor, which specifies the recurrent network layer, or an array of MPSRNNLayerDescriptors, which specifies a stack of recurrent layers that can operate in parallel on a subset of the inputs in a sequence of inputs and recurrent outputs.

Note that currently stacks with bidirectionally traversing encode functions do not support starting from a previous set of recurrent states, but this can be achieved quite easily by defining two separate unidirectional stacks of layers, running the same input sequence on them separately (one forwards and one backwards), and finally combining the two result sequences as desired with auxiliary functions.

The input and output vectors in encode calls are stored as rows of the input and output matrices, and MPSRNNMatrixInferenceLayer supports matrices with a decreasing number of rows: the row indices identify the different sequences, which may be of different lengths. For example, if we have three sequences ( x1, x2, x3 ), ( y1, y2, y3, y4 ) and ( z1, z2 ) of vectors xi, yi and zi, then these can be inserted together as a batch to the sequence encoding kernel by using the matrices:
     ( y1 )        ( y2 )        ( y3 )
m1 = ( x1 ),  m2 = ( x2 ),  m3 = ( x3 ),  m4 = ( y4 )
     ( z1 )        ( z2 )
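The batch above can be assembled in code roughly as follows. This is a minimal sketch, not a definitive recipe: the feature-channel count nFeatures is hypothetical, and the buffer-filling step is elided.

```objc
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Sketch: pack three sequences of lengths 4 (y), 3 (x) and 2 (z) into
// per-timestep matrices with a decreasing number of rows (y on top, then x, then z).
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
const NSUInteger nFeatures = 128;               // hypothetical feature-channel count
const NSUInteger rowsPerStep[4] = {3, 3, 2, 1}; // rows in m1, m2, m3, m4

NSMutableArray<MPSMatrix *> *inputs = [NSMutableArray array];
for (NSUInteger t = 0; t < 4; ++t) {
    MPSMatrixDescriptor *desc =
        [MPSMatrixDescriptor matrixDescriptorWithRows: rowsPerStep[t]
                                              columns: nFeatures
                                             rowBytes: nFeatures * sizeof(float)
                                             dataType: MPSDataTypeFloat32];
    id<MTLBuffer> buffer =
        [device newBufferWithLength: rowsPerStep[t] * nFeatures * sizeof(float)
                            options: MTLResourceStorageModeShared];
    // ... copy y_t, x_t (and z_t while present) into consecutive rows of 'buffer' ...
    [inputs addObject: [[MPSMatrix alloc] initWithBuffer: buffer descriptor: desc]];
}
```

The ordering matters: because only a trailing suffix of rows may be dropped at each step, longer sequences must occupy the upper rows.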
If a recurrent output state is requested then it will contain the state
corresponding to last inputs to each sequence and if all the intermediate
states are requested (see storeAllIntermediateStates), then the shorter
sequences will be propagated by copying the state of the previous output if
the input vector is not present in the sequence - in the example above the
output states would be:
     ( s_y1 )        ( s_y2 )        ( s_y3 )        ( s_y4 )
s1 = ( s_x1 ),  s2 = ( s_x2 ),  s3 = ( s_x3 ),  s4 = ( s_x3 )
     ( s_z1 )        ( s_z2 )        ( s_z2 )        ( s_z2 )
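In code, the per-step states above are obtained by setting storeAllIntermediateStates and passing a mutable array for recurrentOutputStates. A sketch, assuming a filter, a command buffer, and matching arrays of per-timestep input and output matrices already exist (all four names here are placeholders):

```objc
// Sketch: request every intermediate recurrent state of a sequence encode.
filter.storeAllIntermediateStates = YES;   // return s1, s2, ... instead of only the last state
NSMutableArray<MPSRNNRecurrentMatrixState *> *states = [NSMutableArray array];
[filter encodeSequenceToCommandBuffer: cmdBuf
                       sourceMatrices: inputs     // per-timestep input matrices
                  destinationMatrices: outputs    // per-timestep output matrices
                  recurrentInputState: nil        // start from a zero recurrent state
                recurrentOutputStates: states];
// After the command buffer completes, 'states' holds one
// MPSRNNRecurrentMatrixState per time step.
```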
The mathematical operation performed by the linear transformations of
MPSRNNSingleGateDescriptor, MPSLSTMDescriptor and MPSGRUDescriptor is
y^T = W x^T <=> y = x W^T, where x is the matrix containing the input
vectors as rows, y is the matrix containing the output vectors as rows,
and W is the weight matrix.
Make a copy of this kernel for a new device.
Reimplemented from MPSKernel.
Encode an MPSRNNMatrixInferenceLayer kernel stack for a sequence of input matrices into a command buffer, bidirectionally. The operation proceeds as follows:

The first source matrix x0 is passed through all forward-traversing layers in the stack, i.e. those that were initialized with MPSRNNSequenceDirectionForward; the recurrent input is assumed zero. This produces the forward output yf0 and the recurrent states hf00, hf01, hf02, ..., hf0n, one for each forward layer in the stack. Then x1 is passed to the forward layers together with the recurrent states hf00, hf01, ..., hf0n, which produces yf1 and hf10, ... This procedure is iterated until the last matrix in the input sequence, x_(N-1), which produces the forward output yf(N-1).

The backward layers iterate the same sequence backwards, starting from input x_(N-1) (recurrent state zero), which produces yb(N-1) and the recurrent outputs hb(N-1)0, hb(N-1)1, ..., hb(N-1)m, one for each backward-traversing layer. Then the backward layers handle input x_(N-2) using recurrent state hb(N-1)0, ..., et cetera, until the first matrix of the sequence is computed, producing output yb0.

The result of the operation is either a pair of sequences ({yf0, yf1, ..., yf(N-1)}, {yb0, yb1, ..., yb(N-1)}) or a combined sequence {(yf0 + yb0), ..., (yf(N-1) + yb(N-1))}, where '+' stands either for a sum or for concatenation along feature channels, as specified by bidirectionalCombineMode.
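A sketch of issuing the bidirectional encode, assuming a filter whose stack contains both forward and backward layers, a command buffer, and matching input/output matrix arrays (all placeholder names). When bidirectionalCombineMode is set to add or concatenate, passing nil for the backward destinations and receiving the combined result in the forward destinations is the expected pattern; treat that detail as an assumption to verify against the headers:

```objc
// Sketch: run a bidirectional stack over a sequence of input matrices
// and combine the forward and backward outputs by summing them.
filter.bidirectionalCombineMode = MPSRNNBidirectionalCombineModeAdd; // yf_t + yb_t
[filter encodeBidirectionalSequenceToCommandBuffer: cmdBuf
                                    sourceSequence: inputs
                        destinationForwardMatrices: combined  // receives yf_t + yb_t
                       destinationBackwardMatrices: nil];     // unused when combining
```

With MPSRNNBidirectionalCombineModeNone, a second destination array would instead receive the backward sequence separately.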
Encode an MPSRNNMatrixInferenceLayer kernel (stack) for a sequence of inputs into a command buffer.
Example: chain single-step encode calls through a recurrent state. Encode the first input and capture its recurrent output state:
MPSRNNRecurrentMatrixState* recurrent0 = nil;
[filter encodeToCommandBuffer: cmdBuf
                 sourceMatrix: source0
            destinationMatrix: destination0
          recurrentInputState: nil
         recurrentOutputState: &recurrent0];
Then use it for the next input in sequence:
[filter encodeToCommandBuffer: cmdBuf
                 sourceMatrix: source1
            destinationMatrix: destination1
          recurrentInputState: recurrent0
         recurrentOutputState: &recurrent0];
And discard recurrent output of the third input:
[filter encodeToCommandBuffer: cmdBuf
                 sourceMatrix: source2
            destinationMatrix: destination2
          recurrentInputState: recurrent0
         recurrentOutputState: nil];
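The same three steps can also be issued as one sequence encode. A sketch, reusing the matrices from the loop above; with storeAllIntermediateStates left at its default of NO, the states array receives only the final recurrent state:

```objc
// Sketch: the per-step loop above expressed as a single sequence encode.
NSMutableArray<MPSRNNRecurrentMatrixState *> *finalState = [NSMutableArray array];
[filter encodeSequenceToCommandBuffer: cmdBuf
                       sourceMatrices: @[source0, source1, source2]
                  destinationMatrices: @[destination0, destination1, destination2]
                  recurrentInputState: nil          // zero initial state
                recurrentOutputStates: finalState]; // last state only (default)
```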
NSSecureCoding compatibility. See MPSKernel::initWithCoder.
Reimplemented from MPSKernel.
Standard init with default properties per filter type.
Reimplemented from MPSKernel.
Initializes a linear (fully connected) RNN kernel.
Initializes a kernel that implements a stack of linear (fully connected) RNN layers.
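A sketch of initializing a two-layer stack. The channel counts are hypothetical, and adjacent descriptors must agree on their shared feature-channel count (128 here):

```objc
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Sketch: build a two-layer stack of single-gate RNN layers.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
MPSRNNSingleGateDescriptor *layer0 =
    [MPSRNNSingleGateDescriptor createRNNSingleGateDescriptorWithInputFeatureChannels: 64
                                                                outputFeatureChannels: 128];
MPSRNNSingleGateDescriptor *layer1 =
    [MPSRNNSingleGateDescriptor createRNNSingleGateDescriptorWithInputFeatureChannels: 128
                                                                outputFeatureChannels: 128];
// Weights are supplied through the descriptors' MPSCNNConvolutionDataSource
// properties (e.g. inputWeights, recurrentWeights) before the filter is created.
MPSRNNMatrixInferenceLayer *stack =
    [[MPSRNNMatrixInferenceLayer alloc] initWithDevice: device
                                        rnnDescriptors: @[layer0, layer1]];
// stack.numberOfLayers is 2, one per descriptor in the array.
```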
Defines how to combine the output-results, when encoding bidirectional layers using encodeBidirectionalSequenceToCommandBuffer. Defaults to MPSRNNBidirectionalCombineModeNone.
The number of feature channels in the input vector/matrix.
Number of layers in the filter-stack. This will be one when using initWithDevice:rnnDescriptor: to initialize this filter, and the number of entries in the array 'rnnDescriptors' when initializing this filter with initWithDevice:rnnDescriptors:.
The number of feature channels in the output vector/matrix.
How output states from encodeSequenceToCommandBuffer are constructed. Defaults to NO.
If YES then calls to encodeSequenceToCommandBuffer return every recurrent state in the array: recurrentOutputStates. Defaults to NO.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |