MPSNNNeuronDescriptor(3) | MetalPerformanceShaders.framework | MPSNNNeuronDescriptor(3) |
MPSNNNeuronDescriptor
#import <MPSCNNNeuron.h>
Inherits NSObject and <NSCopying>.
(nonnull instancetype) - init
(nonnull MPSNNNeuronDescriptor *) + cnnNeuronDescriptorWithType:
(nonnull MPSNNNeuronDescriptor *) + cnnNeuronDescriptorWithType:a:
(nonnull MPSNNNeuronDescriptor *) + cnnNeuronDescriptorWithType:a:b:
(nonnull MPSNNNeuronDescriptor *) + cnnNeuronDescriptorWithType:a:b:c:
(nonnull MPSNNNeuronDescriptor *) + cnnNeuronPReLUDescriptorWithData:noCopy:
MPSCNNNeuronType neuronType
float a
float b
float c
NSData * data
This depends on Metal.framework. An MPSNNNeuronDescriptor specifies the type and parameters of a neuron filter; pass it to MPSCNNNeuron to create the corresponding kernel. Supported neuron types:
Neuron type 'none': f(x) = x Parameters: none
ReLU neuron filter: f(x) = x >= 0 ? x : a * x This is called Leaky ReLU in the literature. Some literature defines classical ReLU as max(0, x). If you want this behavior, simply pass a = 0. Parameters: a For default behavior, set the value of a to 0.0f.
Linear neuron filter: f(x) = a * x + b Parameters: a, b For default behavior, set the value of a to 1.0f and the value of b to 0.0f.
Sigmoid neuron filter: f(x) = 1 / (1 + e^-x) Parameters: none
Hard Sigmoid filter: f(x) = clamp((x * a) + b, 0, 1) Parameters: a, b For default behavior, set the value of a to 0.2f and the value of b to 0.5f.
Hyperbolic tangent (TanH) neuron filter: f(x) = a * tanh(b * x) Parameters: a, b For default behavior, set the value of a to 1.0f and the value of b to 1.0f.
Absolute neuron filter: f(x) = fabs(x) Parameters: none
Parametric Soft Plus neuron filter: f(x) = a * log(1 + e^(b * x)) Parameters: a, b For default behavior, set the value of a to 1.0f and the value of b to 1.0f.
Parametric Soft Sign neuron filter: f(x) = x / (1 + abs(x)) Parameters: none
Parametric ELU neuron filter: f(x) = x >= 0 ? x : a * (exp(x) - 1) Parameters: a For default behavior, set the value of a to 1.0f.
Parametric ReLU (PReLU) neuron filter: Same as ReLU, except the parameter a is given per feature channel. For each pixel, applies the following function: f(x_i) = x_i if x_i >= 0, and f(x_i) = a_i * x_i if x_i < 0, for i in [0...channels-1]. That is, the parameters a_i are learned and applied to each channel separately. Compare this to ReLU, where the parameter a is shared across all channels. See https://arxiv.org/pdf/1502.01852.pdf for details. Parameters: aArray - Array of floats containing the per-channel value of the PReLU parameter. count - Number of float values in aArray.
ReLUN neuron filter: f(x) = min((x >= 0 ? x : a * x), b) Parameters: a, b As an example, the TensorFlow Relu6 activation layer can be implemented by setting the parameter b to 6.0f: https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/relu6. For default behavior, set the value of a to 1.0f and the value of b to 6.0f.
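For example, the ReLUN entry above maps directly onto this API. The following is a minimal sketch (the device handling and variable names are illustrative, not part of this interface) that builds a TensorFlow-style Relu6 activation and wraps it in an MPSCNNNeuron kernel:

    #import <Metal/Metal.h>
    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    id<MTLDevice> device = MTLCreateSystemDefaultDevice();

    // ReLUN with a = 0.0f and b = 6.0f computes f(x) = min((x >= 0 ? x : 0), 6),
    // i.e. the TensorFlow Relu6 activation mentioned above.
    MPSNNNeuronDescriptor *relu6Desc =
        [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeReLUN
                                                         a:0.0f
                                                         b:6.0f];

    // The descriptor is consumed by the MPSCNNNeuron initializer.
    MPSCNNNeuron *relu6 = [[MPSCNNNeuron alloc] initWithDevice:device
                                              neuronDescriptor:relu6Desc];

The same pattern applies to the other types in the list; only the factory method and the number of parameters change.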
Make a descriptor for an MPSCNNNeuron object.
Make a descriptor for an MPSCNNNeuron object.
Initialize the neuron descriptor.
Make a descriptor for an MPSCNNNeuron object.
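As a rough illustration of these factory variants, continuing the sketch above (the values shown are just the documented defaults, and the last line assumes the a/b/c properties listed earlier are writable):

    // No-parameter neuron: sigmoid, f(x) = 1 / (1 + e^-x).
    MPSNNNeuronDescriptor *sigmoidDesc =
        [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeSigmoid];

    // Two-parameter neuron: hard sigmoid with the default a = 0.2f and b = 0.5f.
    MPSNNNeuronDescriptor *hardSigmoidDesc =
        [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType:MPSCNNNeuronTypeHardSigmoid
                                                         a:0.2f
                                                         b:0.5f];

    // A descriptor's parameters can also be adjusted before it is handed to MPSCNNNeuron.
    hardSigmoidDesc.b = 0.4f;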
Make a descriptor for a neuron of type MPSCNNNeuronTypePReLU. The PReLU neuron is the same as a ReLU neuron, except that the parameter 'a' is specified per feature channel.
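A sketch of the per-channel PReLU case (the channel count and slope values are placeholders, and device is taken from the earlier sketch):

    // One slope value per feature channel, here for a hypothetical 4-channel image.
    float slopes[4] = { 0.25f, 0.10f, 0.05f, 0.30f };
    NSData *slopeData = [NSData dataWithBytes:slopes length:sizeof(slopes)];

    // Passing noCopy:NO is the conservative choice; it presumably lets the
    // framework copy the slope data rather than borrow the NSData in place.
    MPSNNNeuronDescriptor *preluDesc =
        [MPSNNNeuronDescriptor cnnNeuronPReLUDescriptorWithData:slopeData
                                                         noCopy:NO];

    MPSCNNNeuron *prelu = [[MPSCNNNeuron alloc] initWithDevice:device
                                              neuronDescriptor:preluDesc];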
You must use one of the interfaces below instead.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |