MPSRNNDescriptor(3)    MetalPerformanceShaders.framework   MPSRNNDescriptor(3)
NAME
       MPSRNNDescriptor - recurrent neural network block/layer descriptor
SYNOPSIS
       #import <MPSRNNLayer.h>
       Inherits NSObject.
       Inherited by MPSGRUDescriptor, MPSLSTMDescriptor, and
       MPSRNNSingleGateDescriptor.
   Properties
       NSUInteger inputFeatureChannels
       NSUInteger outputFeatureChannels
       BOOL useLayerInputUnitTransformMode
       BOOL useFloat32Weights
       MPSRNNSequenceDirection layerSequenceDirection
Detailed Description
       This depends on Metal.framework. The MPSRNNDescriptor specifies a
       recurrent neural network block/layer descriptor.
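       In practice the descriptor is configured through one of the
       subclasses listed above, obtained from its factory method, and then
       handed to an inference kernel. The following Objective-C sketch
       assumes the MPSLSTMDescriptor factory method
       createLSTMDescriptorWithInputFeatureChannels:outputFeatureChannels:
       and the MPSRNNMatrixInferenceLayer initializer
       initWithDevice:rnnDescriptor: from the framework headers; the helper
       name MakeLSTMKernel is illustrative only.

           #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

           // Illustrative helper: configure an LSTM layer descriptor
           // (128 input rows, 256 output rows per matrix) and build a
           // matrix inference kernel from it.
           static MPSRNNMatrixInferenceLayer *MakeLSTMKernel(id<MTLDevice> device)
           {
               MPSLSTMDescriptor *desc = [MPSLSTMDescriptor
                   createLSTMDescriptorWithInputFeatureChannels:128
                                          outputFeatureChannels:256];
               desc.layerSequenceDirection = MPSRNNSequenceDirectionForward;
               desc.useFloat32Weights      = YES;  // FP32 weights in the matrix kernel

               return [[MPSRNNMatrixInferenceLayer alloc]
                   initWithDevice:device rnnDescriptor:desc];
           }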
Property Documentation
   - inputFeatureChannels [read],  [write],  [nonatomic],  [assign]
       The number of feature channels per pixel in the input image or number
       of rows in the input matrix.
    - layerSequenceDirection [read],  [write],  [nonatomic],  [assign]
        When the layer specified with this descriptor is used to process a
        sequence of inputs by calling
        encodeBidirectionalSequenceToCommandBuffer, this parameter defines
        in which direction the sequence is processed. The operation of the
        layer is (y_t, h_t, c_t) = f(x_t, h_(t-1), c_(t-1)) for
        MPSRNNSequenceDirectionForward and
        (y_t, h_t, c_t) = f(x_t, h_(t+1), c_(t+1)) for
        MPSRNNSequenceDirectionBackward, where x_t is the output of the
        previous layer that encodes in the same direction as this layer
        (or the input image or matrix if this is the first layer in the
        stack with this direction).
        See also:
            MPSRNNImageInferenceLayer and MPSRNNMatrixInferenceLayer.
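        For example, a bidirectional level of a stack could pair two
        descriptors, one per direction. The following fragment is a sketch;
        the MPSRNNSingleGateDescriptor factory method
        createRNNSingleGateDescriptorWithInputFeatureChannels:outputFeatureChannels:
        is assumed from the framework headers.

            // One descriptor per direction for the same level of the
            // stack: the forward layer consumes the sequence front to
            // back, the backward layer consumes it back to front.
            MPSRNNSingleGateDescriptor *fwd = [MPSRNNSingleGateDescriptor
                createRNNSingleGateDescriptorWithInputFeatureChannels:64
                                                 outputFeatureChannels:64];
            fwd.layerSequenceDirection = MPSRNNSequenceDirectionForward;

            MPSRNNSingleGateDescriptor *bwd = [MPSRNNSingleGateDescriptor
                createRNNSingleGateDescriptorWithInputFeatureChannels:64
                                                 outputFeatureChannels:64];
            bwd.layerSequenceDirection = MPSRNNSequenceDirectionBackward;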
   - outputFeatureChannels [read],  [write],  [nonatomic],  [assign]
       The number of feature channels per pixel in the destination image or
       number of rows in the destination matrix.
    - useFloat32Weights [read],  [write],  [nonatomic],  [assign]
        If YES, then MPSRNNMatrixInferenceLayer uses 32-bit floating-point
        numbers internally for weights when computing matrix
        transformations. If NO, then 16-bit half-precision floating-point
        numbers are used. Currently MPSRNNImageInferenceLayer ignores this
        property, and its convolution operations always convert FP32
        weights to FP16 for better performance. Defaults to NO.
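        For example, a descriptor intended for MPSRNNMatrixInferenceLayer
        can opt into FP32 weight arithmetic as in the fragment below (a
        sketch; the MPSGRUDescriptor factory method
        createGRUDescriptorWithInputFeatureChannels:outputFeatureChannels:
        is assumed from the framework headers).

            MPSGRUDescriptor *gru = [MPSGRUDescriptor
                createGRUDescriptorWithInputFeatureChannels:32
                                       outputFeatureChannels:32];
            // Keep FP32 weights in the matrix kernel; per the note above,
            // image kernels currently ignore this flag.
            gru.useFloat32Weights = YES;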
    - useLayerInputUnitTransformMode [read],  [write],  [nonatomic],  [assign]
        If YES, then the identity transformation is used for all weights
        (W, Wr, Wi, Wf, Wo, Wc) affecting input x_j in this layer, even if
        those weights are specified as nil. For example, 'W_ij * x_j' is
        replaced by 'x_j' in the formulae defined in
        MPSRNNSingleGateDescriptor. Defaults to NO.
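        A minimal fragment enabling this mode on a single-gate descriptor
        (a sketch; the factory method
        createRNNSingleGateDescriptorWithInputFeatureChannels:outputFeatureChannels:
        is assumed from the framework headers).

            MPSRNNSingleGateDescriptor *desc = [MPSRNNSingleGateDescriptor
                createRNNSingleGateDescriptorWithInputFeatureChannels:64
                                                 outputFeatureChannels:64];
            // With the flag set, the 'W_ij * x_j' input term in the gate
            // formula reduces to 'x_j', even though no input weight matrix
            // is supplied.
            desc.useLayerInputUnitTransformMode = YES;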
Author
       Generated automatically by Doxygen for
       MetalPerformanceShaders.framework from the source code.
Version MetalPerformanceShaders       Thu Jul 13 2017            MPSRNNDescriptor(3)