Define input layer

tefla.core.layers.input (shape, name='inputs', outputs_collections=None, **unused)

Args

  • shape: the input shape (a list/tuple), e.g. for an image input [batch_size, height, width, depth]
  • name: An optional scope/name for this op
  • outputs_collections: The collections to which the outputs are added.

Returns

A placeholder for the input
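
A minimal usage sketch (assuming the TensorFlow 1.x graph-mode API that tefla targets; the shape and name below are illustrative):

    from tefla.core.layers import input as input_layer

    # Placeholder for a batch of 32x32 RGB images; the batch dimension is left dynamic.
    x = input_layer([None, 32, 32, 3], name='inputs')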


Add item to collection

tefla.core.layers.register_to_collections (inputs, name=None, outputs_collections=None, **unused)

Args

  • inputs: A Tensor to add to the collections
  • name: An optional scope/name for this op
  • outputs_collections: The collections to which the outputs are added.

Returns

The input Tensor, added to the given collections


Adds a fully connected layer

tefla.core.layers.fully_connected (x, n_output, is_training, reuse, trainable=True, w_init=, b_init=0.0, w_regularizer=, w_normalized=False, name='fc', batch_norm=None, batch_norm_args=None, activation=None, params=None, outputs_collections=None, use_bias=True)

fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by x to produce a Tensor of hidden units. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: if x has a rank greater than 2, it is flattened prior to the initial matrix multiply by weights.

Args

  • x: A Tensor with at least rank 2 and a known value for the last dimension, e.g. [batch_size, depth] or [None, None, None, channels].
  • is_training: Bool, training or testing
  • n_output: Integer or long, the number of output units in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 2-D Tensor variable representing the result of the series of operations. e.g: 2-D Tensor [batch, n_output].
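
A short usage sketch based on the documented signature; x is assumed to be a batch of inputs (such as the placeholder above), and the layer sizes are illustrative. tefla's relu is used as the activation since it accepts extra keyword arguments:

    from tefla.core.layers import fully_connected, relu

    # 128 hidden units with ReLU, followed by a 10-way linear output layer.
    h = fully_connected(x, 128, is_training=True, reuse=False,
                        activation=relu, name='fc1')
    logits = fully_connected(h, 10, is_training=True, reuse=False,
                             activation=None, name='logits')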


Adds a 2D convolutional layer

tefla.core.layers.conv2d (x, n_output_channels, is_training, reuse, trainable=True, filter_size= (3, 3), stride= (1, 1), padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='conv2d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

The convolutional layer creates a variable called weights, representing the convolution kernel, which is convolved with x to produce a Tensor of hidden units. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • n_output_channels: Integer or long, the number of output channels in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 2 positive integers specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 4-D Tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, n_output].
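
Illustrative usage, assuming a 4-D NHWC input x such as the placeholder above; filter_size and stride follow the documented defaults:

    from tefla.core.layers import conv2d, relu

    # 3x3 convolution producing 64 feature maps, stride 1, SAME padding.
    net = conv2d(x, 64, is_training=True, reuse=False,
                 filter_size=(3, 3), stride=(1, 1), activation=relu, name='conv1')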


Adds a 2D dilated convolutional layer

tefla.core.layers.dilated_conv2d (x, n_output_channels, is_training, reuse, trainable=True, filter_size= (3, 3), dilation=1, stride=1, padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='dilated_conv2d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

Also known as convolution with holes or atrous convolution. If the dilation (rate) parameter is equal to one, it performs regular 2-D convolution. If the dilation parameter is greater than one, it performs convolution with holes, sampling the input values every dilation pixels in the height and width dimensions. The convolutional layer creates a variable called weights, representing the convolution kernel, which is convolved with x to produce a Tensor of hidden units. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • n_output_channels: Integer or long, the number of output channels in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • dilation: A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride/rate or dilation.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 4-D Tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, n_output].
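
A hedged usage sketch: the call below mirrors conv2d but with dilation=2, which samples the input every 2 pixels and enlarges the receptive field without adding parameters (input x and channel count are illustrative):

    from tefla.core.layers import dilated_conv2d

    # 3x3 filters with dilation rate 2 (effective receptive field 5x5), 64 output channels.
    net = dilated_conv2d(x, 64, is_training=True, reuse=False,
                         filter_size=(3, 3), dilation=2, name='dilated1')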


Adds a 2D separable convolutional layer

tefla.core.layers.separable_conv2d (x, n_output_channels, is_training, reuse, trainable=True, filter_size= (3, 3), stride= (1, 1), depth_multiplier=1, padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='separable_conv2d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

Performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. Note that this is separability between dimensions [1, 2] and 3, not spatial separability between dimensions 1 and 2. The convolutional layer creates two variables called depthwise_W and pointwise_W; depthwise_W is convolved with x to produce the depthwise convolution, whose result is multiplied by pointwise_W to produce the output Tensor. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • n_output_channels: Integer or long, the number of output channels in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 2 positive integers specifying the stride at which to compute output.
  • depth_multiplier: A positive int32, the number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 4-D Tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, n_output].
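
Illustrative usage; with depth_multiplier=1 the depthwise stage keeps one 3x3 filter per input channel and the pointwise 1x1 stage mixes them into n_output_channels (input x and sizes are illustrative):

    from tefla.core.layers import separable_conv2d

    # Depthwise 3x3 followed by pointwise 1x1, producing 128 output channels.
    net = separable_conv2d(x, 128, is_training=True, reuse=False,
                           filter_size=(3, 3), depth_multiplier=1, name='sep_conv1')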


Adds a 2D depthwise convolutional layer

tefla.core.layers.depthwise_conv2d (x, depth_multiplier, is_training, reuse, trainable=True, filter_size= (3, 3), stride= (1, 1), padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='depthwise_conv2d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter tensor of shape [filter_height, filter_width, in_channels, channel_multiplier] containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 2 positive integers specifying the stride at which to compute output.
  • depth_multiplier: A positive int32, the number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, in_channels * depth_multiplier].


Adds a 3D convolutional layer

tefla.core.layers.conv3d (x, n_output_channels, is_training, reuse, trainable=True, filter_size= (3, 3, 3), stride= (1, 1, 1), padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='conv3d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

The convolutional layer creates a variable called weights, representing the convolution kernel, which is convolved with x to produce a Tensor of hidden units. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 5.

Args

  • x: A 5-D Tensor with a known value for the last dimension, i.e. [batch_size, in_depth, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • n_output_channels: Integer or long, the number of output channels in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 3 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 3 positive integers specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 5-D Tensor variable representing the result of the series of operations. e.g.: 5-D Tensor [batch, new_depth, new_height, new_width, n_output].


Adds a 2D upsampling or deconvolutional layer

tefla.core.layers.upsample2d (input_, output_shape, is_training, reuse, trainable=True, filter_size= (5, 5), stride= (2, 2), w_init=, b_init=0.0, w_regularizer=, batch_norm=None, batch_norm_args=None, activation=None, name='deconv2d', use_bias=True, with_w=False, outputs_collections=None, **unused)

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but it is actually the transpose (gradient) of conv2d rather than an actual deconvolution. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • output_shape: the shape of the 4-D output tensor, e.g. [batch_size, new_height, new_width, n_output_channels]
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 2 positive integers specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, n_output].
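
Illustrative usage; output_shape is the full shape of the upsampled tensor, so a stride of (2, 2) roughly doubles the spatial dimensions. The input tensor `net`, batch size, and shapes below are assumptions for the example:

    from tefla.core.layers import upsample2d

    batch_size = 32
    # Upsample a [batch_size, 16, 16, 64] tensor `net` to [batch_size, 32, 32, 32].
    up = upsample2d(net, [batch_size, 32, 32, 32], is_training=True, reuse=False,
                    filter_size=(5, 5), stride=(2, 2), name='deconv1')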


Adds a 3D upsampling or deconvolutional layer

tefla.core.layers.upsample3d (input_, output_shape, is_training, reuse, trainable=True, filter_size= (5, 5, 5), stride= (2, 2, 2), w_init=, b_init=0.0, w_regularizer=, batch_norm=None, batch_norm_args=None, activation=None, name='deconv3d', use_bias=True, with_w=False, outputs_collections=None, **unused)

This operation is sometimes called "deconvolution" after Deconvolutional Networks, but it is actually the transpose (gradient) of conv3d rather than an actual deconvolution. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 5.

Args

  • x: A 5-D Tensor with a known value for the last dimension, i.e. [batch_size, in_depth, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • output_shape: the shape of the 5-D output tensor, e.g. [batch_size, new_depth, new_height, new_width, n_output_channels]
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 3 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 3 positive integers specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The tensor variable representing the result of the series of operations. e.g.: 5-D Tensor [batch, new_depth, new_height, new_width, n_output].


Adds a 1D convolutional layer

tefla.core.layers.conv1d (x, n_output_channels, is_training, reuse, trainable=True, filter_size=3, stride=1, padding='SAME', w_init=, b_init=0.0, w_regularizer=, untie_biases=False, name='conv1d', batch_norm=None, batch_norm_args=None, activation=None, use_bias=True, outputs_collections=None)

The convolutional layer creates a variable called weights, representing the convolution kernel, which is convolved with x to produce a Tensor of hidden units. If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 3.

Args

  • x: A 3-D Tensor with a known value for the last dimension, i.e. [batch_size, in_width, depth].
  • is_training: Bool, training or testing
  • n_output_channels: Integer or long, the number of output channels in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int specifying the spatial dimension of the filters.
  • stride: an int specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 3-D Tensor variable representing the result of the series of operations. e.g.: 3-D Tensor [batch, new_width, n_output].


Max Pooling 1D

tefla.core.layers.max_pool_1d (x, filter_size=3, stride=2, padding='SAME', name='maxpool1d', outputs_collections=None, **unused)

Args

  • x: a 3-D Tensor [batch_size, steps, in_channels].
  • filter_size: int or list of int. Pooling kernel size.
  • stride: int or list of int. Stride of the pooling operation. Default: 2.
  • padding: str from "SAME", "VALID". Padding algorithm to use. Default: 'SAME'.
  • name: A name for this layer (optional). Default: 'maxpool1d'.

Returns

3-D Tensor [batch, pooled steps, in_channels].


Avg Pooling 1D

tefla.core.layers.avg_pool_1d (x, filter_size=3, stride=2, padding='SAME', name='avgpool1d', outputs_collections=None, **unused)

Args

  • x: a 3-D Tensor [batch_size, steps, in_channels].
  • filter_size: int or list of int. Pooling kernel size.
  • stride: int or list of int. Stride of the pooling operation. Default: 2.
  • padding: str from "SAME", "VALID". Padding algorithm to use. Default: 'SAME'.
  • name: A name for this layer (optional). Default: 'avgpool1d'.

Returns

3-D Tensor [batch, pooled steps, in_channels].


Adds a 2D highway convolutional layer

tefla.core.layers.highway_conv2d (x, n_output, is_training, reuse, trainable=True, filter_size= (3, 3), stride= (1, 1), padding='SAME', w_init=, b_init=0.0, w_regularizer=, name='highway_conv2d', batch_norm=None, batch_norm_args=None, activation=, use_bias=True, outputs_collections=None)

Highway networks (https://arxiv.org/abs/1505.00387). If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: x must have rank 4.

Args

  • x: A 4-D Tensor with a known value for the last dimension, i.e. [batch_size, in_height, in_width, depth].
  • is_training: Bool, training or testing
  • n_output: Integer or long, the number of output units in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • filter_size: an int or list/tuple of 2 positive integers specifying the spatial dimensions of the filters.
  • stride: an int or tuple/list of 2 positive integers specifying the stride at which to compute output.
  • padding: one of "VALID" or "SAME".
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • untie_biases: if True, biases are untied across spatial dimensions (one bias per spatial location).
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The Tensor variable representing the result of the series of operations. e.g.: 4-D Tensor [batch, new_height, new_width, n_output].
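
For intuition, a highway layer mixes a transformed output H(x) with the untouched input x through a learned transform gate T(x). A minimal TensorFlow sketch of that gating (the standard formulation from the paper above, not necessarily tefla's exact implementation):

    import tensorflow as tf

    def highway_mix(x, H, T_logits):
        """Combine the transform output H with the input x using gate T = sigmoid(T_logits)."""
        T = tf.nn.sigmoid(T_logits)   # transform gate in (0, 1)
        return H * T + x * (1.0 - T)  # carry the input through where the gate is closed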


Adds a fully connected highway layer

tefla.core.layers.highway_fc2d (x, n_output, is_training, reuse, trainable=True, filter_size= (3, 3), w_init=, b_init=0.0, w_regularizer=, name='highway_fc2d', activation=None, use_bias=True, outputs_collections=None)

Highway networks (https://arxiv.org/abs/1505.00387). If batch_norm is provided, it is then applied. Otherwise, if batch_norm is None and b_init and use_bias are provided, a biases variable is created and added to the hidden units. Finally, if activation is not None, it is applied to the hidden units as well. Note: if x has a rank greater than 2, it is flattened prior to the initial matrix multiply by weights.

Args

  • x: A 2-D or 4-D Tensor with at least rank 2 and a known value for the last dimension, e.g. [batch_size, depth] or [None, None, None, channels].
  • is_training: Bool, training or testing
  • n_output: Integer or long, the number of output units in the layer.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • activation: activation function, set to None to skip it and maintain a linear activation.
  • batch_norm: normalization function to use. If batch_norm is True, the original Google implementation is used; if another function is provided, that function is applied. Defaults to None for no normalizer function.
  • batch_norm_args: normalization function parameters.
  • w_init: An initializer for the weights.
  • w_regularizer: Optional regularizer for the weights.
  • b_init: An initializer for the biases. If None skip biases.
  • outputs_collections: The collections to which the outputs are added.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional name or scope for variable_scope/name_scope.
  • use_bias: Whether to add bias or not

Returns

The 2-D Tensor variable representing the result of the series of operations. e.g.: 2-D Tensor [batch_size, n_output]


Max pooling layer

tefla.core.layers.max_pool (x, filter_size= (3, 3), stride= (2, 2), padding='SAME', name='pool', outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • filter_size: An int or list/tuple of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  • stride: An int or list/tuple of length 2: [stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, new_height, new_width, channels].


Max pooling layer

tefla.core.layers.max_pool_3d (x, filter_size= (3, 3, 3), stride= (2, 2, 2), padding='SAME', name='pool', outputs_collections=None, **unused)

Args

  • x: A 5-D Tensor of shape [batch_size, depth, height, width, channels]
  • filter_size: An int or list/tuple of length 3: [kernel_depth, kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if all values are the same.
  • stride: An int or list/tuple of length 3: [stride_depth, stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A Tensor representing the results of the pooling operation. e.g.: 5-D Tensor [batch, new_depth, new_height, new_width, channels].


Fractional pooling layer

tefla.core.layers.fractional_pool (x, pooling_ratio=[1.0, 1.44, 1.73, 1.0], pseudo_random=None, determinastic=None, overlapping=None, name='fractional_pool', seed=None, seed2=None, type='avg', outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • pooling_ratio: A list of floats that has length >= 4. Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions respectively.
  • pseudo_random: An optional bool. Defaults to False. When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check paper Benjamin Graham, Fractional Max-Pooling for difference between pseudorandom and random.
  • overlapping: An optional bool. Defaults to False. When set to True, the values at the boundary of adjacent pooling cells are used by both cells. For example, with values [20, 5, 16, 3, 7] at indices [0, 1, 2, 3, 4]: if the pooling sequence is [0, 2, 4], then 16 (at index 2) will be used twice, and the result would be [41/3, 26/3] for fractional avg pooling.
  • deterministic: An optional bool. Defaults to False. When set to True, a fixed pooling region will be used when iterating over a FractionalAvgPool node in the computation graph. Mainly used in unit test to make FractionalAvgPool deterministic.
  • seed: An optional int. Defaults to 0. If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
  • seed2: An optional int. Defaults to 0. An second seed to avoid seed collision.
  • outputs_collections: The collections to which the outputs are added.
  • type: avg or max pool
  • name: Optional scope/name for name_scope.

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, new_height, new_width, channels].


RMS pooling layer

tefla.core.layers.rms_pool_2d (x, filter_size= (3, 3), stride= (2, 2), padding='SAME', name='pool', epsilon=1e-12, outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • filter_size: An int or list/tuple of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  • stride: An int or list/tuple of length 2: [stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.
  • epsilon: prevents divide by zero

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, new_height, new_width, channels].
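
RMS pooling is average pooling applied to squared activations followed by a square root; epsilon guards the square root, as in the documented argument. A rough TensorFlow 1.x equivalent (a sketch, not tefla's exact code):

    import tensorflow as tf

    def rms_pool_2d_sketch(x, filter_size=(3, 3), stride=(2, 2), epsilon=1e-12):
        # sqrt(avg_pool(x^2) + epsilon): root-mean-square over each pooling window.
        pooled = tf.nn.avg_pool(tf.square(x),
                                ksize=[1, filter_size[0], filter_size[1], 1],
                                strides=[1, stride[0], stride[1], 1],
                                padding='SAME')
        return tf.sqrt(pooled + epsilon)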


RMS pooling layer

tefla.core.layers.rms_pool_3d (x, filter_size= (3, 3, 3), stride= (2, 2, 2), padding='SAME', name='pool', epsilon=1e-12, outputs_collections=None, **unused)

Args

  • x: A 5-D Tensor of shape [batch_size, depth, height, width, channels]
  • filter_size: An int or list/tuple of length 3: [kernel_depth, kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if all values are the same.
  • stride: An int or list/tuple of length 3: [stride_depth, stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.
  • epsilon: prevents divide by zero

Returns

A 5-D Tensor representing the results of the pooling operation. e.g.: 5-D Tensor [batch, new_depth, new_height, new_width, channels].


Avg pooling layer

tefla.core.layers.avg_pool_3d (x, filter_size= (3, 3, 3), stride= (2, 2, 2), padding='SAME', name=None, outputs_collections=None, **unused)

Args

  • x: A 5-D Tensor of shape [batch_size, depth, height, width, channels]
  • filter_size: An int or list/tuple of length 3: [kernel_depth, kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if all values are the same.
  • stride: An int or list/tuple of length 3: [stride_depth, stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 5-D Tensor representing the results of the pooling operation. e.g.: 5-D Tensor [batch, new_depth, new_height, new_width, channels].


Avg pooling layer

tefla.core.layers.avg_pool_2d (x, filter_size= (3, 3), stride= (2, 2), padding='SAME', name=None, outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • filter_size: An int or list/tuple of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  • stride: An int or list/tuple of length 2: [stride_height, stride_width].
  • padding: The padding method, either 'VALID' or 'SAME'.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, new_height, new_width, channels].


Global average pooling layer

tefla.core.layers.global_avg_pool (x, name='global_avg_pool', outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, 1, 1, channels].
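
A rough equivalent of global average pooling, assuming the documented [batch, 1, 1, channels] output shape (a sketch, not tefla's exact code):

    import tensorflow as tf

    def global_avg_pool_sketch(x):
        # Average over the spatial dimensions, keeping them as size-1 axes.
        return tf.reduce_mean(x, axis=[1, 2], keepdims=True)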


Global max pooling layer

tefla.core.layers.global_max_pool (x, name='global_max_pool', outputs_collections=None, **unused)

Args

  • x: A 4-D Tensor of shape [batch_size, height, width, channels]
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch, 1, 1, channels].


Feature max pooling layer

tefla.core.layers.feature_max_pool_1d (x, stride=2, name='feature_max_pool_1d', outputs_collections=None, **unused)

Args

  • x: A 2-D tensor of shape [batch_size, channels]
  • stride: A int.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 2-D Tensor representing the results of the pooling operation. e.g.: 2-D Tensor [batch_size, new_channels]


Feature max pooling layer

tefla.core.layers.feature_max_pool_2d (x, stride=2, name='feature_max_pool_2d', outputs_collections=None, **unused)

Args

  • x: A 4-D tensor of shape [batch_size, height, width, channels]
  • stride: A int.
  • outputs_collections: The collections to which the outputs are added.
  • name: Optional scope/name for name_scope.

Returns

A 4-D Tensor representing the results of the pooling operation. e.g.: 4-D Tensor [batch_size, height, width, new_channels]


Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167

tefla.core.layers.batch_norm_tf (x, name='bn', scale=False, updates_collections=None, **kwargs)

"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy. Can be used as a normalizer function for conv2d and fully_connected. Note: when is_training is True, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op, for example:

    # control_flow_ops: from tensorflow.python.ops import control_flow_ops
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    if update_ops:
        updates = tf.group(*update_ops)
        total_loss = control_flow_ops.with_dependencies([updates], total_loss)

One can set updates_collections=None to force the updates in place, but that can have a speed penalty, especially in distributed settings.

Args

  • x: a Tensor with 2 or more dimensions, where the first dimension has batch_size. The normalization is over all but the last dimension if data_format is NHWC and the second dimension if data_format is NCHW.
  • decay: decay for the moving average. Reasonable values for decay are close to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc. Lower decay value (recommend trying decay=0.9) if model experiences reasonably good training performance but poor validation and/or test performance. Try zero_debias_moving_mean=True for improved stability.
  • center: If True, subtract beta. If False, beta is ignored.
  • scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
  • epsilon: small float added to variance to avoid dividing by zero.
  • activation_fn: activation function, default set to None to skip it and maintain a linear activation.
  • param_initializers: optional initializers for beta, gamma, moving mean and moving variance.
  • updates_collections: collections to collect the update ops for computation. The updates_ops need to be executed with the train_op. If None, a control dependency would be added to make sure the updates are computed in place.
  • is_training: whether or not the layer is in training mode. In training mode it would accumulate the statistics of the moments into moving_mean and moving_variance using an exponential moving average with the given decay. When it is not in training mode then it would use the values of the moving_mean and the moving_variance.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • outputs_collections: collections to add the outputs.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • batch_weights: An optional tensor of shape [batch_size], containing a frequency weight for each batch item. If present, then the batch normalization uses weighted mean and variance. (This can be used to correct for bias in training example selection.)
  • fused: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise.
  • name: Optional scope/name for variable_scope.

Returns

A Tensor representing the output of the operation.


Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167

tefla.core.layers.batch_norm_lasagne (x, is_training, reuse, trainable=True, decay=0.9, epsilon=0.0001, name='bn', updates_collections='update_ops', outputs_collections=None)

Instead of storing and updating a moving variance, this layer stores and updates a moving inverse standard deviation. "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe, Christian Szegedy. Can be used as a normalizer function for conv2d and fully_connected. Note: when is_training is True, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op, for example:

    # control_flow_ops: from tensorflow.python.ops import control_flow_ops
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    if update_ops:
        updates = tf.group(*update_ops)
        total_loss = control_flow_ops.with_dependencies([updates], total_loss)

One can set updates_collections=None to force the updates in place, but that can have a speed penalty, especially in distributed settings.

Args

  • x: a tensor with 2 or more dimensions, where the first dimension has batch_size. The normalization is over all but the last dimension if data_format is NHWC and the second dimension if data_format is NCHW.
  • decay: decay for the moving average. Reasonable values for decay are close to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc. Lower decay value (recommend trying decay=0.9) if model experiences reasonably good training performance but poor validation and/or test performance. Try zero_debias_moving_mean=True for improved stability.
  • epsilon: small float added to variance to avoid dividing by zero.
  • updates_collections: collections to collect the update ops for computation. The updates_ops need to be executed with the train_op. If None, a control dependency would be added to make sure the updates are computed in place.
  • is_training: whether or not the layer is in training mode. In training mode it would accumulate the statistics of the moments into moving_mean and moving_variance using an exponential moving average with the given decay. When it is not in training mode then it would use the values of the moving_mean and the moving_variance.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  • outputs_collections: collections to add the outputs.
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • name: Optional scope/name for variable_scope.

Returns

A Tensor representing the output of the operation.


Layer normalize the tensor x, averaging over the last dimension

tefla.core.layers.layer_norm (x, reuse, filters=None, trainable=True, epsilon=1e-06, name='layer_norm', allow_defun=False, outputs_collections=None)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  • trainable: a bool, training or fixed value
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the layer norm operation.


Parametric rectified linear layer

tefla.core.layers.prelu (x, reuse, alpha_init=0.2, trainable=True, name='prelu', outputs_collections=None)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a scope must be given.
  • alpha_init: initialization value for alpha
  • trainable: a bool, training or fixed value
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the prelu activation operation.


Computes relu

tefla.core.layers.relu (x, name='relu', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Rectifier linear relu6 layer

tefla.core.layers.relu6 (x, name='relu6', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the relu6 activation operation.


Softplus layer

tefla.core.layers.softplus (x, name='softplus', outputs_collections=None, **unused) Computes softplus: log(exp(x) + 1).

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Softsign layer

tefla.core.layers.softsign (x, name='softsign', outputs_collections=None, **unused) Computes softsign: x / (abs(x) + 1).

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes Concatenated ReLU

tefla.core.layers.crelu (x, name='crelu', outputs_collections=None, **unused)

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: https://arxiv.org/abs/1603.05201

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes exponential linear: exp(features) - 1 if < 0, features otherwise

tefla.core.layers.elu (x, name='elu', outputs_collections=None, **unused) See "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)"

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Like concatenated ReLU (http://arxiv.org/abs/1603.05201), but with ELU instead of ReLU

tefla.core.layers.concat_elu (x, name='concat_elu', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes leaky relu

tefla.core.layers.leaky_relu (x, alpha=0.01, name='leaky_relu', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • alpha: the constant for scaling the activation
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes leaky relu, Lasagne style

tefla.core.layers.lrelu (x, leak=0.2, name='lrelu', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • leak: the constant for scaling the activation
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes maxout activation

tefla.core.layers.maxout (x, k=2, name='maxout', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • k: output channel splitting factor
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.
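
Maxout takes an element-wise maximum over groups of k channels, reducing the channel dimension by a factor of k. A sketch of the idea for a 4-D NHWC tensor (assumes statically known spatial and channel dimensions; the exact grouping in tefla may differ):

    import tensorflow as tf

    def maxout_sketch(x, k=2):
        # Split the channel axis into groups of size k and take the max within each group.
        shape = x.get_shape().as_list()   # [batch, height, width, channels], spatial dims known
        channels = shape[-1]
        assert channels % k == 0, "channels must be divisible by k"
        x = tf.reshape(x, [-1, shape[1], shape[2], channels // k, k])
        return tf.reduce_max(x, axis=-1)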


Computes maxout activation

tefla.core.layers.offset_maxout (x, k=2, name='maxout', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • k: output channel splitting factor
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes softmax activation

tefla.core.layers.softmax (x, name='softmax', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Computes the spatial softmax of a convolutional feature map

tefla.core.layers.spatial_softmax (features, reuse, temperature=None, name='spatial_softmax', trainable=True, outputs_collections=None, **unused)

First computes the softmax over the spatial extent of each channel of a convolutional feature map. Then computes the expected 2D position of the points of maximal activation for each channel, resulting in a set of feature keypoints [x1, y1, ... xN, yN] for all N channels. Read more here: "Learning visual feature spaces for robotic manipulation with deep spatial autoencoders", Finn et al., http://arxiv.org/abs/1509.06113.

Args

  • features: A Tensor of size [batch_size, W, H, num_channels]; the convolutional feature map.
  • reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
  • outputs_collections: The collections to which the outputs are added.
  • temperature: Softmax temperature (optional). If None, a learnable temperature is created.
  • name: A name for this operation (optional).
  • trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).

Returns

feature_keypoints: A Tensor with size [batch_size, num_channels * 2]; the expected 2D locations of each channel's feature keypoint (normalized to the range (-1,1)). The inner dimension is arranged as [x1, y1, ... xN, yN].


Computes selu

tefla.core.layers.selu (x, alpha=None, scale=None, name='selu', outputs_collections=None, **unused)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • alpha: float, selu parameters calculated from fixed points
  • scale: float, selu parameters calculated from fixed points
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the selu activation operation.


Dropout layer for self normalizing networks

tefla.core.layers.dropout_selu (x, is_training, drop_p=0.2, alpha=-1.7580993408473766, fixedPointMean=0.0, fixedPointVar=1.0, noise_shape=None, seed=None, name='dropout_selu', outputs_collections=None, **unused)

Args

  • x: a Tensor.
  • is_training: a bool, training or validation
  • drop_p: probability of dropping a unit
  • fixedPointMean: float, the mean used to calculate the selu parameters
  • fixedPointVar: float, the variance used to calculate the selu parameters
  • alpha: float, product of the two selu parameters
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the dropout operation.


Computes Gumbel Softmax

tefla.core.layers.gumbel_softmax (logits, temperature, hard=False)

Sample from the Gumbel-Softmax distribution and optionally discretize. http://blog.evjang.com/2016/11/tutorial-categorical-variational.html https://arxiv.org/abs/1611.01144

Args

  • logits: [batch_size, n_class] unnormalized log-probs
  • temperature: non-negative scalar
  • hard: if True, take argmax, but differentiate w.r.t. soft sample y

Returns

[batch_size, n_class] sample from the Gumbel-Softmax distribution. If hard=True, then the returned sample will be one-hot, otherwise it will be a probability distribution that sums to 1 across classes
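
The Gumbel-Softmax trick adds Gumbel noise to the logits and applies a temperature-scaled softmax. A compact TensorFlow 1.x sketch of the soft (hard=False) case, following the references above (not necessarily tefla's exact implementation):

    import tensorflow as tf

    def gumbel_softmax_sketch(logits, temperature, eps=1e-20):
        # Sample Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
        u = tf.random_uniform(tf.shape(logits), minval=0, maxval=1)
        gumbel = -tf.log(-tf.log(u + eps) + eps)
        # Softmax of the perturbed logits; lower temperature pushes the sample toward one-hot.
        return tf.nn.softmax((logits + gumbel) / temperature)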


Computes pixel wise softmax activation

tefla.core.layers.pixel_wise_softmax (inputs)

Args

  • x: a Tensor with type float, double, int32, int64, uint8, int16, or int8.
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the activation operation.


Dropout layer

tefla.core.layers.dropout (x, is_training, drop_p=0.5, seed=None, name='dropout', outputs_collections=None, **unused)

Args

  • x: a Tensor.
  • is_training: a bool, training or validation
  • drop_p: probability of dropping a unit
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the dropout operation.


Repeat op

tefla.core.layers.repeat (x, repetitions, layer, num_outputs=None, name='Repeat', outputs_collections=None, *args, **kwargs)

Args

  • x: a Tensor.
  • repetitions: a int, number of times to apply the same operation
  • layer: the layer function with arguments to repeat
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the repetition operation.


Merge op

tefla.core.layers.merge (tensors_list, mode, axis=1, name='merge', outputs_collections=None, **kwargs)

Args

  • tensors_list: A list of Tensors to merge
  • mode: str, available modes are ['concat', 'elemwise_sum', 'elemwise_mul', 'sum','mean', 'prod', 'max', 'min', 'and', 'or']
  • name: a optional scope/name of the layer
  • outputs_collections: The collections to which the outputs are added.

Returns

A Tensor representing the results of the merge operation.


Builds a stack of layers by applying layer repeatedly using stack_args

tefla.core.layers.stack (inputs, layer, stack_args, is_training, reuse, outputs_collections=None, **kwargs)

stack allows you to repeatedly apply the same operation with different arguments stack_args[i]. For each application of the layer, stack creates a new scope appended with an increasing number. For example:

y = stack(x, fully_connected, [32, 64, 128], scope='fc')
   # It is equivalent to:
   x = fully_connected(x, 32, scope='fc/fc_1')
   x = fully_connected(x, 64, scope='fc/fc_2')
   y = fully_connected(x, 128, scope='fc/fc_3')

If the scope argument is not given in kwargs, it is set to layer.__name__, or layer.func.__name__ (for functools.partial objects). If neither __name__ nor func.__name__ is available, the layers are called with scope='stack'.

Args

  • inputs: A Tensor suitable for layer.
  • layer: A layer with arguments (inputs, *args, **kwargs)
  • stack_args: A list/tuple of parameters for each call of layer.
  • outputs_collections: The collections to which the outputs are added.
  • **kwargs: Extra kwargs for the layer.

Returns

a Tensor result of applying the stacked layers.


Normalizes the given input across the specified dimension to unit length

tefla.core.layers.unit_norm (inputs, dim, epsilon=1e-07, scope=None) Note that the rank of input must be known.

Args

  • inputs: A Tensor of arbitrary size.
  • dim: The dimension along which the input is normalized.
  • epsilon: A small value to add to the inputs to avoid dividing by zero.
  • scope: Optional scope for variable_scope.

Returns

The normalized Tensor.


Concatenates two feature maps

tefla.core.layers.crop_and_concat (inputs1, inputs2, name='crop_concat')

Concatenates feature maps of different sizes by cropping the larger map; the concatenation is across output channels.

Args

  • inputs1: A Tensor
  • inputs2: A Tensor

Returns

the concatenated output tensor
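
A sketch of the crop-and-concat idea for NHWC tensors, centre-cropping the larger map to the smaller map's spatial size before concatenating on channels (tefla's implementation may differ in details):

    import tensorflow as tf

    def crop_and_concat_sketch(inputs1, inputs2):
        # Centre-crop inputs1 to the spatial size of inputs2, then concatenate channels.
        shape1 = tf.shape(inputs1)
        shape2 = tf.shape(inputs2)
        offsets = tf.stack([0, (shape1[1] - shape2[1]) // 2, (shape1[2] - shape2[2]) // 2, 0])
        size = tf.stack([-1, shape2[1], shape2[2], -1])
        cropped = tf.slice(inputs1, offsets, size)
        return tf.concat([cropped, inputs2], axis=3)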