API
Access to classes and functions of tednet.
tednet
-
tednet.hard_sigmoid(tensor: torch.Tensor) → torch.Tensor
Computes the element-wise hard sigmoid of the input tensor. See e.g. https://github.com/Theano/Theano/blob/master/theano/tensor/nnet/sigm.py#L279
- Parameters
tensor (torch.Tensor) – tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Returns
tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Return type
torch.Tensor
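A torch-free sketch of the element-wise rule (assuming the slope-0.2, offset-0.5 piecewise-linear form used in the linked Theano reference; the function name is hypothetical):

```python
def hard_sigmoid_scalar(x: float) -> float:
    # Piecewise-linear sigmoid approximation: clamp(0.2 * x + 0.5, 0, 1).
    return min(max(0.2 * x + 0.5, 0.0), 1.0)
```

tednet.hard_sigmoid applies this rule element-wise over a torch.Tensor.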
-
tednet.eye(n: int, m: int, device: torch.device = 'cpu', requires_grad: bool = False) → torch.Tensor
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
- Parameters
n (int) – The number of rows
m (int) – The number of columns
device (torch.device) – The device on which to create the tensor
requires_grad (bool) – Whether autograd should record operations on the returned tensor
- Returns
2-D tensor \(\in \mathbb{R}^{{n} \times {m}}\)
- Return type
torch.Tensor
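As a plain-Python illustration of the shape semantics (a hypothetical list-of-lists stand-in for the returned torch.Tensor):

```python
def eye_sketch(n: int, m: int):
    # Ones on the main diagonal, zeros elsewhere; the result is n x m.
    return [[1.0 if i == j else 0.0 for j in range(m)] for i in range(n)]
```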
-
tednet.to_numpy(tensor: torch.Tensor) → numpy.ndarray
Convert torch.Tensor to numpy.ndarray.
- Parameters
tensor (torch.Tensor) – tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Returns
arr \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Return type
numpy.ndarray
-
tednet.to_tensor(arr: numpy.ndarray) → torch.Tensor
Convert numpy.ndarray to torch.Tensor.
- Parameters
arr (numpy.ndarray) – arr \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Returns
tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Return type
torch.Tensor
tednet.tnn
tednet.tnn.initializer
-
tednet.tnn.initializer.trunc_normal_init(model, mean: float = 0.0, std: float = 0.1)
Initialize a network with a truncated normal distribution.
tednet.tnn.tn_module
-
class tednet.tnn.tn_module._TNBase(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], bias: bool = True)
Bases: torch.nn.modules.module.Module
The base class of tensor decomposition networks.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param. The decomposition shape of the input
out_shape (Union[list, numpy.ndarray]) – 1-D param. The decomposition shape of the output
ranks (Union[list, numpy.ndarray]) – 1-D param. The ranks of the decomposition
bias (bool) – use bias or not. True to use, False not to use
-
check_setting()
Check whether in_shape, out_shape and ranks are 1-D params.
-
abstract set_tn_type()
Set the tensor decomposition type. The types are as follows:

type    tensor decomposition
tr      Tensor Ring
tt      Tensor Train
tk2     Tucker2
cp      CANDECOMP/PARAFAC
btt     Block-Term Tucker
Examples
>>> tn_type = "tr"
>>> self.tn_info["type"] = tn_type
-
abstract set_nodes()
Generate tensor nodes, then add node information to self.tn_info.
Examples
>>> nodes_info = []
>>> node_info = dict(name="node1", shape=[2, 3, 4])
>>> nodes_info.append(node_info)
>>> self.tn_info["nodes"] = nodes_info
-
abstract set_params_info()
Record information of Parameters.
Examples
>>> self.tn_info["t_params"] = tn_parameters
>>> self.tn_info["ori_params"] = ori_parameters
>>> self.tn_info["cr"] = ori_parameters / tn_parameters
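The cr entry above is the compression ratio: the original (dense) parameter count divided by the tensorized one. A minimal arithmetic sketch (the numbers are illustrative, not taken from any tednet layer):

```python
ori_parameters = 1024 * 1024   # e.g. a dense 1024 x 1024 weight matrix
tn_parameters = 20_480         # illustrative tensorized parameter count
cr = ori_parameters / tn_parameters
```

A cr of 51.2 here means the tensorized layer stores roughly 2% of the dense parameters.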
-
abstract tn_contract(inputs: torch.Tensor) → torch.Tensor
Contract the inputs with the tensor nodes.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_m}}\)
- Returns
tensor \(\in \mathbb{R}^{{i_1} \times \dots \times {i_n}}\)
- Return type
torch.Tensor
-
abstract recover()
Use for rebuilding the original tensor.
-
class tednet.tnn.tn_module.LambdaLayer(lambd)
Bases: torch.nn.modules.module.Module
A layer consisting of a lambda function.
- Parameters
lambd – a lambda function.
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
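Stripped of the torch.nn.Module machinery, the layer just stores a callable and applies it in forward. A torch-free sketch (the class name is hypothetical; lists stand in for tensors):

```python
class LambdaLayerSketch:
    """Wraps an arbitrary callable as a layer, mirroring LambdaLayer's behavior."""

    def __init__(self, lambd):
        self.lambd = lambd  # any callable taking and returning a "tensor"

    def forward(self, inputs):
        return self.lambd(inputs)

# e.g. a layer that doubles every element
double = LambdaLayerSketch(lambda x: [v * 2 for v in x])
```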
tednet.tnn.tn_linear
-
class tednet.tnn.tn_linear._TNLinear(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], bias=True)
Bases: tednet.tnn.tn_module._TNBase
The Tensor Decomposition Linear.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The decomposition shape of feature out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^r\). The ranks of linear
bias (bool) – use bias of linear or not. True to use, False not to use
-
forward(inputs)
Tensor linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
tednet.tnn.tn_cnn
-
class tednet.tnn.tn_cnn._TNConvNd(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_module._TNBase
Tensor Decomposition Convolution.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The decomposition shape of channel out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^r\). The ranks of the decomposition
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
forward(inputs: torch.Tensor)
Tensor convolutional forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times H' \times W' \times C'}\)
- Return type
torch.Tensor
tednet.tnn.tn_rnn
-
class tednet.tnn.tn_rnn._TNLSTMCell(hidden_size: int, tn_block, drop_ih=0.3, drop_hh=0.35)
Bases: torch.nn.modules.module.Module
Tensor LSTMCell.
- Parameters
hidden_size (int) – The hidden size of the LSTM cell
tn_block – The tensor decomposition block used as the input-to-hidden layer
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_hh()
Reset parameters of hidden-to-hidden layer.
-
forward(inputs: torch.Tensor, state: tednet.tnn.tn_rnn.LSTMState)
Forwarding method. LSTMState = namedtuple('LSTMState', ['hx', 'cx'])
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
state (LSTMState) – namedtuple: [hx \(\in \mathbb{R}^{H}\), cx \(\in \mathbb{R}^{H}\)]
- Returns
result: hy \(\in \mathbb{R}^{H}\), [hy \(\in \mathbb{R}^{H}\), cy \(\in \mathbb{R}^{H}\)]
- Return type
torch.Tensor, [torch.Tensor, torch.Tensor]
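The LSTMState container referenced above is an ordinary collections.namedtuple holding the hidden state hx and cell state cx. A self-contained sketch (placeholder lists stand in for the torch.Tensor states):

```python
from collections import namedtuple

# Mirrors the definition quoted in the docstring above.
LSTMState = namedtuple('LSTMState', ['hx', 'cx'])

hidden = 4
state = LSTMState(hx=[0.0] * hidden, cx=[0.0] * hidden)
```

Both fields are accessible by name or by position, so the state threads cleanly through the cell's forward call.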
-
class tednet.tnn.tn_rnn._TNLSTM(hidden_size, tn_block, drop_ih=0.3, drop_hh=0.35)
Bases: torch.nn.modules.module.Module
Tensor LSTM.
- Parameters
hidden_size (int) – The hidden size of the LSTM
tn_block – The tensor decomposition block used as the input-to-hidden layer
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
forward(inputs, state)
Forwarding method. LSTMState = namedtuple('LSTMState', ['hx', 'cx'])
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{S \times b \times C}\)
state (LSTMState) – namedtuple: [hx \(\in \mathbb{R}^{H}\), cx \(\in \mathbb{R}^{H}\)]
- Returns
tensor \(\in \mathbb{R}^{S \times b \times C'}\), LSTMState is a namedtuple: [hy \(\in \mathbb{R}^{H}\), cy \(\in \mathbb{R}^{H}\)]
- Return type
torch.Tensor, LSTMState
tednet.tnn.cp
-
class tednet.tnn.cp.CPConv2D(c_in: int, c_out: int, rank: int, kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_cnn._TNConvNd
CANDECOMP/PARAFAC Decomposition Convolution.
- Parameters
c_in (int) – The number of channels in
c_out (int) – The number of channels out
rank (int) – The rank of the decomposition
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
set_tn_type()
Set as CANDECOMP/PARAFAC decomposition type.
-
set_nodes()
Generate CANDECOMP/PARAFAC nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tensor Decomposition Convolution.
- Parameters
inputs (torch.Tensor) – A tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
A tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
forward(inputs: torch.Tensor)
Tensor convolutional forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.cp.CPLinear(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], rank: int, bias: bool = True)
Bases: tednet.tnn.tn_linear._TNLinear
The CANDECOMP/PARAFAC Decomposition Linear.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The decomposition shape of feature out
rank (int) – The rank of the linear decomposition
bias (bool) – use bias of linear or not. True to use, False not to use
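CP approximates the reshaped weight tensor as a sum of rank rank-1 terms, one factor matrix per mode, so the parameter count grows with the sum (not the product) of the mode sizes. An illustrative estimate (the function name and shapes are hypothetical, and this is not necessarily the exact count tednet records, which may include bias and bookkeeping):

```python
from math import prod

def cp_linear_params(in_shape, out_shape, rank):
    # One (dim x rank) factor matrix per mode of the weight tensor.
    return rank * (sum(in_shape) + sum(out_shape))

in_shape, out_shape, rank = [4, 8, 8], [4, 8, 8], 6
dense_params = prod(in_shape) * prod(out_shape)          # a 256 x 256 dense weight
cp_params = cp_linear_params(in_shape, out_shape, rank)
```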
-
set_tn_type()
Set as CANDECOMP/PARAFAC decomposition type.
-
set_nodes()
Generate CANDECOMP/PARAFAC nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
CANDECOMP/PARAFAC linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.cp.CPLeNet5(num_classes: int, rs: Union[list, numpy.ndarray])
Bases: torch.nn.modules.module.Module
LeNet-5 based on CANDECOMP/PARAFAC.
- Parameters
num_classes (int) – The number of output classes
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times num\_classes}\)
- Return type
torch.Tensor
-
class tednet.tnn.cp.CPResNet20(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.cp.cp_resnet.CPResNet
ResNet-20 based on CANDECOMP/PARAFAC.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.cp.CPResNet32(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.cp.cp_resnet.CPResNet
ResNet-32 based on CANDECOMP/PARAFAC.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.cp.CPLSTM(in_shape: Union[list, numpy.ndarray], hidden_shape: Union[list, numpy.ndarray], ranks: int, drop_ih: float = 0.3, drop_hh: float = 0.35)
Bases: tednet.tnn.tn_rnn._TNLSTM
LSTM based on CANDECOMP/PARAFAC.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The input shape of LSTM
hidden_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The hidden shape of LSTM
ranks (int) – The rank of linear
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_ih()
Reset parameters of input-to-hidden layer.
tednet.tnn.tucker2
-
class tednet.tnn.tucker2.TK2Conv2D(c_in: int, c_out: int, ranks: Union[list, numpy.ndarray], kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_cnn._TNConvNd
Tucker-2 Decomposition Convolution.
- Parameters
c_in (int) – The number of channels in
c_out (int) – The number of channels out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^r\). The ranks of the decomposition
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
set_tn_type()
Set as Tucker-2 decomposition type.
-
set_nodes()
Generate Tucker-2 nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tucker-2 Decomposition Convolution.
- Parameters
inputs (torch.Tensor) – A tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
A tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tucker2.TK2Linear(in_shape: Union[list, numpy.ndarray], out_size: int, ranks: Union[list, numpy.ndarray], bias: bool = True)
Bases: tednet.tnn.tn_linear._TNLinear
Tucker-2 Decomposition Linear.

input length    ranks length
1               1
3               2
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m, m \in \{1, 3\}\). The decomposition shape of feature in
out_size (int) – The output size of the model
ranks (Union[list, numpy.ndarray]) – 1-D param. The rank of the decomposition
bias (bool) – use bias of linear or not. True to use, False not to use
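The table above pins the allowed combinations: a 1-element in_shape takes 1 rank, a 3-element in_shape takes 2 ranks. A small validation sketch (the function name is hypothetical):

```python
def tk2_shapes_ok(in_shape, ranks):
    # len(in_shape) must be 1 or 3; len(ranks) must be 1 or 2 respectively
    # (see the table above).
    expected = {1: 1, 3: 2}
    return len(in_shape) in expected and len(ranks) == expected[len(in_shape)]
```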
-
set_tn_type()
Set as Tucker-2 decomposition type.
-
set_nodes()
Generate Tucker-2 nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tucker-2 linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tucker2.TK2LeNet5(num_classes: int, rs: Union[list, numpy.ndarray])
Bases: torch.nn.modules.module.Module
LeNet-5 based on Tucker-2.
- Parameters
num_classes (int) – The number of output classes
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times num\_classes}\)
- Return type
torch.Tensor
-
class tednet.tnn.tucker2.TK2ResNet20(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tucker2.tk2_resnet.TK2ResNet
ResNet-20 based on Tucker-2.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tucker2.TK2ResNet32(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tucker2.tk2_resnet.TK2ResNet
ResNet-32 based on Tucker-2.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tucker2.TK2LSTM(in_shape: Union[list, numpy.ndarray], hidden_size: int, ranks: Union[list, numpy.ndarray], drop_ih: float = 0.3, drop_hh: float = 0.35)
Bases: tednet.tnn.tn_rnn._TNLSTM
LSTM based on Tucker-2.

input length    ranks length
1               1
3               2
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m, m \in \{1, 3\}\). The input shape of LSTM
hidden_size (int) – The hidden size of LSTM
ranks (Union[list, numpy.ndarray]) – 1-D param. The ranks of linear
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_ih()
Reset parameters of input-to-hidden layer.
tednet.tnn.bt_tucker
-
class tednet.tnn.bt_tucker.BTTConv2D(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], block_num: int, kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_cnn._TNConvNd
Block-Term Tucker Decomposition Convolution.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m+2}\). The rank of the decomposition
block_num (int) – The number of blocks
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
set_tn_type()
Set as Block-Term Tucker decomposition type.
-
set_nodes()
Generate Block-Term Tucker nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Block-Term Tucker Decomposition Convolution.
- Parameters
inputs (torch.Tensor) – A tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
A tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
forward(inputs: torch.Tensor)
Block-Term Tucker convolutional forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.bt_tucker.BTTLinear(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], block_num: int, bias: bool = True)
Bases: tednet.tnn.tn_linear._TNLinear
Block-Term Tucker Decomposition Linear.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The rank of the decomposition
block_num (int) – The number of blocks
bias (bool) – use bias of linear or not. True to use, False not to use
-
set_tn_type()
Set as Block-Term Tucker decomposition type.
-
set_nodes()
Generate Block-Term Tucker nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Block-Term Tucker linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.bt_tucker.BTTLeNet5(num_classes: int, rs: Union[list, numpy.ndarray])
Bases: torch.nn.modules.module.Module
LeNet-5 based on Block-Term Tucker.
- Parameters
num_classes (int) – The number of output classes
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times num\_classes}\)
- Return type
torch.Tensor
-
class tednet.tnn.bt_tucker.BTTResNet20(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.bt_tucker.btt_resnet.BTTResNet
ResNet-20 based on Block-Term Tucker.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.bt_tucker.BTTResNet32(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.bt_tucker.btt_resnet.BTTResNet
ResNet-32 based on Block-Term Tucker.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.bt_tucker.BTTLSTM(in_shape: Union[list, numpy.ndarray], hidden_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], block_num: int, drop_ih: float = 0.3, drop_hh: float = 0.35)
Bases: tednet.tnn.tn_rnn._TNLSTM
LSTM based on Block-Term Tucker.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The input shape of LSTM
hidden_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The hidden shape of LSTM
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The ranks of linear
block_num (int) – The number of blocks
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_ih()
Reset parameters of input-to-hidden layer.
tednet.tnn.tensor_train
-
class tednet.tnn.tensor_train.TTConv2D(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_cnn._TNConvNd
Tensor Train Decomposition Convolution.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The rank of the decomposition
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
set_tn_type()
Set as Tensor Train decomposition type.
-
set_nodes()
Generate Tensor Train nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tensor Train Decomposition Convolution.
- Parameters
inputs (torch.Tensor) – A tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
A tensor \(\in \mathbb{R}^{b \times C' \times H' \times W'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tensor_train.TTLinear(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], bias: bool = True)
Bases: tednet.tnn.tn_linear._TNLinear
Tensor Train Decomposition Linear.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m-1}\). The rank of the decomposition
bias (bool) – use bias of linear or not. True to use, False not to use
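With m input modes, the m-1 entries of ranks are the internal TT ranks; the boundary ranks are fixed to 1, and core k couples input mode i_k with output mode o_k. A parameter-count sketch under the standard TT-matrix layout (the function name and shapes are hypothetical, and tednet's exact bookkeeping may differ slightly, e.g. for bias):

```python
from math import prod

def tt_linear_params(in_shape, out_shape, ranks):
    # Core k has shape (r_{k-1}, i_k, o_k, r_k), with r_0 = r_m = 1.
    full = [1] + list(ranks) + [1]
    return sum(full[k] * i * o * full[k + 1]
               for k, (i, o) in enumerate(zip(in_shape, out_shape)))

in_shape, out_shape, ranks = [4, 8, 8], [4, 8, 8], [6, 6]
dense_params = prod(in_shape) * prod(out_shape)  # a 256 x 256 dense weight
tt_params = tt_linear_params(in_shape, out_shape, ranks)
```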
-
set_tn_type()
Set as Tensor Train decomposition type.
-
set_nodes()
Generate Tensor Train nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tensor Train linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tensor_train.TTLeNet5(num_classes: int, rs: Union[list, numpy.ndarray])
Bases: torch.nn.modules.module.Module
LeNet-5 based on Tensor Train.
- Parameters
num_classes (int) – The number of output classes
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times num\_classes}\)
- Return type
torch.Tensor
-
class tednet.tnn.tensor_train.TTResNet20(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tensor_train.tt_resnet.TTResNet
ResNet-20 based on Tensor Train.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tensor_train.TTResNet32(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tensor_train.tt_resnet.TTResNet
ResNet-32 based on Tensor Train.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tensor_train.TTLSTM(in_shape: Union[list, numpy.ndarray], hidden_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], drop_ih: float = 0.3, drop_hh: float = 0.35)
Bases: tednet.tnn.tn_rnn._TNLSTM
LSTM based on Tensor Train.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The input shape of LSTM
hidden_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The hidden shape of LSTM
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m-1}\). The ranks of linear
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_ih()
Reset parameters of input-to-hidden layer.
tednet.tnn.tensor_ring
-
class tednet.tnn.tensor_ring.TRConv2D(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], kernel_size: Union[int, tuple], stride=1, padding=0, bias=True)
Bases: tednet.tnn.tn_cnn._TNConvNd
Tensor Ring Decomposition Convolution.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of channel in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The decomposition shape of channel out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m+n+1}\). The ranks of the decomposition
kernel_size (Union[int, tuple]) – The convolutional kernel size
stride (int) – The length of stride
padding (int) – The size of padding
bias (bool) – use bias of convolution or not. True to use, False not to use
-
set_tn_type()
Set as Tensor Ring decomposition type.
-
set_nodes()
Generate Tensor Ring nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tensor Ring Decomposition Convolution.
- Parameters
inputs (torch.Tensor) – A tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
A tensor \(\in \mathbb{R}^{b \times H' \times W' \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tensor_ring.TRLinear(in_shape: Union[list, numpy.ndarray], out_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], bias: bool = True)
Bases: tednet.tnn.tn_linear._TNLinear
The Tensor Ring Decomposition Linear.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The decomposition shape of feature in
out_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The decomposition shape of feature out
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m+n}\). The ranks of linear
bias (bool) – use bias of linear or not. True to use, False not to use
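The m+n ranks correspond to one 3-way core per input and output mode, with the last rank wrapping back to the first (the ring closure). A parameter-count sketch (the function name and shapes are hypothetical; tednet's exact count may include bias and bookkeeping):

```python
def tr_linear_params(in_shape, out_shape, ranks):
    # Core k has shape (r_k, d_k, r_{k+1}); rank indices wrap around the ring,
    # so len(ranks) == len(in_shape) + len(out_shape).
    dims = list(in_shape) + list(out_shape)
    return sum(ranks[k] * dims[k] * ranks[(k + 1) % len(ranks)]
               for k in range(len(dims)))

in_shape, out_shape = [4, 8, 8], [4, 8, 8]
ranks = [5, 5, 5, 5, 5, 5]  # m + n = 6 entries
tr_params = tr_linear_params(in_shape, out_shape, ranks)
```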
-
set_tn_type()
Set as Tensor Ring decomposition type.
-
set_nodes()
Generate Tensor Ring nodes, then add node information to self.tn_info.
-
set_params_info()
Record information of Parameters.
-
reset_parameters()
Reset parameters.
-
tn_contract(inputs: torch.Tensor) → torch.Tensor
Tensor Ring linear forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C}\)
- Returns
tensor \(\in \mathbb{R}^{b \times C'}\)
- Return type
torch.Tensor
-
recover()
Todo: Use for rebuilding the original tensor.
-
class tednet.tnn.tensor_ring.TRLeNet5(num_classes: int, rs: Union[list, numpy.ndarray])
Bases: torch.nn.modules.module.Module
LeNet-5 based on Tensor Ring.
- Parameters
num_classes (int) – The number of output classes
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
-
forward(inputs: torch.Tensor) → torch.Tensor
Forwarding method.
- Parameters
inputs (torch.Tensor) – tensor \(\in \mathbb{R}^{b \times C \times H \times W}\)
- Returns
tensor \(\in \mathbb{R}^{b \times num\_classes}\)
- Return type
torch.Tensor
-
class tednet.tnn.tensor_ring.TRResNet20(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tensor_ring.tr_resnet.TRResNet
ResNet-20 based on Tensor Ring.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tensor_ring.TRResNet32(rs: Union[list, numpy.ndarray], num_classes: int)
Bases: tednet.tnn.tensor_ring.tr_resnet.TRResNet
ResNet-32 based on Tensor Ring.
- Parameters
rs (Union[list, numpy.ndarray]) – 1-D param. The ranks of the network
num_classes (int) – The number of output classes
-
class tednet.tnn.tensor_ring.TRLSTM(in_shape: Union[list, numpy.ndarray], hidden_shape: Union[list, numpy.ndarray], ranks: Union[list, numpy.ndarray], drop_ih: float = 0.25, drop_hh: float = 0.25)
Bases: tednet.tnn.tn_rnn._TNLSTM
LSTM based on Tensor Ring.
- Parameters
in_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^m\). The input shape of LSTM
hidden_shape (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^n\). The hidden shape of LSTM
ranks (Union[list, numpy.ndarray]) – 1-D param \(\in \mathbb{R}^{m+n}\). The ranks of linear
drop_ih (float) – The dropout rate of the input-to-hidden gate
drop_hh (float) – The dropout rate of the hidden-to-hidden gate
-
reset_ih()
Reset parameters of input-to-hidden layer.