
TensorLayer3.0 is a deep learning library compatible with multiple backend frameworks, supporting TensorFlow, MindSpore, and PaddlePaddle as its computing engines. TensorLayer3.0 is easy to use, and once a backend is selected, its layers can be mixed freely with that backend's native operators. TensorLayer3.0 provides APIs covering the full deep learning workflow, including data processing, model construction, and model training. The same code can switch backends by changing a single line, removing the tedious code rewrites usually required when migrating algorithms between frameworks.
## 1. Installing TensorLayer
TensorLayer's prerequisites include TensorFlow, numpy, and matplotlib; if you want GPU acceleration you also need CUDA and cuDNN.
### 1.1 Installing a backend
TensorLayer supports multiple backends. The default is TensorFlow; MindSpore and PaddlePaddle are also supported (PaddlePaddle currently covers only a small set of layers, with more coming in future releases).
Install TensorFlow:
```plain
pip3 install tensorflow-gpu  # GPU version
pip3 install tensorflow      # CPU version
```
If you want to use the MindSpore backend, you also need to install MindSpore 1.2. The command below installs the MindSpore 1.2.1 GPU build; for CPU or Ascend builds, see the MindSpore website.
```plain
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.2.1/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-1.2.1-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```
If you want to use the PaddlePaddle backend, you also need to install PaddlePaddle 2.0. The command below installs it from Baidu's mirror; for GPU builds and other platforms, see the PaddlePaddle website.
```plain
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
### 1.2 Installing TensorLayer
Install the stable release via pip:
```plain
pip install tensorlayer3
pip install tensorlayer3 -i https://pypi.tuna.tsinghua.edu.cn/simple (faster in China)
```
To get the latest development version, install from source:
```plain
pip3 install git+https://git.openi.org.cn/TensorLayer/tensorlayer3.0.git
```
## 2. TensorLayer3.0 features
The main design goals of TensorLayer3.0 are as follows. We will support TensorFlow, PyTorch, MindSpore, and PaddlePaddle as computing engines. At the API level we provide model-building components (Layers), data processing (DataFlow), activation functions, parameter initializers, cost functions, optimizers, and common utility operations. On top of these, we provide examples and pretrained models built with TensorLayer.
*(Figure: TensorLayer3.0 architecture diagram; embedded image data not recoverable.)*
### 2.1 Extensibility
Compared with TensorLayer2.0 and earlier, TensorLayer3.0 decouples the library from its backend. In previous versions, layers called TensorFlow operators directly, which made adding new backends difficult. In the new version, all backend operators are wrapped in a backend layer with a unified interface across frameworks; layers are built against this unified interface, which is what makes multi-framework compatibility possible.
### 2.2 Simplicity
TensorLayer3.0 is easy to use. It offers two ways to build models: sequential models can be assembled with SequentialLayer, while complex models can be built by subclassing Module. A network built with TensorLayer3.0 can itself be used as a layer: initialize it in __init__ and call it in forward. When building a network, you do not need to compute the previous layer's output size or pass an in_channels argument; a final init_build call initializes the parameters and infers each layer's output size automatically.
### 2.3 Compatibility
Models built with TensorLayer3.0 can be used directly in TensorFlow, MindSpore, or PaddlePaddle, and can be mixed with the corresponding framework's operators. For example, if you build a network with TensorLayer on the TensorFlow backend, you can use TensorFlow operators directly for data processing and model training.
## 3. Loading datasets
TensorLayer ships with several common datasets, such as MNIST and CIFAR-10. Here we load the handwritten-digit dataset for model training and evaluation.
```python
import tensorlayer as tl
X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))
```
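The `shape=(-1, 784)` argument flattens each 28×28 image into a 784-dimensional vector, with `-1` letting the sample dimension be inferred. A framework-agnostic NumPy sketch of the same reshape:

```python
import numpy as np

# A batch of 5 placeholder 28x28 grayscale images.
images = np.zeros((5, 28, 28), dtype=np.float32)

# shape=(-1, 784): flatten each image, infer the batch dimension.
flat = images.reshape(-1, 784)
print(flat.shape)  # (5, 784)
```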
## 4. Data preprocessing
TensorLayer provides a large set of data-processing operations; you can also build your data pipeline directly with the corresponding framework's own operations.
TensorLayer currently has a complete set of image preprocessing operations. To match developers' habits, it integrates image operators for the TensorFlow, MindSpore, and PaddlePaddle backends. These operators are implemented on top of each framework's tensor operations plus the PIL and OpenCV libraries, and they automatically convert image arrays to the data format of whichever backend the global backend environment variable selects. To keep image operators consistent across backends, TensorLayer reconciles the features and parameters of each framework's image operators, extending the underlying implementations where necessary. Image operators for a PyTorch backend will be added in a future release.
An example of image preprocessing in TensorLayer:
```python
import tensorlayer as tl
import numpy as np

image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
transform = tl.vision.transforms.Resize(size=(100, 100), interpolation='bilinear')
image = transform(image)
print(image.shape)
# image shape: (100, 100, 3)

image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
transform = tl.vision.transforms.Pad(padding=10, padding_value=0, mode='constant')
image = transform(image)
print(image.shape)
# image shape: (244, 244, 3)
```
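The output shapes follow directly from the transform definitions: Resize maps any H×W input to the requested size, and Pad with `padding=10` adds 10 pixels on every side (224 + 2 × 10 = 244). The padding arithmetic can be checked with plain NumPy:

```python
import numpy as np

image = np.zeros((224, 224, 3), dtype=np.uint8)

# padding=10, mode='constant': 10 zero pixels on each of the four sides,
# leaving the channel axis untouched.
padded = np.pad(image, ((10, 10), (10, 10), (0, 0)),
                mode='constant', constant_values=0)
print(padded.shape)  # (244, 244, 3)
```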
## 5. Building models
### 5.1 Building with SequentialLayer
For a linear, sequential network structure you can build the model quickly with SequentialLayer, which saves you from writing a network class. As an example, we build a multilayer perceptron:
```python
import tensorlayer as tl
from tensorlayer.layers import Dense, SequentialLayer

layer_list = []
layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=784, name='Dense1'))
layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=800, name='Dense2'))
layer_list.append(Dense(n_units=10, act=tl.ReLU, in_channels=800, name='Dense3'))
MLP = SequentialLayer(layer_list)
```
### 5.2 Building by subclassing Module
For more complex networks, build the model by subclassing Module: declare the layers in __init__, then use them for the forward computation in forward. Layers declared this way can be reused; a layer constructed once can be called multiple times in forward. Again, we build a multilayer perceptron:
```python
import tensorlayer as tl
from tensorlayer.layers import Module, Dense

class MLP(Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.dense1 = Dense(n_units=800, act=tl.ReLU, in_channels=784)
        self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800)
        self.dense3 = Dense(n_units=10, act=tl.ReLU, in_channels=800)

    def forward(self, x):
        z = self.dense1(x)
        z = self.dense2(z)
        out = self.dense3(z)
        return out
```
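The data flow through this MLP is just three matrix multiplications with activations, mapping 784 → 800 → 800 → 10. A framework-agnostic NumPy sketch of that shape flow (weights are random stand-ins, not trained parameters):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 784))  # a batch of 32 flattened images

# Weight shapes mirror the three Dense layers: 784 -> 800 -> 800 -> 10.
W1, b1 = rng.normal(size=(784, 800)), np.zeros(800)
W2, b2 = rng.normal(size=(800, 800)), np.zeros(800)
W3, b3 = rng.normal(size=(800, 10)), np.zeros(10)

z = relu(x @ W1 + b1)
z = relu(z @ W2 + b2)
out = relu(z @ W3 + b3)
print(out.shape)  # (32, 10)
```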
### 5.3 Building complex network structures
When building networks, we often reuse a module several times; such repetition can be expressed with a loop.
For example, suppose the network treats a perceptron as a Block and uses it three times. First we define the Block to be reused:
```python
import tensorlayer as tl
from tensorlayer.layers import Module, Dense, Elementwise

class Block(Module):
    def __init__(self, in_channels):
        super(Block, self).__init__()
        self.dense1 = Dense(in_channels=in_channels, n_units=256)
        self.dense2 = Dense(in_channels=256, n_units=384)
        self.dense3 = Dense(in_channels=in_channels, n_units=384)
        self.concat = Elementwise(combine_fn=tl.ops.add)

    def forward(self, inputs):
        z = self.dense1(inputs)
        z1 = self.dense2(z)
        z2 = self.dense3(inputs)
        out = self.concat([z1, z2])
        return out
```
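`Elementwise(combine_fn=tl.ops.add)` sums its input tensors, so the two branches must produce identical shapes: dense1 then dense2 map in_channels → 256 → 384, while dense3 maps in_channels → 384 directly. A NumPy sketch of that shape bookkeeping (random weights stand in for the Dense layers):

```python
import numpy as np

rng = np.random.default_rng(0)
in_channels = 384
x = rng.normal(size=(8, in_channels))

# Branch 1: in_channels -> 256 -> 384 (dense1 then dense2).
z1 = x @ rng.normal(size=(in_channels, 256)) @ rng.normal(size=(256, 384))
# Branch 2: in_channels -> 384 (dense3).
z2 = x @ rng.normal(size=(in_channels, 384))

# Elementwise add: both branches end at 384 units, so the shapes match.
out = z1 + z2
print(out.shape)  # (8, 384)
```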
With the Block defined, we build the network using SequentialLayer and Module:
```python
from tensorlayer.layers import Flatten, SequentialLayer

class CNN(Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.flatten = Flatten(name='flatten')
        self.dense1 = Dense(384, act=tl.ReLU, in_channels=2304)
        self.dense_add = self.make_layer(in_channel=384)
        self.dense2 = Dense(192, act=tl.ReLU, in_channels=384)
        self.dense3 = Dense(10, act=None, in_channels=192)

    def forward(self, x):
        z = self.flatten(x)
        z = self.dense1(z)
        z = self.dense_add(z)
        z = self.dense2(z)
        z = self.dense3(z)
        return z

    def make_layer(self, in_channel):
        layers = []
        _block = Block(in_channel)
        layers.append(_block)
        for _ in range(1, 3):
            range_block = Block(in_channel)
            layers.append(range_block)
        return SequentialLayer(layers)
```
### 5.4 Automatically inferring the previous layer's output size
When building a network, we usually have to pass the previous layer's output size as the next layer's input size, i.e. each layer's in_channels argument. In TensorLayer you can omit in_channels: after building the network, feed it an input and call parameter initialization once.
```python
import tensorlayer as tl
from tensorlayer.layers import Module, Dense

class CustomModel(Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.dense1 = Dense(n_units=800)
        self.dense2 = Dense(n_units=800, act=tl.ReLU)
        self.dense3 = Dense(n_units=10, act=tl.ReLU)

    def forward(self, x):
        z = self.dense1(x)
        z = self.dense2(z)
        out = self.dense3(z)
        return out

MLP = CustomModel()
input = tl.layers.Input(shape=(1, 784))
MLP.init_build(input)
```
## 6. Model training
TensorLayer provides a training module that can be called directly. Models built with TensorLayer can also be trained with the backend framework's own tools: for example, an MLP built with TensorLayer on the TensorFlow backend can be trained with TensorFlow operators.
### 6.1 Training with the built-in training module
Train with the packaged models module:
```python
import tensorlayer as tl
optimizer = tl.optimizers.Momentum(0.05, 0.9)
model = tl.models.Model(network=MLP, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer)
model.train(n_epoch=500, train_dataset=train_ds, print_freq=2)
```
### 6.2 Training with the backend framework's operators
Mixing in TensorFlow for training. In the example below, both the optimizer and the loss could be replaced with TensorFlow's own:
```python
import tensorlayer as tl
import tensorflow as tf

n_epoch = 50
batch_size = 128
train_weights = MLP.trainable_weights
optimizer = tl.optimizers.Momentum(0.05, 0.9)
# optimizer = tf.optimizers.SGD(0.05, momentum=0.9)  # TensorFlow equivalent

for epoch in range(n_epoch):
    for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
        MLP.set_train()
        with tf.GradientTape() as tape:
            _logits = MLP(X_batch)
            _loss = tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch)
        grad = tape.gradient(_loss, train_weights)
        optimizer.apply_gradients(zip(grad, train_weights))
```
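After training, accuracy is typically computed the same way in any backend: argmax over the logits compared against the labels, averaged over minibatches. The core of that computation in NumPy (values chosen by hand for illustration):

```python
import numpy as np

def accuracy(logits, labels):
    # The predicted class is the argmax over the output units.
    return float(np.mean(np.argmax(logits, axis=1) == labels))

logits = np.array([[0.1, 2.0, 0.3],
                   [1.5, 0.2, 0.1],
                   [0.0, 0.1, 3.0]])
labels = np.array([1, 0, 1])  # the last prediction (class 2) is wrong
print(accuracy(logits, labels))  # 2 of 3 correct
```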
## 7. A complete example
The same code can switch backends through an environment variable; training on a different backend needs no code changes. os.environ['TL_BACKEND'] can be set to 'tensorflow', 'mindspore', or 'paddle'.
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
os.environ['TL_BACKEND'] = 'tensorflow'
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'

import numpy as np
import tensorlayer as tl
from tensorlayer.layers import Module
from tensorlayer.layers import Dense, Dropout

X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))

class CustomModel(Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.dropout1 = Dropout(keep=0.8)
        self.dense1 = Dense(n_units=800, act=tl.ReLU, in_channels=784)
        self.dropout2 = Dropout(keep=0.8)
        self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800)
        self.dropout3 = Dropout(keep=0.8)
        self.dense3 = Dense(n_units=10, act=tl.ReLU, in_channels=800)

    def forward(self, x, foo=None):
        z = self.dropout1(x)
        z = self.dense1(z)
        z = self.dropout2(z)
        z = self.dense2(z)
        z = self.dropout3(z)
        out = self.dense3(z)
        if foo is not None:
            out = tl.ops.relu(out)
        return out

def generator_train():
    inputs = X_train
    targets = y_train
    if len(inputs) != len(targets):
        raise AssertionError("The length of inputs and targets should be equal")
    for _input, _target in zip(inputs, targets):
        yield (_input, np.array(_target))

MLP = CustomModel()
n_epoch = 50
batch_size = 128
print_freq = 2
shuffle_buffer_size = 128

train_weights = MLP.trainable_weights
optimizer = tl.optimizers.Momentum(0.05, 0.9)
train_ds = tl.dataflow.FromGenerator(
    generator_train, output_types=(tl.float32, tl.int32), column_names=['data', 'label']
)
train_ds = tl.dataflow.Shuffle(train_ds, shuffle_buffer_size)
train_ds = tl.dataflow.Batch(train_ds, batch_size)

model = tl.models.Model(network=MLP, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer)
model.train(n_epoch=n_epoch, train_dataset=train_ds, print_freq=print_freq, print_train_batch=False)
model.save_weights('./model.npz', format='npz_dict')
model.load_weights('./model.npz', format='npz_dict')
```
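The `format='npz_dict'` option suggests weights are stored as a name → array mapping. The round-trip behaviour can be pictured with NumPy's own npz format (the file name and key names here are illustrative, not TensorLayer's actual naming scheme):

```python
import os
import tempfile
import numpy as np

# Hypothetical weight dictionary: one named array per layer parameter.
weights = {
    'dense1_weights': np.ones((784, 800), dtype=np.float32),
    'dense1_biases': np.zeros(800, dtype=np.float32),
}

path = os.path.join(tempfile.mkdtemp(), 'model.npz')
np.savez(path, **weights)  # save: each array stored under its name

restored = np.load(path)   # load: the names recover the parameters
print(sorted(restored.files))            # ['dense1_biases', 'dense1_weights']
print(restored['dense1_weights'].shape)  # (784, 800)
```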
## 8. Pretrained models
TensorLayer will continue to provide a rich set of pretrained models and applications, such as VGG16, VGG19, ResNet50, and YOLOv4. The example below runs YOLOv4 object detection on the MS-COCO dataset; the corresponding pretrained model and data can be found under examples/model_zoo.
```python
import numpy as np
import cv2
from PIL import Image
from examples.model_zoo.common import yolo4_input_processing, yolo4_output_processing, \
result_to_json, read_class_names, draw_boxes_and_labels_to_image_with_json
from examples.model_zoo.yolo import YOLOv4
import tensorlayer as tl
tl.logging.set_verbosity(tl.logging.DEBUG)
INPUT_SIZE = 416
image_path = './data/kite.jpg'
class_names = read_class_names('./model/coco.names')
original_image = cv2.imread(image_path)
image = cv2.cvtColor(np.array(original_image), cv2.COLOR_BGR2RGB)
model = YOLOv4(NUM_CLASS=80, pretrained=True)
model.set_eval()
batch_data = yolo4_input_processing(original_image)
feature_maps = model(batch_data)
pred_bbox = yolo4_output_processing(feature_maps)
json_result = result_to_json(image, pred_bbox)
image = draw_boxes_and_labels_to_image_with_json(image, json_result, class_names)
image = Image.fromarray(image.astype(np.uint8))
image.show()
```
## 9. Custom layers
To define a custom layer in TensorLayer, subclass Module: define the trainable parameters in build, and the forward computation in forward. Below is a fully connected layer $$a=f(x*W + b)$$ for the TensorFlow backend; to define a Dense for another backend, replace the operators with that backend's.
To define a layer that works across backends, unify the operator interfaces and wrap them in backend; see the layers in tensorlayer/layers for reference.
```python
import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import Module

class Dense(Module):
    def __init__(
        self,
        n_units,
        act=None,
        name=None,
        in_channels=None
    ):
        super(Dense, self).__init__(name, act=act)
        self.n_units = n_units
        self.in_channels = in_channels
        # Default initializers (chosen here for illustration).
        self.W_init = tl.initializers.truncated_normal(stddev=0.05)
        self.b_init = tl.initializers.constant(value=0.0)
        self.build()
        self._built = True

    def build(self):  # initialize the model weights here
        shape = [self.in_channels, self.n_units]
        self.W = self._get_weights("weights", shape=tuple(shape), init=self.W_init)
        self.b = self._get_weights("biases", shape=(self.n_units, ), init=self.b_init)

    def forward(self, inputs):  # forward computation
        z = tf.matmul(inputs, self.W) + self.b
        if self.act:  # is not None
            z = self.act(z)
        return z
```
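The forward pass is exactly $a=f(xW+b)$; with ReLU as $f$, it can be verified numerically in NumPy (values chosen by hand):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([[1.0, -2.0]])   # one sample, in_channels=2
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])    # identity weights, n_units=2
b = np.array([0.5, 0.5])

a = relu(x @ W + b)           # a = f(x*W + b): xW = [1, -2], +b = [1.5, -1.5]
print(a)  # [[1.5 0. ]]       # ReLU zeroes the negative entry
```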
```