Torch provides many activation functions, but in practice we only need a handful: relu, sigmoid, tanh, and softplus. Let's see what each of them looks like.
```py
import torch
import torch.nn.functional as F     # the activation functions live here
from torch.autograd import Variable

# fake data for plotting: 200 evenly spaced points from -5 to 5
x = torch.linspace(-5, 5, 200)
x = Variable(x)
```
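Note that `Variable` reflects older PyTorch; since version 0.4 the `Variable` wrapper is deprecated and plain tensors can be fed to the activation functions directly. A minimal sketch of the modern equivalent:

```python
import torch
import torch.nn.functional as F  # activation functions

# Since PyTorch 0.4, tensors no longer need a Variable wrapper
x = torch.linspace(-5, 5, 200)   # 200 evenly spaced points from -5 to 5
y = F.relu(x)                    # apply relu elementwise

print(tuple(y.shape))  # → (200,)
```

The same pattern works for the other activation functions below.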
Next, we generate the output of each activation function on this data:
```py
x_np = x.data.numpy()   # convert to a numpy array, used for plotting

# several commonly used activation functions
y_relu = F.relu(x).data.numpy()
y_sigmoid = F.sigmoid(x).data.numpy()
y_tanh = F.tanh(x).data.numpy()
y_softplus = F.softplus(x).data.numpy()
```
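Under the hood these are simple elementwise formulas. A minimal NumPy re-implementation (my own sketch for illustration, not the Torch source) shows what each function computes:

```python
import numpy as np

def relu(x):     return np.maximum(0.0, x)          # max(0, x)
def sigmoid(x):  return 1.0 / (1.0 + np.exp(-x))    # 1 / (1 + e^-x)
def softplus(x): return np.log1p(np.exp(x))         # log(1 + e^x), a smooth relu

x = np.array([0.0])
print(relu(x)[0], sigmoid(x)[0], np.tanh(x)[0], round(softplus(x)[0], 4))
# → 0.0 0.5 0.0 0.6931
```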
```py
import matplotlib.pyplot as plt   # Python's visualisation module; I have a tutorial for it (https://morvanzhou.github.io/tutorials/data-manipulation/plt/)

plt.figure(1, figsize=(8, 6))
```
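The `plt.figure` call above only sets up the canvas; each `y_*` array is then drawn into its own subplot. A minimal sketch plotting just the relu curve (the `221` panel layout of a 2x2 grid is my assumption, matching the four functions; the relu values are recomputed in numpy so the example is self-contained):

```python
import matplotlib
matplotlib.use("Agg")             # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

x_np = np.linspace(-5, 5, 200)
y_relu = np.maximum(0.0, x_np)    # relu, recomputed here for a self-contained example

plt.figure(1, figsize=(8, 6))
plt.subplot(221)                  # first panel of an assumed 2x2 grid
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim(-1, 5)
plt.legend(loc='best')
plt.savefig("activation_relu.png")
```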