PyTorch takes off

Sklearn/sklearn-cookbook-zh/2.ipynb (new file, 139 lines)
Sklearn/sklearn-cookbook-zh/5.ipynb (new file, 82 lines)
{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3.8.0 64-bit",
   "metadata": {
    "interpreter": {
     "hash": "38740d3277777e2cd7c6c2cc9d8addf5118fdf3f82b1b39231fd12aeac8aee8b"
    }
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "source": [
    "## 5.1 K-fold cross-validation"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a dataset\n",
    "N = 1000\n",
    "holdout = 200\n",
    "from sklearn.datasets import make_regression\n",
    "X, y = make_regression(N, shuffle=True)\n",
    "\n",
    "# A second way to split the dataset: slice off a fixed holdout set.\n",
    "X_h, y_h = X[:holdout], y[:holdout]\n",
    "X_t, y_t = X[holdout:], y[holdout:]\n",
    "\n",
    "# The interface has changed: KFold now lives in sklearn.model_selection.\n",
    "from sklearn.model_selection import KFold\n",
    "\n",
    "kfold = KFold(n_splits=4)\n",
    "\n",
    "# Iterate over the 4 train/validation index splits.\n",
    "for train_idx, val_idx in kfold.split(X_t):\n",
    "    X_train, y_train = X_t[train_idx], y_t[train_idx]\n",
    "    X_val, y_val = X_t[val_idx], y_t[val_idx]\n"
   ]
  },
  {
   "source": [
    "## 5.8 Evaluating regression models"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "## 5.9 Feature selection"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
    "## 5.11 Saving models with joblib"
   ],
   "cell_type": "markdown",
   "metadata": {}
  }
 ]
}
@@ -1 +0,0 @@
-# 莫烦 PyTorch 系列教程 (Morvan's PyTorch tutorial series)
pytorch/官方教程/01.md (new file, 1 line)

# Learning PyTorch
pytorch/官方教程/02.md (new file, 39 lines)

# Deep Learning with PyTorch: A 60 Minute Blitz

> Source: <https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html>

**Author**: [Soumith Chintala](http://soumith.ch)

<https://www.youtube.com/embed/u7x8RXwLKcA>

## What is PyTorch?

PyTorch is a Python-based scientific computing framework built for two purposes:

* A drop-in replacement for NumPy that uses the power of GPUs to accelerate neural networks.
* An automatic differentiation engine that makes implementing neural networks easier.

## Goals of this tutorial:

* Understand PyTorch's tensors in depth, and learn how to use PyTorch to build neural networks.
* Train a small neural network of your own to classify images.

Note

Make sure you have the [`torch`](https://github.com/pytorch/pytorch) and [`torchvision`](https://github.com/pytorch/vision) packages installed.

![../_img/tensor_illustration_flat.png](img/0c7a402331744b6477609f54a93f7d03.png)

[Tensors](blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py)

![../_img/autodiff.png](img/0996575d56a21dc57c78a3955899ad22.png)

[A brief introduction to `torch.autograd`](blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py)

![../_img/mnist1.png](img/be60e8e1f4baa0de87cf9d37c5325525.png)

[An introduction to neural networks](blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py)

![../_img/cifar101.png](img/7a28f697e6bab9f3d9b1e8da4a5a5249.png)

[Train an image classifier yourself](blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py)
pytorch/官方教程/03.md (new file, 292 lines)

# Tensors

> Source: <https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py>

Tensors are a specialized data structure, much like arrays and matrices. In PyTorch, tensors are used to encode the inputs and outputs of a neural network, as well as the network's parameters.

Tensors are similar to NumPy's `ndarrays`, except that tensors can run on GPUs or other specialized hardware for faster computation. If you're already familiar with `ndarrays`, working with tensors will feel natural. If not, this quick tour of the tensor API should help.

```python
import torch
import numpy as np
```

## Tensor Initialization

Tensors can be initialized in many ways. Take a look at four simple examples:

**1. Directly from data**

Tensors can be created directly from raw data; the tensor's data type is inferred from the data.

```python
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
```

**2. From a NumPy array**

Tensors can be created from an existing NumPy array (and vice versa, a tensor can produce a NumPy array; see [Bridge with NumPy](#jump)).

```python
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
```
**3. From another tensor**

The new tensor retains the properties (shape, data type) of the argument tensor, unless explicitly overridden.

```python
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")

x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the dtype of x_data: int -> float
print(f"Random Tensor: \n {x_rand} \n")
```

Out:

```python
Ones Tensor:
 tensor([[1, 1],
        [1, 1]])

Random Tensor:
 tensor([[0.0381, 0.5780],
        [0.3963, 0.0840]])
```
**4. From a shape, with random or constant values**

`shape` is a tuple that describes the tensor's dimensions; the three functions below take `shape` and produce tensors of those dimensions.

```python
shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
```

Out:

```python
Random Tensor:
 tensor([[0.0266, 0.0553, 0.9843],
        [0.0398, 0.8964, 0.3457]])

Ones Tensor:
 tensor([[1., 1., 1.],
        [1., 1., 1.]])

Zeros Tensor:
 tensor([[0., 0., 0.],
        [0., 0., 0.]])
```
## Tensor Attributes

Tensor attributes describe a tensor's shape, data type, and the device it is stored on (CPU or GPU).

A simple example:

```python
tensor = torch.rand(3,4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
```

Out:

```python
Shape of tensor: torch.Size([3, 4])        # shape
Datatype of tensor: torch.float32          # data type
Device tensor is stored on: cpu            # storage device
```
## Tensor Operations

There are over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more. You can browse the full list [here](https://pytorch.org/docs/stable/torch.html).

All of them can run on a GPU (typically at higher speeds than on a CPU). If you are using Google's Colab environment, you can allocate a GPU via `Edit > Notebook Settings`.

```python
# Check whether a GPU is available, then move the tensor to it
if torch.cuda.is_available():
  tensor = tensor.to('cuda')
```

Talk is cheap, so do run the following examples yourself. If you are familiar with NumPy operations, tensor operations will be a piece of cake.

**1. Indexing and slicing**

```python
tensor = torch.ones(4, 4)
tensor[:,1] = 0 # set every value in column 1 (0-based) to 0
print(tensor)
```

Out:

```python
tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])
```
**2. Joining tensors**

You can use `torch.cat` to concatenate a sequence of tensors along a given dimension. See also [`torch.stack`](https://pytorch.org/docs/stable/generated/torch.stack.html), which also joins tensors but is subtly different from `torch.cat`.

```python
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
```

Out:

```
tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],
        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])
```
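To see the difference between the two joining ops, here is a minimal sketch (an addition, not part of the original tutorial): `torch.cat` joins along an *existing* dimension, while `torch.stack` inserts a *new* one.

```python
import torch

t = torch.ones(2, 3)

cat = torch.cat([t, t], dim=0)    # joins along dim 0: shape (4, 3)
stk = torch.stack([t, t], dim=0)  # inserts a new dim 0: shape (2, 2, 3)

print(cat.shape)  # torch.Size([4, 3])
print(stk.shape)  # torch.Size([2, 2, 3])
```

`torch.stack` therefore requires all inputs to have exactly the same shape, whereas `torch.cat` only requires them to match outside the concatenation dimension.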
**3. Element-wise product and matrix multiplication**

```python
# Element-wise product
print(f"tensor.mul(tensor): \n {tensor.mul(tensor)} \n")
# Alternative syntax:
print(f"tensor * tensor: \n {tensor * tensor}")
```

Out:

```python
tensor.mul(tensor):
 tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

tensor * tensor:
 tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])
```

The following computes the matrix multiplication between two tensors:

```python
print(f"tensor.matmul(tensor.T): \n {tensor.matmul(tensor.T)} \n")
# Alternative syntax:
print(f"tensor @ tensor.T: \n {tensor @ tensor.T}")
```

Out:

```python
tensor.matmul(tensor.T):
 tensor([[3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.]])

tensor @ tensor.T:
 tensor([[3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.],
        [3., 3., 3., 3.]])
```
**4. In-place operations**

Operations with a `_` suffix are in-place: for example, `x.copy_(y)` and `x.t_()` change the value of `x`.

```python
print(tensor, "\n")
tensor.add_(5)
print(tensor)
```

Out:

```python
tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])

tensor([[6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.],
        [6., 5., 6., 6.]])
```

> Note:
>
> In-place operations save some memory, but they can be problematic when computing derivatives because the intermediate history is lost. Hence, their use is discouraged.
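The caveat in the note above can be seen directly. The following short sketch (an addition, not from the original tutorial) shows autograd refusing an in-place update on a leaf tensor that requires gradients:

```python
import torch

a = torch.ones(3, requires_grad=True)  # a leaf tensor tracked by autograd
try:
    a.add_(1)  # in-place update would corrupt the history autograd needs
except RuntimeError as err:
    print("in-place op rejected:", err)
```

The same tensor without `requires_grad=True` accepts `add_` without complaint, which is why the earlier examples in this section work.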
## <span id="jump">Bridge with NumPy</span>

Tensors on the CPU and NumPy arrays can share the same underlying memory; changing one also changes the other.

**1. Tensor to NumPy array**

```python
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
```

Out:

```python
t: tensor([1., 1., 1., 1., 1.])
n: [1. 1. 1. 1. 1.]
```

A change in the tensor is reflected in the NumPy array.

```python
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
```

Out:

```python
t: tensor([2., 2., 2., 2., 2.])
n: [2. 2. 2. 2. 2.]
```

**2. NumPy array to tensor**

```python
n = np.ones(5)
t = torch.from_numpy(n)
```

A change in the NumPy array is reflected in the tensor.

```python
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
```

Out:

```python
t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
n: [2. 2. 2. 2. 2.]
```
**Total running time of the script**: (0 minutes 0.045 seconds)

[Download Python source code: `tensor_tutorial.py`](https://pytorch.org/tutorials/_downloads/092fba3c36cb2ab226bfdaa78248b310/tensor_tutorial.py)

[Download Jupyter notebook: `tensor_tutorial.ipynb`](https://pytorch.org/tutorials/_downloads/3c2b25b8a9f72db7780a6bf9b5fc9f62/tensor_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
pytorch/官方教程/04.md (new file, 236 lines)

# A brief introduction to `torch.autograd`

> Source: <https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py>

`torch.autograd` is PyTorch's automatic differentiation engine, which powers neural network training. In this section, you will get a conceptual understanding of how autograd helps a neural network train.

## Background

Neural networks (NNs) are a collection of nested functions executed on some input data. These functions are defined by *parameters* (consisting of weights and biases), which in PyTorch are stored in tensors.

Training a NN happens in two steps:

**Forward propagation**: in the forward pass, the NN makes its best guess about the correct output. It runs the input data through each of its functions to make this guess.

**Backward propagation**: in backprop, the NN adjusts its parameters according to the error in its guess. It does this by traversing backwards from the output, collecting the derivatives of the error with respect to the function parameters (the *gradients*), and optimizing the parameters using gradient descent. For a more detailed walkthrough of backprop, check out this [video](https://www.youtube.com/watch?v=tIeHLnjs5U8) from 3Blue1Brown.

## Usage in PyTorch

Let's look at a single training step. For this example, we load a pretrained resnet18 model from `torchvision`. We create a random data tensor representing a single image with 3 channels and a height & width of 64, and its corresponding `label` initialized to some random values.

```py
import torch, torchvision
model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)
labels = torch.rand(1, 1000)
```

Next, we run the input data through every layer of the model to make a prediction. This is the **forward pass**.

```py
prediction = model(data) # forward pass
```

We use the model's prediction and the corresponding label to calculate the error (`loss`). The next step is to backpropagate this error through the network. Backpropagation kicks off when we call `.backward()` on the error tensor. Autograd then calculates the gradient for each model parameter and stores it in the parameter's `.grad` attribute.

```py
loss = (prediction - labels).sum()
loss.backward() # backward pass
```

Next, we load an optimizer, in this case SGD with a learning rate of 0.01 and momentum of 0.9. We register all of the model's parameters in the optimizer.

```py
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
```

Finally, we call `.step()` to initiate gradient descent. The optimizer adjusts each parameter by the gradient stored in `.grad`.

```py
optim.step() # gradient descent
```

At this point, you have everything you need to train a neural network. The sections below detail how autograd works; feel free to skip them.

* * *
## Differentiation in Autograd

Let's take a look at how `autograd` collects gradients. We create two tensors `a` and `b` with `requires_grad=True`. This signals to `autograd` that every operation on them should be tracked.

```py
import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
```

We create another tensor `Q` from `a` and `b`.

![](img/ee12b6639f8f76e5e750783a9d35cfb7.jpg)

```py
Q = 3*a**3 - b**2
```

Assume `a` and `b` are the parameters of a neural network, and `Q` is the error. In NN training, we want the gradients of the error with respect to the parameters, i.e.

![](img/d2bbf5911e1a5fcedca3b1c518ef29e0.jpg)

![](img/a9804c3df0c49c5bfe58744866a54238.jpg)
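Written out explicitly, the function and gradients referenced by the figures above are:

```latex
Q = 3a^{3} - b^{2}, \qquad
\frac{\partial Q}{\partial a} = 9a^{2}, \qquad
\frac{\partial Q}{\partial b} = -2b
```

These closed forms are what the later `9*a**2 == a.grad` and `-2*b == b.grad` checks verify.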
When we call `.backward()` on `Q`, autograd calculates these gradients and stores them in the respective tensors' `.grad` attribute.

We need to explicitly pass a `gradient` argument to `Q.backward()` because `Q` is a vector. `gradient` is a tensor of the same shape as `Q`, and it represents the gradient of `Q` with respect to itself, i.e.

![](img/33ceae1a368ccbdd7ae164a0f0d0e4ce.jpg)

Equivalently, we can also aggregate `Q` into a scalar and call backward implicitly, as in `Q.sum().backward()`.

```py
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)
```

Gradients are now deposited in `a.grad` and `b.grad`.

```py
# check if collected gradients are correct
print(9*a**2 == a.grad)
print(-2*b == b.grad)
```

Out:

```py
tensor([True, True])
tensor([True, True])
```
### Optional reading: vector calculus using `autograd`

Mathematically, if you have a vector-valued function `y = f(x)`, then the Jacobian matrix `J` of `y` with respect to `x` is:

![](img/ea477fe27ad49a100b0a8678a127bad2.jpg)

Generally speaking, `torch.autograd` is an engine for computing vector-Jacobian products. That is, given any vector `v`, it computes the product `J^T · v`.

If `v` happens to be the gradient of a scalar function

![](img/7d24eaa0b406e2b65d2774a8e62a8e0e.jpg)

then by the chain rule, the vector-Jacobian product is the gradient of `l` with respect to `x`:

![](img/b7e1668dd1e3745b7b8b1a939f35ce08.jpg)

This property of the vector-Jacobian product is exactly what the example above uses; `external_grad` represents `v`.
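In standard notation, the Jacobian and the vector-Jacobian product referenced by the figures above can be written as:

```latex
J = \frac{\partial \mathbf{y}}{\partial \mathbf{x}} =
\begin{pmatrix}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{pmatrix},
\qquad
J^{T}\cdot v = \nabla_{\mathbf{x}}\, l
\quad \text{when } v = \left(\frac{\partial l}{\partial y_{1}} \; \cdots \; \frac{\partial l}{\partial y_{m}}\right)^{T}.
```

That is, backpropagation never materializes `J` itself; it only ever computes the product `J^T · v`, which is what makes it cheap.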
## Computational Graph

Conceptually, autograd records data (tensors) and all executed operations (along with the resulting new tensors) in a directed acyclic graph (DAG) made of [`Function`](https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) objects. In this DAG, leaves are the input tensors and roots are the output tensors. By tracing the graph from roots to leaves, gradients can be computed automatically using the chain rule.

In a forward pass, autograd does two things simultaneously:

* runs the requested operation to compute a resulting tensor, and
* maintains the operation's *gradient function* in the DAG.

The backward pass kicks off when `.backward()` is called on the DAG root. `autograd` then:

* computes the gradients from each `.grad_fn`,
* accumulates them in the respective tensor's `.grad` attribute, and
* using the chain rule, propagates all the way to the leaf tensors.

Below is a visual representation of the DAG in our example. In the graph, the arrows point in the direction of the forward pass. The nodes represent the backward function of each operation in the forward pass. The leaf nodes in blue are our leaf tensors `a` and `b`.

![](img/66c3e7b75fc5fcb9f41b3d04a9861bdd.jpg)

Note

**DAGs are dynamic in PyTorch.** An important thing to note is that the graph is recreated from scratch: after each `.backward()` call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size, and operations at every iteration if needed.

### Exclusion from the DAG

`torch.autograd` tracks operations on all tensors whose `requires_grad` flag is set to `True`. For tensors that don't need gradients, setting this attribute to `False` excludes them from the gradient computation DAG.

The output tensor of an operation will require gradients even if only a single input tensor has `requires_grad=True`.
```py
x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)

a = x + y
print(f"Does `a` require gradients? : {a.requires_grad}")
b = x + z
print(f"Does `b` require gradients?: {b.requires_grad}")
```

Out:

```py
Does `a` require gradients? : False
Does `b` require gradients?: True
```

In a NN, parameters that don't compute gradients are usually called **frozen parameters**. It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters (this offers some performance benefits by reducing autograd computations).

Another common use case where exclusion from the DAG matters is [finetuning a pretrained network](https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html).

In finetuning, we freeze most of the model and typically only modify the classifier layer to make predictions on new labels. Let's walk through a small example to illustrate this. As before, we load a pretrained resnet18 model and freeze all of its parameters.
```py
from torch import nn, optim

model = torchvision.models.resnet18(pretrained=True)

# Freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False
```

Say we want to finetune the model on a new dataset with 10 labels. In resnet, the classifier is the last linear layer, `model.fc`. We can simply replace it with a new linear layer (unfrozen by default) that acts as our classifier.

```py
model.fc = nn.Linear(512, 10)
```

Now all parameters in the model, except the parameters of `model.fc`, are frozen. The only parameters that compute gradients are the weights and bias of `model.fc`.

```py
# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
```

Notice that although we could register all parameters in the optimizer, the only parameters that compute gradients (and hence get updated in gradient descent) are the weights and bias of the classifier.
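You can check which parameters will actually be updated by filtering on `requires_grad`. The sketch below uses a tiny `nn.Sequential` as a stand-in for resnet18 (which would require a download), so the layer sizes here are illustrative assumptions, not the real architecture:

```python
import torch
from torch import nn

# Stand-in "backbone" (index 0) + "classifier" (index 1); freeze everything first
model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier; a freshly created module defaults to requires_grad=True
model[1] = nn.Linear(4, 2)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the new layer's parameters: ['1.weight', '1.bias']
```

The same filter works on the real resnet18: after replacing `model.fc`, only `fc.weight` and `fc.bias` remain trainable.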
The same exclusion functionality is available as a context manager in [`torch.no_grad()`](https://pytorch.org/docs/stable/generated/torch.no_grad.html).

* * *

## Further reading:

* [In-place operations & multithreaded autograd](https://pytorch.org/docs/stable/notes/autograd.html)
* [Example implementation of reverse-mode automatic differentiation](https://colab.research.google.com/drive/1VpeE6UvEPRz9HmsHh1KS0XxXjYu533EC)

**Total running time of the script**: (0 minutes 5.184 seconds)

[Download Python source code: `autograd_tutorial.py`](https://pytorch.org/tutorials/_downloads/00a1ac60985c7481f4250bafeae15ffa/autograd_tutorial.py)

[Download Jupyter notebook: `autograd_tutorial.ipynb`](https://pytorch.org/tutorials/_downloads/009cea8b0f40dfcb55e3280f73b06cc2/autograd_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
pytorch/官方教程/05.md (new file, 292 lines)

# Neural Networks

> Source: <https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py>

Neural networks can be constructed using the `torch.nn` package.

Now that you have had a glimpse of `autograd`: `nn` depends on `autograd` to define models and differentiate them. An `nn.Module` contains layers, and a method `forward(input)` that returns the `output`.

For example, look at this network that classifies digit images:

![convnet](img/3250cbba812d68265cf7815d987bcd1b.png)

convnet

It is a simple feed-forward network. It takes the input, feeds it through one layer after another, and finally produces the output.

A typical training procedure for a neural network is as follows:

* Define a neural network that has some learnable parameters (or weights)
* Iterate over a dataset of inputs
* Process the input through the network
* Compute the loss (how far the output is from being correct)
* Propagate gradients back into the network's parameters
* Update the weights of the network, typically using a simple update rule: `weight = weight - learning_rate * gradient`
## Define the network

Let's define this network:

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
```

Out:

```py
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
```

You only need to define the `forward` function; the `backward` function (where gradients are computed) is automatically defined for you via `autograd`. You can use any tensor operation in the `forward` function.

The learnable parameters of a model are returned by `net.parameters()`.

```py
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
```

Out:

```py
10
torch.Size([6, 1, 3, 3])
```
Let's try a random `32x32` input. Note: the expected input size of this network (LeNet) is `32x32`. To use this network on the MNIST dataset, resize the images from the dataset to `32x32`.

```py
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
```

Out:

```py
tensor([[ 0.1002, -0.0694, -0.0436,  0.0103,  0.0488, -0.0429, -0.0941, -0.0146,
         -0.0031, -0.0923]], grad_fn=<AddmmBackward>)
```

Zero the gradient buffers of all parameters, and backprop with random gradients:

```py
net.zero_grad()
out.backward(torch.randn(1, 10))
```

Note

`torch.nn` only supports mini-batches. The entire `torch.nn` package only supports inputs that are a mini-batch of samples, not a single sample.

For example, `nn.Conv2d` takes a 4D tensor of `nSamples x nChannels x Height x Width`.

If you have a single sample, just use `input.unsqueeze(0)` to add a fake batch dimension.
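For instance, a small sketch of the fake batch dimension (an addition, not part of the original tutorial):

```python
import torch

x = torch.randn(3, 32, 32)   # a single 3-channel 32x32 image, no batch dim
batch = x.unsqueeze(0)       # prepend a batch dimension of size 1
print(batch.shape)           # torch.Size([1, 3, 32, 32])
```

The batched tensor can now be fed to any `nn` layer that expects `nSamples x nChannels x Height x Width` input.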
Before proceeding further, let's recap all the classes you have seen so far.

**Recap**:

* `torch.Tensor`: a *multi-dimensional array* with support for autograd operations like `backward()`. Also holds the gradient with respect to the tensor.
* `nn.Module`: neural network module. A convenient way of *encapsulating parameters*, with helpers for moving them to the GPU, exporting, loading, etc.
* `nn.Parameter`: a kind of tensor that is automatically registered as a parameter when assigned as an attribute of a `Module`.
* `autograd.Function`: implements the forward and backward definitions of an autograd operation. Every `Tensor` operation creates at least one `Function` node that connects to the functions that created the `Tensor` and encodes its history.

**At this point, we covered**:

* Defining a neural network
* Processing inputs and calling backward

**Still left**:

* Computing the loss
* Updating the weights of the network

## Loss function

A loss function takes a pair of (output, target) inputs and computes a value that estimates how far the output is from the target.

There are several different [loss functions](https://pytorch.org/docs/nn.html#loss-functions) under the `nn` package. A simple one is `nn.MSELoss`, which computes the mean-squared error between the input and the target.

For example:

```py
output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
```

Out:

```py
tensor(0.4969, grad_fn=<MseLossBackward>)
```
Now, if you follow `loss` in the backward direction using its `.grad_fn` attribute, you will see a graph of computations that looks like this:

```py
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss
```

So when we call `loss.backward()`, the whole graph is differentiated with respect to the loss, and all tensors in the graph that have `requires_grad=True` will have their `.grad` tensor accumulated with the gradient.

For illustration, let's follow a few steps backward:

```py
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
```

Out:

```py
<MseLossBackward object at 0x7f1ba05a1ba8>
<AddmmBackward object at 0x7f1ba05a19e8>
<AccumulateGrad object at 0x7f1ba05a19e8>
```

## Backprop

To backpropagate the error, all we have to do is call `loss.backward()`. You need to clear the existing gradients first, though, otherwise gradients will be accumulated into the existing ones.

Now we shall call `loss.backward()` and have a look at conv1's bias gradients before and after the backward pass.

```py
net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
```

Out:

```py
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([ 0.0111, -0.0064,  0.0053, -0.0047,  0.0026, -0.0153])
```
Now we have seen how to use loss functions.

**Read later**:

> The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is available there.

**The only thing left to learn is**:

> * Updating the weights of the network

## Update the weights

The simplest update rule used in practice is Stochastic Gradient Descent (SGD):

> `weight = weight - learning_rate * gradient`

We can implement this with simple Python code:

```py
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
```

However, as you use neural networks, you will want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package, `torch.optim`, that implements all of these methods. Using it is very simple:

```py
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
```

Note

Observe how the gradient buffers had to be manually set to zero using `optimizer.zero_grad()`. This is because gradients are accumulated, as explained in the [Backprop](#backprop) section.

**Total running time of the script**: (0 minutes 3.778 seconds)

[Download Python source code: `neural_networks_tutorial.py`](https://pytorch.org/tutorials/_downloads/3665741da15f111de82da3227a615699/neural_networks_tutorial.py)

[Download Jupyter notebook: `neural_networks_tutorial.ipynb`](https://pytorch.org/tutorials/_downloads/97abb4c06a586d45ef3fc4b4b9634406/neural_networks_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
pytorch/官方教程/06.md (new file, 421 lines)

# Training a Classifier

> Source: <https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py>

This is it. You have seen how to define neural networks, compute loss, and make updates to the weights of the network.

Now you might be thinking,

## What about data?

Generally, when you have to deal with image, text, audio, or video data, you can use standard Python packages that load the data into a NumPy array. Then you can convert this array into a `torch.*Tensor`.

* For images, packages such as Pillow and OpenCV are useful
* For audio, packages such as SciPy and librosa
* For text, either raw Python- or Cython-based loading, or NLTK and SpaCy, are useful

Specifically for vision, we have created a package called `torchvision` that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc., and data transformers for images, namely `torchvision.datasets` and `torch.utils.data.DataLoader`.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset. It has the classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size `3x32x32`, i.e. 3-channel color images of `32x32` pixels.

![cifar10](img/ae800707f15c4bef2b9c64b1604a7998.png)

cifar10

## Training an image classifier

We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using `torchvision`
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

### 1. Load and normalize CIFAR10

Using `torchvision`, it's extremely easy to load CIFAR10.

```py
import torch
import torchvision
import torchvision.transforms as transforms
```

The output of torchvision datasets are `PILImage` images of range `[0, 1]`. We transform them to tensors of normalized range `[-1, 1]`.

Note

If you are running on Windows and get a `BrokenPipeError`, try setting the `num_worker` of `torch.utils.data.DataLoader()` to 0.
```py
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```

Out:

```py
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified
```

Let's show some of the training images, for fun.

```py
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # use the builtin next(); the .next() method was removed in newer PyTorch

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

![../_img/sphx_glr_cifar10_tutorial_001.png](img/aaf8c905effc5044cb9691420e5261fa.png)

Out:

```py
dog truck  frog horse
```
### 2. Define a Convolutional Neural Network

Copy the neural network from the earlier Neural Networks section and modify it to take 3-channel images (instead of the 1-channel images it was defined for).

```py
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
```
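The `16 * 5 * 5` in `fc1` comes from tracing a 32×32 CIFAR-10 image through the layers: each 5×5 convolution (no padding) shrinks the spatial size by 4, and each 2×2 pooling halves it, so 32 → 28 → 14 → 10 → 5. A quick sketch (not part of the original tutorial) to verify the arithmetic:

```python
import torch
import torch.nn as nn

# Trace a CIFAR-10 sized input (3 x 32 x 32) through the same layer shapes:
# conv1 (5x5, no padding): 32 -> 28, pool (2x2): 28 -> 14
# conv2 (5x5, no padding): 14 -> 10, pool (2x2): 10 -> 5  => 16 * 5 * 5 features
x = torch.randn(1, 3, 32, 32)
x = nn.MaxPool2d(2, 2)(torch.relu(nn.Conv2d(3, 6, 5)(x)))
x = nn.MaxPool2d(2, 2)(torch.relu(nn.Conv2d(6, 16, 5)(x)))
print(x.shape)  # torch.Size([1, 16, 5, 5])
```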

### 3. Define a Loss Function and Optimizer

Let's use a classification cross-entropy loss and SGD with momentum.

```py
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```

### 4. Train the Network

This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.

```py
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```

Out:

```py
[1,  2000] loss: 2.196
[1,  4000] loss: 1.849
[1,  6000] loss: 1.671
[1,  8000] loss: 1.589
[1, 10000] loss: 1.547
[1, 12000] loss: 1.462
[2,  2000] loss: 1.382
[2,  4000] loss: 1.389
[2,  6000] loss: 1.369
[2,  8000] loss: 1.332
[2, 10000] loss: 1.304
[2, 12000] loss: 1.288
Finished Training
```

Let's quickly save our trained model:

```py
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
```

See [here](https://pytorch.org/docs/stable/notes/serialization.html) for more details on saving PyTorch models.
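A common pitfall when reloading is that a checkpoint saved on a GPU cannot be deserialized on a CPU-only machine without `map_location`. A minimal sketch (the `nn.Linear` model and the temp-file path are stand-ins for illustration, not the tutorial's `Net` and `PATH`):

```python
import os
import tempfile

import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # stand-in for the tutorial's Net()
path = os.path.join(tempfile.gettempdir(), 'tmp_net.pth')  # stand-in path
torch.save(net.state_dict(), path)

# map_location remaps storages to the CPU, so GPU checkpoints load anywhere:
net2 = nn.Linear(4, 2)
net2.load_state_dict(torch.load(path, map_location=torch.device('cpu')))
print(torch.equal(net.weight, net2.weight))  # True
```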

### 5. Test the Network on the Test Data

We have trained the network for 2 passes over the training dataset. But we need to check whether the network has learnt anything at all.

We will check this by predicting the class label that the neural network outputs, and checking it against the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.

Okay, first step. Let us display an image from the test set to get familiar.

```py
dataiter = iter(testloader)
images, labels = next(dataiter)  # `dataiter.next()` in older PyTorch versions

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```


Out:

```py
GroundTruth:    cat  ship  ship plane
```

Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):

```py
net = Net()
net.load_state_dict(torch.load(PATH))
```

Okay, now let us see what the neural network thinks these examples above are:

```py
outputs = net(images)
```

The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks the image is of that particular class. So, let's get the index of the highest energy:

```py
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
```

Out:

```py
Predicted:    cat  ship  ship plane
```
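The raw outputs are unnormalized scores (logits); if you want actual probabilities you can pass them through a softmax. This step is not in the original tutorial, and the logits below are made up for illustration:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits ("energies") for one image over 10 classes:
outputs = torch.tensor([[1.2, 0.3, -0.5, 2.1, 0.0, -1.0, 0.4, 0.8, -0.2, 0.1]])
probs = F.softmax(outputs, dim=1)    # normalize energies into probabilities
print(round(probs.sum().item(), 6))  # 1.0 -- probabilities sum to one
print(probs.argmax(dim=1).item())    # 3  -- same index as the highest energy
```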

The results seem pretty good.

Let us look at how the network performs on the whole dataset.

```py
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
```

Out:

```py
Accuracy of the network on the 10000 test images: 53 %
```

That looks way better than chance, which is 10% accuracy (randomly picking one class out of 10). Seems like the network learnt something.

Hmm, what are the classes that performed well, and the classes that did not:

```py
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
```

Out:

```py
Accuracy of plane : 50 %
Accuracy of   car : 62 %
Accuracy of  bird : 51 %
Accuracy of   cat : 32 %
Accuracy of  deer : 31 %
Accuracy of   dog : 35 %
Accuracy of  frog : 77 %
Accuracy of horse : 70 %
Accuracy of  ship : 71 %
Accuracy of truck : 52 %
```

Okay, so what next?

How do we run these neural networks on the GPU?

## Training on GPU

Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU.

If CUDA is available, let's first define our device as the first visible cuda device:
```py
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Assuming that we are on a CUDA machine, this should print a CUDA device:

print(device)
```

Out:

```py
cuda:0
```

The rest of this section assumes that `device` is a CUDA device.

Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors:

```py
net.to(device)
```

Remember that you will have to send the inputs and targets at every step to the GPU too:

```py
inputs, labels = data[0].to(device), data[1].to(device)
```
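Putting the device-handling pieces together, a complete GPU-ready loop looks like the sketch below. Random tensors stand in for `trainloader` and a tiny linear model stands in for `net` (both are assumptions for illustration), so it also runs on CPU when CUDA is unavailable:

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Stand-ins for the tutorial's net and CIFAR-10 loader (random data, 10 classes):
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

fake_loader = [(torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,)))
               for _ in range(5)]

for data in fake_loader:
    # both the inputs and the targets move to the same device as the model
    inputs, labels = data[0].to(device), data[1].to(device)
    optimizer.zero_grad()
    loss = criterion(net(inputs), labels)
    loss.backward()
    optimizer.step()
print(loss.item() >= 0)  # True -- cross-entropy loss is non-negative
```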

Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small.

**Exercise**: Try increasing the width of your network (argument 2 of the first `nn.Conv2d`, and argument 1 of the second `nn.Conv2d` -- they need to be the same number), and see what kind of speedup you get.
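One possible solution sketch for the exercise, where the shared channel count is a single `width` argument (the value 18 is an arbitrary choice, not from the tutorial):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideNet(nn.Module):
    def __init__(self, width=18):
        super().__init__()
        # the output channels of conv1 and input channels of conv2 must match
        self.conv1 = nn.Conv2d(3, width, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(width, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        return self.fc3(F.relu(self.fc2(F.relu(self.fc1(x)))))

out = WideNet()(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```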

**Goals achieved**:

* Understanding PyTorch's Tensor library and neural networks at a high level.
* Train a small neural network to classify images

## Training on Multiple GPUs

If you want to see even more massive speedup using all of your GPUs, please check out [Optional: Data Parallelism](data_parallel_tutorial.html).

## Where do I go next?

* [Train neural nets to play video games](../../intermediate/reinforcement_q_learning.html)
* [Train a state-of-the-art ResNet network on ImageNet](https://github.com/pytorch/examples/tree/master/imagenet)
* [Train a face generator using Generative Adversarial Networks](https://github.com/pytorch/examples/tree/master/dcgan)
* [Train a word-level language model using Recurrent LSTM networks](https://github.com/pytorch/examples/tree/master/word_language_model)
* [More examples](https://github.com/pytorch/examples)
* [More tutorials](https://github.com/pytorch/tutorials)
* [Discuss PyTorch on the Forums](https://discuss.pytorch.org/)
* [Chat with other users on Slack](https://pytorch.slack.com/messages/beginner/)

**Total running time of the script**: (2 minutes 39.965 seconds)

[Download Python source code: `cifar10_tutorial.py`](https://pytorch.org/tutorials/_downloads/ba100c1433c3c42a16709bb6a2ed0f85/cifar10_tutorial.py)

[Download Jupyter notebook: `cifar10_tutorial.ipynb`](https://pytorch.org/tutorials/_downloads/17a7c7cb80916fcdf921097825a0f562/cifar10_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)

*(Source file: `pytorch/官方教程/07.md`)*

# Learning PyTorch with Examples

> Original: <https://pytorch.org/tutorials/beginner/pytorch_with_examples.html>

**Author**: [Justin Johnson](https://github.com/jcjohnson/pytorch-examples)

This tutorial introduces the fundamental concepts of [PyTorch](https://github.com/pytorch/pytorch) through self-contained examples.

At its core, PyTorch provides two main features:

* An n-dimensional Tensor, similar to NumPy but able to run on GPUs
* Automatic differentiation for building and training neural networks

We will use a problem of fitting `y = sin(x)` with a third order polynomial as our running example. The network will have four parameters, and will be trained with gradient descent to fit random data by minimizing the Euclidean distance between the network output and the true output.

Note

You can browse the individual examples on [this page](#examples-download).

## Tensors

### Warm-up: NumPy

Before introducing PyTorch, we will first implement the network using numpy.

Numpy provides an n-dimensional array object, and many functions for manipulating these arrays. Numpy is a generic framework for scientific computing; it does not know anything about computation graphs, deep learning, or gradients. However, we can easily use numpy to fit a third order polynomial to a sine function by manually implementing the forward and backward passes through the network using numpy operations:
```py
# -*- coding: utf-8 -*-
import numpy as np
import math

# Create random input and output data
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)

# Randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    # y = a + b x + c x^2 + d x^3
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
```
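The backprop lines above are just the chain rule applied to the summed squared error; written out term by term:

```latex
L = \sum_i (\hat{y}_i - y_i)^2, \qquad
\hat{y}_i = a + b x_i + c x_i^2 + d x_i^3

\frac{\partial L}{\partial \hat{y}_i} = 2(\hat{y}_i - y_i), \qquad
\frac{\partial L}{\partial a} = \sum_i 2(\hat{y}_i - y_i), \qquad
\frac{\partial L}{\partial b} = \sum_i 2(\hat{y}_i - y_i)\, x_i
```

and the gradients for `c` and `d` pick up factors of `x_i^2` and `x_i^3` respectively, exactly as in `grad_c` and `grad_d`.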

### PyTorch: Tensors

Numpy is a great framework, but it cannot utilize GPUs to accelerate its numerical computations. For modern deep neural networks, GPUs often provide speedups of [50x or greater](https://github.com/jcjohnson/cnn-benchmarks), so unfortunately numpy is not enough for modern deep learning.

Here we introduce the most fundamental PyTorch concept: the **Tensor**. A PyTorch Tensor is conceptually identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they are also useful as a generic tool for scientific computing.

Also unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations. To run a PyTorch Tensor on GPU, you simply need to specify the correct device.

Here we use PyTorch Tensors to fit a third order polynomial to a sine function. Like the numpy example above, we need to manually implement the forward and backward passes through the network:
```py
# -*- coding: utf-8 -*-

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```

## Autograd

### PyTorch: Tensors and Autograd

In the above examples, we had to manually implement both the forward and backward passes of our neural network. Manually implementing the backward pass is not a big deal for a small two-layer network, but can quickly get very hairy for large, complex networks.

Thankfully, we can use [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) to automate the computation of backward passes in neural networks. The **autograd** package in PyTorch provides exactly this functionality. When using autograd, the forward pass of your network defines a **computational graph**; nodes in the graph are Tensors, and edges are functions that produce output Tensors from input Tensors. Backpropagating through this graph then lets you compute gradients easily.

This sounds complicated, but it is pretty simple to use in practice. Each Tensor represents a node in a computational graph. If `x` is a Tensor with `x.requires_grad=True`, then `x.grad` is another Tensor holding the gradient of `x` with respect to some scalar value.
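A minimal illustration of that statement (a toy scalar example, not from the tutorial):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
loss = x ** 2      # the forward pass records the graph
loss.backward()    # the backward pass populates x.grad
print(x.grad)      # d(x^2)/dx at x = 3 is 6 -> tensor(6.)
```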

Here we use PyTorch Tensors and autograd to implement our third order polynomial fit of a sine wave; now we no longer need to manually implement the backward pass through the network:
```py
# -*- coding: utf-8 -*-
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```

### PyTorch: Defining New Autograd Functions

Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The **forward** function computes output Tensors from input Tensors. The **backward** function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.

In PyTorch we can easily define our own autograd operator by defining a subclass of `torch.autograd.Function` and implementing the `forward` and `backward` functions. We can then use our new autograd operator by constructing an instance and calling it like a function, passing Tensors containing input data.

In this example we define our model as `y = a + b P3(c + dx)` instead of `y = a + bx + cx^2 + dx^3`, where `P3(x) = 1/2 (5x^3 - 3x)` is the [Legendre polynomial](https://en.wikipedia.org/wiki/Legendre_polynomials) of degree three. We write our own custom autograd function for computing the forward and backward of `P3`, and use it to implement our model:
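The backward rule used in the custom function can be checked by differentiating `P3` directly; it matches the `1.5 * (5 * input ** 2 - 1)` factor returned by `backward`:

```latex
P_3(x) = \tfrac{1}{2}\left(5x^3 - 3x\right)
\quad\Longrightarrow\quad
P_3'(x) = \tfrac{1}{2}\left(15x^2 - 3\right) = \tfrac{3}{2}\left(5x^2 - 1\right)
```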

```py
# -*- coding: utf-8 -*-
import torch
import math

class LegendrePolynomial3(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input ** 2 - 1)

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For this example, we need
# 4 weights: y = a + b * P3(c + d * x), these weights need to be initialized
# not too far from the correct result to ensure convergence.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)

learning_rate = 5e-6
for t in range(2000):
    # To apply our Function, we use Function.apply method. We alias this as 'P3'.
    P3 = LegendrePolynomial3.apply

    # Forward pass: compute predicted y using operations; we compute
    # P3 using our custom autograd operation.
    y_pred = a + b * P3(c + d * x)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')
```

## `nn` Module

### PyTorch: `nn`

Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however, for large neural networks raw autograd can be a bit too low-level.

When building neural networks we frequently think of arranging the computation into **layers**, some of which have **learnable parameters** that will be optimized during learning.

In TensorFlow, packages like [Keras](https://github.com/fchollet/keras), [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim), and [TFLearn](http://tflearn.org/) provide higher-level abstractions over raw computational graphs that are useful for building neural networks.

In PyTorch, the `nn` package serves this same purpose. The `nn` package defines a set of **Modules**, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but may also hold internal state such as Tensors containing learnable parameters. The `nn` package also defines a set of useful loss functions that are commonly used when training neural networks.

In this example we use the `nn` package to implement our polynomial model network:
```py
# -*- coding: utf-8 -*-
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```

### PyTorch: `optim`

Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters with `torch.no_grad()`. This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers like AdaGrad, RMSProp, Adam, etc.

The `optim` package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.

In this example we will use the `nn` package to define our model as before, but we will optimize the model using the RMSprop algorithm provided by the `optim` package:
```py
# -*- coding: utf-8 -*-
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Prepare the input tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use RMSprop; the optim package contains many other
# optimization algorithms. The first argument to the RMSprop constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-3
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(xx)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Checkout docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()

linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
```

### PyTorch: Custom `nn` Modules

Sometimes you will want to specify models that are more complex than a sequence of existing Modules; for these cases you can define your own Modules by subclassing `nn.Module` and defining a `forward` which receives input Tensors and produces output Tensors using other modules or other autograd operations on Tensors.

In this example we implement our third order polynomial as a custom `Module` subclass:
```py
# -*- coding: utf-8 -*-
import torch
import math

class Polynomial3(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate four parameters and assign them as
        member parameters.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3

    def string(self):
        """
        Just like any class in Python, you can also define custom method on PyTorch modules
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3'

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = Polynomial3()

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the nn.Linear
# module which is members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
for t in range(2000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')
```

### PyTorch: Control Flow + Weight Sharing

As an example of dynamic graphs and weight sharing, we implement a very strange model: a third-to-fifth order polynomial that on each forward pass chooses a random number between 3 and 5 and uses that many orders, reusing the same weights multiple times to compute the fourth and fifth orders.

For this model we can use normal Python flow control to implement the loop, and we can implement weight sharing by simply reusing the same parameter multiple times when defining the forward pass.

We can easily implement this model as a `Module` subclass:
```py
# -*- coding: utf-8 -*-
import random
import torch
import math

class DynamicNet(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate five parameters and assign them as members.
        """
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))
        self.e = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 4, 5
        and reuse the e parameter to compute the contribution of these orders.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same parameter many
        times when defining a computational graph.
        """
        y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
        for exp in range(4, random.randint(4, 6)):
            y = y + self.e * x ** exp
        return y

    def string(self):
        """
        Just like any class in Python, you can also define custom method on PyTorch modules
        """
        return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3 + {self.e.item()} x^4 ? + {self.e.item()} x^5 ?'

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Construct our model by instantiating the class defined above
model = DynamicNet()

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.9)
for t in range(30000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    if t % 2000 == 1999:
        print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f'Result: {model.string()}')
```

## Examples

You can browse the above examples here.

### Tensors



[Warm-up: NumPy](examples_tensor/polynomial_numpy.html#sphx-glr-beginner-examples-tensor-polynomial-numpy-py)



[PyTorch: Tensors](examples_tensor/polynomial_tensor.html#sphx-glr-beginner-examples-tensor-polynomial-tensor-py)

### Autograd



[PyTorch: Tensors and Autograd](examples_autograd/polynomial_autograd.html#sphx-glr-beginner-examples-autograd-polynomial-autograd-py)



[PyTorch: Defining New Autograd Functions](examples_autograd/polynomial_custom_function.html#sphx-glr-beginner-examples-autograd-polynomial-custom-function-py)

### `nn` Module



[PyTorch: `nn`](examples_nn/polynomial_nn.html#sphx-glr-beginner-examples-nn-polynomial-nn-py)



[PyTorch: `optim`](examples_nn/polynomial_optim.html#sphx-glr-beginner-examples-nn-polynomial-optim-py)



[PyTorch: Custom `nn` Modules](examples_nn/polynomial_module.html#sphx-glr-beginner-examples-nn-polynomial-module-py)



[PyTorch: Control Flow + Weight Sharing](examples_nn/dynamic_net.html#sphx-glr-beginner-examples-nn-dynamic-net-py)

*(Source file: `pytorch/官方教程/08.md`)*
|
||||
# 热身:NumPy
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_tensor/polynomial_numpy.html#sphx-glr-beginner-examples-tensor-polynomial-numpy-py>
|
||||
|
||||
经过训练的三阶多项式,可以通过最小化平方的欧几里得距离来预测`y = sin(x)`从`-pi`到`pi`。
|
||||
|
||||
此实现使用 numpy 手动计算正向传播,损失和后向通过。
|
||||
|
||||
numpy 数组是通用的 n 维数组;它对深度学习、梯度或计算图一无所知,只是执行通用数值计算的一种方式。
|
||||
|
||||
```py
|
||||
import numpy as np
|
||||
import math
|
||||
|
||||
# Create random input and output data
|
||||
x = np.linspace(-math.pi, math.pi, 2000)
|
||||
y = np.sin(x)
|
||||
|
||||
# Randomly initialize weights
|
||||
a = np.random.randn()
|
||||
b = np.random.randn()
|
||||
c = np.random.randn()
|
||||
d = np.random.randn()
|
||||
|
||||
learning_rate = 1e-6
|
||||
for t in range(2000):
|
||||
# Forward pass: compute predicted y
|
||||
# y = a + b x + c x^2 + d x^3
|
||||
y_pred = a + b * x + c * x ** 2 + d * x ** 3
|
||||
|
||||
# Compute and print loss
|
||||
loss = np.square(y_pred - y).sum()
|
||||
if t % 100 == 99:
|
||||
print(t, loss)
|
||||
|
||||
# Backprop to compute gradients of a, b, c, d with respect to loss
|
||||
grad_y_pred = 2.0 * (y_pred - y)
|
||||
grad_a = grad_y_pred.sum()
|
||||
grad_b = (grad_y_pred * x).sum()
|
||||
grad_c = (grad_y_pred * x ** 2).sum()
|
||||
grad_d = (grad_y_pred * x ** 3).sum()
|
||||
|
||||
# Update weights
|
||||
a -= learning_rate * grad_a
|
||||
b -= learning_rate * grad_b
|
||||
c -= learning_rate * grad_c
|
||||
d -= learning_rate * grad_d
|
||||
|
||||
print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_numpy.py`](https://pytorch.org/tutorials/_downloads/6287cd68dd239d4f34ac75d774a66e23/polynomial_numpy.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_numpy.ipynb`](https://pytorch.org/tutorials/_downloads/d4cfaf6a36486a5e37afb34266028d9e/polynomial_numpy.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:张量
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_tensor/polynomial_tensor.html#sphx-glr-beginner-examples-tensor-polynomial-tensor-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。
|
||||
|
||||
此实现使用 PyTorch 张量手动计算正向传播、损失和反向传播。
|
||||
|
||||
PyTorch 张量在概念上与 numpy 数组相同:它对深度学习、计算图或梯度一无所知,只是用于任意数值计算的通用 n 维数组。
|
||||
|
||||
numpy 数组和 PyTorch 张量之间最大的区别是:PyTorch 张量既可以在 CPU 上运行,也可以在 GPU 上运行。 要在 GPU 上运行运算,只需将张量移动到 CUDA 设备上即可。
|
||||
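作为补充说明(以下代码并非教程原文):实践中通常用`torch.cuda.is_available()`自动选择设备,而不是手动取消注释:

```py
import torch

# 常见写法:若 CUDA 可用则使用 GPU,否则退回 CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x = torch.linspace(-1.0, 1.0, 5, device=device)
print(x.device)
```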
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
dtype = torch.float
|
||||
device = torch.device("cpu")
|
||||
# device = torch.device("cuda:0") # Uncomment this to run on GPU
|
||||
|
||||
# Create random input and output data
|
||||
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Randomly initialize weights
|
||||
a = torch.randn((), device=device, dtype=dtype)
|
||||
b = torch.randn((), device=device, dtype=dtype)
|
||||
c = torch.randn((), device=device, dtype=dtype)
|
||||
d = torch.randn((), device=device, dtype=dtype)
|
||||
|
||||
learning_rate = 1e-6
|
||||
for t in range(2000):
|
||||
# Forward pass: compute predicted y
|
||||
y_pred = a + b * x + c * x ** 2 + d * x ** 3
|
||||
|
||||
# Compute and print loss
|
||||
loss = (y_pred - y).pow(2).sum().item()
|
||||
if t % 100 == 99:
|
||||
print(t, loss)
|
||||
|
||||
# Backprop to compute gradients of a, b, c, d with respect to loss
|
||||
grad_y_pred = 2.0 * (y_pred - y)
|
||||
grad_a = grad_y_pred.sum()
|
||||
grad_b = (grad_y_pred * x).sum()
|
||||
grad_c = (grad_y_pred * x ** 2).sum()
|
||||
grad_d = (grad_y_pred * x ** 3).sum()
|
||||
|
||||
# Update weights using gradient descent
|
||||
a -= learning_rate * grad_a
|
||||
b -= learning_rate * grad_b
|
||||
c -= learning_rate * grad_c
|
||||
d -= learning_rate * grad_d
|
||||
|
||||
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_tensor.py`](https://pytorch.org/tutorials/_downloads/38bc029908996abe0c601bcf0f5fd9d8/polynomial_tensor.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_tensor.ipynb`](https://pytorch.org/tutorials/_downloads/1c715a0888ae0e33279df327e1653329/polynomial_tensor.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:张量和 Autograd
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_autograd/polynomial_autograd.html#sphx-glr-beginner-examples-autograd-polynomial-autograd-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。
|
||||
|
||||
此实现使用 PyTorch 张量上的运算来计算正向传播,并使用 PyTorch Autograd 来计算梯度。
|
||||
|
||||
PyTorch 张量表示计算图中的一个节点。 如果`x`是具有`x.requires_grad=True`的张量,则`x.grad`是另一个张量,其保持`x`相对于某个标量值的梯度。
|
||||
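下面是这一机制的一个最小示例(非教程原文,仅作演示):对依赖于`x`的标量调用`backward()`之后,`x.grad`中保存的就是该标量对`x`的导数。

```py
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3        # y 是计算图中依赖 x 的节点
y.backward()      # 反向传播:计算 dy/dx = 3 * x^2 = 12
print(x.grad)     # tensor(12.)
```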
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
dtype = torch.float
|
||||
device = torch.device("cpu")
|
||||
# device = torch.device("cuda:0") # Uncomment this to run on GPU
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
# By default, requires_grad=False, which indicates that we do not need to
|
||||
# compute gradients with respect to these Tensors during the backward pass.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Create random Tensors for weights. For a third order polynomial, we need
|
||||
# 4 weights: y = a + b x + c x^2 + d x^3
|
||||
# Setting requires_grad=True indicates that we want to compute gradients with
|
||||
# respect to these Tensors during the backward pass.
|
||||
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
|
||||
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
|
||||
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
|
||||
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)
|
||||
|
||||
learning_rate = 1e-6
|
||||
for t in range(2000):
|
||||
# Forward pass: compute predicted y using operations on Tensors.
|
||||
y_pred = a + b * x + c * x ** 2 + d * x ** 3
|
||||
|
||||
# Compute and print loss using operations on Tensors.
|
||||
# Now loss is a Tensor of shape (1,)
|
||||
# loss.item() gets the scalar value held in the loss.
|
||||
loss = (y_pred - y).pow(2).sum()
|
||||
if t % 100 == 99:
|
||||
print(t, loss.item())
|
||||
|
||||
# Use autograd to compute the backward pass. This call will compute the
|
||||
# gradient of loss with respect to all Tensors with requires_grad=True.
|
||||
# After this call a.grad, b.grad. c.grad and d.grad will be Tensors holding
|
||||
# the gradient of the loss with respect to a, b, c, d respectively.
|
||||
loss.backward()
|
||||
|
||||
# Manually update weights using gradient descent. Wrap in torch.no_grad()
|
||||
# because weights have requires_grad=True, but we don't need to track this
|
||||
# in autograd.
|
||||
with torch.no_grad():
|
||||
a -= learning_rate * a.grad
|
||||
b -= learning_rate * b.grad
|
||||
c -= learning_rate * c.grad
|
||||
d -= learning_rate * d.grad
|
||||
|
||||
# Manually zero the gradients after updating weights
|
||||
a.grad = None
|
||||
b.grad = None
|
||||
c.grad = None
|
||||
d.grad = None
|
||||
|
||||
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_autograd.py`](https://pytorch.org/tutorials/_downloads/2956e289de4f5fdd59114171805b23d2/polynomial_autograd.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_autograd.ipynb`](https://pytorch.org/tutorials/_downloads/e1d4d0ca7bd75ea2fff8032fcb79076e/polynomial_autograd.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:定义新的 Autograd 函数
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_autograd/polynomial_custom_function.html#sphx-glr-beginner-examples-autograd-polynomial-custom-function-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。 我们不把多项式写成`y = a + bx + cx^2 + dx^3`,而是写成`y = a + b * P3(c + dx)`,其中`P3(x) = 1/2 (5x^3 - 3x)`是三阶[勒让德多项式](https://en.wikipedia.org/wiki/Legendre_polynomials)。
|
||||
|
||||
此实现使用 PyTorch 张量上的运算来计算正向传播,并使用 PyTorch Autograd 来计算梯度。
|
||||
|
||||
在此实现中,我们编写了自己的自定义 Autograd 函数来实现`P3`的前向与反向计算。 根据数学推导,`P3'(x) = 3/2 (5x^2 - 1)`:
|
||||
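可以用有限差分快速验证这一导数公式(一个纯 Python 的小检查,非教程原文):

```py
# P3 及其导数:P3(x) = 1/2 (5x^3 - 3x),P3'(x) = 3/2 (5x^2 - 1)
def P3(x):
    return 0.5 * (5 * x ** 3 - 3 * x)

def dP3(x):
    return 1.5 * (5 * x ** 2 - 1)

# 用中心差分在几个点上核对解析导数
h = 1e-6
for x in (-0.7, 0.0, 0.3, 1.2):
    numeric = (P3(x + h) - P3(x - h)) / (2 * h)
    assert abs(numeric - dP3(x)) < 1e-4
print("ok")
```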
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
class LegendrePolynomial3(torch.autograd.Function):
|
||||
"""
|
||||
We can implement our own custom autograd Functions by subclassing
|
||||
torch.autograd.Function and implementing the forward and backward passes
|
||||
which operate on Tensors.
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
def forward(ctx, input):
|
||||
"""
|
||||
In the forward pass we receive a Tensor containing the input and return
|
||||
a Tensor containing the output. ctx is a context object that can be used
|
||||
to stash information for backward computation. You can cache arbitrary
|
||||
objects for use in the backward pass using the ctx.save_for_backward method.
|
||||
"""
|
||||
ctx.save_for_backward(input)
|
||||
return 0.5 * (5 * input ** 3 - 3 * input)
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, grad_output):
|
||||
"""
|
||||
In the backward pass we receive a Tensor containing the gradient of the loss
|
||||
with respect to the output, and we need to compute the gradient of the loss
|
||||
with respect to the input.
|
||||
"""
|
||||
input, = ctx.saved_tensors
|
||||
return grad_output * 1.5 * (5 * input ** 2 - 1)
|
||||
|
||||
dtype = torch.float
|
||||
device = torch.device("cpu")
|
||||
# device = torch.device("cuda:0") # Uncomment this to run on GPU
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
# By default, requires_grad=False, which indicates that we do not need to
|
||||
# compute gradients with respect to these Tensors during the backward pass.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Create random Tensors for weights. For this example, we need
|
||||
# 4 weights: y = a + b * P3(c + d * x), these weights need to be initialized
|
||||
# not too far from the correct result to ensure convergence.
|
||||
# Setting requires_grad=True indicates that we want to compute gradients with
|
||||
# respect to these Tensors during the backward pass.
|
||||
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
|
||||
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
|
||||
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
|
||||
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)
|
||||
|
||||
learning_rate = 5e-6
|
||||
for t in range(2000):
|
||||
# To apply our Function, we use Function.apply method. We alias this as 'P3'.
|
||||
P3 = LegendrePolynomial3.apply
|
||||
|
||||
# Forward pass: compute predicted y using operations; we compute
|
||||
# P3 using our custom autograd operation.
|
||||
y_pred = a + b * P3(c + d * x)
|
||||
|
||||
# Compute and print loss
|
||||
loss = (y_pred - y).pow(2).sum()
|
||||
if t % 100 == 99:
|
||||
print(t, loss.item())
|
||||
|
||||
# Use autograd to compute the backward pass.
|
||||
loss.backward()
|
||||
|
||||
# Update weights using gradient descent
|
||||
with torch.no_grad():
|
||||
a -= learning_rate * a.grad
|
||||
b -= learning_rate * b.grad
|
||||
c -= learning_rate * c.grad
|
||||
d -= learning_rate * d.grad
|
||||
|
||||
# Manually zero the gradients after updating weights
|
||||
a.grad = None
|
||||
b.grad = None
|
||||
c.grad = None
|
||||
d.grad = None
|
||||
|
||||
print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_custom_function.py`](https://pytorch.org/tutorials/_downloads/b7ec15fd7bec1ca3f921104cfb6a54ed/polynomial_custom_function.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_custom_function.ipynb`](https://pytorch.org/tutorials/_downloads/0a64809624bf2f3eb497d30d5303a9a0/polynomial_custom_function.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:`nn`
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_nn/polynomial_nn.html#sphx-glr-beginner-examples-nn-polynomial-nn-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。
|
||||
|
||||
此实现使用来自 PyTorch 的`nn`包来构建网络。 PyTorch Autograd 使定义计算图和求取梯度变得容易,但对于定义复杂的神经网络来说,原始的 Autograd 可能太过底层,这正是`nn`包的用武之地。 `nn`包定义了一组模块,您可以将其视为神经网络层:每个模块从输入产生输出,并且可能带有一些可训练的权重。
|
||||
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000)
|
||||
y = torch.sin(x)
|
||||
|
||||
# For this example, the output y is a linear function of (x, x^2, x^3), so
|
||||
# we can consider it as a linear layer neural network. Let's prepare the
|
||||
# tensor (x, x^2, x^3).
|
||||
p = torch.tensor([1, 2, 3])
|
||||
xx = x.unsqueeze(-1).pow(p)
|
||||
|
||||
# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
|
||||
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
|
||||
# of shape (2000, 3)
|
||||
|
||||
# Use the nn package to define our model as a sequence of layers. nn.Sequential
|
||||
# is a Module which contains other Modules, and applies them in sequence to
|
||||
# produce its output. The Linear Module computes output from input using a
|
||||
# linear function, and holds internal Tensors for its weight and bias.
|
||||
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
|
||||
# to match the shape of `y`.
|
||||
model = torch.nn.Sequential(
|
||||
torch.nn.Linear(3, 1),
|
||||
torch.nn.Flatten(0, 1)
|
||||
)
|
||||
|
||||
# The nn package also contains definitions of popular loss functions; in this
|
||||
# case we will use Mean Squared Error (MSE) as our loss function.
|
||||
loss_fn = torch.nn.MSELoss(reduction='sum')
|
||||
|
||||
learning_rate = 1e-6
|
||||
for t in range(2000):
|
||||
|
||||
# Forward pass: compute predicted y by passing x to the model. Module objects
|
||||
# override the __call__ operator so you can call them like functions. When
|
||||
# doing so you pass a Tensor of input data to the Module and it produces
|
||||
# a Tensor of output data.
|
||||
y_pred = model(xx)
|
||||
|
||||
# Compute and print loss. We pass Tensors containing the predicted and true
|
||||
# values of y, and the loss function returns a Tensor containing the
|
||||
# loss.
|
||||
loss = loss_fn(y_pred, y)
|
||||
if t % 100 == 99:
|
||||
print(t, loss.item())
|
||||
|
||||
# Zero the gradients before running the backward pass.
|
||||
model.zero_grad()
|
||||
|
||||
# Backward pass: compute gradient of the loss with respect to all the learnable
|
||||
# parameters of the model. Internally, the parameters of each Module are stored
|
||||
# in Tensors with requires_grad=True, so this call will compute gradients for
|
||||
# all learnable parameters in the model.
|
||||
loss.backward()
|
||||
|
||||
# Update the weights using gradient descent. Each parameter is a Tensor, so
|
||||
# we can access its gradients like we did before.
|
||||
with torch.no_grad():
|
||||
for param in model.parameters():
|
||||
param -= learning_rate * param.grad
|
||||
|
||||
# You can access the first layer of `model` like accessing the first item of a list
|
||||
linear_layer = model[0]
|
||||
|
||||
# For linear layer, its parameters are stored as `weight` and `bias`.
|
||||
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
|
||||
|
||||
```
|
||||
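上面`x.unsqueeze(-1).pow(p)`所依赖的广播行为可以单独验证(一个独立的小片段,非教程原文):

```py
import torch

x = torch.linspace(-1.0, 1.0, 2000)
p = torch.tensor([1, 2, 3])

# (2000, 1) 与 (3,) 广播得到 (2000, 3):每列分别是 x、x^2、x^3
xx = x.unsqueeze(-1).pow(p)
print(x.unsqueeze(-1).shape, xx.shape)
```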
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_nn.py`](https://pytorch.org/tutorials/_downloads/b4767df4367deade63dc8a0d3712c1d4/polynomial_nn.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_nn.ipynb`](https://pytorch.org/tutorials/_downloads/7bc167d8b8308ae65a717d7461d838fa/polynomial_nn.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:`optim`
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_nn/polynomial_optim.html#sphx-glr-beginner-examples-nn-polynomial-optim-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。
|
||||
|
||||
此实现使用来自 PyTorch 的`nn`包来构建网络。
|
||||
|
||||
与其像以前那样手动更新模型的权重,不如使用`optim`包定义一个优化器,该优化器将为我们更新权重。 `optim`包定义了许多深度学习常用的优化算法,包括 SGD + 动量,RMSProp,Adam 等。
|
||||
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Prepare the input tensor (x, x^2, x^3).
|
||||
p = torch.tensor([1, 2, 3])
|
||||
xx = x.unsqueeze(-1).pow(p)
|
||||
|
||||
# Use the nn package to define our model and loss function.
|
||||
model = torch.nn.Sequential(
|
||||
torch.nn.Linear(3, 1),
|
||||
torch.nn.Flatten(0, 1)
|
||||
)
|
||||
loss_fn = torch.nn.MSELoss(reduction='sum')
|
||||
|
||||
# Use the optim package to define an Optimizer that will update the weights of
|
||||
# the model for us. Here we will use RMSprop; the optim package contains many other
|
||||
# optimization algorithms. The first argument to the RMSprop constructor tells the
|
||||
# optimizer which Tensors it should update.
|
||||
learning_rate = 1e-3
|
||||
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
|
||||
for t in range(2000):
|
||||
# Forward pass: compute predicted y by passing x to the model.
|
||||
y_pred = model(xx)
|
||||
|
||||
# Compute and print loss.
|
||||
loss = loss_fn(y_pred, y)
|
||||
if t % 100 == 99:
|
||||
print(t, loss.item())
|
||||
|
||||
# Before the backward pass, use the optimizer object to zero all of the
|
||||
# gradients for the variables it will update (which are the learnable
|
||||
# weights of the model). This is because by default, gradients are
|
||||
# accumulated in buffers( i.e, not overwritten) whenever .backward()
|
||||
# is called. Checkout docs of torch.autograd.backward for more details.
|
||||
optimizer.zero_grad()
|
||||
|
||||
# Backward pass: compute gradient of the loss with respect to model
|
||||
# parameters
|
||||
loss.backward()
|
||||
|
||||
# Calling the step function on an Optimizer makes an update to its
|
||||
# parameters
|
||||
optimizer.step()
|
||||
|
||||
linear_layer = model[0]
|
||||
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_optim.py`](https://pytorch.org/tutorials/_downloads/bcfec6f02e0fe747a42dbd1579267469/polynomial_optim.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_optim.ipynb`](https://pytorch.org/tutorials/_downloads/8ef669b2c61c6c5aa47c54dceac4979e/polynomial_optim.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:自定义`nn`模块
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_nn/polynomial_module.html#sphx-glr-beginner-examples-nn-polynomial-module-py>
|
||||
|
||||
训练一个三阶多项式,通过最小化平方欧几里得距离,来拟合从`-pi`到`pi`的`y = sin(x)`。
|
||||
|
||||
此实现将模型定义为自定义`Module`子类。 每当您想要一个比现有模块的简单序列更复杂的模型时,都需要以这种方式定义模型。
|
||||
|
||||
```py
|
||||
import torch
|
||||
import math
|
||||
|
||||
class Polynomial3(torch.nn.Module):
|
||||
def __init__(self):
|
||||
"""
|
||||
In the constructor we instantiate four parameters and assign them as
|
||||
member parameters.
|
||||
"""
|
||||
super().__init__()
|
||||
self.a = torch.nn.Parameter(torch.randn(()))
|
||||
self.b = torch.nn.Parameter(torch.randn(()))
|
||||
self.c = torch.nn.Parameter(torch.randn(()))
|
||||
self.d = torch.nn.Parameter(torch.randn(()))
|
||||
|
||||
def forward(self, x):
|
||||
"""
|
||||
In the forward function we accept a Tensor of input data and we must return
|
||||
a Tensor of output data. We can use Modules defined in the constructor as
|
||||
well as arbitrary operators on Tensors.
|
||||
"""
|
||||
return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
|
||||
|
||||
def string(self):
|
||||
"""
|
||||
Just like any class in Python, you can also define custom method on PyTorch modules
|
||||
"""
|
||||
return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3'
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Construct our model by instantiating the class defined above
|
||||
model = Polynomial3()
|
||||
|
||||
# Construct our loss function and an Optimizer. The call to model.parameters()
|
||||
# in the SGD constructor will contain the learnable parameters (the
# torch.nn.Parameter members) of the model.
|
||||
criterion = torch.nn.MSELoss(reduction='sum')
|
||||
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
|
||||
for t in range(2000):
|
||||
# Forward pass: Compute predicted y by passing x to the model
|
||||
y_pred = model(x)
|
||||
|
||||
# Compute and print loss
|
||||
loss = criterion(y_pred, y)
|
||||
if t % 100 == 99:
|
||||
print(t, loss.item())
|
||||
|
||||
# Zero gradients, perform a backward pass, and update the weights.
|
||||
optimizer.zero_grad()
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
print(f'Result: {model.string()}')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`polynomial_module.py`](https://pytorch.org/tutorials/_downloads/916a9c460c899330dbc53216cc775358/polynomial_module.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`polynomial_module.ipynb`](https://pytorch.org/tutorials/_downloads/19f4ecdd2763dd4b90693df4d6e10ebe/polynomial_module.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# PyTorch:控制流 + 权重共享
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#sphx-glr-beginner-examples-nn-dynamic-net-py>
|
||||
|
||||
为了展示 PyTorch 动态图的强大功能,我们将实现一个非常奇怪的模型:一个三到五阶的多项式,它在每次正向传播时随机决定使用多少阶(3 到 5 之间),并多次复用同一个参数来计算四阶和五阶项的贡献。
|
||||
|
||||
```py
|
||||
import random
|
||||
import torch
|
||||
import math
|
||||
|
||||
class DynamicNet(torch.nn.Module):
|
||||
def __init__(self):
|
||||
"""
|
||||
In the constructor we instantiate five parameters and assign them as members.
|
||||
"""
|
||||
super().__init__()
|
||||
self.a = torch.nn.Parameter(torch.randn(()))
|
||||
self.b = torch.nn.Parameter(torch.randn(()))
|
||||
self.c = torch.nn.Parameter(torch.randn(()))
|
||||
self.d = torch.nn.Parameter(torch.randn(()))
|
||||
self.e = torch.nn.Parameter(torch.randn(()))
|
||||
|
||||
def forward(self, x):
|
||||
"""
|
||||
For the forward pass of the model, we randomly choose either 4, 5
|
||||
and reuse the e parameter to compute the contribution of these orders.
|
||||
|
||||
Since each forward pass builds a dynamic computation graph, we can use normal
|
||||
Python control-flow operators like loops or conditional statements when
|
||||
defining the forward pass of the model.
|
||||
|
||||
Here we also see that it is perfectly safe to reuse the same parameter many
|
||||
times when defining a computational graph.
|
||||
"""
|
||||
y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
|
||||
for exp in range(4, random.randint(4, 6)):
|
||||
y = y + self.e * x ** exp
|
||||
return y
|
||||
|
||||
def string(self):
|
||||
"""
|
||||
Just like any class in Python, you can also define custom method on PyTorch modules
|
||||
"""
|
||||
return f'y = {self.a.item()} + {self.b.item()} x + {self.c.item()} x^2 + {self.d.item()} x^3 + {self.e.item()} x^4 ? + {self.e.item()} x^5 ?'
|
||||
|
||||
# Create Tensors to hold input and outputs.
|
||||
x = torch.linspace(-math.pi, math.pi, 2000)
|
||||
y = torch.sin(x)
|
||||
|
||||
# Construct our model by instantiating the class defined above
|
||||
model = DynamicNet()
|
||||
|
||||
# Construct our loss function and an Optimizer. Training this strange model with
|
||||
# vanilla stochastic gradient descent is tough, so we use momentum
|
||||
criterion = torch.nn.MSELoss(reduction='sum')
|
||||
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8, momentum=0.9)
|
||||
for t in range(30000):
|
||||
# Forward pass: Compute predicted y by passing x to the model
|
||||
y_pred = model(x)
|
||||
|
||||
# Compute and print loss
|
||||
loss = criterion(y_pred, y)
|
||||
if t % 2000 == 1999:
|
||||
print(t, loss.item())
|
||||
|
||||
# Zero gradients, perform a backward pass, and update the weights.
|
||||
optimizer.zero_grad()
|
||||
loss.backward()
|
||||
optimizer.step()
|
||||
|
||||
print(f'Result: {model.string()}')
|
||||
|
||||
```
|
||||
|
||||
**脚本的总运行时间**:(0 分钟 0.000 秒)
|
||||
|
||||
[下载 Python 源码:`dynamic_net.py`](https://pytorch.org/tutorials/_downloads/3900c903cde097dc0088c3b06d588c0b/dynamic_net.py)
|
||||
|
||||
[下载 Jupyter 笔记本:`dynamic_net.ipynb`](https://pytorch.org/tutorials/_downloads/ad230923bd9eb0d42576725b63ad8d91/dynamic_net.ipynb)
|
||||
|
||||
由 [Sphinx-Gallery](https://sphinx-gallery.readthedocs.io) 生成的图库
|
||||
|
||||
# `torch.nn`到底是什么?
|
||||
|
||||
> 原文:<https://pytorch.org/tutorials/beginner/nn_tutorial.html>
|
||||
|
||||
作者:Jeremy Howard,[fast.ai](https://www.fast.ai)。 感谢 Rachel Thomas 和 Francisco Ingham。
|
||||
|
||||
我们建议将本教程作为笔记本而不是脚本来运行。 要下载笔记本(`.ipynb`)文件,请单击页面顶部的链接。
|
||||
|
||||
PyTorch 提供了设计精美的模块和类:[`torch.nn`](https://pytorch.org/docs/stable/nn.html)、[`torch.optim`](https://pytorch.org/docs/stable/optim.html)、[`Dataset`](https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset)和[`DataLoader`](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader),用来帮助您创建和训练神经网络。 为了充分利用它们的功能并针对您的问题进行自定义,您需要真正了解它们在做什么。 为了建立这种理解,我们将首先在 MNIST 数据集上训练一个基本的神经网络,而不使用这些模块的任何功能;最初只使用最基本的 PyTorch 张量函数。 然后,我们将逐一加入来自`torch.nn`、`torch.optim`、`Dataset`或`DataLoader`的功能,准确展示每个部分的作用,以及它如何使代码更简洁、更灵活或更高效。
|
||||
|
||||
**本教程假定您已经安装了 PyTorch,并且熟悉张量操作的基础知识。** (如果您熟悉 Numpy 数组操作,将会发现此处使用的 PyTorch 张量操作几乎相同)。
|
||||
|
||||
## MNIST 数据集
|
||||
|
||||
我们将使用经典的 [MNIST](http://deeplearning.net/data/mnist/) 数据集,该数据集由手写数字(0 到 9)的黑白图像组成。
|
||||
|
||||
我们将使用[`pathlib`](https://docs.python.org/3/library/pathlib.html)处理路径(Python 3 标准库的一部分),并使用[`requests`](http://docs.python-requests.org/en/master/)下载数据集。 我们只会在使用模块时才导入它们,因此您可以确切地看到每个位置上正在使用的模块。
|
||||
|
||||
```py
|
||||
from pathlib import Path
|
||||
import requests
|
||||
|
||||
DATA_PATH = Path("data")
|
||||
PATH = DATA_PATH / "mnist"
|
||||
|
||||
PATH.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
URL = "https://github.com/pytorch/tutorials/raw/master/_static/"
|
||||
FILENAME = "mnist.pkl.gz"
|
||||
|
||||
if not (PATH / FILENAME).exists():
|
||||
content = requests.get(URL + FILENAME).content
|
||||
(PATH / FILENAME).open("wb").write(content)
|
||||
|
||||
```
|
||||
|
||||
该数据集为 numpy 数组格式,并已使用`pickle`(一种用于序列化数据的 python 特定格式)存储。
|
||||
|
||||
```py
|
||||
import pickle
|
||||
import gzip
|
||||
|
||||
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
|
||||
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
|
||||
|
||||
```
|
||||
|
||||
每个图像为`28 x 28`,并存储为长度为`784 = 28x28`的扁平行。 让我们看看其中一张;首先需要将其重塑为二维。
|
||||
|
||||
```py
|
||||
from matplotlib import pyplot
|
||||
import numpy as np
|
||||
|
||||
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
|
||||
print(x_train.shape)
|
||||
|
||||
```
|
||||
|
||||

|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
(50000, 784)
|
||||
|
||||
```
|
||||
|
||||
PyTorch 使用`torch.tensor`而不是 numpy 数组,因此我们需要转换数据。
|
||||
|
||||
```py
|
||||
import torch
|
||||
|
||||
x_train, y_train, x_valid, y_valid = map(
|
||||
torch.tensor, (x_train, y_train, x_valid, y_valid)
|
||||
)
|
||||
n, c = x_train.shape
|
||||
x_train, x_train.shape, y_train.min(), y_train.max()
|
||||
print(x_train, y_train)
|
||||
print(x_train.shape)
|
||||
print(y_train.min(), y_train.max())
|
||||
|
||||
```
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
tensor([[0., 0., 0., ..., 0., 0., 0.],
|
||||
[0., 0., 0., ..., 0., 0., 0.],
|
||||
[0., 0., 0., ..., 0., 0., 0.],
|
||||
...,
|
||||
[0., 0., 0., ..., 0., 0., 0.],
|
||||
[0., 0., 0., ..., 0., 0., 0.],
|
||||
[0., 0., 0., ..., 0., 0., 0.]]) tensor([5, 0, 4, ..., 8, 4, 8])
|
||||
torch.Size([50000, 784])
|
||||
tensor(0) tensor(9)
|
||||
|
||||
```
|
||||
|
||||
## 从零开始的神经网络(没有`torch.nn`)
|
||||
|
||||
首先,我们仅使用 PyTorch 张量操作创建模型。 我们假设您已经熟悉神经网络的基础知识。 (如果不是,则可以在 [course.fast.ai](https://course.fast.ai) 中学习它们)。
|
||||
|
||||
PyTorch 提供了创建随机或零填充张量的方法,我们将用它们来为简单的线性模型创建权重和偏差。 这些只是常规张量,外加一个非常特殊的设置:我们告诉 PyTorch 它们需要梯度。 这使 PyTorch 记录在这些张量上进行的所有操作,因此它可以在反向传播时*自动计算*梯度!
|
||||
|
||||
**对于权重,我们在初始化之后设置`requires_grad`,因为我们不希望该步骤包含在梯度中。 (请注意,PyTorch 中的尾随`_`表示该操作是原地执行的。)**
|
||||
|
||||
注意
|
||||
|
||||
我们在这里用 [Xavier 初始化](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf)(通过乘以`1 / sqrt(n)`)来初始化权重。
|
||||
|
||||
```py
|
||||
import math
|
||||
|
||||
weights = torch.randn(784, 10) / math.sqrt(784)
|
||||
weights.requires_grad_()
|
||||
bias = torch.zeros(10, requires_grad=True)
|
||||
|
||||
```
|
||||
|
||||
由于 PyTorch 具有自动计算梯度的功能,我们可以将任何标准的 Python 函数(或可调用对象)用作模型! 因此,我们只需编写一个普通矩阵乘法和广播加法即可创建一个简单的线性模型。 我们还需要激活函数,因此我们将编写并使用`log_softmax`。 请记住:尽管 PyTorch 提供了许多预写的损失函数,激活函数等,但是您可以使用纯 Python 轻松编写自己的函数。 PyTorch 甚至会自动为您的函数创建快速 GPU 或向量化的 CPU 代码。
|
||||
|
||||
```py
|
||||
def log_softmax(x):
|
||||
return x - x.exp().sum(-1).log().unsqueeze(-1)
|
||||
|
||||
def model(xb):
|
||||
return log_softmax(xb @ weights + bias)
|
||||
|
||||
```
|
||||
|
||||
在上面,`@`代表矩阵乘法运算。 我们将对一批数据(在本例中为 64 张图像)调用该函数。 这是一次*正向传播*。 请注意,由于我们从随机权重开始,因此在这一阶段,我们的预测不会比随机猜测更好。
|
||||
|
||||
```py
|
||||
bs = 64 # batch size
|
||||
|
||||
xb = x_train[0:bs] # a mini-batch from x
|
||||
preds = model(xb) # predictions
|
||||
preds[0], preds.shape
|
||||
print(preds[0], preds.shape)
|
||||
|
||||
```
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
tensor([-2.5964, -2.3153, -2.1321, -2.4480, -2.2930, -1.9507, -2.1289, -2.4175,
|
||||
-2.5332, -2.3967], grad_fn=<SelectBackward>) torch.Size([64, 10])
|
||||
|
||||
```
|
||||
|
||||
如您所见,`preds`张量不仅包含张量值,还包含梯度函数。 稍后我们将使用它进行反向传播。
|
||||
|
||||
让我们实现负对数似然(negative log-likelihood)作为损失函数(同样,我们只需使用标准 Python):
|
||||
|
||||
```py
|
||||
def nll(input, target):
|
||||
return -input[range(target.shape[0]), target].mean()
|
||||
|
||||
loss_func = nll
|
||||
|
||||
```
|
||||
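`nll`中的花式索引`input[range(n), target]`会为每个样本选出其目标类别对应的对数概率。下面是一个独立的小示例(非教程原文):

```py
import torch

log_probs = torch.tensor([[-0.1, -2.3, -3.0],
                          [-1.2, -0.4, -2.5]])
targets = torch.tensor([0, 1])

# 第 0 行取列 0,第 1 行取列 1
picked = log_probs[range(targets.shape[0]), targets]
loss = -picked.mean()
print(picked, loss)
```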
|
||||
让我们使用随机模型来检查损失,以便我们稍后查看反向传播后是否可以改善我们的损失。
|
||||
|
||||
```py
|
||||
yb = y_train[0:bs]
|
||||
print(loss_func(preds, yb))
|
||||
|
||||
```
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
tensor(2.3735, grad_fn=<NegBackward>)
|
||||
|
||||
```
|
||||
|
||||
我们还实现一个函数来计算模型的准确率。 对于每个预测,如果具有最大值的索引与目标值匹配,则该预测是正确的。
|
||||
|
||||
```py
|
||||
def accuracy(out, yb):
|
||||
preds = torch.argmax(out, dim=1)
|
||||
return (preds == yb).float().mean()
|
||||
|
||||
```
|
||||
|
||||
让我们检查一下随机模型的准确率,以便我们稍后可以看出:随着损失的改善,准确率是否也随之提高。
|
||||
|
||||
```py
|
||||
print(accuracy(preds, yb))
|
||||
|
||||
```
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
tensor(0.0938)
|
||||
|
||||
```
|
||||
|
||||
现在,我们可以运行一个训练循环。 对于每次迭代,我们将:
|
||||
|
||||
* 选择一个小批量数据(大小为`bs`)
|
||||
* 使用模型进行预测
|
||||
* 计算损失
|
||||
* `loss.backward()`更新模型的梯度,在这种情况下为`weights`和`bias`。
|
||||
|
||||
现在,我们使用这些梯度来更新权重和偏差。 我们在`torch.no_grad()`上下文管理器中执行此操作,因为我们不希望在下一步的梯度计算中记录这些操作。 [您可以在这里阅读有关 PyTorch 的 Autograd 如何记录操作的更多信息](https://pytorch.org/docs/stable/notes/autograd.html)。
|
||||
|
||||
然后,将梯度清零,为下一个循环做好准备。 否则,梯度会在迭代之间不断累积(即`loss.backward()`会把新梯度*累加*到已存储的值上,而不是替换它们)。
|
||||
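下面的小例子(非教程原文)演示了这种累加行为:连续两次反向传播后,梯度是两次之和;`zero_()`将其原地清零。

```py
import torch

x = torch.tensor(1.0, requires_grad=True)

(3 * x).backward()            # dy/dx = 3
(3 * x).backward()            # 再次反向传播:梯度被累加,而不是被替换
accumulated = x.grad.item()   # 3 + 3 = 6

x.grad.zero_()                # 原地清零,为下一次迭代做准备
print(accumulated, x.grad.item())
```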
|
||||
提示
|
||||
|
||||
您可以使用标准的 python 调试器逐步浏览 PyTorch 代码,从而可以在每一步检查各种变量值。 取消注释以下`set_trace()`即可尝试。
|
||||
|
||||
```py
|
||||
from IPython.core.debugger import set_trace
|
||||
|
||||
lr = 0.5 # learning rate
|
||||
epochs = 2 # how many epochs to train for
|
||||
|
||||
for epoch in range(epochs):
|
||||
for i in range((n - 1) // bs + 1):
|
||||
# set_trace()
|
||||
start_i = i * bs
|
||||
end_i = start_i + bs
|
||||
xb = x_train[start_i:end_i]
|
||||
yb = y_train[start_i:end_i]
|
||||
pred = model(xb)
|
||||
loss = loss_func(pred, yb)
|
||||
|
||||
loss.backward()
|
||||
with torch.no_grad():
|
||||
weights -= weights.grad * lr
|
||||
bias -= bias.grad * lr
|
||||
weights.grad.zero_()
|
||||
bias.grad.zero_()
|
||||
|
||||
```
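循环中的`range((n - 1) // bs + 1)`给出覆盖全部`n`个样本所需的小批量个数（最后一个批量可能不足`bs`个样本）。可以用纯 Python 验证这一算式（示意代码，取 n=1000、bs=64 为假设数值）：

```python
n, bs = 1000, 64
n_batches = (n - 1) // bs + 1  # 向上取整的批量数
print(n_batches)

# 每个批量实际拿到的样本数：最后一批较小，但总数正好是 n
sizes = [min(bs, n - i * bs) for i in range(n_batches)]
print(sizes[-1], sum(sizes))
```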
就是这样：我们完全从头开始创建并训练了一个最小的神经网络（由于没有隐藏层，它实际上就是一个逻辑回归）！

让我们检查损失和准确率，并与之前的结果进行比较。 我们期望损失减少、准确率提高，结果也确实如此。

```py
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```

出：

```py
tensor(0.0811, grad_fn=<NegBackward>) tensor(1.)
```

## 使用`torch.nn.functional`

现在，我们将重构代码，使其功能与以前相同，只是我们将开始利用 PyTorch 的`nn`类使其更加简洁和灵活。 从这里开始的每一步，我们都应使代码变得以下一项或多项：更短、更易理解和/或更灵活。

第一步也是最简单的一步，就是用`torch.nn.functional`（通常按照惯例导入到命名空间`F`中）中的函数替换我们手写的激活函数和损失函数，从而缩短代码。 该模块包含`torch.nn`库中的所有函数（该库的其他部分则是类）。 除了各种损失函数和激活函数外，您还会在这里找到一些用于构建神经网络的便捷函数，例如池化函数。 （其中也有用于卷积、线性层等的函数，但正如我们将看到的，这些通常用库的其他部分来处理会更好。）

如果您使用的是负对数似然损失和对数 softmax 激活，那么 PyTorch 提供了将两者结合的单一函数`F.cross_entropy`。 因此，我们甚至可以从模型中删除激活函数。
```py
import torch.nn.functional as F

loss_func = F.cross_entropy

def model(xb):
    return xb @ weights + bias
```

请注意，我们不再在`model`函数中调用`log_softmax`。 让我们确认我们的损失和准确率与以前相同：

```py
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```

出：

```py
tensor(0.0811, grad_fn=<NllLossBackward>) tensor(1.)
```
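“log softmax + 负对数似然”与交叉熵的等价性可以用纯 Python 在一行 logits 上验证（示意代码，手工计算两种形式；数值为假设的未归一化输出）：

```python
import math

logits = [2.0, 1.0, 0.1]  # 单个样本的未归一化输出
target = 0

# 方式一：先做 log_softmax，再取目标项的负值（即负对数似然）
norm = math.log(sum(math.exp(z) for z in logits))  # log-sum-exp
log_softmax = [z - norm for z in logits]
nll = -log_softmax[target]

# 方式二：交叉熵的直接形式 log_sum_exp - logits[target]
cross_entropy = norm - logits[target]

print(round(nll, 6), round(cross_entropy, 6))  # 两者相等
```

这正是`F.cross_entropy`把两步合并为一步的原因：数学上它们是同一个量。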
## 使用`nn.Module`重构

接下来，我们将使用`nn.Module`和`nn.Parameter`实现更清晰、更简洁的训练循环。 我们将`nn.Module`子类化（它本身是一个类，并且能够跟踪状态）。 在本例中，我们要创建一个类，它持有权重、偏置以及正向传播的方法。 `nn.Module`具有许多我们将要使用的属性和方法（例如`.parameters()`和`.zero_grad()`）。

注意

`nn.Module`（大写`M`）是 PyTorch 特有的概念，也是我们将经常使用的一个类。 不要将`nn.Module`与 Python 中的[模块](https://docs.python.org/3/tutorial/modules.html)（小写`m`）概念混淆，后者是可以被导入的 Python 代码文件。

```py
from torch import nn

class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
        self.bias = nn.Parameter(torch.zeros(10))

    def forward(self, xb):
        return xb @ self.weights + self.bias
```

由于我们现在使用的是对象而不是仅仅使用函数，因此我们首先必须实例化模型：

```py
model = Mnist_Logistic()
```

现在我们可以像以前一样计算损失。 请注意，`nn.Module`对象的用法就像函数一样（即它们是*可调用的*），但在幕后 PyTorch 会自动调用我们的`forward`方法。

```py
print(loss_func(model(xb), yb))
```

出：

```py
tensor(2.3903, grad_fn=<NllLossBackward>)
```
以前，在训练循环中，我们必须按名称更新每个参数的值，并分别手动将每个参数的梯度清零，如下所示：

```py
with torch.no_grad():
    weights -= weights.grad * lr
    bias -= bias.grad * lr
    weights.grad.zero_()
    bias.grad.zero_()
```

现在我们可以利用`model.parameters()`和`model.zero_grad()`（它们都由 PyTorch 为`nn.Module`定义）来使这些步骤更简洁，也更不容易漏掉某些参数，尤其是在模型更复杂的时候：

```py
with torch.no_grad():
    for p in model.parameters(): p -= p.grad * lr
    model.zero_grad()
```

我们将把这个小的训练循环包装在`fit`函数中，以便稍后可以再次运行它。

```py
def fit():
    for epoch in range(epochs):
        for i in range((n - 1) // bs + 1):
            start_i = i * bs
            end_i = start_i + bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb)
            loss = loss_func(pred, yb)

            loss.backward()
            with torch.no_grad():
                for p in model.parameters():
                    p -= p.grad * lr
                model.zero_grad()

fit()
```

让我们仔细检查一下损失是否减少了：

```py
print(loss_func(model(xb), yb))
```

出：

```py
tensor(0.0808, grad_fn=<NllLossBackward>)
```

## 使用`nn.Linear`重构

我们继续重构代码。 与其手动定义和初始化`self.weights`和`self.bias`并计算`xb @ self.weights + self.bias`，不如对线性层使用 PyTorch 的类[`nn.Linear`](https://pytorch.org/docs/stable/nn.html#linear-layers)，它会为我们完成所有这些工作。 PyTorch 提供了许多类型的预定义层，它们可以大大简化我们的代码，通常也能让代码更快。

```py
class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10)

    def forward(self, xb):
        return self.lin(xb)
```
我们以与以前相同的方式实例化模型并计算损失：

```py
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
```

出：

```py
tensor(2.4215, grad_fn=<NllLossBackward>)
```

我们仍然可以使用与以前相同的`fit`方法。

```py
fit()

print(loss_func(model(xb), yb))
```

出：

```py
tensor(0.0824, grad_fn=<NllLossBackward>)
```

## 使用`optim`重构

PyTorch 还提供了一个包含各种优化算法的包`torch.optim`。 我们可以使用优化器的`step`方法执行一步参数更新，而不必手动更新每个参数。

这让我们可以把之前手工编写的优化步骤：

```py
with torch.no_grad():
    for p in model.parameters(): p -= p.grad * lr
    model.zero_grad()
```

替换为：

```py
opt.step()
opt.zero_grad()
```

（`optim.zero_grad()`将梯度重置为 0，我们需要在计算下一个小批量的梯度之前调用它。）
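`opt.step()`对每个参数所做的更新与我们之前手写的`p -= p.grad * lr`完全相同。下面用纯 Python 对一个标量参数做一个示意（假设梯度已经算好，`ToySGD`是为演示虚构的极简"优化器"）：

```python
lr = 0.5

# 手动更新：p -= p.grad * lr
p, grad = 1.0, 0.2
p_manual = p - grad * lr

# 一个极简的 "SGD 优化器" 做同样的事
class ToySGD:
    def __init__(self, lr):
        self.lr = lr

    def step(self, p, grad):
        return p - grad * self.lr

p_opt = ToySGD(lr).step(1.0, 0.2)
print(p_manual, p_opt)  # 两者一致
```

优化器的价值在于把这条更新规则（以及动量等更复杂的变体）集中到一处，训练循环本身保持不变。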
```py
from torch import optim
```

我们将定义一个小函数来创建模型和优化器，以便将来重用。

```py
def get_model():
    model = Mnist_Logistic()
    return model, optim.SGD(model.parameters(), lr=lr)

model, opt = get_model()
print(loss_func(model(xb), yb))

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

print(loss_func(model(xb), yb))
```

出：

```py
tensor(2.2999, grad_fn=<NllLossBackward>)
tensor(0.0823, grad_fn=<NllLossBackward>)
```

## 使用`Dataset`重构

PyTorch 有一个抽象的`Dataset`类。 只要一个对象具有`__len__`函数（由 Python 的标准`len`函数调用）和用于索引的`__getitem__`函数，它就可以作为`Dataset`。 [本教程](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html)演示了一个不错的示例：创建一个自定义`FacialLandmarkDataset`类作为`Dataset`的子类。

PyTorch 的[`TensorDataset`](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset)是一个包装张量的数据集。 通过定义长度和索引方式，它为我们提供了沿张量第一维进行迭代、索引和切片的方法。 这将使我们在训练时能在同一行中同时访问自变量和因变量。

```py
from torch.utils.data import TensorDataset
```

`x_train`和`y_train`可以合并为一个`TensorDataset`，这样迭代和切片都会更容易。

```py
train_ds = TensorDataset(x_train, y_train)
```

以前，我们必须分别遍历`x`和`y`值的小批量：

```py
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
```

现在，这两步可以合为一步：

```py
xb,yb = train_ds[i*bs : i*bs+bs]
```
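`TensorDataset`的关键能力可以用一个极简的纯 Python 类来示意：用同一个索引（或切片）同时取出`x`和`y`（示意实现，`ToyTensorDataset`为演示虚构，省略了 PyTorch 版本的各种检查）：

```python
class ToyTensorDataset:
    def __init__(self, x, y):
        assert len(x) == len(y)
        self.x, self.y = x, y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        # 索引或切片同时作用于两个序列
        return self.x[idx], self.y[idx]

ds = ToyTensorDataset([10, 11, 12, 13], [0, 1, 0, 1])
xb, yb = ds[1:3]
print(xb, yb)
```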
```py
model, opt = get_model()

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        xb, yb = train_ds[i * bs: i * bs + bs]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

print(loss_func(model(xb), yb))
```

出：

```py
tensor(0.0819, grad_fn=<NllLossBackward>)
```

## 使用`DataLoader`重构

PyTorch 的`DataLoader`负责批量管理。 您可以从任何`Dataset`创建一个`DataLoader`。 `DataLoader`使按小批量迭代变得更加容易。 不必使用`train_ds[i*bs : i*bs+bs]`，`DataLoader`会自动为我们提供每个小批量。

```py
from torch.utils.data import DataLoader

train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
```
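`DataLoader`按批量迭代的行为可以用一个小的生成器来示意（纯 Python 示意代码，省略了打乱、张量拼接、并行加载等真实功能）：

```python
def toy_dataloader(ds, batch_size):
    # 依次产出每个小批量，最后一个批量可能不足 batch_size
    for i in range(0, len(ds), batch_size):
        yield ds[i:i + batch_size]

data = list(range(10))
batches = list(toy_dataloader(data, 4))
print(batches)  # 最后一个批量只有 2 个元素
```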
以前，我们的循环像这样遍历批量`(xb, yb)`：

```py
for i in range((n-1)//bs + 1):
    xb,yb = train_ds[i*bs : i*bs+bs]
    pred = model(xb)
```

现在，我们的循环更加简洁了，因为`(xb, yb)`是从数据加载器中自动加载的：

```py
for xb,yb in train_dl:
    pred = model(xb)
```

```py
model, opt = get_model()

for epoch in range(epochs):
    for xb, yb in train_dl:
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

print(loss_func(model(xb), yb))
```

出：

```py
tensor(0.0821, grad_fn=<NllLossBackward>)
```

得益于 PyTorch 的`nn.Module`、`nn.Parameter`、`Dataset`和`DataLoader`，我们的训练循环现在变得更小、更容易理解。 现在，让我们尝试添加在实践中创建有效模型所需的基本功能。

## 添加验证

在第 1 节中，我们只是试图建立一个合理的训练循环以用于训练数据。 实际上，您**始终**还应该有一个[验证集](https://www.fast.ai/2017/11/13/validation-sets/)，以便判断是否发生了过拟合。

[对训练数据进行打乱](https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks)对于防止批量之间的相关性与过拟合很重要。 另一方面，无论我们是否打乱验证集，验证损失都是相同的。 由于打乱会花费额外时间，因此打乱验证数据没有任何意义。

我们将验证集的批量大小设为训练集的两倍。 这是因为验证集不需要反向传播，因此占用的内存更少（不需要存储梯度）。 我们利用这一点来使用更大的批量，更快地计算损失。

```py
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
```
我们将在每个周期结束时计算并打印验证损失。

（请注意，我们总是在训练之前调用`model.train()`，在推理之前调用`model.eval()`，因为`nn.BatchNorm2d`和`nn.Dropout`之类的层会用到它们，以确保在不同阶段具有正确的行为。）

```py
model, opt = get_model()

for epoch in range(epochs):
    model.train()
    for xb, yb in train_dl:
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

    model.eval()
    with torch.no_grad():
        valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)

    print(epoch, valid_loss / len(valid_dl))
```

出：

```py
0 tensor(0.3743)
1 tensor(0.3316)
```
## 创建`fit()`和`get_data()`

现在，我们将自己进行一些重构。 由于计算训练集和验证集损失的过程非常相似，因此我们将其提取为一个函数`loss_batch`，用于计算一个批量的损失。

对于训练集，我们传入优化器，并用它执行反向传播。 对于验证集，我们不传入优化器，因此该函数不会执行反向传播。

```py
def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)

    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()

    return loss.item(), len(xb)
```

`fit`运行训练模型所需的操作，并计算每个周期的训练和验证损失。

```py
import numpy as np

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            loss_batch(model, loss_func, xb, yb, opt)

        model.eval()
        with torch.no_grad():
            losses, nums = zip(
                *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
            )
        val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)

        print(epoch, val_loss)
```
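`fit`中的验证损失是按批量大小加权的平均：最后一个批量往往较小，若对各批量损失取简单平均会产生偏差。下面用纯 Python 演示这一加权计算（示意代码，批量损失与批量大小均为假设数值）：

```python
losses = [0.5, 0.4, 0.8]  # 各验证批量的平均损失
nums = [128, 128, 44]     # 各批量的样本数（最后一批较小）

# 与 np.sum(np.multiply(losses, nums)) / np.sum(nums) 对应
weighted = sum(l * n for l, n in zip(losses, nums)) / sum(nums)
simple = sum(losses) / len(losses)
print(round(weighted, 4), round(simple, 4))  # 加权结果受小批量的影响更小
```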
`get_data`返回训练集和验证集的数据加载器。

```py
def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )
```

现在，获取数据加载器和拟合模型的整个过程可以用 3 行代码完成：

```py
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```

出：

```py
0 0.3120644524335861
1 0.28915613491535186
```

您可以用这基本的 3 行代码来训练各种各样的模型。 让我们看看能否用它们来训练卷积神经网络（CNN）！
## 切换到 CNN

现在，我们将构建一个具有三个卷积层的神经网络。 由于上一节中的函数都不对模型的形式做任何假设，因此我们无需任何修改就能用它们来训练 CNN。

我们将使用 PyTorch 预定义的[`Conv2d`](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d)类作为卷积层。 我们定义一个具有 3 个卷积层的 CNN。 每个卷积后面跟一个 ReLU。 最后执行平均池化。 （请注意，`view`是 numpy 的`reshape`的 PyTorch 版本。）

```py
class Mnist_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)

    def forward(self, xb):
        xb = xb.view(-1, 1, 28, 28)
        xb = F.relu(self.conv1(xb))
        xb = F.relu(self.conv2(xb))
        xb = F.relu(self.conv3(xb))
        xb = F.avg_pool2d(xb, 4)
        return xb.view(-1, xb.size(1))

lr = 0.1
```

[动量](https://cs231n.github.io/neural-networks-3/#sgd)是随机梯度下降的一种变体，它把之前的更新也考虑在内，通常可以加快训练速度。

```py
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```

出：

```py
0 0.32337012240886687
1 0.25021172934770586
```
## `nn.Sequential`

`torch.nn`还有另一个可以用来简化代码的便捷类：[`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential)。 `Sequential`对象会按顺序运行其中包含的每个模块。 这是编写神经网络的一种更简单的方式。

为了利用这一点，我们需要能够从给定的函数轻松定义**自定义层**。 例如，PyTorch 没有*视图*（view）层，我们需要为我们的网络创建一个。 `Lambda`会创建一个层，之后在用`Sequential`定义网络时就可以使用它。

```py
class Lambda(nn.Module):
    def __init__(self, func):
        super().__init__()
        self.func = func

    def forward(self, x):
        return self.func(x)

def preprocess(x):
    return x.view(-1, 1, 28, 28)
```

用`Sequential`创建的模型很简单：

```py
model = nn.Sequential(
    Lambda(preprocess),
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(4),
    Lambda(lambda x: x.view(x.size(0), -1)),
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```

出：

```py
0 0.30119081069231035
1 0.25335356528759
```
## 包装`DataLoader`

我们的 CNN 相当简洁，但它只适用于 MNIST，因为它：

* 假设输入是`28 * 28`的长向量
* 假设 CNN 的最终网格尺寸为`4 * 4`（因为这是我们使用的平均池化核大小）

让我们摆脱这两个假设，使模型适用于任何二维单通道图像。 首先，我们可以删除最初的`Lambda`层，将数据预处理移到生成器中：

```py
def preprocess(x, y):
    return x.view(-1, 1, 28, 28), y

class WrappedDataLoader:
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield (self.func(*b))

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```

接下来，我们可以将`nn.AvgPool2d`替换为`nn.AdaptiveAvgPool2d`，它允许我们指定所需的*输出*张量大小，而不是输入张量大小。 这样，我们的模型就可以处理任意大小的输入。

```py
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    Lambda(lambda x: x.view(x.size(0), -1)),
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
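`nn.AdaptiveAvgPool2d(1)`之所以对任意输入尺寸都有效，是因为当输出为 1×1 时，它等价于对整个空间维度取平均。下面用纯 Python 对两个不同尺寸的"特征图"做一个示意（示意代码，用二维列表代替张量的单个通道）：

```python
def adaptive_avg_pool_to_1x1(feature_map):
    # feature_map: H x W 的二维列表；输出 1x1 等价于全局平均
    vals = [v for row in feature_map for v in row]
    return sum(vals) / len(vals)

small = [[1.0, 3.0], [5.0, 7.0]]       # 2x2 的特征图
large = [[4.0] * 8 for _ in range(8)]  # 8x8 的特征图，同样适用
print(adaptive_avg_pool_to_1x1(small), adaptive_avg_pool_to_1x1(large))
```

无论空间尺寸如何，输出形状都固定，因此后续的展平和线性计算不再依赖输入大小。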
试试看：

```py
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```

出：

```py
0 0.327303307390213
1 0.2181092014491558
```

## 使用您的 GPU

如果您足够幸运能够使用支持 CUDA 的 GPU（可以从大多数云提供商处以每小时约 0.50 美元的价格租用），则可以用它来加速代码。 首先检查您的 GPU 在 PyTorch 中是否正常工作：

```py
print(torch.cuda.is_available())
```

出：

```py
True
```
然后为它创建一个设备对象：

```py
dev = torch.device(
    "cuda") if torch.cuda.is_available() else torch.device("cpu")
```

让我们更新`preprocess`，将批量移至 GPU：

```py
def preprocess(x, y):
    return x.view(-1, 1, 28, 28).to(dev), y.to(dev)

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```

最后，我们可以将模型移至 GPU。

```py
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```

您应该发现它现在运行得更快了：

```py
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```

出：

```py
0 0.1833980613708496
1 0.17365939717292786
```
## 总结

现在，我们有了一个通用的数据管道和训练循环，您可以用它来训练多种类型的 PyTorch 模型。 要了解现在训练模型有多么简单，请查看`mnist_sample`示例笔记本。

当然，您还需要添加许多其他内容，例如数据增强、超参数调整、训练监控、迁移学习等。 这些功能可在 fastai 库中找到，该库正是使用本教程所示的设计方法开发的，为希望进一步改进其模型的从业者提供了自然的下一步。

我们在本教程开始时承诺通过示例分别说明`torch.nn`、`torch.optim`、`Dataset`和`DataLoader`。 现在来总结一下我们所看到的：

> * `torch.nn`：
>   * `Module`：创建一个行为类似函数的可调用对象，但它也可以包含状态（例如神经网络层的权重）。 它知道自己包含哪些`Parameter`，并且可以将它们的梯度全部清零、遍历它们以进行权重更新等。
>   * `Parameter`：张量的包装器，用于告知`Module`它具有需要在反向传播期间更新的权重。 只有设置了`requires_grad`属性的张量才会被更新。
>   * `functional`：一个模块（通常按照惯例导入到`F`命名空间中），其中包含激活函数、损失函数等，以及卷积层和线性层等层的无状态版本。
> * `torch.optim`：包含`SGD`等优化器，它们在反向传播步骤中更新`Parameter`的权重。
> * `Dataset`：具有`__len__`和`__getitem__`的对象，包括 PyTorch 提供的类，例如`TensorDataset`。
> * `DataLoader`：接受任何`Dataset`并创建一个返回批量数据的迭代器。

**脚本的总运行时间**：（0 分钟 57.062 秒）

[下载 Python 源码：`nn_tutorial.py`](../_downloads/a6246751179fbfb7cad9222ef1c16617/nn_tutorial.py)

[下载 Jupyter 笔记本：`nn_tutorial.ipynb`](../_downloads/5ddab57bb7482fbcc76722617dd47324/nn_tutorial.ipynb)

[由 Sphinx Gallery 生成](https://sphinx-gallery.readthedocs.io)
pytorch/官方教程/17.md
# 使用 TensorBoard 可视化模型、数据和训练

> 原文：<https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html>

在 [60 分钟闪电战](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)中，我们向您展示了如何加载数据、如何把数据送入定义为`nn.Module`子类的模型、如何在训练数据上训练该模型以及在测试数据上测试它。 为了了解训练的进展，我们在训练期间打印了一些统计数据，以判断训练是否在推进。 不过，我们可以做得更好：PyTorch 与 TensorBoard 集成在一起，后者是一种用于可视化神经网络训练结果的工具。 本教程使用 [Fashion-MNIST 数据集](https://github.com/zalandoresearch/fashion-mnist)演示其部分功能，该数据集可以用`torchvision.datasets`读入 PyTorch。

在本教程中，我们将学习如何：

> 1. 读取数据并进行适当的转换（与先前的教程几乎相同）。
> 2. 设置 TensorBoard。
> 3. 写入 TensorBoard。
> 4. 使用 TensorBoard 检查模型架构。
> 5. 使用 TensorBoard 创建上一个教程中可视化的交互式版本，并且代码更少。

具体来说，在第 5 点，我们将看到：

> * 检查训练数据的几种方法
> * 在训练过程中如何跟踪模型的表现
> * 在训练完成后如何评估模型的表现

我们将从 [CIFAR-10 教程](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html)中类似的样板代码开始：
```py
# imports
import matplotlib.pyplot as plt
import numpy as np

import torch
import torchvision
import torchvision.transforms as transforms

import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# transforms
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

# datasets
trainset = torchvision.datasets.FashionMNIST('./data',
    download=True,
    train=True,
    transform=transform)
testset = torchvision.datasets.FashionMNIST('./data',
    download=True,
    train=False,
    transform=transform)

# dataloaders
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

# constant for classes
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

# helper function to show an image
# (used in the `plot_classes_preds` function below)
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))
```
我们将定义与该教程中类似的模型架构，只需少量修改以适应以下事实：图像现在是单通道而不是三通道，尺寸是`28x28`而不是`32x32`：

```py
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
```
我们将定义与之前相同的`optimizer`和`criterion`：

```py
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```

## 1\. TensorBoard 设置

现在，我们来设置 TensorBoard：从`torch.utils`导入`tensorboard`并定义`SummaryWriter`，这是向 TensorBoard 写入信息的关键对象。

```py
from torch.utils.tensorboard import SummaryWriter

# default `log_dir` is "runs" - we'll be more specific here
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
```

请注意，仅这一行就会创建一个`runs/fashion_mnist_experiment_1`文件夹。
## 2\. 写入 TensorBoard

现在，使用[`make_grid`](https://pytorch.org/docs/stable/torchvision/utils.html#torchvision.utils.make_grid)将图像（具体来说是图像网格）写入 TensorBoard。

```py
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# create grid of images
img_grid = torchvision.utils.make_grid(images)

# show images
matplotlib_imshow(img_grid, one_channel=True)

# write to tensorboard
writer.add_image('four_fashion_mnist_images', img_grid)
```

现在从命令行运行

```py
tensorboard --logdir=runs
```

然后导航到`https://localhost:6006`，应该会显示以下内容。

![intermediate/../../_static/img/tensorboard_first_view.png](img/8b09d6361316e495383ceedf9b8407ea.png)
现在您知道如何使用 TensorBoard 了！ 不过，这个示例也可以在 Jupyter 笔记本中完成，而 TensorBoard 真正擅长的是创建交互式可视化。 接下来我们将介绍其中之一，并在本教程结束时介绍更多内容。

## 3\. 使用 TensorBoard 检查模型

TensorBoard 的优势之一是其可视化复杂模型结构的能力。 让我们可视化我们构建的模型。

```py
writer.add_graph(net, images)
writer.close()
```

现在刷新 TensorBoard 后，您应该会看到一个`Graphs`标签页，如下所示：

![intermediate/../../_static/img/tensorboard_model_viz.png](img/8f596b99dbb3c262b61db267d5db2d63.png)

继续双击`Net`将其展开，查看构成模型的各个操作的详细视图。

TensorBoard 有一个非常方便的功能，可以在低维空间中可视化高维数据（例如图像数据）。 接下来我们将介绍这一点。
## 4\. 在 TensorBoard 中添加“投影仪”

我们可以通过[`add_embedding`](https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_embedding)方法可视化高维数据的低维表示。

```py
# helper function
def select_n_random(data, labels, n=100):
    '''
    Selects n random datapoints and their corresponding labels from a dataset
    '''
    assert len(data) == len(labels)

    perm = torch.randperm(len(data))
    return data[perm][:n], labels[perm][:n]

# select random images and their target indices
images, labels = select_n_random(trainset.data, trainset.targets)

# get the class labels for each image
class_labels = [classes[lab] for lab in labels]

# log embeddings
features = images.view(-1, 28 * 28)
writer.add_embedding(features,
                     metadata=class_labels,
                     label_img=images.unsqueeze(1))
writer.close()
```

现在，在 TensorBoard 的“投影仪（Projector）”选项卡中，您可以看到这 100 张图像（每张为 784 维）被投影到三维空间中。 而且这是交互式的：您可以单击并拖动以旋转三维投影。 最后，有几个技巧可以让可视化效果更容易查看：在左上方选择“颜色：标签”，并启用“夜间模式”，这将使图像更容易看清，因为它们的背景是白色的：

![intermediate/../../_static/img/tensorboard_projector.png](img/f4990a0920dff7e4647a23cfc1639a8a.png)
现在我们已经仔细检查了数据，接下来让我们看看 TensorBoard 如何从训练一开始就让模型训练和评估的跟踪更加清晰。

## 5\. 使用 TensorBoard 跟踪模型训练

在前面的示例中，我们只是*每 2000 次迭代*打印模型的运行损失。 现在，我们会把运行损失记录到 TensorBoard 中，并通过`plot_classes_preds`函数查看模型所做的预测。

```py
# helper functions

def images_to_probs(net, images):
    '''
    Generates predictions and corresponding probabilities from a trained
    network and a list of images
    '''
    output = net(images)
    # convert output probabilities to predicted class
    _, preds_tensor = torch.max(output, 1)
    preds = np.squeeze(preds_tensor.numpy())
    return preds, [F.softmax(el, dim=0)[i].item() for i, el in zip(preds, output)]

def plot_classes_preds(net, images, labels):
    '''
    Generates matplotlib Figure using a trained network, along with images
    and labels from a batch, that shows the network's top prediction along
    with its probability, alongside the actual label, coloring this
    information based on whether the prediction was correct or not.
    Uses the "images_to_probs" function.
    '''
    preds, probs = images_to_probs(net, images)
    # plot the images in the batch, along with predicted and true labels
    fig = plt.figure(figsize=(12, 48))
    for idx in np.arange(4):
        ax = fig.add_subplot(1, 4, idx+1, xticks=[], yticks=[])
        matplotlib_imshow(images[idx], one_channel=True)
        ax.set_title("{0}, {1:.1f}%\n(label: {2})".format(
            classes[preds[idx]],
            probs[idx] * 100.0,
            classes[labels[idx]]),
            color=("green" if preds[idx]==labels[idx].item() else "red"))
    return fig
```
最后，让我们使用与之前教程相同的模型训练代码来训练模型，但是每 1000 个批量将结果写入 TensorBoard，而不是打印到控制台。 这是通过[`add_scalar`](https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_scalar)函数完成的。

此外，在训练过程中，我们还会生成一幅图像，显示模型对该批量中四幅图像的预测与实际结果的对比。

```py
running_loss = 0.0
for epoch in range(1):  # loop over the dataset multiple times

    for i, data in enumerate(trainloader, 0):

        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 1000 == 999:  # every 1000 mini-batches...

            # ...log the running loss
            writer.add_scalar('training loss',
                              running_loss / 1000,
                              epoch * len(trainloader) + i)

            # ...log a Matplotlib Figure showing the model's predictions on a
            # random mini-batch
            writer.add_figure('predictions vs. actuals',
                              plot_classes_preds(net, inputs, labels),
                              global_step=epoch * len(trainloader) + i)
            running_loss = 0.0
print('Finished Training')
```

现在，您可以查看“标量（Scalars）”选项卡，看到在 15,000 次训练迭代中绘制的运行损失：

![intermediate/../../_static/img/tensorboard_scalar_runs.png](img/afda8238ecd1f547d61be4d155844f68.png)

此外，我们还可以查看模型在整个学习过程中对任意批量所做的预测。 查看“图像（Images）”选项卡，在“预测与实际（predictions vs. actuals）”可视化下向下滚动即可。 可以看到，例如仅经过 3000 次训练迭代，该模型就已经能够区分视觉上差异明显的类别（例如衬衫、运动鞋和外套），尽管置信度不如训练后期那样高：

![intermediate/../../_static/img/tensorboard_images.png](img/d5ab1f07cb4a9d9200c2a2d3b238340d.png)
在之前的教程中，我们查看了模型训练完成后的每类准确率；在这里，我们将使用 TensorBoard 绘制每个类别的精确率-召回率曲线（[参见这里的解释](https://www.scikit-yb.org/en/latest/api/classifier/prcurve.html)）。

## 6\. 使用 TensorBoard 评估经过训练的模型

```py
# 1\. gets the probability predictions in a test_size x num_classes Tensor
# 2\. gets the preds in a test_size Tensor
# takes ~10 seconds to run
class_probs = []
class_preds = []
with torch.no_grad():
    for data in testloader:
        images, labels = data
        output = net(images)
        class_probs_batch = [F.softmax(el, dim=0) for el in output]
        _, class_preds_batch = torch.max(output, 1)

        class_probs.append(class_probs_batch)
        class_preds.append(class_preds_batch)

test_probs = torch.cat([torch.stack(batch) for batch in class_probs])
test_preds = torch.cat(class_preds)

# helper function
def add_pr_curve_tensorboard(class_index, test_probs, test_preds, global_step=0):
    '''
    Takes in a "class_index" from 0 to 9 and plots the corresponding
    precision-recall curve
    '''
    tensorboard_preds = test_preds == class_index
    tensorboard_probs = test_probs[:, class_index]

    writer.add_pr_curve(classes[class_index],
                        tensorboard_preds,
                        tensorboard_probs,
                        global_step=global_step)
    writer.close()

# plot all the pr curves
for i in range(len(classes)):
    add_pr_curve_tensorboard(i, test_probs, test_preds)
```

现在，您将看到一个`PR Curves`选项卡，其中包含每个类别的精确率-召回率曲线。 随意四处点击查看；您会发现在某些类别上，模型的“曲线下面积”接近 100%，而在另一些类别上则较低：

![intermediate/../../_static/img/tensorboard_pr_curves.png](img/d15de2be2b754f9a4f46418764232b5e.png)

这就是对 TensorBoard 以及 PyTorch 与它的集成的介绍。 当然，您也可以在 Jupyter 笔记本中完成 TensorBoard 所做的所有事情，但使用 TensorBoard，默认就能获得交互式的可视化效果。
pytorch/官方教程/18.md

# 图片/视频

pytorch/官方教程/19.md
# `torchvision`对象检测微调教程

> 原文：<https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html>

提示

为了充分利用本教程，我们建议使用此 [Colab 版本](https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb)。 这将使您可以动手试验以下内容。

在本教程中，我们将基于 [Penn-Fudan 行人检测与分割数据库](https://www.cis.upenn.edu/~jshi/ped_html/)对预训练的 [Mask R-CNN](https://arxiv.org/abs/1703.06870) 模型进行微调。 它包含 170 张图像和 345 个行人实例，我们将用它来说明如何使用`torchvision`的新功能在自定义数据集上训练实例分割模型。

## 定义数据集

用于训练对象检测、实例分割和人体关键点检测的参考脚本可以轻松支持添加新的自定义数据集。 数据集应继承自标准`torch.utils.data.Dataset`类，并实现`__len__`和`__getitem__`。

唯一的特定要求是数据集的`__getitem__`应该返回：

* 图像：大小为`(H, W)`的 PIL 图像
* 目标：包含以下字段的字典
  * `boxes (FloatTensor[N, 4])`：`N`个边界框的坐标，`[x0, y0, x1, y1]`格式，范围从`0`至`W`、从`0`至`H`
  * `labels (Int64Tensor[N])`：每个边界框的标签。 `0`始终代表背景类。
  * `image_id (Int64Tensor[1])`：图像标识符。 它在数据集中的所有图像之间应该是唯一的，并在评估过程中使用
  * `area (Tensor[N])`：边界框的面积。 在使用 COCO 指标进行评估时，可使用此值来区分小、中、大边界框的指标得分。
  * `iscrowd (UInt8Tensor[N])`：`iscrowd = True`的实例在评估期间将被忽略。
  * （可选）`masks (UInt8Tensor[N, H, W])`：每个对象的分割掩码
  * （可选）`keypoints (FloatTensor[N, K, 3])`：对于 N 个对象中的每一个，它包含`[x, y, visibility]`格式的 K 个关键点，用于定义对象。 可见性为 0 表示关键点不可见。 请注意，对于数据扩充，翻转关键点的概念取决于数据表示形式，您可能需要将`references/detection/transforms.py`调整为新的关键点表示形式

如果您的数据集按上述方式返回，那么它就能同时用于训练和评估，并且可以使用`pycocotools`中的评估脚本。
|
||||
|
||||
注意
|
||||
|
||||
对于 Windows,请使用命令从[`gautamchitnis`](https://github.com/gautamchitnis/cocoapi)安装`pycocotools`
|
||||
|
||||
`pip install git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI`
|
||||
|
||||
关于`labels`的注解。 该模型将`0`类作为背景。 如果您的数据集不包含背景类,则`labels`中不应包含`0`。 例如,假设您只有*猫*和*狗*两类,则可以定义`1`来表示*猫*和`0`代表*狗*。 因此,例如,如果其中一个图像同时具有两个类,则您的`labels`张量应类似于`[1,2]`。
|
||||
|
||||
此外,如果要在训练过程中使用宽高比分组(以便每个批量仅包含具有相似长宽比的图像),则建议您还实现`get_height_and_width`方法,该方法返回图像的高度和宽度。 如果未提供此方法,我们将通过`__getitem__`查询数据集的所有元素,这会将图像加载到内存中,并且比提供自定义方法慢。
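
The point of `get_height_and_width` is to read image dimensions *without* decoding the pixel data. A minimal sketch of that idea (the helper name is hypothetical, not part of the reference scripts; PIL only parses the header in `Image.open`):

```python
import io
from PIL import Image

def get_height_and_width_from_bytes(data):
    # Hypothetical helper: PIL's Image.open is lazy, so only the file
    # header is parsed here -- the full pixel data is never decoded.
    with Image.open(io.BytesIO(data)) as im:
        width, height = im.size  # PIL reports (width, height)
    return height, width

# Example: a 30x20 (WxH) in-memory PNG
buf = io.BytesIO()
Image.new("RGB", (30, 20)).save(buf, format="PNG")
print(get_height_and_width_from_bytes(buf.getvalue()))  # (20, 30)
```

In a dataset class, `get_height_and_width(self, idx)` would apply the same trick to `self.imgs[idx]` instead of in-memory bytes.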

### Writing a custom dataset for PennFudan

Let's write a dataset for the PennFudan dataset. After [downloading and extracting the zip file](https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip), we have the following folder structure:

```py
PennFudanPed/
  PedMasks/
    FudanPed00001_mask.png
    FudanPed00002_mask.png
    FudanPed00003_mask.png
    FudanPed00004_mask.png
    ...
  PNGImages/
    FudanPed00001.png
    FudanPed00002.png
    FudanPed00003.png
    FudanPed00004.png

```

Here is one example of a pair of images and segmentation masks

![FudanPed00054](img/af14640d1a9d42984638e7de3d978053.png) ![FudanPed00054_mask](img/44a52628d7f937adb00fe23cf4e6ad96.png)

So each image has a corresponding segmentation mask, where each color corresponds to a different instance. Let's write a `torch.utils.data.Dataset` class for this dataset.

```py
import os
import numpy as np
import torch
from PIL import Image

class PennFudanDataset(object):
    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)
        # convert the PIL Image into a numpy array
        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set
        # of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        # convert everything into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        masks = torch.as_tensor(masks, dtype=torch.uint8)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["masks"] = masks
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)

```
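
The mask-splitting trick in `__getitem__` is worth seeing in isolation. A small NumPy-only demonstration of the same two steps (split a color-encoded mask into binary masks, then derive bounding boxes), using a toy mask rather than real data:

```python
import numpy as np

# A tiny 4x6 "color-encoded" mask: 0 is background, 1 and 2 are two instances
mask = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
])

obj_ids = np.unique(mask)[1:]            # drop the background id -> [1, 2]
masks = mask == obj_ids[:, None, None]   # broadcast to (num_objs, H, W) booleans

boxes = []
for m in masks:
    pos = np.where(m)                    # (row indices, column indices)
    xmin, xmax = np.min(pos[1]), np.max(pos[1])
    ymin, ymax = np.min(pos[0]), np.max(pos[0])
    boxes.append([int(xmin), int(ymin), int(xmax), int(ymax)])

print(boxes)  # [[1, 0, 2, 1], [4, 1, 5, 2]]
```

The broadcasting in `obj_ids[:, None, None]` compares every pixel against every instance id at once, which is why no explicit loop over pixels is needed.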

That's all for the dataset. Now let's define a model that can perform predictions on this dataset.

## Defining your model

In this tutorial, we will be using [Mask R-CNN](https://arxiv.org/abs/1703.06870), which is based on top of [Faster R-CNN](https://arxiv.org/abs/1506.01497). Faster R-CNN is a model that predicts both bounding boxes and class scores for potential objects in the image.

![Faster R-CNN](img/bf5dd1a17ea26f4ff2024d41a25e01eb.png)

Mask R-CNN adds an extra branch to Faster R-CNN, which also predicts segmentation masks for each instance.

![Mask R-CNN](img/a19b57b99af96124d6d9a458344b07bc.png)

There are two common situations where one might want to modify one of the available models in the `torchvision` model zoo. The first is when we want to start from a pre-trained model and just finetune the last layer. The other is when we want to replace the backbone of the model with a different one (for faster predictions, for example).

Let's see how we would do one or the other in the following sections.

### 1 - Finetuning from a pretrained model

Let's suppose that you want to start from a model pre-trained on COCO and want to finetune it for your particular classes. Here is a possible way of doing it:

```py
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2  # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

```

### 2 - Modifying the model to add a different backbone

```py
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
# FasterRCNN needs to know the number of
# output channels in a backbone. For mobilenet_v2, it's 1280
# so we need to add it here
backbone.out_channels = 1280

# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and
# aspect ratios
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

# let's define what are the feature maps that we will
# use to perform the region of interest cropping, as well as
# the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be [0]. More generally, the backbone should return an
# OrderedDict[Tensor], and in featmap_names you can choose which
# feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                output_size=7,
                                                sampling_ratio=2)

# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

```

### An instance segmentation model for the PennFudan dataset

In our case, we want to finetune from a pre-trained model, given that our dataset is very small, so we will follow approach number 1.

Here we also want to compute the instance segmentation masks, so we will use Mask R-CNN:

```py
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def get_model_instance_segmentation(num_classes):
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)

    return model

```

That's it; this will make `model` ready to be trained and evaluated on your custom dataset.

## Putting everything together

In `references/detection/`, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use `references/detection/engine.py`, `references/detection/utils.py` and `references/detection/transforms.py`. Just copy them to your folder and use them here.
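
One of those helpers, `utils.collate_fn`, is essentially a one-liner. A minimal sketch of what it does (assuming the reference implementation; the string "images" here are placeholders):

```python
def collate_fn(batch):
    # A detection batch holds (image, target) pairs with varying image sizes,
    # so instead of stacking into one tensor (the default collation),
    # we regroup them: [(img1, t1), (img2, t2)] -> ((img1, img2), (t1, t2))
    return tuple(zip(*batch))

images, targets = collate_fn([("img1", {"boxes": 1}), ("img2", {"boxes": 2})])
print(images)   # ('img1', 'img2')
print(targets)  # ({'boxes': 1}, {'boxes': 2})
```

This is why the training loop later receives a tuple of images and a tuple of target dicts rather than batched tensors.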

Let's write some helper functions for data augmentation / transformation:

```py
import transforms as T

def get_transform(train):
    transforms = []
    transforms.append(T.ToTensor())
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)

```

## Testing the `forward()` method (optional)

Before iterating over the dataset, it's good to see what the model expects during training and inference on sample data.

```py
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, num_workers=4,
    collate_fn=utils.collate_fn)
# For Training
images, targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images, targets)  # Returns losses and detections
# For inference
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)  # Returns predictions

```

Let's now write the `main` function, which performs the training and the validation:

```py
from engine import train_one_epoch, evaluate
import utils

def main():
    # train on the GPU or on the CPU, if a GPU is not available
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

    # our dataset has two classes only - background and person
    num_classes = 2
    # use our dataset and defined transformations
    dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
    dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

    # split the dataset in train and test set
    indices = torch.randperm(len(dataset)).tolist()
    dataset = torch.utils.data.Subset(dataset, indices[:-50])
    dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

    # define training and validation data loaders
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, num_workers=4,
        collate_fn=utils.collate_fn)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1, shuffle=False, num_workers=4,
        collate_fn=utils.collate_fn)

    # get the model using our helper function
    model = get_model_instance_segmentation(num_classes)

    # move model to the right device
    model.to(device)

    # construct an optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005,
                                momentum=0.9, weight_decay=0.0005)
    # and a learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=3,
                                                   gamma=0.1)

    # let's train it for 10 epochs
    num_epochs = 10

    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)

    print("That's it!")

```

You should get the following output for the first epoch:

```py
Epoch: [0] [ 0/60] eta: 0:01:18 lr: 0.000090 loss: 2.5213 (2.5213) loss_classifier: 0.8025 (0.8025) loss_box_reg: 0.2634 (0.2634) loss_mask: 1.4265 (1.4265) loss_objectness: 0.0190 (0.0190) loss_rpn_box_reg: 0.0099 (0.0099) time: 1.3121 data: 0.3024 max mem: 3485
Epoch: [0] [10/60] eta: 0:00:20 lr: 0.000936 loss: 1.3007 (1.5313) loss_classifier: 0.3979 (0.4719) loss_box_reg: 0.2454 (0.2272) loss_mask: 0.6089 (0.7953) loss_objectness: 0.0197 (0.0228) loss_rpn_box_reg: 0.0121 (0.0141) time: 0.4198 data: 0.0298 max mem: 5081
Epoch: [0] [20/60] eta: 0:00:15 lr: 0.001783 loss: 0.7567 (1.1056) loss_classifier: 0.2221 (0.3319) loss_box_reg: 0.2002 (0.2106) loss_mask: 0.2904 (0.5332) loss_objectness: 0.0146 (0.0176) loss_rpn_box_reg: 0.0094 (0.0123) time: 0.3293 data: 0.0035 max mem: 5081
Epoch: [0] [30/60] eta: 0:00:11 lr: 0.002629 loss: 0.4705 (0.8935) loss_classifier: 0.0991 (0.2517) loss_box_reg: 0.1578 (0.1957) loss_mask: 0.1970 (0.4204) loss_objectness: 0.0061 (0.0140) loss_rpn_box_reg: 0.0075 (0.0118) time: 0.3403 data: 0.0044 max mem: 5081
Epoch: [0] [40/60] eta: 0:00:07 lr: 0.003476 loss: 0.3901 (0.7568) loss_classifier: 0.0648 (0.2022) loss_box_reg: 0.1207 (0.1736) loss_mask: 0.1705 (0.3585) loss_objectness: 0.0018 (0.0113) loss_rpn_box_reg: 0.0075 (0.0112) time: 0.3407 data: 0.0044 max mem: 5081
Epoch: [0] [50/60] eta: 0:00:03 lr: 0.004323 loss: 0.3237 (0.6703) loss_classifier: 0.0474 (0.1731) loss_box_reg: 0.1109 (0.1561) loss_mask: 0.1658 (0.3201) loss_objectness: 0.0015 (0.0093) loss_rpn_box_reg: 0.0093 (0.0116) time: 0.3379 data: 0.0043 max mem: 5081
Epoch: [0] [59/60] eta: 0:00:00 lr: 0.005000 loss: 0.2540 (0.6082) loss_classifier: 0.0309 (0.1526) loss_box_reg: 0.0463 (0.1405) loss_mask: 0.1568 (0.2945) loss_objectness: 0.0012 (0.0083) loss_rpn_box_reg: 0.0093 (0.0123) time: 0.3489 data: 0.0042 max mem: 5081
Epoch: [0] Total time: 0:00:21 (0.3570 s / it)
creating index...
index created!
Test: [ 0/50] eta: 0:00:19 model_time: 0.2152 (0.2152) evaluator_time: 0.0133 (0.0133) time: 0.4000 data: 0.1701 max mem: 5081
Test: [49/50] eta: 0:00:00 model_time: 0.0628 (0.0687) evaluator_time: 0.0039 (0.0064) time: 0.0735 data: 0.0022 max mem: 5081
Test: Total time: 0:00:04 (0.0828 s / it)
Averaged stats: model_time: 0.0628 (0.0687) evaluator_time: 0.0039 (0.0064)
Accumulating evaluation results...
DONE (t=0.01s).
Accumulating evaluation results...
DONE (t=0.01s).
IoU metric: bbox
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.606
 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.984
 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.780
 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.313
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.582
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.612
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.270
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.672
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.672
 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.755
 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.664
IoU metric: segm
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.704
 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.979
 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.871
 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.325
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.488
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.727
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.316
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.748
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.749
 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.673
 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.758

```

So after one epoch of training, we obtain a COCO-style box mAP of 60.6 and a mask mAP of 70.4.

After training for 10 epochs, I got the following metrics:

```py
IoU metric: bbox
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.799
 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.969
 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.935
 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.349
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.592
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.324
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.844
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.844
 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.400
 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.777
 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.870
IoU metric: segm
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.761
 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.969
 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.919
 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.341
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.464
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.788
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.303
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.799
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.799
 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.400
 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.769
 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.818

```

But what do the predictions look like? Let's take one image from the dataset and verify.

![test image](img/ff16d46a19fb9aff0a8b3e1b02e0f4b4.png)

The trained model predicts 9 person instances in this image; let's have a look at a couple of them:

![prediction 1](img/ae7dff29334b9b307cb0f837ba0dd74d.png) ![prediction 2](img/25f2f6762a10b2b5ef5c71d50d0d4f9b.png)

The results look pretty good!

## Wrapping up

In this tutorial, you have learned how to create your own training pipeline for instance segmentation models on a custom dataset. For that, you wrote a `torch.utils.data.Dataset` class that returns the images and the ground-truth boxes and segmentation masks. You also leveraged a Mask R-CNN model pre-trained on COCO train2017 in order to perform transfer learning on this new dataset.

For a more complete example, which includes multi-machine / multi-GPU training, check `references/detection/train.py`, which is present in the `torchvision` repository.

[You can download a full source file for this tutorial here](https://pytorch.org/tutorials/_static/tv-training-code.py).

pytorch/官方教程/20.md

# Transfer Learning for Computer Vision Tutorial

> Original: <https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html>

**Author**: [Sasank Chilamkurthy](https://chsasank.github.io)

In this tutorial, you will learn how to train a convolutional neural network for image classification using transfer learning. You can read more about transfer learning in the [cs231n notes](https://cs231n.github.io/transfer-learning/).

Quoting these notes,

> In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest.

These two major transfer learning scenarios look as follows:

* **Finetuning the convnet**: Instead of random initialization, we initialize the network with a pretrained network, like the one trained on the imagenet 1000 dataset. The rest of the training looks as usual.
* **ConvNet as fixed feature extractor**: Here, we freeze the weights of the whole network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.

```py
# License: BSD
# Author: Sasank Chilamkurthy

from __future__ import print_function, division

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

plt.ion()   # interactive mode

```

## Load Data

We will use the `torchvision` and `torch.utils.data` packages for loading the data.

The problem we're going to solve today is to train a model to classify **ants** and **bees**. We have about 120 training images each for ants and bees, and 75 validation images for each class. Usually this is a very small dataset to generalize from if trained from scratch. Since we are using transfer learning, we should be able to generalize reasonably well.

This dataset is a very small subset of imagenet.

Note

Download the data and extract it to the current directory.

```py
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = 'data/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                              shuffle=True, num_workers=4)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

```

### Visualize a few images

Let's visualize a few training images so as to understand the data augmentations.

```py
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))

# Make a grid from batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])

```

![training batch](img/2e5d3b6d803f96c7c3cb45d2a5a1be51.png)

## Training the model

Now, let's write a general function to train a model. Here, we will illustrate:

* Scheduling the learning rate
* Saving the best model

In the following, the parameter `scheduler` is an LR scheduler object from `torch.optim.lr_scheduler`.

```py
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            if phase == 'train':
                scheduler.step()

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

```

### Visualizing the model predictions

Generic function to display predictions for a few images

```py
def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()

    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)

            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images//2, 2, images_so_far)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])

                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)

```

## Finetuning the convnet

Load a pretrained model and reset the final fully connected layer.

```py
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 2)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

```
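
`StepLR` multiplies the learning rate by `gamma` every `step_size` epochs, so the schedule above can be computed by hand. A plain-Python sketch of that formula (not the torch API itself):

```python
def step_lr(base_lr, step_size, gamma, epoch):
    # Learning rate in effect after `epoch` epochs under StepLR-style decay:
    # the rate is multiplied by gamma once per completed step_size epochs.
    return base_lr * gamma ** (epoch // step_size)

for epoch in (0, 6, 7, 14, 21):
    print(epoch, step_lr(0.001, step_size=7, gamma=0.1, epoch=epoch))
```

With `step_size=7` and `gamma=0.1`, the rate stays at 0.001 for epochs 0-6, drops to 1e-4 at epoch 7, 1e-5 at epoch 14, and so on.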

### Train and evaluate

It should take around 15-25 minutes on CPU. On GPU, it takes less than a minute.

```py
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=25)

```

Out:

```py
Epoch 0/24
----------
train Loss: 0.6303 Acc: 0.6926
val Loss: 0.1492 Acc: 0.9346

Epoch 1/24
----------
train Loss: 0.5511 Acc: 0.7869
val Loss: 0.2577 Acc: 0.8889

Epoch 2/24
----------
train Loss: 0.4885 Acc: 0.8115
val Loss: 0.3390 Acc: 0.8758

Epoch 3/24
----------
train Loss: 0.5158 Acc: 0.7992
val Loss: 0.5070 Acc: 0.8366

Epoch 4/24
----------
train Loss: 0.5878 Acc: 0.7992
val Loss: 0.2706 Acc: 0.8758

Epoch 5/24
----------
train Loss: 0.4396 Acc: 0.8279
val Loss: 0.2870 Acc: 0.8954

Epoch 6/24
----------
train Loss: 0.4612 Acc: 0.8238
val Loss: 0.2809 Acc: 0.9150

Epoch 7/24
----------
train Loss: 0.4387 Acc: 0.8402
val Loss: 0.1853 Acc: 0.9281

Epoch 8/24
----------
train Loss: 0.2998 Acc: 0.8648
val Loss: 0.1926 Acc: 0.9085

Epoch 9/24
----------
train Loss: 0.3383 Acc: 0.9016
val Loss: 0.1762 Acc: 0.9281

Epoch 10/24
----------
train Loss: 0.2969 Acc: 0.8730
val Loss: 0.1872 Acc: 0.8954

Epoch 11/24
----------
train Loss: 0.3117 Acc: 0.8811
val Loss: 0.1807 Acc: 0.9150

Epoch 12/24
----------
train Loss: 0.3005 Acc: 0.8770
val Loss: 0.1930 Acc: 0.9085

Epoch 13/24
----------
train Loss: 0.3129 Acc: 0.8689
val Loss: 0.2184 Acc: 0.9150

Epoch 14/24
----------
train Loss: 0.3776 Acc: 0.8607
val Loss: 0.1869 Acc: 0.9216

Epoch 15/24
----------
train Loss: 0.2245 Acc: 0.9016
val Loss: 0.1742 Acc: 0.9346

Epoch 16/24
----------
train Loss: 0.3105 Acc: 0.8607
val Loss: 0.2056 Acc: 0.9216

Epoch 17/24
----------
train Loss: 0.2729 Acc: 0.8893
val Loss: 0.1722 Acc: 0.9085

Epoch 18/24
----------
train Loss: 0.3210 Acc: 0.8730
val Loss: 0.1977 Acc: 0.9281

Epoch 19/24
----------
train Loss: 0.3231 Acc: 0.8566
val Loss: 0.1811 Acc: 0.9216

Epoch 20/24
----------
train Loss: 0.3206 Acc: 0.8648
val Loss: 0.2033 Acc: 0.9150

Epoch 21/24
----------
train Loss: 0.2917 Acc: 0.8648
val Loss: 0.1694 Acc: 0.9150

Epoch 22/24
----------
train Loss: 0.2412 Acc: 0.8852
val Loss: 0.1757 Acc: 0.9216

Epoch 23/24
----------
train Loss: 0.2508 Acc: 0.8975
val Loss: 0.1662 Acc: 0.9281

Epoch 24/24
----------
train Loss: 0.3283 Acc: 0.8566
val Loss: 0.1761 Acc: 0.9281

Training complete in 1m 10s
Best val Acc: 0.934641

```

```py
visualize_model(model_ft)

```

![finetuned predictions](img/ebec7787362bc53fe2289e5740f5b326.png)

## ConvNet as fixed feature extractor

Here, we need to freeze all of the network except the final layer. We need to set `requires_grad == False` to freeze the parameters, so that the gradients are not computed in `backward()`.

[You can read more about this in the documentation](https://pytorch.org/docs/notes/autograd.html#excluding-subgraphs-from-backward).

```py
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

```

### Train and evaluate

On CPU this will take about half the time of the previous scenario. This is expected, as gradients don't need to be computed for most of the network. However, the forward pass does need to be computed.

```py
model_conv = train_model(model_conv, criterion, optimizer_conv,
                         exp_lr_scheduler, num_epochs=25)

```

Out:

```py
Epoch 0/24
----------
train Loss: 0.7258 Acc: 0.6148
val Loss: 0.2690 Acc: 0.9020

Epoch 1/24
----------
train Loss: 0.5342 Acc: 0.7500
val Loss: 0.1905 Acc: 0.9412

Epoch 2/24
----------
train Loss: 0.4262 Acc: 0.8320
val Loss: 0.1903 Acc: 0.9412

Epoch 3/24
----------
train Loss: 0.4103 Acc: 0.8197
val Loss: 0.2658 Acc: 0.8954

Epoch 4/24
----------
train Loss: 0.3938 Acc: 0.8115
val Loss: 0.2871 Acc: 0.8954

Epoch 5/24
----------
train Loss: 0.4623 Acc: 0.8361
val Loss: 0.1651 Acc: 0.9346

Epoch 6/24
----------
train Loss: 0.5348 Acc: 0.7869
val Loss: 0.1944 Acc: 0.9477

Epoch 7/24
----------
train Loss: 0.3827 Acc: 0.8402
val Loss: 0.1846 Acc: 0.9412

Epoch 8/24
----------
train Loss: 0.3655 Acc: 0.8443
val Loss: 0.1873 Acc: 0.9412

Epoch 9/24
----------
train Loss: 0.3275 Acc: 0.8525
val Loss: 0.2091 Acc: 0.9412

Epoch 10/24
----------
train Loss: 0.3375 Acc: 0.8320
val Loss: 0.1798 Acc: 0.9412

Epoch 11/24
----------
train Loss: 0.3077 Acc: 0.8648
val Loss: 0.1942 Acc: 0.9346

Epoch 12/24
----------
train Loss: 0.4336 Acc: 0.7787
val Loss: 0.1934 Acc: 0.9346

Epoch 13/24
----------
train Loss: 0.3149 Acc: 0.8566
val Loss: 0.2062 Acc: 0.9281

Epoch 14/24
----------
train Loss: 0.3617 Acc: 0.8320
val Loss: 0.1761 Acc: 0.9412

Epoch 15/24
----------
train Loss: 0.3066 Acc: 0.8361
val Loss: 0.1799 Acc: 0.9281

Epoch 16/24
----------
train Loss: 0.3952 Acc: 0.8443
val Loss: 0.1666 Acc: 0.9346

Epoch 17/24
----------
train Loss: 0.3552 Acc: 0.8443
val Loss: 0.1928 Acc: 0.9412

Epoch 18/24
----------
train Loss: 0.3106 Acc: 0.8648
val Loss: 0.1964 Acc: 0.9346

Epoch 19/24
----------
train Loss: 0.3675 Acc: 0.8566
val Loss: 0.1813 Acc: 0.9346

Epoch 20/24
----------
train Loss: 0.3565 Acc: 0.8320
val Loss: 0.1758 Acc: 0.9346

Epoch 21/24
----------
train Loss: 0.2922 Acc: 0.8566
val Loss: 0.2295 Acc: 0.9216

Epoch 22/24
----------
train Loss: 0.3283 Acc: 0.8402
val Loss: 0.2267 Acc: 0.9281

Epoch 23/24
----------
train Loss: 0.2875 Acc: 0.8770
val Loss: 0.1878 Acc: 0.9346

Epoch 24/24
----------
train Loss: 0.3172 Acc: 0.8689
val Loss: 0.1849 Acc: 0.9412

Training complete in 0m 34s
Best val Acc: 0.947712
```
```py
visualize_model(model_conv)

plt.ioff()
plt.show()
```



## Further Learning

If you would like to learn more about transfer learning, check out our [Quantized Transfer Learning for Computer Vision tutorial](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html).

**Total running time of the script**: (1 minutes 56.157 seconds)

[Download Python source code: `transfer_learning_tutorial.py`](../_downloads/07d5af1ef41e43c07f848afaf5a1c3cc/transfer_learning_tutorial.py)

[Download Jupyter notebook: `transfer_learning_tutorial.ipynb`](../_downloads/62840b1eece760d5e42593187847261f/transfer_learning_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
# Adversarial Example Generation

> Original: <https://pytorch.org/tutorials/beginner/fgsm_tutorial.html>

**Author:** [Nathan Inkawhich](https://github.com/inkawhich)

If you are reading this, hopefully you can appreciate how effective some machine learning models are. Research is constantly pushing ML models to be faster, more accurate, and more efficient. However, an often overlooked aspect of designing and training models is security and robustness, especially in the face of an adversary who wishes to fool the model.

This tutorial will raise your awareness of the security vulnerabilities of ML models and give insight into the hot topic of adversarial machine learning. You may be surprised to find that *adding imperceptible perturbations to an image can cause drastically different model performance*. Given that this is a tutorial, we will explore the topic via an example on an image classifier. Specifically, we will use one of the first and most popular attack methods, the Fast Gradient Sign Attack (FGSM), to fool an MNIST classifier.
## Threat Model

For context, there are many categories of adversarial attacks, each with a different goal and assumption of the attacker's knowledge. In general, however, the overarching goal is to add the least amount of perturbation to the input data to cause the desired misclassification. There are several kinds of assumptions about the attacker's knowledge, two of which are: **white-box** and **black-box**. A *white-box* attack assumes the attacker has full knowledge of and access to the model, including architecture, inputs, outputs, and weights. A *black-box* attack assumes the attacker only has access to the inputs and outputs of the model, and knows nothing about the underlying architecture or weights. There are also several kinds of goals, including **misclassification** and **source/target misclassification**. *Misclassification* means the adversary only wants the output classification to be wrong, and does not care what the new classification is. A *source/target misclassification* means the adversary wants to alter an image that is originally of a specific source class so that it is classified as a specific target class.

In this case, the FGSM attack is a *white-box* attack with the goal of *misclassification*. With this background information, we can now discuss the attack in detail.
## Fast Gradient Sign Attack

One of the first and most popular adversarial attacks to date is referred to as the *Fast Gradient Sign Attack (FGSM)*, described by Goodfellow et al. in [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572). The attack is remarkably powerful, and yet intuitive. It is designed to attack neural networks by leveraging the way they learn: *gradients*. The idea is simple: rather than minimizing the loss by adjusting the weights based on the backpropagated gradients, the attack adjusts the input data based on those same backpropagated gradients so as to maximize the loss. In other words, the attack uses the gradient of the loss with respect to the input data, then adjusts the input data to maximize the loss.

Before we jump into the code, let's look at the famous [FGSM](https://arxiv.org/abs/1412.6572) panda example and extract some notation.


From the figure, `x` is the original input image correctly classified as a "panda", `y` is the ground-truth label for `x`, `θ` represents the model parameters, and `J(θ, x, y)` is the loss used to train the network. The attack backpropagates the gradient back to the input data to calculate `ᐁ[x] J(θ, x, y)`. Then, it adjusts the input data by a small step (`ε` or `0.007` in the picture) in the direction (i.e. `sign(ᐁ[x] J(θ, x, y))`) that will maximize the loss. The resulting perturbed image is then misclassified by the target network as a "gibbon", while it is still clearly a "panda" to a human.

Hopefully the motivation for this tutorial is now clear, so let's jump into the implementation.
```py
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
```
## Implementation

In this section, we will discuss the input parameters for the tutorial, define the model under attack, then code the attack and run some tests.

### Inputs

There are only three inputs for this tutorial, defined as follows:

*   `epsilons` - list of epsilon values to use for the run. It is important to keep 0 in the list because it represents the model performance on the original test set. Also, intuitively we would expect that the larger the epsilon, the more noticeable the perturbations but the more effective the attack in terms of degrading model accuracy. Since the data range here is `[0,1]`, no epsilon value should exceed 1.
*   `pretrained_model` - path to the pretrained MNIST model which was trained with [`pytorch/examples/mnist`](https://github.com/pytorch/examples/tree/master/mnist). For simplicity, [download the pretrained model here](https://drive.google.com/drive/folders/1fn83DF14tWmit0RTKWRhPq5uVXt73e0h?usp=sharing).
*   `use_cuda` - boolean flag to use CUDA if desired and available. Note, a GPU with CUDA is not critical for this tutorial, as a CPU will not take much time.
```py
epsilons = [0, .05, .1, .15, .2, .25, .3]
pretrained_model = "data/lenet_mnist_model.pth"
use_cuda=True
```
### Model Under Attack

As mentioned, the model under attack is the same MNIST model from [`pytorch/examples/mnist`](https://github.com/pytorch/examples/tree/master/mnist). You may train and save your own MNIST model, or you can download and use the provided model. The *Net* definition and the test dataloader here have been copied from the MNIST example. The purpose of this section is to define the model and dataloader, then initialize the model and load the pretrained weights.
```py
# LeNet Model definition
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

# MNIST Test dataset and dataloader declaration
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([
            transforms.ToTensor(),
            ])),
        batch_size=1, shuffle=True)

# Define what device we are using
print("CUDA Available: ",torch.cuda.is_available())
device = torch.device("cuda" if (use_cuda and torch.cuda.is_available()) else "cpu")

# Initialize the network
model = Net().to(device)

# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))

# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
```

Out:
```py
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../data/MNIST/raw/train-images-idx3-ubyte.gz
Extracting ../data/MNIST/raw/train-images-idx3-ubyte.gz to ../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../data/MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ../data/MNIST/raw/train-labels-idx1-ubyte.gz to ../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../data/MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ../data/MNIST/raw/t10k-images-idx3-ubyte.gz to ../data/MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw
Processing...
Done!
CUDA Available: True
```
### FGSM Attack

Now, we can define the function that creates the adversarial examples by perturbing the original inputs. The `fgsm_attack` function takes three inputs: `image` is the original clean image (`x`), `epsilon` is the pixel-wise perturbation amount (`ε`), and `data_grad` is the gradient of the loss with respect to the input image (`ᐁ[x] J(θ, x, y)`). The function then creates the perturbed image as


Finally, in order to maintain the original range of the data, the perturbed image is clipped to the range `[0,1]`.
```py
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
    # Collect the element-wise sign of the data gradient
    sign_data_grad = data_grad.sign()
    # Create the perturbed image by adjusting each pixel of the input image
    perturbed_image = image + epsilon*sign_data_grad
    # Adding clipping to maintain [0,1] range
    perturbed_image = torch.clamp(perturbed_image, 0, 1)
    # Return the perturbed image
    return perturbed_image
```
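The same update can be traced without torch, on plain Python numbers, to see the arithmetic in isolation (a minimal sketch for illustration, not part of the tutorial's code):

```python
# Minimal sketch of the FGSM update on plain Python numbers:
#   perturbed = clip(x + epsilon * sign(grad), 0, 1)

def sign(v):
    # 1, -1, or 0, matching tensor.sign() element-wise
    return (v > 0) - (v < 0)

def fgsm_step(pixels, epsilon, grads):
    """Perturb each pixel by epsilon in the direction of its gradient sign, then clip."""
    return [min(max(p + epsilon * sign(g), 0.0), 1.0)
            for p, g in zip(pixels, grads)]

result = fgsm_step([0.2, 0.5, 0.99], 0.05, [1.3, -0.7, 2.0])
print(result)  # roughly [0.25, 0.45, 1.0]; the last pixel is clipped at 1.0
```

Note that only the *sign* of the gradient matters, not its magnitude, which is what makes the attack a single fixed-size step per pixel.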
### Testing Function

Finally, the central result of this tutorial comes from the `test` function. Each call to this test function performs a full test step on the MNIST test set and reports a final accuracy. Note, however, that this function also takes an `epsilon` input. This is because the `test` function reports the accuracy of a model that is under attack from an adversary with strength `ε`. More specifically, for each sample in the test set, the function computes the gradient of the loss with respect to the input data (`data_grad`), creates a perturbed image with `fgsm_attack` (`perturbed_data`), then checks to see if the perturbed example is adversarial. In addition to testing the accuracy of the model, the function also saves and returns some successful adversarial examples to be visualized later.
```py
def test( model, device, test_loader, epsilon ):

    # Accuracy counter
    correct = 0
    adv_examples = []

    # Loop over all examples in test set
    for data, target in test_loader:

        # Send the data and label to the device
        data, target = data.to(device), target.to(device)

        # Set requires_grad attribute of tensor. Important for Attack
        data.requires_grad = True

        # Forward pass the data through the model
        output = model(data)
        init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability

        # If the initial prediction is wrong, dont bother attacking, just move on
        if init_pred.item() != target.item():
            continue

        # Calculate the loss
        loss = F.nll_loss(output, target)

        # Zero all existing gradients
        model.zero_grad()

        # Calculate gradients of model in backward pass
        loss.backward()

        # Collect datagrad
        data_grad = data.grad.data

        # Call FGSM Attack
        perturbed_data = fgsm_attack(data, epsilon, data_grad)

        # Re-classify the perturbed image
        output = model(perturbed_data)

        # Check for success
        final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
        if final_pred.item() == target.item():
            correct += 1
            # Special case for saving 0 epsilon examples
            if (epsilon == 0) and (len(adv_examples) < 5):
                adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
                adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
        else:
            # Save some adv examples for visualization later
            if len(adv_examples) < 5:
                adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
                adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )

    # Calculate final accuracy for this epsilon
    final_acc = correct/float(len(test_loader))
    print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))

    # Return the accuracy and an adversarial example
    return final_acc, adv_examples
```
### Run Attack

The last part of the implementation is to actually run the attack. Here, we run a full test step for each epsilon value in the `epsilons` input. For each epsilon we also save the final accuracy and some successful adversarial examples to be plotted in the coming sections. Notice how the printed accuracies decrease as the epsilon value increases. Also note that the `ε = 0` case represents the original test accuracy, with no attack.
```py
accuracies = []
examples = []

# Run test for each epsilon
for eps in epsilons:
    acc, ex = test(model, device, test_loader, eps)
    accuracies.append(acc)
    examples.append(ex)
```

Out:
```py
Epsilon: 0      Test Accuracy = 9810 / 10000 = 0.981
Epsilon: 0.05   Test Accuracy = 9426 / 10000 = 0.9426
Epsilon: 0.1    Test Accuracy = 8510 / 10000 = 0.851
Epsilon: 0.15   Test Accuracy = 6826 / 10000 = 0.6826
Epsilon: 0.2    Test Accuracy = 4301 / 10000 = 0.4301
Epsilon: 0.25   Test Accuracy = 2082 / 10000 = 0.2082
Epsilon: 0.3    Test Accuracy = 869 / 10000 = 0.0869
```
## Results

### Accuracy vs Epsilon

The first result is the accuracy versus epsilon plot. As alluded to earlier, as epsilon increases we expect the test accuracy to decrease. This is because larger epsilons mean we take a larger step in the direction that will maximize the loss. Notice the trend in the curve is not linear even though the epsilon values are linearly spaced. For example, the accuracy at `ε = 0.05` is only about 4% lower than at `ε = 0`, but the accuracy at `ε = 0.2` is about 25% lower than at `ε = 0.15`. Also, notice the accuracy of the model hits the random accuracy of a 10-class classifier between `ε = 0.25` and `ε = 0.3`.
```py
plt.figure(figsize=(5,5))
plt.plot(epsilons, accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
```



### Sample Adversarial Examples

Remember the idea of no free lunch? In this case, as epsilon increases the test accuracy decreases, but the perturbations also become more easily perceptible. In reality, there is a tradeoff between accuracy degradation and perceptibility that an attacker must consider. Here, we show some examples of successful adversarial examples at each epsilon value. Each row of the plot shows a different epsilon value. The first row is the `ε = 0` examples, which represent the original "clean" images with no perturbation. The title of each image shows the "original classification -> adversarial classification". Notice, the perturbations start to become evident at `ε = 0.15` and are quite evident at `ε = 0.3`. However, in all cases humans are still capable of identifying the correct class despite the added noise.
```py
# Plot several examples of adversarial samples at each epsilon
cnt = 0
plt.figure(figsize=(8,10))
for i in range(len(epsilons)):
    for j in range(len(examples[i])):
        cnt += 1
        plt.subplot(len(epsilons),len(examples[0]),cnt)
        plt.xticks([], [])
        plt.yticks([], [])
        if j == 0:
            plt.ylabel("Eps: {}".format(epsilons[i]), fontsize=14)
        orig,adv,ex = examples[i][j]
        plt.title("{} -> {}".format(orig, adv))
        plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
```



## Where to Go Next?

Hopefully this tutorial gives some insight into the topic of adversarial machine learning. There are many potential directions to go from here. This attack represents the very beginning of adversarial attack research, and since then there have been many subsequent ideas for how to attack and defend ML models from an adversary. In fact, at NIPS 2017 there was an adversarial attack and defense competition, and many of the methods used in the competition are described in this paper: [Adversarial Attacks and Defences Competition](https://arxiv.org/pdf/1804.00097.pdf). The work on defense also leads into the idea of making machine learning models more *robust* in general, to both naturally perturbed and adversarially crafted inputs.

Another direction to go is adversarial attacks and defenses in different domains. Adversarial research is not limited to the image domain; check out [this attack on speech-to-text models](https://arxiv.org/pdf/1801.01944.pdf). But perhaps the best way to learn more about adversarial machine learning is to get your hands dirty. Try to implement a different attack from the NIPS 2017 competition and see how it differs from FGSM. Then, try to defend the model from your own attacks.

**Total running time of the script**: (4 minutes 22.519 seconds)

[Download Python source code: `fgsm_tutorial.py`](../_downloads/c9aee5c8955d797c051f02c07927b0c0/fgsm_tutorial.py)

[Download Jupyter notebook: `fgsm_tutorial.ipynb`](../_downloads/fba7866856a418520404ba3a11142335/fgsm_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
# DCGAN Tutorial

> Original: <https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html>

**Author**: [Nathan Inkawhich](https://github.com/inkawhich)

## Introduction

This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to produce new celebrities after showing it pictures of many real celebrities. Most of the code here is from the dcgan implementation in [`pytorch/examples`](https://github.com/pytorch/examples), and this document will give a thorough explanation of the implementation and shed light on how and why this model works. But don't worry, no prior knowledge of GANs is required, though it may take a first-timer some time to reason about what is actually happening under the hood. Also, for the sake of time it will help to have a GPU, or two. Let's start from the beginning.
## Generative Adversarial Networks

### What is a GAN?

GANs are a framework for teaching a DL model to capture the training data's distribution, so we can generate new data from that same distribution. GANs were invented by Ian Goodfellow in 2014 and first described in the paper [Generative Adversarial Nets](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf). They are made of two distinct models: a *generator* and a *discriminator*. The job of the generator is to spawn "fake" images that look like the training images. The job of the discriminator is to look at an image and output whether it is a real training image or a fake image from the generator. During training, the generator is constantly trying to outsmart the discriminator by generating better and better fakes, while the discriminator is working to become a better detective and correctly classify the real and fake images. The equilibrium of this game is when the generator is generating fakes that look as if they came directly from the training data, and the discriminator is left to always guess at 50% confidence that the generator output is real or fake.

Now, let's define some notation to be used throughout the tutorial, starting with the discriminator. Let `x` be data representing an image. `D(x)` is the discriminator network which outputs the (scalar) probability that `x` came from the training data rather than the generator. Here, since we are dealing with images, the input to `D(x)` is an image of CHW size `3x64x64`. Intuitively, `D(x)` should be high when `x` comes from the training data and low when `x` comes from the generator. `D(x)` can also be thought of as a traditional binary classifier.

For the generator's notation, let `z` be a latent space vector sampled from a standard normal distribution. `G(z)` represents the generator function which maps the latent vector `z` to data space. The goal of `G` is to estimate the distribution that the training data comes from (`p_data`) so it can generate fake samples from that estimated distribution (`p_g`).

So, `D(G(z))` is the (scalar) probability that the output of the generator `G` is a real image. As described in [Goodfellow's paper](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf), `D` and `G` play a minimax game in which `D` tries to maximize the probability it correctly classifies reals and fakes (`log D(x)`), and `G` tries to minimize the probability that `D` will predict its outputs are fake (`log(1 - D(G(z)))`). From the paper, the GAN loss function is


In theory, the solution to this minimax game is where `p_g = p_data`, and the discriminator guesses randomly whether the inputs are real or fake. However, the convergence theory of GANs is still being actively researched, and in reality models do not always train to this point.
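Written out explicitly, the minimax objective referenced above takes its standard form from the Goodfellow paper:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

The first expectation rewards `D` for scoring real samples high; the second rewards it for scoring generated samples low, while `G` pushes the second term in the opposite direction.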
### What is a DCGAN?

A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper [Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks](https://arxiv.org/pdf/1511.06434.pdf). The discriminator is made up of strided [convolution](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d) layers, [batch norm](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm2d) layers, and [LeakyReLU](https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU) activations. The input is a `3x64x64` image and the output is a scalar probability that the input is from the real data distribution. The generator is composed of [convolutional-transpose](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d) layers, batch norm layers, and [ReLU](https://pytorch.org/docs/stable/nn.html#relu) activations. The input is a latent vector `z` drawn from a standard normal distribution and the output is a `3x64x64` RGB image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections.
```py
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML

# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
```

Out:
```py
Random Seed:  999
```
## Inputs

Let's define some inputs for the run:

*   `dataroot` - the path to the root of the dataset folder. We will talk more about the dataset in the next section
*   `workers` - the number of worker threads for loading the data with the `DataLoader`
*   `batch_size` - the batch size used in training. The DCGAN paper uses a batch size of 128
*   `image_size` - the spatial size of the images used for training. This implementation defaults to `64x64`. If another size is desired, the structures of `D` and `G` must be changed. See [here](https://github.com/pytorch/examples/issues/70) for more details
*   `nc` - number of color channels in the input images. For color images this is 3
*   `nz` - length of the latent vector
*   `ngf` - relates to the depth of feature maps carried through the generator
*   `ndf` - sets the depth of feature maps propagated through the discriminator
*   `num_epochs` - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer
*   `lr` - learning rate for training. As described in the DCGAN paper, this number should be 0.0002
*   `beta1` - beta1 hyperparameter for the Adam optimizers. As described in the paper, this number should be 0.5
*   `ngpu` - number of GPUs available. If this is 0, the code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs
```py
# Root directory for dataset
dataroot = "data/celeba"

# Number of workers for dataloader
workers = 2

# Batch size during training
batch_size = 128

# Spatial size of training images. All images will be resized to this
#  size using a transformer.
image_size = 64

# Number of channels in the training images. For color images this is 3
nc = 3

# Size of z latent vector (i.e. size of generator input)
nz = 100

# Size of feature maps in generator
ngf = 64

# Size of feature maps in discriminator
ndf = 64

# Number of training epochs
num_epochs = 5

# Learning rate for optimizers
lr = 0.0002

# Beta1 hyperparam for Adam optimizers
beta1 = 0.5

# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
```
## Data

In this tutorial we will use the [Celeb-A Faces dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), which can be downloaded at the linked site or in [Google Drive](https://drive.google.com/drive/folders/0B7EVK8r0v71pTUZsaXdaSnZBZzg). The dataset will download as a file named `img_align_celeba.zip`. Once downloaded, create a directory named `celeba` and extract the zip file into that directory. Then, set the `dataroot` input for this notebook to the `celeba` directory you just created. The resulting directory structure should be:
```py
/path/to/celeba
    -> img_align_celeba
        -> 188242.jpg
        -> 173822.jpg
        -> 284702.jpg
        -> 537394.jpg
           ...
```
This is an important step because we will be using the `ImageFolder` dataset class, which requires there to be subdirectories in the dataset's root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data.
```py
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)

# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")

# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
```



## Implementation

With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.

### Weight Initialization

In the DCGAN paper, the authors specify that all model weights shall be randomly initialized from a Normal distribution with mean 0 and `stdev = 0.02`. The `weights_init` function takes an initialized model as input and reinitializes all convolutional, convolutional-transpose, and batch normalization layers to meet this criterion. This function is applied to the models immediately after initialization.
```py
# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
```
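The dispatch in `weights_init` is purely name-based: `Module.apply` passes every submodule in, and the function matches on its class name. A torch-free sketch of that matching logic (the stand-in classes below are illustrative, not real torch modules):

```python
# Name-based dispatch, as in weights_init: match on the submodule's class name.
# Note 'ConvTranspose2d' also contains 'Conv', so the same branch covers the
# generator's transpose layers.
class Conv2d: pass            # stand-ins for torch layers, illustration only
class ConvTranspose2d: pass
class BatchNorm2d: pass
class Linear: pass

def init_rule(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        return 'weight ~ Normal(0.0, 0.02)'
    elif classname.find('BatchNorm') != -1:
        return 'weight ~ Normal(1.0, 0.02), bias = 0'
    return 'left at default init'

print(init_rule(Conv2d()))
print(init_rule(ConvTranspose2d()))
print(init_rule(BatchNorm2d()))
print(init_rule(Linear()))
```

The substring match is why a single branch reinitializes both `Conv2d` and `ConvTranspose2d` layers, while anything else (e.g. a `Linear`) is left alone.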
### Generator

The generator, `G`, is designed to map the latent space vector (`z`) to data space. Since our data are images, converting `z` to data space means ultimately creating an RGB image with the same size as the training images (i.e. `3x64x64`). In practice, this is accomplished through a series of strided two-dimensional convolutional-transpose layers, each paired with a 2d batch norm layer and a relu activation. The output of the generator is fed through a tanh function to return it to the input data range of `[-1,1]`. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below.


Notice how the inputs we set in the input section (`nz`, `ngf`, and `nc`) influence the generator architecture in the code. `nz` is the length of the `z` input vector, `ngf` relates to the size of the feature maps that are propagated through the generator, and `nc` is the number of channels in the output image (set to 3 for RGB images). Below is the code for the generator.
```py
# Generator Code

class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)
```
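The spatial sizes noted in the comments can be checked with the standard transposed-convolution output formula, `out = (in - 1)·stride - 2·padding + kernel` (assuming no output padding and dilation 1). A quick arithmetic sketch:

```python
def convtranspose2d_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a ConvTranspose2d layer (output_padding=0, dilation=1)."""
    return (size - 1) * stride - 2 * padding + kernel

# First layer of the generator: a 1x1 latent "image" -> 4x4, with stride 1, padding 0
s = convtranspose2d_out(1, kernel=4, stride=1, padding=0)
print(s)        # 4

# Then four stride-2 layers double the size each time: 4 -> 8 -> 16 -> 32 -> 64
for _ in range(4):
    s = convtranspose2d_out(s)
print(s)        # 64
```

This is why the `4, 2, 1` (kernel, stride, padding) pattern is repeated: each such layer exactly doubles the spatial size, turning the latent vector into a `64x64` image in four steps.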
Now, we can instantiate the generator and apply the `weights_init` function. Check out the printed model to see how the generator object is structured.
```py
# Create the generator
netG = Generator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))

# Apply the weights_init function to randomly initialize all weights
#  to mean=0, stdev=0.02.
netG.apply(weights_init)

# Print the model
print(netG)
```

Out:
```py
Generator(
  (main): Sequential(
    (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace=True)
    (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (8): ReLU(inplace=True)
    (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU(inplace=True)
    (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (13): Tanh()
  )
)
```
### Discriminator

As mentioned, the discriminator, `D`, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, `D` takes a `3x64x64` input image, processes it through a series of `Conv2d`, `BatchNorm2d`, and `LeakyReLU` layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, `BatchNorm`, and `LeakyReLU`. The DCGAN paper mentions it is good practice to use strided convolution rather than pooling to downsample, because it lets the network learn its own pooling function. The batch norm and leaky relu functions also promote healthy gradient flow, which is critical for the learning process of both `G` and `D`.

Discriminator code:
```py
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
```
Now, as with the generator, we can create the discriminator, apply the `weights_init` function, and print the model's structure.
```py
# Create the Discriminator
netD = Discriminator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))

# Apply the weights_init function to randomly initialize all weights
#  to mean=0, stdev=0.02.
netD.apply(weights_init)

# Print the model
print(netD)
```

Out:
```py
Discriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace=True)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace=True)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace=True)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (12): Sigmoid()
  )
)
```
### Loss Functions and Optimizers

With `D` and `G` set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss ([BCELoss](https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss)) function defined in PyTorch:


Notice how this function provides the calculation of both log components in the objective function (i.e. `log D(x)` and `log(1 - D(G(z)))`). We can specify which part of the BCE equation to use with the `y` input. This is accomplished in the training loop that is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing `y` (i.e. the ground-truth labels).

Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of `D` and `G`, and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for `D` and one for `G`. As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and `Beta1 = 0.5`. For keeping track of the generator's learning progression, we will generate a fixed batch of latent vectors drawn from a Gaussian distribution (i.e. `fixed_noise`). In the training loop, we will periodically input this `fixed_noise` into `G`, and over the iterations we will see images form out of the noise.
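The role of the target `y` as a selector between the two log terms can be seen in a torch-free sketch of binary cross-entropy for a single prediction (a minimal illustration, not the `BCELoss` implementation):

```python
import math

def bce(pred, target):
    """Binary cross-entropy for one prediction in (0, 1) and a target in {0, 1}."""
    # target = 1 keeps only the -log(pred) term;
    # target = 0 keeps only the -log(1 - pred) term.
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

print(bce(0.9, 1.0))   # -log(0.9): small loss, confident and correct
print(bce(0.9, 0.0))   # -log(0.1): large loss, confident and wrong
```

Flipping the label therefore flips which branch of the objective a given batch contributes to, which is exactly how the training loop below reuses one `criterion` for real batches, fake batches, and the generator update.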
```py
# Initialize BCELoss function
criterion = nn.BCELoss()

# Create batch of latent vectors that we will use to visualize
#  the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)

# Establish convention for real and fake labels during training
real_label = 1.
fake_label = 0.

# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
```
### 训练
|
||||
|
||||
最后,既然我们已经定义了 GAN 框架的所有部分,我们就可以对其进行训练。 请注意,训练 GAN 某种程度上是一种艺术形式,因为不正确的超参数设置会导致模式崩溃,而对失败的原因几乎没有解释。 在这里,我们将严格遵循 Goodfellow 论文中的算法 1,同时遵守[`ganhacks`](https://github.com/soumith/ganhacks)中显示的一些最佳做法。 即,我们将“为真实和伪造构建不同的小批量”图像,并调整`G`的目标函数以最大化`log D(G(z))`。 训练分为两个主要部分。 第 1 部分更新了判别器,第 2 部分更新了生成器。
|
||||
|
||||
**第 1 部分-训练判别器**
|
||||
|
||||
回想一下,训练判别器的目的是最大程度地提高将给定输入正确分类为真实或伪造的可能性。 就古德费罗而言,我们希望“通过提高其随机梯度来更新判别器”。 实际上,我们要最大化`log D(x) + log(1 - D(G(z))`。 由于 ganhacks 提出了单独的小批量建议,因此我们将分两步进行计算。 首先,我们将从训练集中构造一批真实样本,向前通过`D`,计算损失(`log D(x)`),然后在向后通过中计算梯度。 其次,我们将使用当前生成器构造一批假样本,将这批伪造通过`D`,计算损失(`log(1 - D(G(z)))`),然后*反向累积*梯度。 现在,利用全批量和全批量的累积梯度,我们称之为判别器优化程序的一个步骤。
|
||||
|
||||
**第 2 部分-训练生成器**
|
||||
|
||||
如原始论文所述,我们希望通过最小化`log(1 - D(G(z)))`来训练生成器,以产生更好的假货。 如前所述,Goodfellow 证明这不能提供足够的梯度,尤其是在学习过程的早期。 作为解决方法,我们希望最大化`log D(G(z))`。 在代码中,我们通过以下步骤来实现此目的:将第 1 部分的生成器输出与判别器进行分类,使用实数标签`GT`计算`G`的损失,反向计算`G`的梯度,最后使用优化器步骤更新`G`的参数。 将真实标签用作损失函数的`GT`标签似乎是违反直觉的,但这使我们可以使用 BCELoss 的`log(x)`部分(而不是`log(1 - x)`部分),这正是我们想要的。
|
||||
Finally, we will do some statistic reporting, and at the end of each epoch we will push our `fixed_noise` batch through the generator to visually track the progress of `G`'s training. The training statistics reported are:

* `Loss_D` - discriminator loss, calculated as the sum of losses for the all-real and all-fake batches (`log D(x) + log(1 - D(G(z)))`).
* `Loss_G` - generator loss, calculated as `log D(G(z))`.
* `D(x)` - the average output (across the batch) of the discriminator for the all-real batch. This should start close to 1, then theoretically converge to 0.5 as `G` gets better. Think about why this is.
* `D(G(z))` - the average discriminator output for the all-fake batch. The first number is before `D` is updated and the second number is after `D` is updated. These numbers should start near 0 and converge to 0.5 as `G` gets better. Think about why this is.
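To see why both quantities head toward 0.5: for a fixed generator, the optimal discriminator is `D*(x) = p_data(x) / (p_data(x) + p_g(x))` (a result from the original GAN paper), so once the generator's distribution matches the data distribution, the best the discriminator can do is output 0.5 everywhere. A tiny numeric sketch (the density values here are made up for illustration):

```python
def optimal_d(p_data, p_g):
    # Best possible discriminator output at a point with these two densities
    return p_data / (p_data + p_g)

# Early in training: fakes have almost no density where real data lives
print(optimal_d(p_data=1.0, p_g=0.01))  # close to 1 on real-looking points
# At convergence: p_g == p_data everywhere
print(optimal_d(p_data=0.3, p_g=0.3))   # 0.5, regardless of the density value
```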
**Note**: This step might take a while, depending on how many epochs you run and whether you removed some data from the dataset.
```py
# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()

        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Add the gradients from the all-real and all-fake batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
```
Out:

```py
Starting Training Loop...
[0/5][0/1583] Loss_D: 1.9847 Loss_G: 5.5914 D(x): 0.6004 D(G(z)): 0.6680 / 0.0062
[0/5][50/1583] Loss_D: 0.7168 Loss_G: 35.7954 D(x): 0.7127 D(G(z)): 0.0000 / 0.0000
[0/5][100/1583] Loss_D: 0.0007 Loss_G: 28.2580 D(x): 0.9994 D(G(z)): 0.0000 / 0.0000
[0/5][150/1583] Loss_D: 0.0001 Loss_G: 42.5731 D(x): 0.9999 D(G(z)): 0.0000 / 0.0000
[0/5][200/1583] Loss_D: 0.0138 Loss_G: 42.3603 D(x): 0.9933 D(G(z)): 0.0000 / 0.0000
[0/5][250/1583] Loss_D: 0.0010 Loss_G: 42.2029 D(x): 0.9991 D(G(z)): 0.0000 / 0.0000
[0/5][300/1583] Loss_D: 0.0000 Loss_G: 41.9521 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][350/1583] Loss_D: 0.0000 Loss_G: 41.7962 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][400/1583] Loss_D: 0.0000 Loss_G: 41.6345 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][450/1583] Loss_D: 0.0000 Loss_G: 41.6058 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][500/1583] Loss_D: 0.0001 Loss_G: 41.6208 D(x): 0.9999 D(G(z)): 0.0000 / 0.0000
[0/5][550/1583] Loss_D: 0.0000 Loss_G: 41.3979 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][600/1583] Loss_D: 0.0000 Loss_G: 41.2545 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][650/1583] Loss_D: 0.0000 Loss_G: 41.0200 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][700/1583] Loss_D: 0.0000 Loss_G: 39.6461 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][750/1583] Loss_D: 0.0000 Loss_G: 38.8834 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][800/1583] Loss_D: 0.0000 Loss_G: 38.5914 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][850/1583] Loss_D: 0.0000 Loss_G: 38.8209 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][900/1583] Loss_D: 0.0000 Loss_G: 38.9713 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][950/1583] Loss_D: 0.0000 Loss_G: 38.4995 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1000/1583] Loss_D: 0.0001 Loss_G: 38.5549 D(x): 0.9999 D(G(z)): 0.0000 / 0.0000
[0/5][1050/1583] Loss_D: 0.0000 Loss_G: 39.1773 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1100/1583] Loss_D: 0.0000 Loss_G: 39.0142 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1150/1583] Loss_D: 0.0000 Loss_G: 38.6368 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1200/1583] Loss_D: 0.0000 Loss_G: 38.7159 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1250/1583] Loss_D: 0.0000 Loss_G: 38.7660 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1300/1583] Loss_D: 0.0000 Loss_G: 38.5522 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1350/1583] Loss_D: 0.0001 Loss_G: 38.6703 D(x): 0.9999 D(G(z)): 0.0000 / 0.0000
[0/5][1400/1583] Loss_D: 0.0000 Loss_G: 38.5487 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1450/1583] Loss_D: 0.0000 Loss_G: 38.0378 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1500/1583] Loss_D: 0.0000 Loss_G: 38.1258 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[0/5][1550/1583] Loss_D: 0.0000 Loss_G: 38.3473 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][0/1583] Loss_D: 0.0000 Loss_G: 37.8825 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][50/1583] Loss_D: 0.0000 Loss_G: 38.2248 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][100/1583] Loss_D: 0.0000 Loss_G: 38.2204 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][150/1583] Loss_D: 0.0000 Loss_G: 38.0967 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][200/1583] Loss_D: 0.0000 Loss_G: 38.0669 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][250/1583] Loss_D: 0.0000 Loss_G: 37.4736 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][300/1583] Loss_D: 0.0000 Loss_G: 37.0766 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][350/1583] Loss_D: 0.0000 Loss_G: 36.6055 D(x): 1.0000 D(G(z)): 0.0000 / 0.0000
[1/5][400/1583] Loss_D: 2.5403 Loss_G: 12.8251 D(x): 0.8672 D(G(z)): 0.8088 / 0.0000
[1/5][450/1583] Loss_D: 1.3779 Loss_G: 2.0631 D(x): 0.5850 D(G(z)): 0.4734 / 0.1820
[1/5][500/1583] Loss_D: 1.0299 Loss_G: 2.4048 D(x): 0.5165 D(G(z)): 0.1698 / 0.1333
[1/5][550/1583] Loss_D: 1.4922 Loss_G: 3.2383 D(x): 0.5854 D(G(z)): 0.4773 / 0.0888
[1/5][600/1583] Loss_D: 0.9283 Loss_G: 1.8533 D(x): 0.6231 D(G(z)): 0.2962 / 0.2153
[1/5][650/1583] Loss_D: 0.8065 Loss_G: 2.9684 D(x): 0.6684 D(G(z)): 0.2624 / 0.0715
[1/5][700/1583] Loss_D: 0.6909 Loss_G: 2.8746 D(x): 0.7910 D(G(z)): 0.3013 / 0.0819
[1/5][750/1583] Loss_D: 1.3242 Loss_G: 2.5236 D(x): 0.7183 D(G(z)): 0.5300 / 0.1090
[1/5][800/1583] Loss_D: 1.0871 Loss_G: 2.0203 D(x): 0.4993 D(G(z)): 0.1716 / 0.1727
[1/5][850/1583] Loss_D: 1.7561 Loss_G: 4.9674 D(x): 0.8542 D(G(z)): 0.7052 / 0.0133
[1/5][900/1583] Loss_D: 0.8294 Loss_G: 2.5024 D(x): 0.6913 D(G(z)): 0.2910 / 0.1178
[1/5][950/1583] Loss_D: 0.9390 Loss_G: 2.2087 D(x): 0.5508 D(G(z)): 0.1638 / 0.1617
[1/5][1000/1583] Loss_D: 1.8202 Loss_G: 1.2178 D(x): 0.2535 D(G(z)): 0.0684 / 0.3527
[1/5][1050/1583] Loss_D: 0.9816 Loss_G: 3.7976 D(x): 0.7310 D(G(z)): 0.3944 / 0.0343
[1/5][1100/1583] Loss_D: 0.9798 Loss_G: 2.0990 D(x): 0.5963 D(G(z)): 0.2328 / 0.1660
[1/5][1150/1583] Loss_D: 0.7173 Loss_G: 2.7879 D(x): 0.6385 D(G(z)): 0.1424 / 0.1057
[1/5][1200/1583] Loss_D: 0.8903 Loss_G: 2.3547 D(x): 0.7371 D(G(z)): 0.3589 / 0.1251
[1/5][1250/1583] Loss_D: 0.6137 Loss_G: 2.1031 D(x): 0.7491 D(G(z)): 0.2062 / 0.1588
[1/5][1300/1583] Loss_D: 1.0179 Loss_G: 5.0280 D(x): 0.7465 D(G(z)): 0.4325 / 0.0129
[1/5][1350/1583] Loss_D: 0.7131 Loss_G: 3.6670 D(x): 0.7931 D(G(z)): 0.3270 / 0.0398
[1/5][1400/1583] Loss_D: 1.0736 Loss_G: 4.2392 D(x): 0.8172 D(G(z)): 0.4861 / 0.0351
[1/5][1450/1583] Loss_D: 0.6050 Loss_G: 2.6052 D(x): 0.7590 D(G(z)): 0.2240 / 0.1019
[1/5][1500/1583] Loss_D: 1.3370 Loss_G: 1.9105 D(x): 0.3786 D(G(z)): 0.0405 / 0.2013
[1/5][1550/1583] Loss_D: 0.6698 Loss_G: 2.3040 D(x): 0.6444 D(G(z)): 0.1071 / 0.1372
[2/5][0/1583] Loss_D: 1.3043 Loss_G: 2.1213 D(x): 0.4073 D(G(z)): 0.0423 / 0.1682
[2/5][50/1583] Loss_D: 1.3636 Loss_G: 3.4322 D(x): 0.7959 D(G(z)): 0.6129 / 0.0510
[2/5][100/1583] Loss_D: 0.8047 Loss_G: 3.4262 D(x): 0.9067 D(G(z)): 0.4371 / 0.0536
[2/5][150/1583] Loss_D: 0.7103 Loss_G: 2.4974 D(x): 0.6212 D(G(z)): 0.0862 / 0.1273
[2/5][200/1583] Loss_D: 0.8335 Loss_G: 2.9292 D(x): 0.7340 D(G(z)): 0.3396 / 0.0772
[2/5][250/1583] Loss_D: 1.4766 Loss_G: 1.4532 D(x): 0.3469 D(G(z)): 0.0140 / 0.3162
[2/5][300/1583] Loss_D: 0.8063 Loss_G: 2.5363 D(x): 0.6939 D(G(z)): 0.2714 / 0.1160
[2/5][350/1583] Loss_D: 2.4655 Loss_G: 1.7710 D(x): 0.1625 D(G(z)): 0.0049 / 0.2345
[2/5][400/1583] Loss_D: 0.9256 Loss_G: 1.4698 D(x): 0.5101 D(G(z)): 0.1192 / 0.2926
[2/5][450/1583] Loss_D: 0.7932 Loss_G: 3.1267 D(x): 0.8831 D(G(z)): 0.4330 / 0.0657
[2/5][500/1583] Loss_D: 1.0515 Loss_G: 1.8415 D(x): 0.4922 D(G(z)): 0.0817 / 0.2372
[2/5][550/1583] Loss_D: 1.1575 Loss_G: 2.3904 D(x): 0.8286 D(G(z)): 0.5113 / 0.1394
[2/5][600/1583] Loss_D: 0.8667 Loss_G: 4.0253 D(x): 0.8805 D(G(z)): 0.4499 / 0.0329
[2/5][650/1583] Loss_D: 0.9943 Loss_G: 3.0625 D(x): 0.8224 D(G(z)): 0.4700 / 0.0678
[2/5][700/1583] Loss_D: 0.7634 Loss_G: 3.7297 D(x): 0.7855 D(G(z)): 0.3507 / 0.0369
[2/5][750/1583] Loss_D: 0.6280 Loss_G: 2.7439 D(x): 0.7664 D(G(z)): 0.2518 / 0.0897
[2/5][800/1583] Loss_D: 0.9011 Loss_G: 1.3725 D(x): 0.5495 D(G(z)): 0.1341 / 0.3033
[2/5][850/1583] Loss_D: 0.4595 Loss_G: 3.0410 D(x): 0.8186 D(G(z)): 0.1808 / 0.0721
[2/5][900/1583] Loss_D: 0.8331 Loss_G: 1.3725 D(x): 0.5696 D(G(z)): 0.1528 / 0.3128
[2/5][950/1583] Loss_D: 1.2701 Loss_G: 4.4360 D(x): 0.9365 D(G(z)): 0.6218 / 0.0226
[2/5][1000/1583] Loss_D: 0.5165 Loss_G: 3.2817 D(x): 0.7543 D(G(z)): 0.1460 / 0.0651
[2/5][1050/1583] Loss_D: 0.5562 Loss_G: 2.5533 D(x): 0.8034 D(G(z)): 0.2385 / 0.1047
[2/5][1100/1583] Loss_D: 0.9842 Loss_G: 3.5247 D(x): 0.7936 D(G(z)): 0.4511 / 0.0446
[2/5][1150/1583] Loss_D: 0.6793 Loss_G: 3.2208 D(x): 0.8038 D(G(z)): 0.3133 / 0.0571
[2/5][1200/1583] Loss_D: 1.8110 Loss_G: 5.4461 D(x): 0.8337 D(G(z)): 0.7185 / 0.0090
[2/5][1250/1583] Loss_D: 0.6310 Loss_G: 2.8066 D(x): 0.7859 D(G(z)): 0.2644 / 0.0822
[2/5][1300/1583] Loss_D: 0.6009 Loss_G: 1.6727 D(x): 0.6759 D(G(z)): 0.1297 / 0.2422
[2/5][1350/1583] Loss_D: 0.5156 Loss_G: 3.5893 D(x): 0.8552 D(G(z)): 0.2686 / 0.0385
[2/5][1400/1583] Loss_D: 0.7672 Loss_G: 1.0321 D(x): 0.5755 D(G(z)): 0.0938 / 0.4195
[2/5][1450/1583] Loss_D: 0.6583 Loss_G: 2.0611 D(x): 0.6727 D(G(z)): 0.1675 / 0.1591
[2/5][1500/1583] Loss_D: 1.2956 Loss_G: 3.7047 D(x): 0.9324 D(G(z)): 0.6345 / 0.0479
[2/5][1550/1583] Loss_D: 0.8555 Loss_G: 3.0119 D(x): 0.8243 D(G(z)): 0.4237 / 0.0696
[3/5][0/1583] Loss_D: 0.7295 Loss_G: 2.0605 D(x): 0.7051 D(G(z)): 0.2466 / 0.1671
[3/5][50/1583] Loss_D: 0.6551 Loss_G: 3.0267 D(x): 0.8502 D(G(z)): 0.3419 / 0.0676
[3/5][100/1583] Loss_D: 0.9209 Loss_G: 1.3069 D(x): 0.5238 D(G(z)): 0.1032 / 0.3367
[3/5][150/1583] Loss_D: 0.6289 Loss_G: 1.8684 D(x): 0.6835 D(G(z)): 0.1555 / 0.1994
[3/5][200/1583] Loss_D: 1.0600 Loss_G: 1.3343 D(x): 0.4512 D(G(z)): 0.0575 / 0.3259
[3/5][250/1583] Loss_D: 0.7251 Loss_G: 1.7242 D(x): 0.6128 D(G(z)): 0.1340 / 0.2269
[3/5][300/1583] Loss_D: 0.7097 Loss_G: 1.7072 D(x): 0.7143 D(G(z)): 0.2623 / 0.2238
[3/5][350/1583] Loss_D: 0.8045 Loss_G: 2.7455 D(x): 0.7958 D(G(z)): 0.3825 / 0.0901
[3/5][400/1583] Loss_D: 0.8351 Loss_G: 1.6116 D(x): 0.5394 D(G(z)): 0.1106 / 0.2425
[3/5][450/1583] Loss_D: 1.4829 Loss_G: 0.5346 D(x): 0.3523 D(G(z)): 0.0987 / 0.6289
[3/5][500/1583] Loss_D: 0.6972 Loss_G: 2.1915 D(x): 0.7656 D(G(z)): 0.2987 / 0.1450
[3/5][550/1583] Loss_D: 0.7369 Loss_G: 1.7250 D(x): 0.6402 D(G(z)): 0.1899 / 0.2224
[3/5][600/1583] Loss_D: 0.8170 Loss_G: 2.6806 D(x): 0.7843 D(G(z)): 0.3880 / 0.0929
[3/5][650/1583] Loss_D: 1.1531 Loss_G: 0.9077 D(x): 0.4340 D(G(z)): 0.1224 / 0.4550
[3/5][700/1583] Loss_D: 0.8751 Loss_G: 1.0230 D(x): 0.5587 D(G(z)): 0.1808 / 0.4021
[3/5][750/1583] Loss_D: 0.7169 Loss_G: 2.1268 D(x): 0.6690 D(G(z)): 0.2219 / 0.1588
[3/5][800/1583] Loss_D: 0.9772 Loss_G: 3.1279 D(x): 0.8451 D(G(z)): 0.5081 / 0.0632
[3/5][850/1583] Loss_D: 0.6574 Loss_G: 1.9605 D(x): 0.7010 D(G(z)): 0.2120 / 0.1775
[3/5][900/1583] Loss_D: 0.6153 Loss_G: 2.8981 D(x): 0.8399 D(G(z)): 0.3197 / 0.0697
[3/5][950/1583] Loss_D: 0.9155 Loss_G: 1.1091 D(x): 0.5482 D(G(z)): 0.1730 / 0.3799
[3/5][1000/1583] Loss_D: 0.9873 Loss_G: 3.9150 D(x): 0.8838 D(G(z)): 0.5423 / 0.0284
[3/5][1050/1583] Loss_D: 0.8369 Loss_G: 2.1366 D(x): 0.8039 D(G(z)): 0.4067 / 0.1533
[3/5][1100/1583] Loss_D: 0.9522 Loss_G: 3.4744 D(x): 0.8732 D(G(z)): 0.5049 / 0.0412
[3/5][1150/1583] Loss_D: 0.6371 Loss_G: 2.1278 D(x): 0.7648 D(G(z)): 0.2672 / 0.1424
[3/5][1200/1583] Loss_D: 1.0349 Loss_G: 2.7710 D(x): 0.7604 D(G(z)): 0.4512 / 0.0920
[3/5][1250/1583] Loss_D: 0.9350 Loss_G: 2.7946 D(x): 0.8007 D(G(z)): 0.4649 / 0.0805
[3/5][1300/1583] Loss_D: 0.7655 Loss_G: 2.7838 D(x): 0.7965 D(G(z)): 0.3724 / 0.0803
[3/5][1350/1583] Loss_D: 0.7623 Loss_G: 2.2647 D(x): 0.7979 D(G(z)): 0.3641 / 0.1414
[3/5][1400/1583] Loss_D: 0.9361 Loss_G: 3.1341 D(x): 0.8601 D(G(z)): 0.4938 / 0.0628
[3/5][1450/1583] Loss_D: 0.7966 Loss_G: 3.1544 D(x): 0.8568 D(G(z)): 0.4211 / 0.0623
[3/5][1500/1583] Loss_D: 1.0768 Loss_G: 3.8304 D(x): 0.8364 D(G(z)): 0.5348 / 0.0353
[3/5][1550/1583] Loss_D: 0.8528 Loss_G: 3.3978 D(x): 0.8824 D(G(z)): 0.4788 / 0.0491
[4/5][0/1583] Loss_D: 0.8361 Loss_G: 1.9086 D(x): 0.6756 D(G(z)): 0.2975 / 0.1872
[4/5][50/1583] Loss_D: 0.7666 Loss_G: 2.3647 D(x): 0.7698 D(G(z)): 0.3487 / 0.1232
[4/5][100/1583] Loss_D: 0.7536 Loss_G: 1.6556 D(x): 0.6398 D(G(z)): 0.2084 / 0.2423
[4/5][150/1583] Loss_D: 0.8390 Loss_G: 1.7737 D(x): 0.6400 D(G(z)): 0.2714 / 0.2181
[4/5][200/1583] Loss_D: 0.8608 Loss_G: 2.5683 D(x): 0.7898 D(G(z)): 0.4126 / 0.1009
[4/5][250/1583] Loss_D: 0.8651 Loss_G: 1.8416 D(x): 0.6033 D(G(z)): 0.2312 / 0.1954
[4/5][300/1583] Loss_D: 0.8790 Loss_G: 1.2224 D(x): 0.5099 D(G(z)): 0.0960 / 0.3501
[4/5][350/1583] Loss_D: 2.0809 Loss_G: 0.5006 D(x): 0.1907 D(G(z)): 0.0415 / 0.6501
[4/5][400/1583] Loss_D: 1.0178 Loss_G: 2.6912 D(x): 0.7134 D(G(z)): 0.4299 / 0.0977
[4/5][450/1583] Loss_D: 0.7773 Loss_G: 1.5577 D(x): 0.6859 D(G(z)): 0.2705 / 0.2527
[4/5][500/1583] Loss_D: 1.0217 Loss_G: 2.8968 D(x): 0.8227 D(G(z)): 0.5103 / 0.0755
[4/5][550/1583] Loss_D: 0.6428 Loss_G: 2.8346 D(x): 0.8293 D(G(z)): 0.3290 / 0.0793
[4/5][600/1583] Loss_D: 1.7683 Loss_G: 4.1924 D(x): 0.9236 D(G(z)): 0.7656 / 0.0211
[4/5][650/1583] Loss_D: 0.8692 Loss_G: 2.2491 D(x): 0.7046 D(G(z)): 0.3386 / 0.1336
[4/5][700/1583] Loss_D: 0.8933 Loss_G: 1.5814 D(x): 0.6256 D(G(z)): 0.2963 / 0.2476
[4/5][750/1583] Loss_D: 1.2154 Loss_G: 2.6798 D(x): 0.8082 D(G(z)): 0.5792 / 0.0862
[4/5][800/1583] Loss_D: 0.7252 Loss_G: 1.6059 D(x): 0.6257 D(G(z)): 0.1717 / 0.2486
[4/5][850/1583] Loss_D: 0.6888 Loss_G: 2.4141 D(x): 0.7470 D(G(z)): 0.2786 / 0.1207
[4/5][900/1583] Loss_D: 1.0490 Loss_G: 1.1737 D(x): 0.4731 D(G(z)): 0.1746 / 0.3528
[4/5][950/1583] Loss_D: 1.1517 Loss_G: 0.5954 D(x): 0.4083 D(G(z)): 0.0727 / 0.5876
[4/5][1000/1583] Loss_D: 0.7451 Loss_G: 2.1440 D(x): 0.7385 D(G(z)): 0.3118 / 0.1455
[4/5][1050/1583] Loss_D: 1.2439 Loss_G: 0.8178 D(x): 0.3806 D(G(z)): 0.0852 / 0.4825
[4/5][1100/1583] Loss_D: 0.8468 Loss_G: 3.3432 D(x): 0.8220 D(G(z)): 0.4289 / 0.0484
[4/5][1150/1583] Loss_D: 0.9824 Loss_G: 0.8542 D(x): 0.4712 D(G(z)): 0.1120 / 0.4808
[4/5][1200/1583] Loss_D: 1.1658 Loss_G: 3.3930 D(x): 0.8771 D(G(z)): 0.5939 / 0.0450
[4/5][1250/1583] Loss_D: 0.8152 Loss_G: 1.3158 D(x): 0.5988 D(G(z)): 0.1721 / 0.3111
[4/5][1300/1583] Loss_D: 0.7013 Loss_G: 2.0752 D(x): 0.6751 D(G(z)): 0.2173 / 0.1596
[4/5][1350/1583] Loss_D: 0.8809 Loss_G: 3.0340 D(x): 0.8292 D(G(z)): 0.4574 / 0.0636
[4/5][1400/1583] Loss_D: 0.7911 Loss_G: 2.7713 D(x): 0.7982 D(G(z)): 0.3830 / 0.0829
[4/5][1450/1583] Loss_D: 1.0299 Loss_G: 2.8774 D(x): 0.7987 D(G(z)): 0.4941 / 0.0761
[4/5][1500/1583] Loss_D: 0.8572 Loss_G: 2.5340 D(x): 0.7273 D(G(z)): 0.3717 / 0.1009
[4/5][1550/1583] Loss_D: 0.8135 Loss_G: 1.6428 D(x): 0.5799 D(G(z)): 0.1693 / 0.2267
```
## Results

Finally, let's check out how we did. Here, we will look at three different results. First, we will see how `D` and `G`'s losses changed during training. Second, we will visualize `G`'s output on the `fixed_noise` batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from `G`.

**Loss versus training iteration**

Below is a plot of `D` and `G`'s losses versus training iterations.
```py
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
```
**Visualization of G's progression**

Remember how we saved the generator's output on the `fixed_noise` batch after every epoch of training. Now, we can visualize the training progression of `G` with an animation. Press the play button to start the animation.
```py
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)

HTML(ani.to_jshtml())
```
**Real Images vs. Fake Images**

Finally, let's take a look at some real images and fake images side by side.
```py
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))

# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))

# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
```
## Where to Go Next

We have reached the end of our journey, but there are several places you could go from here. You could:

* Train for longer to see how good the results get
* Modify this model to take a different dataset, and possibly change the size of the images and the model architecture
* Check out some other cool GAN projects
* [Create GANs that generate music](https://deepmind.com/blog/wavenet-generative-model-raw-audio/)

**Total running time of the script**: (29 minutes 17.480 seconds)
[Download Python source code: `dcgan_faces_tutorial.py`](../_downloads/dc0e6f475c6735eb8d233374f8f462eb/dcgan_faces_tutorial.py)

[Download Jupyter notebook: `dcgan_faces_tutorial.ipynb`](../_downloads/e9c8374ecc202120dc94db26bf08a00f/dcgan_faces_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)
# Audio
# Audio I/O and Pre-Processing with `torchaudio`

> Source: <https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html>

PyTorch is an open source deep learning platform that provides a seamless path from research prototyping to production deployment with GPU support.

Significant effort in solving machine learning problems goes into data preparation. `torchaudio` leverages PyTorch's GPU support, and provides many tools to make data loading easy and more readable. In this tutorial, we will see how to load and preprocess data from a simple dataset.

For this tutorial, please make sure the `matplotlib` package is installed for easier visualization.
```py
# Uncomment the following line to run in Google Colab
# !pip install torchaudio

import torch
import torchaudio
import requests
import matplotlib.pyplot as plt
```
## Opening a file

`torchaudio` also supports loading sound files in the wav and mp3 formats. We call the resulting raw audio signal the waveform.
```py
url = "https://pytorch.org/tutorials/_static/img/steam-train-whistle-daniel_simon-converted-from-mp3.wav"
r = requests.get(url)

with open('steam-train-whistle-daniel_simon-converted-from-mp3.wav', 'wb') as f:
    f.write(r.content)

filename = "steam-train-whistle-daniel_simon-converted-from-mp3.wav"
waveform, sample_rate = torchaudio.load(filename)

print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))

plt.figure()
plt.plot(waveform.t().numpy())
```


Out:

```py
Shape of waveform: torch.Size([2, 276858])
Sample rate of waveform: 44100
```
When loading a file in `torchaudio`, you can optionally specify the backend to use, either [SoX](https://pypi.org/project/sox/) or [SoundFile](https://pypi.org/project/SoundFile/), via `torchaudio.set_audio_backend`. These backends are loaded lazily when needed.

`torchaudio` also makes JIT compilation optional for functions, and uses `nn.Module` where possible.
## Transformations

`torchaudio` supports a growing list of [transformations](https://pytorch.org/audio/stable/transforms.html):

* `Resample`: Resample a waveform to a different sample rate.
* `Spectrogram`: Create a spectrogram from a waveform.
* `GriffinLim`: Compute a waveform from a linear-scale magnitude spectrogram using the Griffin-Lim transformation.
* `ComputeDeltas`: Compute delta coefficients of a tensor, usually a spectrogram.
* `ComplexNorm`: Compute the norm of a complex tensor.
* `MelScale`: Turn a normal STFT into a Mel-frequency STFT, using a conversion matrix.
* `AmplitudeToDB`: Turn a spectrogram from the power/amplitude scale to the decibel scale.
* `MFCC`: Create the Mel-frequency cepstrum coefficients from a waveform.
* `MelSpectrogram`: Create Mel spectrograms from a waveform using the STFT function in PyTorch.
* `MuLawEncoding`: Encode a waveform based on mu-law companding.
* `MuLawDecoding`: Decode a mu-law encoded waveform.
* `TimeStretch`: Stretch a spectrogram in time without modifying the pitch for a given rate.
* `FrequencyMasking`: Apply masking to a spectrogram in the frequency domain.
* `TimeMasking`: Apply masking to a spectrogram in the time domain.

Each transform supports batching: you can perform a transform on a single raw audio signal or spectrogram, or on many signals of the same shape.

Since all transforms are `nn.Modules` or `jit.ScriptModules`, they can be used as part of a neural network at any point.

To start, we can look at the log of the spectrogram on a log scale.
```py
specgram = torchaudio.transforms.Spectrogram()(waveform)

print("Shape of spectrogram: {}".format(specgram.size()))

plt.figure()
plt.imshow(specgram.log2()[0,:,:].numpy(), cmap='gray')
```


Out:

```py
Shape of spectrogram: torch.Size([2, 201, 1385])
```
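The shape can be read off from `Spectrogram`'s defaults (assuming the current defaults of `n_fft=400` and a hop of `n_fft // 2 = 200`): the frequency axis has `n_fft // 2 + 1` bins, and a centered STFT produces `num_samples // hop_length + 1` frames. A quick arithmetic check:

```python
n_fft = 400          # assumed default window size of torchaudio.transforms.Spectrogram
hop_length = 200     # assumed default hop of n_fft // 2
num_samples = 276858 # length of the stereo waveform loaded above

freq_bins = n_fft // 2 + 1                 # one-sided FFT bins
frames = num_samples // hop_length + 1     # STFT frames with centered padding
print(freq_bins, frames)  # matches the [2, 201, 1385] shape above
```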
Or we can look at the Mel spectrogram on a log scale.

```py
specgram = torchaudio.transforms.MelSpectrogram()(waveform)

print("Shape of spectrogram: {}".format(specgram.size()))

plt.figure()
p = plt.imshow(specgram.log2()[0,:,:].detach().numpy(), cmap='gray')
```



Out:

```py
Shape of spectrogram: torch.Size([2, 128, 1385])
```
We can resample the waveform, one channel at a time.

```py
new_sample_rate = sample_rate/10

# Since Resample applies to a single channel, we resample first channel here
channel = 0
transformed = torchaudio.transforms.Resample(sample_rate, new_sample_rate)(waveform[channel,:].view(1,-1))

print("Shape of transformed waveform: {}".format(transformed.size()))

plt.figure()
plt.plot(transformed[0,:].numpy())
```


Out:

```py
Shape of transformed waveform: torch.Size([1, 27686])
```
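The resulting length is what you would expect from the 10× rate reduction: keeping roughly every tenth of the 276858 samples leaves 27686. As a back-of-the-envelope check:

```python
num_samples = 276858
factor = 10  # 44100 Hz -> 4410 Hz

# A resampler at a 1/10 rate produces ceil(num_samples / factor) output samples here
resampled_len = -(-num_samples // factor)  # ceiling division
print(resampled_len)
```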
As another example of a transformation, we can encode the signal based on Mu-Law companding. But to do so, we need the signal to be between -1 and 1. Since the tensor is just a regular PyTorch tensor, we can apply standard operators to it.
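Mu-law companding itself is a simple formula: `F(x) = sign(x) · ln(1 + μ|x|) / ln(1 + μ)` with `μ = 255` for 256 quantization channels, after which the result is mapped to integer codes in `[0, 255]`. A minimal sketch of the encoder for a single float (a simplified stand-in for `torchaudio.transforms.MuLawEncoding`, not the library code itself):

```python
import math

def mu_law_encode(x, quantization_channels=256):
    # x must lie in [-1, 1]; returns an integer code in [0, quantization_channels - 1]
    mu = quantization_channels - 1
    compressed = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    # Map [-1, 1] -> [0, mu] and quantize
    return int((compressed + 1) / 2 * mu + 0.5)

print(mu_law_encode(0.0))   # mid-scale code
print(mu_law_encode(1.0))   # top of the range
print(mu_law_encode(-1.0))  # bottom of the range
```

The logarithmic compression spends most of the 256 codes on small amplitudes, which is why the requirement that the signal sit in [-1, 1] matters.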
```py
# Let's check if the tensor is in the interval [-1,1]
print("Min of waveform: {}\nMax of waveform: {}\nMean of waveform: {}".format(waveform.min(), waveform.max(), waveform.mean()))
```

Out:

```py
Min of waveform: -0.572845458984375
Max of waveform: 0.575958251953125
Mean of waveform: 9.293758921558037e-05
```
Since the waveform is already between -1 and 1, we do not need to normalize it.

```py
def normalize(tensor):
    # Subtract the mean, and scale to the interval [-1,1]
    tensor_minusmean = tensor - tensor.mean()
    return tensor_minusmean/tensor_minusmean.abs().max()

# Let's normalize to the full interval [-1,1]
# waveform = normalize(waveform)
```
Let's encode the waveform.

```py
transformed = torchaudio.transforms.MuLawEncoding()(waveform)

print("Shape of transformed waveform: {}".format(transformed.size()))

plt.figure()
plt.plot(transformed[0,:].numpy())
```



Out:

```py
Shape of transformed waveform: torch.Size([2, 276858])
```
And now decode.

```py
reconstructed = torchaudio.transforms.MuLawDecoding()(transformed)

print("Shape of recovered waveform: {}".format(reconstructed.size()))

plt.figure()
plt.plot(reconstructed[0,:].numpy())
```



Out:

```py
Shape of recovered waveform: torch.Size([2, 276858])
```
We can finally compare the original waveform with its reconstructed version.

```py
# Compute median relative difference
err = ((waveform-reconstructed).abs() / waveform.abs()).median()

print("Median relative difference between original and MuLaw reconstucted signals: {:.2%}".format(err))
```

Out:

```py
Median relative difference between original and MuLaw reconstucted signals: 1.28%
```
## Functional

The transformations seen above rely on lower-level stateless functions for their computations. These functions are available under `torchaudio.functional`. The complete list is available [here](https://pytorch.org/audio/functional.html) and includes:

* `istft`: Inverse short-time Fourier transform.
* `gain`: Applies amplification or attenuation to the whole waveform.
* `dither`: Increases the perceived dynamic range of audio stored at a particular bit depth.
* `compute_deltas`: Compute delta coefficients of a tensor.
* `equalizer_biquad`: Design a biquad peaking equalizer filter and perform filtering.
* `lowpass_biquad`: Design a biquad lowpass filter and perform filtering.
* `highpass_biquad`: Design a biquad highpass filter and perform filtering.

For example, let's try the `mu_law_encoding` functional:
```py
mu_law_encoding_waveform = torchaudio.functional.mu_law_encoding(waveform, quantization_channels=256)

print("Shape of transformed waveform: {}".format(mu_law_encoding_waveform.size()))

plt.figure()
plt.plot(mu_law_encoding_waveform[0,:].numpy())
```



Out:

```py
Shape of transformed waveform: torch.Size([2, 276858])
```
You can see that the output from `torchaudio.functional.mu_law_encoding` is the same as the output from `torchaudio.transforms.MuLawEncoding`.

Now let's experiment with a few of the other functionals and visualize their output. Taking our spectrogram, we can compute its deltas:
```py
computed = torchaudio.functional.compute_deltas(specgram.contiguous(), win_length=3)
print("Shape of computed deltas: {}".format(computed.shape))

plt.figure()
plt.imshow(computed.log2()[0,:,:].detach().numpy(), cmap='gray')
```



Out:

```py
Shape of computed deltas: torch.Size([2, 128, 1385])
```
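Delta coefficients follow the standard formula `d_t = Σ_{n=1..N} n·(c_{t+n} − c_{t−n}) / (2·Σ n²)`, with the sequence padded by replicating its edge values; `win_length=3` corresponds to `N = 1`. A pure-Python sketch over a single coefficient track (an illustrative re-implementation, not the `torchaudio` code itself):

```python
def deltas(c, N=1):
    # Delta coefficients with edge replication, matching the textbook formula
    denom = 2 * sum(n * n for n in range(1, N + 1))
    padded = [c[0]] * N + list(c) + [c[-1]] * N
    return [
        sum(n * (padded[t + N + n] - padded[t + N - n]) for n in range(1, N + 1)) / denom
        for t in range(len(c))
    ]

# A track with constant slope yields constant deltas away from the edges
print(deltas([1.0, 2.0, 3.0, 4.0]))
```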
We can take the original waveform and apply different effects to it.

```py
gain_waveform = torchaudio.functional.gain(waveform, gain_db=5.0)
print("Min of gain_waveform: {}\nMax of gain_waveform: {}\nMean of gain_waveform: {}".format(gain_waveform.min(), gain_waveform.max(), gain_waveform.mean()))

dither_waveform = torchaudio.functional.dither(waveform)
print("Min of dither_waveform: {}\nMax of dither_waveform: {}\nMean of dither_waveform: {}".format(dither_waveform.min(), dither_waveform.max(), dither_waveform.mean()))
```

Out:

```py
Min of gain_waveform: -1.0186792612075806
Max of gain_waveform: 1.024214744567871
Mean of gain_waveform: 0.00016526899707969278
Min of dither_waveform: -0.572784423828125
Max of dither_waveform: 0.575927734375
Mean of dither_waveform: 0.00010744280007202178
```
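The `gain_db=5.0` figures can be checked by hand: a gain of `g` dB scales amplitudes by `10^(g/20)`, so 5 dB is a factor of about 1.778, which is exactly the ratio between the new and old waveform extremes printed above. A quick check using the `Min of waveform` value printed earlier:

```python
amp_factor = 10 ** (5.0 / 20.0)  # amplitude scaling for +5 dB, ~1.778
print(amp_factor)

old_min = -0.572845458984375     # "Min of waveform" printed earlier
print(old_min * amp_factor)      # ~ the "Min of gain_waveform" value above
```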
Another example of the capabilities in `torchaudio.functional` is applying filters to our waveform. Applying the lowpass biquad filter to our waveform outputs a new waveform with the high-frequency content attenuated.
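A biquad is a second-order recursive (IIR) filter; the idea can be seen with an even simpler first-order lowpass, `y[n] = α·x[n] + (1−α)·y[n−1]`, which smooths fast oscillations while passing slowly varying signals through. A deliberately simplified stand-in for `lowpass_biquad` (for intuition only, not its actual transfer function):

```python
def one_pole_lowpass(x, alpha=0.1):
    # y[n] = alpha * x[n] + (1 - alpha) * y[n-1]; smaller alpha -> stronger smoothing
    y, prev = [], 0.0
    for sample in x:
        prev = alpha * sample + (1 - alpha) * prev
        y.append(prev)
    return y

# A rapidly alternating (high-frequency) signal is strongly attenuated...
print(max(abs(v) for v in one_pole_lowpass([1.0, -1.0] * 50)))
# ...while a constant (zero-frequency) signal converges to its input level
print(one_pole_lowpass([1.0] * 100)[-1])
```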
```py
lowpass_waveform = torchaudio.functional.lowpass_biquad(waveform, sample_rate, cutoff_freq=3000)

print("Min of lowpass_waveform: {}\nMax of lowpass_waveform: {}\nMean of lowpass_waveform: {}".format(lowpass_waveform.min(), lowpass_waveform.max(), lowpass_waveform.mean()))

plt.figure()
plt.plot(lowpass_waveform.t().numpy())
```



Out:

```py
Min of lowpass_waveform: -0.5595059990882874
Max of lowpass_waveform: 0.5595012307167053
Mean of lowpass_waveform: 9.293757466366515e-05
```
We can also visualize the waveform with a highpass biquad filter applied.

```py
highpass_waveform = torchaudio.functional.highpass_biquad(waveform, sample_rate, cutoff_freq=2000)

print("Min of highpass_waveform: {}\nMax of highpass_waveform: {}\nMean of highpass_waveform: {}".format(highpass_waveform.min(), highpass_waveform.max(), highpass_waveform.mean()))

plt.figure()
plt.plot(highpass_waveform.t().numpy())
```



Out:

```py
Min of highpass_waveform: -0.11269102990627289
Max of highpass_waveform: 0.10451897978782654
Mean of highpass_waveform: 1.8138147234170177e-11
```
## Migrating to `torchaudio` from Kaldi

Users may be familiar with [Kaldi](http://github.com/kaldi-asr/kaldi), a toolkit for speech recognition. `torchaudio` offers compatibility with it in `torchaudio.kaldi_io`. It can indeed read from kaldi scp, or ark files or streams with:

* `read_vec_int_ark`
* `read_vec_flt_scp`
* `read_vec_flt_ark` (file/streams)
* `read_mat_scp`
* `read_mat_ark`

`torchaudio` provides Kaldi-compatible transforms for `spectrogram`, `fbank`, `mfcc`, and `resample_waveform`, with the benefit of GPU support; see [here](compliance.kaldi.html) for more information.
```py
n_fft = 400.0
frame_length = n_fft / sample_rate * 1000.0
frame_shift = frame_length / 2.0

params = {
    "channel": 0,
    "dither": 0.0,
    "window_type": "hanning",
    "frame_length": frame_length,
    "frame_shift": frame_shift,
    "remove_dc_offset": False,
    "round_to_power_of_two": False,
    "sample_frequency": sample_rate,
}

specgram = torchaudio.compliance.kaldi.spectrogram(waveform, **params)

print("Shape of spectrogram: {}".format(specgram.size()))

plt.figure()
plt.imshow(specgram.t().numpy(), cmap='gray')
```



Out:

```py
Shape of spectrogram: torch.Size([1383, 201])
```
|
||||
我们还支持根据波形计算过滤器组特征,以匹配 Kaldi 的实现。

```py
fbank = torchaudio.compliance.kaldi.fbank(waveform, **params)

print("Shape of fbank: {}".format(fbank.size()))

plt.figure()
plt.imshow(fbank.t().numpy(), cmap='gray')
```

![](img/b2eed91d77e564bbf0c5cecb15c68cbb.png)

Out:

```py
Shape of fbank: torch.Size([1383, 23])

```

You can create mel-frequency cepstral coefficients from a raw audio signal. This matches the input/output of Kaldi's `compute-mfcc-feats`.

```py
mfcc = torchaudio.compliance.kaldi.mfcc(waveform, **params)

print("Shape of mfcc: {}".format(mfcc.size()))

plt.figure()
plt.imshow(mfcc.t().numpy(), cmap='gray')
```

![](img/297baf121052c1f0bda19b606b83b8a4.png)

Out:

```py
Shape of mfcc: torch.Size([1383, 13])

```

## Available Datasets

If you do not want to create your own dataset to train your model, `torchaudio` offers a unified dataset interface. This interface supports lazy-loading of files to memory, download and extract functions, and datasets to build models.

The datasets `torchaudio` currently supports are:

* **VCTK**: Speech data uttered by 109 native speakers of English with various accents ([read more here](https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html)).
* **Yesno**: Sixty recordings of one individual saying yes or no in Hebrew; each recording is eight words long ([read more here](https://www.openslr.org/1/)).
* **Common Voice**: An open-source, multi-language dataset of voices that anyone can use to train speech-enabled applications ([read more here](https://voice.mozilla.org/en/datasets)).
* **LibriSpeech**: A large-scale (1000 hours) corpus of read English speech ([read more here](http://www.openslr.org/12)).

```py
yesno_data = torchaudio.datasets.YESNO('./', download=True)

# A data point in Yesno is a tuple (waveform, sample_rate, labels) where labels
# is a list of integers with 1 for yes and 0 for no.

# Pick data point number 3 to see an example of the yesno_data:
n = 3
waveform, sample_rate, labels = yesno_data[n]

print("Waveform: {}\nSample rate: {}\nLabels: {}".format(waveform, sample_rate, labels))

plt.figure()
plt.plot(waveform.t().numpy())
```

![](img/81d8cbe41e8c19d1a8e10f0ceaa0021e.png)

Out:

```py
Waveform: tensor([[ 3.0518e-05,  6.1035e-05,  3.0518e-05,  ..., -1.8311e-04,
          4.2725e-04,  6.7139e-04]])
Sample rate: 8000
Labels: [0, 0, 1, 0, 0, 0, 1, 0]

```

Now, whenever you ask for a sound file from the dataset, it is loaded into memory only at that moment. The dataset only loads, and keeps in memory, the items that you want and use, saving on memory.

## Conclusion

We used an example raw audio signal, or waveform, to illustrate how to open an audio file with `torchaudio`, and how to preprocess, transform, and apply functions to such a waveform. We also demonstrated how to use familiar Kaldi functions, as well as how to utilize built-in datasets to construct our models. Given that `torchaudio` is built on PyTorch, these techniques can be used as building blocks for more advanced audio applications, such as speech recognition, while leveraging GPUs.

**Total running time of the script**: (0 minutes 18.821 seconds)

[Download Python source code: `audio_preprocessing_tutorial.py`](../_downloads/5ffe15ce830e55b3a9e9c294d04ab41c/audio_preprocessing_tutorial.py)

[Download Jupyter notebook: `audio_preprocessing_tutorial.ipynb`](../_downloads/7303ce3181f4dbc9a50bc1ed5bb3218f/audio_preprocessing_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)

pytorch/官方教程/25.md

# Speech Command Recognition with `torchaudio`

> Original: <https://pytorch.org/tutorials/intermediate/speech_command_recognition_with_torchaudio.html>

This tutorial will show you how to correctly format an audio dataset and then train/test an audio classifier network on the dataset.

Colab has a GPU option available. In the menu tabs, select "Runtime" and then "Change runtime type". In the pop-up that follows, you can choose GPU. After the change, your runtime should automatically restart (which means information from executed cells disappears).

First, let's import the common torch packages as well as [`torchaudio`](https://github.com/pytorch/audio), which can be installed by following the instructions on the website.

```py
# Uncomment the following line to run in Google Colab

# CPU:
# !pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

# GPU:
# !pip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

# For interactive demo at the end:
# !pip install pydub

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchaudio

import matplotlib.pyplot as plt
import IPython.display as ipd
from tqdm.notebook import tqdm
```

Let's check if a CUDA GPU is available and select our device. Running the network on a GPU will greatly decrease the training/testing runtime.

```py
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```

## Importing the Dataset

We use `torchaudio` to download and represent the dataset. Here we use [SpeechCommands](https://arxiv.org/abs/1804.03209), which is a dataset of 35 commands spoken by different people. The dataset `SPEECHCOMMANDS` is a `torch.utils.data.Dataset` version of the dataset. In this dataset, all audio files are about 1 second long (and so about 16000 time frames long).

The actual loading and formatting steps happen when a data point is being accessed, and `torchaudio` takes care of converting the audio files to tensors. If you want to load an audio file directly instead, `torchaudio.load()` can be used. It returns a tuple containing the newly created tensor along with the sampling frequency of the audio file (16 kHz for `SpeechCommands`).

Going back to the dataset, here we create a subclass that splits it into standard training, validation, and testing subsets.

```py
from torchaudio.datasets import SPEECHCOMMANDS
import os

class SubsetSC(SPEECHCOMMANDS):
    def __init__(self, subset: str = None):
        super().__init__("./", download=True)

        def load_list(filename):
            filepath = os.path.join(self._path, filename)
            with open(filepath) as fileobj:
                return [os.path.join(self._path, line.strip()) for line in fileobj]

        if subset == "validation":
            self._walker = load_list("validation_list.txt")
        elif subset == "testing":
            self._walker = load_list("testing_list.txt")
        elif subset == "training":
            excludes = load_list("validation_list.txt") + load_list("testing_list.txt")
            excludes = set(excludes)
            self._walker = [w for w in self._walker if w not in excludes]

# Create training and testing split of the data. We do not use validation in this tutorial.
train_set = SubsetSC("training")
test_set = SubsetSC("testing")

waveform, sample_rate, label, speaker_id, utterance_number = train_set[0]
```

A data point in the `SPEECHCOMMANDS` dataset is a tuple made of a waveform (the audio signal), the sample rate, the utterance (label), the ID of the speaker, and the number of the utterance.

```py
print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))

plt.plot(waveform.t().numpy());
```

Let's find the list of labels available in the dataset.

```py
labels = sorted(list(set(datapoint[2] for datapoint in train_set)))
labels
```

The 35 audio labels are commands that are said by users. The first few files are people saying `marvin`.

```py
waveform_first, *_ = train_set[0]
ipd.Audio(waveform_first.numpy(), rate=sample_rate)

waveform_second, *_ = train_set[1]
ipd.Audio(waveform_second.numpy(), rate=sample_rate)
```

The last file is someone saying `visual`.

```py
waveform_last, *_ = train_set[-1]
ipd.Audio(waveform_last.numpy(), rate=sample_rate)
```

## Formatting the Data

This is a good place to apply transforms to the data. For the waveform, we downsample the audio for faster processing without losing too much of the classification power.

We don't need to apply other transforms here. It is common for some datasets, though, to have to reduce the number of channels (say from stereo to mono) by either taking the mean along the channel dimension, or simply keeping only one of the channels. Since `SpeechCommands` uses a single channel for audio, this is not needed here.

```py
new_sample_rate = 8000
transform = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=new_sample_rate)
transformed = transform(waveform)

ipd.Audio(transformed.numpy(), rate=new_sample_rate)
```
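
Resampling from 16 kHz down to 8 kHz halves the number of samples in each clip. A quick sanity check of that length arithmetic in plain Python, independent of torchaudio (a sketch, assuming a 1-second clip):

```python
orig_freq, new_freq = 16000, 8000
orig_len = 16000  # samples in a 1-second SpeechCommands clip

# The resampled length scales by the ratio of the new rate to the old one.
new_len = int(orig_len * new_freq / orig_freq)
print(new_len)  # 8000
```

Halving the sample count roughly halves the work every downstream layer has to do, which is why this is worth doing before training.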

We are encoding each word using its index in the list of labels.

```py
def label_to_index(word):
    # Return the position of the word in labels
    return torch.tensor(labels.index(word))

def index_to_label(index):
    # Return the word corresponding to the index in labels
    # This is the inverse of label_to_index
    return labels[index]

word_start = "yes"
index = label_to_index(word_start)
word_recovered = index_to_label(index)

print(word_start, "-->", index, "-->", word_recovered)
```

To turn a list of data points made of audio recordings and utterances into two batched tensors for the model, we implement a collate function, which is used by the PyTorch `DataLoader` and allows us to iterate over a dataset by batches. Please see [the documentation](https://pytorch.org/docs/stable/data.html#working-with-collate-fn) for more information about working with a collate function.

In the collate function, we also apply the resampling and the text encoding.
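
The clips in a batch can have slightly different lengths, so they must be zero-padded before they can be stacked into one tensor. A minimal plain-Python sketch of that padding idea (the real code below uses `torch.nn.utils.rnn.pad_sequence` on tensors):

```python
def pad_to_same_length(batch, padding_value=0):
    # Extend every sequence with the padding value up to the longest one.
    max_len = max(len(seq) for seq in batch)
    return [seq + [padding_value] * (max_len - len(seq)) for seq in batch]

padded = pad_to_same_length([[1, 2, 3], [4], [5, 6]])
print(padded)  # [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```

After padding, all sequences share a length and can be stacked along a new batch dimension.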

```py
def pad_sequence(batch):
    # Make all tensor in a batch the same length by padding with zeros
    batch = [item.t() for item in batch]
    batch = torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=0.)
    return batch.permute(0, 2, 1)

def collate_fn(batch):

    # A data tuple has the form:
    # waveform, sample_rate, label, speaker_id, utterance_number

    tensors, targets = [], []

    # Gather in lists, and encode labels as indices
    for waveform, _, label, *_ in batch:
        tensors += [waveform]
        targets += [label_to_index(label)]

    # Group the list of tensors into a batched tensor
    tensors = pad_sequence(tensors)
    targets = torch.stack(targets)

    return tensors, targets

batch_size = 256

if device == "cuda":
    num_workers = 1
    pin_memory = True
else:
    num_workers = 0
    pin_memory = False

train_loader = torch.utils.data.DataLoader(
    train_set,
    batch_size=batch_size,
    shuffle=True,
    collate_fn=collate_fn,
    num_workers=num_workers,
    pin_memory=pin_memory,
)
test_loader = torch.utils.data.DataLoader(
    test_set,
    batch_size=batch_size,
    shuffle=False,
    drop_last=False,
    collate_fn=collate_fn,
    num_workers=num_workers,
    pin_memory=pin_memory,
)
```

## Defining the Network

For this tutorial we will use a convolutional neural network to process the raw audio data. Usually more advanced transforms are applied to the audio data, however CNNs can be used to accurately process the raw data. The specific architecture is modeled after the M5 network architecture described in [this paper](https://arxiv.org/pdf/1610.00087.pdf). An important aspect of models processing raw audio data is the receptive field of their first layer's filters. Our model's first filter is length 80, so when processing audio sampled at 8 kHz, the receptive field is around 10 ms (and at 4 kHz, around 20 ms). This size is similar to speech processing applications that often use receptive fields ranging from 20 ms to 40 ms.
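
The receptive-field figure quoted above is just the filter length divided by the sample rate. A quick check of those numbers:

```python
filter_length = 80  # samples covered by the first conv filter

for sample_rate in (8000, 4000):
    receptive_field_ms = filter_length / sample_rate * 1000
    print(sample_rate, receptive_field_ms)
# 8000 10.0
# 4000 20.0
```

So the same 80-sample filter spans twice as much real time when the audio is sampled at half the rate.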

```py
class M5(nn.Module):
    def __init__(self, n_input=1, n_output=35, stride=16, n_channel=32):
        super().__init__()
        self.conv1 = nn.Conv1d(n_input, n_channel, kernel_size=80, stride=stride)
        self.bn1 = nn.BatchNorm1d(n_channel)
        self.pool1 = nn.MaxPool1d(4)
        self.conv2 = nn.Conv1d(n_channel, n_channel, kernel_size=3)
        self.bn2 = nn.BatchNorm1d(n_channel)
        self.pool2 = nn.MaxPool1d(4)
        self.conv3 = nn.Conv1d(n_channel, 2 * n_channel, kernel_size=3)
        self.bn3 = nn.BatchNorm1d(2 * n_channel)
        self.pool3 = nn.MaxPool1d(4)
        self.conv4 = nn.Conv1d(2 * n_channel, 2 * n_channel, kernel_size=3)
        self.bn4 = nn.BatchNorm1d(2 * n_channel)
        self.pool4 = nn.MaxPool1d(4)
        self.fc1 = nn.Linear(2 * n_channel, n_output)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(self.bn1(x))
        x = self.pool1(x)
        x = self.conv2(x)
        x = F.relu(self.bn2(x))
        x = self.pool2(x)
        x = self.conv3(x)
        x = F.relu(self.bn3(x))
        x = self.pool3(x)
        x = self.conv4(x)
        x = F.relu(self.bn4(x))
        x = self.pool4(x)
        x = F.avg_pool1d(x, x.shape[-1])
        x = x.permute(0, 2, 1)
        x = self.fc1(x)
        return F.log_softmax(x, dim=2)

model = M5(n_input=transformed.shape[0], n_output=len(labels))
model.to(device)
print(model)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

n = count_parameters(model)
print("Number of parameters: %s" % n)
```

We will use the same optimization technique used in the paper, an Adam optimizer with weight decay set to 0.0001. At first, we will train with a learning rate of 0.01, but we will use a `scheduler` to decrease it to 0.001 during training, after 20 epochs.
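
With `StepLR(step_size=20, gamma=0.1)`, the learning rate is multiplied by 0.1 once every 20 completed epochs. A plain-Python sketch of that schedule, just to make the 0.01 → 0.001 drop concrete:

```python
def step_lr(base_lr, epoch, step_size=20, gamma=0.1):
    # Learning rate in effect after `epoch` completed epochs under StepLR.
    return base_lr * gamma ** (epoch // step_size)

print(step_lr(0.01, 0))   # 0.01
print(step_lr(0.01, 19))  # still 0.01
print(step_lr(0.01, 20))  # drops to roughly 0.001
```

The optimizer itself applies this decay; the sketch only mirrors the arithmetic.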

```py
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)  # reduce the learning rate after 20 epochs by a factor of 10
```

## Training and Testing the Network

Now let's define a training function that will feed our training data into the model and perform the backward pass and optimization steps. For training, the loss we will use is the negative log-likelihood. The network will then be tested after each epoch to see how the accuracy varies during the training.
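
Because the model ends in `log_softmax`, the negative log-likelihood loss is simply minus the predicted log-probability of the true class, averaged over the batch. A minimal plain-Python sketch of what `F.nll_loss` computes:

```python
import math

def nll_loss(log_probs, targets):
    # Mean negative log-probability assigned to each example's true class.
    return -sum(lp[t] for lp, t in zip(log_probs, targets)) / len(targets)

# Two examples, three classes; entries are log-probabilities of each class.
log_probs = [[math.log(0.7), math.log(0.2), math.log(0.1)],
             [math.log(0.1), math.log(0.8), math.log(0.1)]]
loss = nll_loss(log_probs, [0, 1])
print(round(loss, 4))  # 0.2899
```

The loss is zero only when the model assigns probability 1 to every true class, and grows without bound as the true class's probability shrinks.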

```py
def train(model, epoch, log_interval):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):

        data = data.to(device)
        target = target.to(device)

        # apply transform and model on whole batch directly on device
        data = transform(data)
        output = model(data)

        # negative log-likelihood for a tensor of size (batch x 1 x n_output)
        loss = F.nll_loss(output.squeeze(), target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # print training stats
        if batch_idx % log_interval == 0:
            print(f"Train Epoch: {epoch} [{batch_idx * len(data)}/{len(train_loader.dataset)} ({100. * batch_idx / len(train_loader):.0f}%)]\tLoss: {loss.item():.6f}")

        # update progress bar
        pbar.update(pbar_update)
        # record loss
        losses.append(loss.item())
```

Now that we have a training function, we need to make one for testing the network's accuracy. We will set the model to `eval()` mode and then run inference on the test dataset. Calling `eval()` sets the training variable in all modules in the network to `false`. Certain layers, like batch normalization and dropout layers, behave differently during training, so this step is crucial for getting correct results.

```py
def number_of_correct(pred, target):
    # count number of correct predictions
    return pred.squeeze().eq(target).sum().item()

def get_likely_index(tensor):
    # find most likely label index for each element in the batch
    return tensor.argmax(dim=-1)

def test(model, epoch):
    model.eval()
    correct = 0
    for data, target in test_loader:

        data = data.to(device)
        target = target.to(device)

        # apply transform and model on whole batch directly on device
        data = transform(data)
        output = model(data)

        pred = get_likely_index(output)
        correct += number_of_correct(pred, target)

        # update progress bar
        pbar.update(pbar_update)

    print(f"\nTest Epoch: {epoch}\tAccuracy: {correct}/{len(test_loader.dataset)} ({100. * correct / len(test_loader.dataset):.0f}%)\n")
```

Finally, we can train and test the network. We will train the network for ten epochs, then reduce the learning rate and train for ten more epochs. The network will be tested after each epoch to see how the accuracy varies during the training.

```py
log_interval = 20
n_epoch = 2

pbar_update = 1 / (len(train_loader) + len(test_loader))
losses = []

# The transform needs to live on the same device as the model and the data.
transform = transform.to(device)
with tqdm(total=n_epoch) as pbar:
    for epoch in range(1, n_epoch + 1):
        train(model, epoch, log_interval)
        test(model, epoch)
        scheduler.step()

# Let's plot the training loss versus the number of iterations.
# plt.plot(losses);
# plt.title("training loss");
```

The network should be more than 65% accurate on the test set after 2 epochs, and 85% after 21 epochs. Let's look at the last words in the train set, and see how the model did on them.

```py
def predict(tensor):
    # Use the model to predict the label of the waveform
    tensor = tensor.to(device)
    tensor = transform(tensor)
    tensor = model(tensor.unsqueeze(0))
    tensor = get_likely_index(tensor)
    tensor = index_to_label(tensor.squeeze())
    return tensor

waveform, sample_rate, utterance, *_ = train_set[-1]
ipd.Audio(waveform.numpy(), rate=sample_rate)

print(f"Expected: {utterance}. Predicted: {predict(waveform)}.")
```

Let's find an example that isn't classified correctly, if there is one.

```py
for i, (waveform, sample_rate, utterance, *_) in enumerate(test_set):
    output = predict(waveform)
    if output != utterance:
        ipd.Audio(waveform.numpy(), rate=sample_rate)
        print(f"Data point #{i}. Expected: {utterance}. Predicted: {output}.")
        break
else:
    print("All examples in this dataset were correctly classified!")
    print("In this case, let's just look at the last data point")
    ipd.Audio(waveform.numpy(), rate=sample_rate)
    print(f"Data point #{i}. Expected: {utterance}. Predicted: {output}.")
```

Feel free to try with one of your own recordings of one of the labels! For example, using Colab, say "Go" while executing the cell below. This will record one second of audio and try to classify it.

```py
from google.colab import output as colab_output
from base64 import b64decode
from io import BytesIO
from pydub import AudioSegment

RECORD = """
const sleep = time => new Promise(resolve => setTimeout(resolve, time))
const b2text = blob => new Promise(resolve => {
  const reader = new FileReader()
  reader.onloadend = e => resolve(e.srcElement.result)
  reader.readAsDataURL(blob)
})
var record = time => new Promise(async resolve => {
  stream = await navigator.mediaDevices.getUserMedia({ audio: true })
  recorder = new MediaRecorder(stream)
  chunks = []
  recorder.ondataavailable = e => chunks.push(e.data)
  recorder.start()
  await sleep(time)
  recorder.onstop = async ()=>{
    blob = new Blob(chunks)
    text = await b2text(blob)
    resolve(text)
  }
  recorder.stop()
})
"""

def record(seconds=1):
    display(ipd.Javascript(RECORD))
    print(f"Recording started for {seconds} seconds.")
    s = colab_output.eval_js("record(%d)" % (seconds * 1000))
    print("Recording ended.")
    b = b64decode(s.split(",")[1])

    fileformat = "wav"
    filename = f"_audio.{fileformat}"
    AudioSegment.from_file(BytesIO(b)).export(filename, format=fileformat)
    return torchaudio.load(filename)

waveform, sample_rate = record()
print(f"Predicted: {predict(waveform)}.")
ipd.Audio(waveform.numpy(), rate=sample_rate)
```

## Conclusion

In this tutorial, we used `torchaudio` to load a dataset and resample the signal. We then defined a neural network that we trained to recognize a given command. There are other data preprocessing methods, such as finding the mel-frequency cepstral coefficients (MFCC), that can reduce the size of the dataset. This transform is also available in `torchaudio` as `torchaudio.transforms.MFCC`.

**Total running time of the script**: (0 minutes 0.000 seconds)

[Download Python source code: `speech_command_recognition_with_torchaudio.py`](../_downloads/4cbc77c0f631ff7a80a046f57b97a075/speech_command_recognition_with_torchaudio.py)

[Download Jupyter notebook: `speech_command_recognition_with_torchaudio.ipynb`](../_downloads/d87597d0062580c9ec699193e951e3f4/speech_command_recognition_with_torchaudio.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)

pytorch/官方教程/26.md

# Text

pytorch/官方教程/27.md

# Sequence-to-Sequence Modeling with `nn.Transformer` and `torchtext`

> Original: <https://pytorch.org/tutorials/beginner/transformer_tutorial.html>

This is a tutorial on how to train a sequence-to-sequence model that uses the [`nn.Transformer`](https://pytorch.org/docs/master/nn.html?highlight=nn%20transformer#torch.nn.Transformer) module.

The PyTorch 1.2 release includes a standard transformer module based on [the paper](https://arxiv.org/pdf/1706.03762.pdf) *Attention is All You Need*. The transformer model has been proved to be superior in quality for many sequence-to-sequence problems while being more parallelizable. The `nn.Transformer` module relies entirely on an attention mechanism (another module recently implemented as [`nn.MultiheadAttention`](https://pytorch.org/docs/master/nn.html?highlight=multiheadattention#torch.nn.MultiheadAttention)) to draw global dependencies between input and output. The `nn.Transformer` module is now highly modularized, such that a single component (like [`nn.TransformerEncoder`](https://pytorch.org/docs/master/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder) in this tutorial) can be easily adapted/composed.

![](img/f1aff6a6c5b78dda46e6dd5d63b91b3f.png)

## Define the Model

In this tutorial, we train an `nn.TransformerEncoder` model on a language modeling task. The language modeling task is to assign a probability for the likelihood of a given word (or a sequence of words) to follow a sequence of words. A sequence of tokens is passed to the embedding layer first, followed by a positional encoding layer to account for the order of the words (see the next paragraph for more details). The `nn.TransformerEncoder` consists of multiple layers of [`nn.TransformerEncoderLayer`](https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer). Along with the input sequence, a square attention mask is required, because the self-attention layers in `nn.TransformerEncoder` are only allowed to attend to the earlier positions in the sequence. For the language modeling task, any tokens at future positions should be masked. To get the actual words, the output of the `nn.TransformerEncoder` model is sent to the final `Linear` layer, which is followed by a log-softmax function.

```py
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerModel(nn.Module):

    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.model_type = 'Transformer'
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.encoder = nn.Embedding(ntoken, ninp)
        self.ninp = ninp
        self.decoder = nn.Linear(ninp, ntoken)

        self.init_weights()

    def generate_square_subsequent_mask(self, sz):
        mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, src, src_mask):
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_mask)
        output = self.decoder(output)
        return output
```
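
The causal mask that `generate_square_subsequent_mask` builds can be sketched in plain Python: position `i` may attend to positions `j <= i` (entry 0.0), and is blocked from later positions (entry `-inf`), which the additive attention turns into zero weight after the softmax:

```python
def square_subsequent_mask(sz):
    # 0.0 where attention is allowed (j <= i), -inf where it is blocked (j > i).
    return [[0.0 if j <= i else float('-inf') for j in range(sz)]
            for i in range(sz)]

for row in square_subsequent_mask(3):
    print(row)
# [0.0, -inf, -inf]
# [0.0, 0.0, -inf]
# [0.0, 0.0, 0.0]
```

The real method produces the same pattern as a float tensor so it can be added directly to the attention scores.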

The `PositionalEncoding` module injects some information about the relative or absolute position of the tokens in the sequence. The positional encodings have the same dimension as the embeddings so that the two can be summed. Here, we use `sine` and `cosine` functions of different frequencies.
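
Concretely, the module below implements PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)). A tiny plain-Python check of a few entries, to make the formula concrete before reading the vectorized tensor version:

```python
import math

def pe(pos, k, d_model):
    # k-th dimension of the positional encoding at position `pos`.
    i = k // 2
    angle = pos / 10000 ** (2 * i / d_model)
    return math.sin(angle) if k % 2 == 0 else math.cos(angle)

print(pe(0, 0, 4))            # sin(0) = 0.0
print(pe(0, 1, 4))            # cos(0) = 1.0
print(round(pe(1, 0, 4), 4))  # sin(1) ≈ 0.8415
```

Each dimension pair oscillates at its own wavelength, so every position gets a distinct, smoothly varying signature.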

```py
class PositionalEncoding(nn.Module):

    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)
```

## Load and Batch Data

This tutorial uses `torchtext` to generate the Wikitext-2 dataset. The `vocab` object is built based on the train dataset and is used to numericalize tokens into tensors. Starting from sequential data, the `batchify()` function arranges the dataset into columns, trimming off any tokens remaining after the data has been divided into batches of size `batch_size`. For instance, with the alphabet as the sequence (total length of 26) and a batch size of 4, we would divide the alphabet into 4 sequences of length 6:

![](img/e04a27246098e9a8d3d6653404a16bcd.png)

These columns are treated as independent by the model, which means that the dependence of `G` and `F` can not be learned, but this allows more efficient batch processing.
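
The alphabet example above can be sketched with a plain list standing in for the token stream (`batchify()` below does the same thing with tensors):

```python
import string

def batchify_list(data, bsz):
    # Trim the stream so it divides evenly into bsz columns, then split it.
    nbatch = len(data) // bsz
    data = data[:nbatch * bsz]
    return [data[i * nbatch:(i + 1) * nbatch] for i in range(bsz)]

columns = batchify_list(list(string.ascii_uppercase), 4)
print(columns[0])  # ['A', 'B', 'C', 'D', 'E', 'F']
print(columns[1])  # ['G', 'H', 'I', 'J', 'K', 'L']
```

Note that `F` ends one column and `G` starts the next, which is exactly the cross-column dependency the model cannot learn.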

```py
import io
import torch
from torchtext.utils import download_from_url, extract_archive
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

url = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'
test_filepath, valid_filepath, train_filepath = extract_archive(download_from_url(url))
tokenizer = get_tokenizer('basic_english')
vocab = build_vocab_from_iterator(map(tokenizer,
                                      iter(io.open(train_filepath,
                                                   encoding="utf8"))))

def data_process(raw_text_iter):
    data = [torch.tensor([vocab[token] for token in tokenizer(item)],
                         dtype=torch.long) for item in raw_text_iter]
    return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))

train_data = data_process(iter(io.open(train_filepath, encoding="utf8")))
val_data = data_process(iter(io.open(valid_filepath, encoding="utf8")))
test_data = data_process(iter(io.open(test_filepath, encoding="utf8")))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def batchify(data, bsz):
    # Divide the dataset into bsz parts.
    nbatch = data.size(0) // bsz
    # Trim off any extra elements that wouldn't cleanly fit (remainders).
    data = data.narrow(0, 0, nbatch * bsz)
    # Evenly divide the data across the bsz batches.
    data = data.view(bsz, -1).t().contiguous()
    return data.to(device)

batch_size = 20
eval_batch_size = 10
train_data = batchify(train_data, batch_size)
val_data = batchify(val_data, eval_batch_size)
test_data = batchify(test_data, eval_batch_size)
```

### Functions to Generate Input and Target Sequence

The `get_batch()` function generates the input and target sequence for the transformer model. It subdivides the source data into chunks of length `bptt`. For the language modeling task, the model needs the following words as `Target`. For example, with a `bptt` value of 2, we'd get the following two variables for `i` = 0:

![](img/20ef8681366b44461cf49d1ab98ab8f2.png)

It should be noted that the chunks are along dimension 0, consistent with the `S` dimension in the `Transformer` model. The batch dimension `N` is along dimension 1.

```py
bptt = 35
def get_batch(source, i):
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i+seq_len]
    target = source[i+1:i+1+seq_len].reshape(-1)
    return data, target
```
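
The one-token offset between input and target is easiest to see on a plain list. This sketch mirrors the slicing in `get_batch()` above, ignoring the batch dimension:

```python
def get_batch_1d(source, i, bptt=2):
    # Input is source[i : i+seq_len]; target is the same window shifted by one.
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i + seq_len]
    target = source[i + 1:i + 1 + seq_len]
    return data, target

data, target = get_batch_1d(list("ABCDEF"), 0)
print(data, target)  # ['A', 'B'] ['B', 'C']
```

So at every step the model is asked to predict the very next token of its input window.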

## Initiate an Instance

The model is set up with the hyperparameters below. The `vocab` size is equal to the length of the `vocab` object.

```py
ntokens = len(vocab.stoi)  # the size of vocabulary
emsize = 200  # embedding dimension
nhid = 200  # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2  # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2  # the number of heads in the multiheadattention models
dropout = 0.2  # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
```

## Run the Model

[`CrossEntropyLoss`](https://pytorch.org/docs/master/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss) is applied to track the loss, and [`SGD`](https://pytorch.org/docs/master/optim.html?highlight=sgd#torch.optim.SGD) implements stochastic gradient descent as the optimizer. The initial learning rate is set to 5.0. [`StepLR`](https://pytorch.org/docs/master/optim.html?highlight=steplr#torch.optim.lr_scheduler.StepLR) is applied to adjust the learning rate through epochs. During training, we use the [`nn.utils.clip_grad_norm_`](https://pytorch.org/docs/master/nn.html?highlight=nn%20utils%20clip_grad_norm#torch.nn.utils.clip_grad_norm_) function to scale all the gradients together to prevent exploding.
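
The training logs further below report both the cross-entropy loss and its exponential, the perplexity (`ppl`). A quick plain-Python check of that relationship, using the first validation loss from the logs (the logs print 348.17 because the displayed loss of 5.85 is rounded):

```python
import math

def perplexity(cross_entropy_loss):
    # Perplexity is the exponential of the average cross-entropy (in nats).
    return math.exp(cross_entropy_loss)

print(round(perplexity(5.85), 2))  # ≈ 347.23
```

Intuitively, a perplexity of ~350 means the model is, on average, about as uncertain as a uniform choice over ~350 words.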

```py
criterion = nn.CrossEntropyLoss()
lr = 5.0  # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)

import time
def train():
    model.train()  # Turn on the train mode
    total_loss = 0.
    start_time = time.time()
    src_mask = model.generate_square_subsequent_mask(bptt).to(device)
    for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
        data, targets = get_batch(train_data, i)
        optimizer.zero_grad()
        if data.size(0) != bptt:
            src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
        output = model(data, src_mask)
        loss = criterion(output.view(-1, ntokens), targets)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()

        total_loss += loss.item()
        log_interval = 200
        if batch % log_interval == 0 and batch > 0:
            cur_loss = total_loss / log_interval
            elapsed = time.time() - start_time
            print('| epoch {:3d} | {:5d}/{:5d} batches | '
                  'lr {:02.2f} | ms/batch {:5.2f} | '
                  'loss {:5.2f} | ppl {:8.2f}'.format(
                    epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],
                    elapsed * 1000 / log_interval,
                    cur_loss, math.exp(cur_loss)))
            total_loss = 0
            start_time = time.time()

def evaluate(eval_model, data_source):
    eval_model.eval()  # Turn on the evaluation mode
    total_loss = 0.
    src_mask = model.generate_square_subsequent_mask(bptt).to(device)
    with torch.no_grad():
        for i in range(0, data_source.size(0) - 1, bptt):
            data, targets = get_batch(data_source, i)
            if data.size(0) != bptt:
                src_mask = model.generate_square_subsequent_mask(data.size(0)).to(device)
            output = eval_model(data, src_mask)
            output_flat = output.view(-1, ntokens)
            total_loss += len(data) * criterion(output_flat, targets).item()
    return total_loss / (len(data_source) - 1)
```

Loop over epochs. Save the model if the validation loss is the best we've seen so far. Adjust the learning rate after each epoch.

```py
best_val_loss = float("inf")
epochs = 3  # The number of epochs
best_model = None

for epoch in range(1, epochs + 1):
    epoch_start_time = time.time()
    train()
    val_loss = evaluate(model, val_data)
    print('-' * 89)
    print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
          'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
                                     val_loss, math.exp(val_loss)))
    print('-' * 89)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_model = model

    scheduler.step()
```

Out:

```py
| epoch   1 |   200/ 2928 batches | lr 5.00 | ms/batch 30.78 | loss  8.03 | ppl  3085.47
| epoch   1 |   400/ 2928 batches | lr 5.00 | ms/batch 29.85 | loss  6.83 | ppl   929.53
| epoch   1 |   600/ 2928 batches | lr 5.00 | ms/batch 29.92 | loss  6.41 | ppl   610.71
| epoch   1 |   800/ 2928 batches | lr 5.00 | ms/batch 29.88 | loss  6.29 | ppl   539.54
| epoch   1 |  1000/ 2928 batches | lr 5.00 | ms/batch 29.95 | loss  6.17 | ppl   479.92
| epoch   1 |  1200/ 2928 batches | lr 5.00 | ms/batch 29.95 | loss  6.15 | ppl   468.35
| epoch   1 |  1400/ 2928 batches | lr 5.00 | ms/batch 29.95 | loss  6.11 | ppl   450.25
| epoch   1 |  1600/ 2928 batches | lr 5.00 | ms/batch 29.95 | loss  6.10 | ppl   445.77
| epoch   1 |  1800/ 2928 batches | lr 5.00 | ms/batch 29.97 | loss  6.02 | ppl   409.90
| epoch   1 |  2000/ 2928 batches | lr 5.00 | ms/batch 29.92 | loss  6.01 | ppl   408.66
| epoch   1 |  2200/ 2928 batches | lr 5.00 | ms/batch 29.94 | loss  5.90 | ppl   363.89
| epoch   1 |  2400/ 2928 batches | lr 5.00 | ms/batch 29.94 | loss  5.96 | ppl   388.68
| epoch   1 |  2600/ 2928 batches | lr 5.00 | ms/batch 29.94 | loss  5.95 | ppl   382.60
| epoch   1 |  2800/ 2928 batches | lr 5.00 | ms/batch 29.95 | loss  5.88 | ppl   358.87
-----------------------------------------------------------------------------------------
| end of epoch   1 | time: 91.45s | valid loss  5.85 | valid ppl   348.17
-----------------------------------------------------------------------------------------
| epoch   2 |   200/ 2928 batches | lr 4.51 | ms/batch 30.09 | loss  5.86 | ppl   351.70
| epoch   2 |   400/ 2928 batches | lr 4.51 | ms/batch 29.97 | loss  5.85 | ppl   347.85
| epoch   2 |   600/ 2928 batches | lr 4.51 | ms/batch 29.98 | loss  5.67 | ppl   288.80
| epoch   2 |   800/ 2928 batches | lr 4.51 | ms/batch 29.92 | loss  5.70 | ppl   299.81
| epoch   2 |  1000/ 2928 batches | lr 4.51 | ms/batch 29.95 | loss  5.65 | ppl   285.57
| epoch   2 |  1200/ 2928 batches | lr 4.51 | ms/batch 29.99 | loss  5.68 | ppl   293.48
| epoch   2 |  1400/ 2928 batches | lr 4.51 | ms/batch 29.96 | loss  5.69 | ppl   296.90
| epoch   2 |  1600/ 2928 batches | lr 4.51 | ms/batch 29.96 | loss  5.72 | ppl   303.83
| epoch   2 |  1800/ 2928 batches | lr 4.51 | ms/batch 29.93 | loss  5.66 | ppl   285.90
| epoch   2 |  2000/ 2928 batches | lr 4.51 | ms/batch 29.93 | loss  5.67 | ppl   289.58
| epoch   2 |  2200/ 2928 batches | lr 4.51 | ms/batch 29.97 | loss  5.55 | ppl   257.20
| epoch   2 |  2400/ 2928 batches | lr 4.51 | ms/batch 29.96 | loss  5.65 | ppl   283.92
| epoch   2 |  2600/ 2928 batches | lr 4.51 | ms/batch 29.95 | loss  5.65 | ppl   283.76
| epoch   2 |  2800/ 2928 batches | lr 4.51 | ms/batch 29.95 | loss  5.60 | ppl   269.90
-----------------------------------------------------------------------------------------
| end of epoch   2 | time: 91.37s | valid loss  5.60 | valid ppl   270.66
-----------------------------------------------------------------------------------------
| epoch   3 |   200/ 2928 batches | lr 4.29 | ms/batch 30.12 | loss  5.60 | ppl   269.95
| epoch   3 |   400/ 2928 batches | lr 4.29 | ms/batch 29.92 | loss  5.62 | ppl   274.84
| epoch   3 |   600/ 2928 batches | lr 4.29 | ms/batch 29.96 | loss  5.41 | ppl   222.98
| epoch   3 |   800/ 2928 batches | lr 4.29 | ms/batch 29.93 | loss  5.48 | ppl   240.15
| epoch   3 |  1000/ 2928 batches | lr 4.29 | ms/batch 29.94 | loss  5.43 | ppl   229.16
| epoch   3 |  1200/ 2928 batches | lr 4.29 | ms/batch 29.94 | loss  5.48 | ppl   239.42
| epoch   3 |  1400/ 2928 batches | lr 4.29 | ms/batch 29.95 | loss  5.49 | ppl   242.87
| epoch   3 |  1600/ 2928 batches | lr 4.29 | ms/batch 29.93 | loss  5.52 | ppl   250.16
| epoch   3 |  1800/ 2928 batches | lr 4.29 | ms/batch 29.93 | loss  5.47 | ppl   237.70
| epoch   3 |  2000/ 2928 batches | lr 4.29 | ms/batch 29.94 | loss  5.49 | ppl   241.36
| epoch   3 |  2200/ 2928 batches | lr 4.29 | ms/batch 29.92 | loss  5.36 | ppl   211.91
| epoch   3 |  2400/ 2928 batches | lr 4.29 | ms/batch 29.95 | loss  5.47 | ppl   237.16
| epoch   3 |  2600/ 2928 batches | lr 4.29 | ms/batch 29.94 | loss  5.47 | ppl   236.47
| epoch   3 |  2800/ 2928 batches | lr 4.29 | ms/batch 29.92 | loss  5.41 | ppl   223.08
-----------------------------------------------------------------------------------------
| end of epoch   3 | time: 91.32s | valid loss  5.61 | valid ppl   272.10
-----------------------------------------------------------------------------------------

```

## 使用测试数据集评估模型

应用最佳模型以检查测试数据集的结果。

```py
test_loss = evaluate(best_model, test_data)
print('=' * 89)
print('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(
    test_loss, math.exp(test_loss)))
print('=' * 89)

```

出:

```py
=========================================================================================
| End of training | test loss 5.52 | test ppl 249.05
=========================================================================================

```

**脚本的总运行时间**:(4 分钟 50.218 秒)

[下载 Python 源码:`transformer_tutorial.py`](../_downloads/f53285338820248a7c04a947c5110f7b/transformer_tutorial.py)

[下载 Jupyter 笔记本:`transformer_tutorial.ipynb`](../_downloads/dca13261bbb4e9809d1a3aa521d22dd7/transformer_tutorial.ipynb)

[由 Sphinx 画廊](https://sphinx-gallery.readthedocs.io)生成的画廊
pytorch/官方教程/28.md

# 从零开始的 NLP:使用字符级 RNN 分类名称

> 原文:<https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html>

**作者**: [Sean Robertson](https://github.com/spro/practical-pytorch)

我们将建立并训练一个基本的字符级 RNN 来对单词进行分类。 本教程与之后的两个教程一起,展示了如何“从头开始”为 NLP 建模预处理数据,特别是不使用`torchtext`的许多便利函数,因此您可以了解 NLP 建模的预处理在底层是如何工作的。

字符级 RNN 将单词作为一个字符序列来读取:在每一步输出预测和“隐藏状态”,并把隐藏状态传入下一步。 我们将最终的预测作为输出,即该单词属于哪个类别。

具体来说,我们将用来自 18 种语言的数千个姓氏进行训练,并根据拼写预测名称来自哪种语言:

```py
$ python predict.py Hinton
(-0.47) Scottish
(-1.52) English
(-3.57) Irish

$ python predict.py Schmidhuber
(-0.19) German
(-2.48) Czech
(-2.68) Dutch

```

**推荐读物**:

我假设您至少已经安装了 PyTorch,并了解 Python 和张量:

* [安装说明](https://pytorch.org/)
* [使用 PyTorch 进行深度学习:60 分钟的突击](../beginner/deep_learning_60min_blitz.html)(从总体上入门 PyTorch)
* [使用示例学习 PyTorch](../beginner/pytorch_with_examples.html)
* [PyTorch(面向以前的 Torch 用户)](../beginner/former_torchies_tutorial.html)(如果您以前是 Lua Torch 用户)

了解 RNN 及其工作方式也将很有用:

* [《循环神经网络的不合理有效性》](https://karpathy.github.io/2015/05/21/rnn-effectiveness/)显示了许多现实生活中的例子
* [《了解 LSTM 网络》](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)特别是关于 LSTM 的,但一般来说也有关 RNN 的

## 准备数据

注意

从 <https://download.pytorch.org/tutorial/data.zip> 下载数据,并将其提取到当前目录。

`data/names`目录中包含 18 个文本文件,名称为`[Language].txt`。 每个文件包含一批名称,每行一个名称,大多数是罗马化的(但我们仍然需要从 Unicode 转换为 ASCII)。

我们将得到一个字典,其中列出了每种语言的名称列表`{language: [names ...]}`。 通用变量“类别”和“行”(在本例中为语言和名称)用于以后的扩展。

```py
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import os

def findFiles(path): return glob.glob(path)

print(findFiles('data/names/*.txt'))

import unicodedata
import string

all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)

# Turn a Unicode string to plain ASCII, thanks to https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

print(unicodeToAscii('Ślusàrski'))

# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

for filename in findFiles('data/names/*.txt'):
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    lines = readLines(filename)
    category_lines[category] = lines

n_categories = len(all_categories)

```

出:

```py
['data/names/French.txt', 'data/names/Czech.txt', 'data/names/Dutch.txt', 'data/names/Polish.txt', 'data/names/Scottish.txt', 'data/names/Chinese.txt', 'data/names/English.txt', 'data/names/Italian.txt', 'data/names/Portuguese.txt', 'data/names/Japanese.txt', 'data/names/German.txt', 'data/names/Russian.txt', 'data/names/Korean.txt', 'data/names/Arabic.txt', 'data/names/Greek.txt', 'data/names/Vietnamese.txt', 'data/names/Spanish.txt', 'data/names/Irish.txt']
Slusarski

```

现在我们有了`category_lines`,这是一个字典,将每个类别(语言)映射到行(名称)的列表。 我们还记录了`all_categories`(就是语言列表)和`n_categories`,以供以后使用。

```py
print(category_lines['Italian'][:5])

```

出:

```py
['Abandonato', 'Abatangelo', 'Abatantuono', 'Abate', 'Abategiovanni']

```

### 将名称转换为张量

现在我们已经组织好了所有名称,需要将它们转换为张量才能使用。

为了表示单个字母,我们使用大小为`<1 x n_letters>`的单热向量。 单热向量用 0 填充,只有当前字母索引处为 1,例如 `"b" = <0 1 0 0 0 ...>`。

为了表示一个单词,我们将若干个这样的向量连接成 2D 矩阵`<line_length x 1 x n_letters>`。

额外的 1 维是因为 PyTorch 假定所有输入都是成批的,这里我们只使用大小为 1 的批量。
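
在看张量版本之前,可以先用不依赖 PyTorch 的纯 Python 示意单热编码的思路(这里的`one_hot`是为演示而写的假设性辅助函数,`all_letters`与上文定义一致):

```py
import string

all_letters = string.ascii_letters + " .,;'"

# 假设性的演示函数:返回一个长度为 len(all_letters) 的列表,
# 仅在当前字母的索引处为 1,其余均为 0
def one_hot(letter):
    vec = [0] * len(all_letters)
    vec[all_letters.find(letter)] = 1
    return vec

v = one_hot('b')
print(v.index(1))  # 'b' 在 all_letters 中的索引,即 1
```

下面的 PyTorch 代码做的是同一件事,只是结果存放在张量中。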

```py
import torch

# Find letter index from all_letters, e.g. "a" = 0
def letterToIndex(letter):
    return all_letters.find(letter)

# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letterToTensor(letter):
    tensor = torch.zeros(1, n_letters)
    tensor[0][letterToIndex(letter)] = 1
    return tensor

# Turn a line into a <line_length x 1 x n_letters>,
# or an array of one-hot letter vectors
def lineToTensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li, letter in enumerate(line):
        tensor[li][0][letterToIndex(letter)] = 1
    return tensor

print(letterToTensor('J'))

print(lineToTensor('Jones').size())

```

出:

```py
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
         0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
         0., 0., 0.]])
torch.Size([5, 1, 57])

```

## 创建网络

在自动微分出现之前,在 Torch 中创建循环神经网络需要在多个时间步上克隆层的参数。 这些层保存隐藏状态和梯度,而现在它们完全由计算图本身处理。 这意味着您可以用非常“纯粹”的方式实现 RNN,就像使用普通的前馈层一样。

这个 RNN 模块(主要从[面向 Torch 用户的 PyTorch 教程](https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#example-2-recurrent-net)复制)只有两个线性层,它们作用于输入和隐藏状态,输出之后是`LogSoftmax`层。



```py
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()

        self.hidden_size = hidden_size

        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)

```

要运行此网络的一步,我们需要传入输入(本例中为当前字母的张量)和上一步的隐藏状态(最开始初始化为零)。 网络会返回输出(每种语言的概率)和下一个隐藏状态(保留用于下一步)。

```py
input = letterToTensor('A')
hidden = torch.zeros(1, n_hidden)

output, next_hidden = rnn(input, hidden)

```

为了提高效率,我们不想为每个步骤都创建新的张量,因此我们将使用`lineToTensor`加切片,而不是`letterToTensor`。 这可以通过预先计算成批的张量来进一步优化。

```py
input = lineToTensor('Albert')
hidden = torch.zeros(1, n_hidden)

output, next_hidden = rnn(input[0], hidden)
print(output)

```

出:

```py
tensor([[-2.8934, -2.7991, -2.8549, -2.8915, -2.9122, -2.9010, -2.8979, -2.8875,
         -2.8256, -2.8792, -2.8712, -2.8465, -2.9582, -3.0171, -2.8308, -2.9629,
         -2.9233, -2.8979]], grad_fn=<LogSoftmaxBackward>)

```

如您所见,输出是一个`<1 x n_categories>`张量,其中每一项是该类别的对数似然(数值越大,可能性越高)。
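
由于最后一层是`LogSoftmax`,这些数值是对数似然;需要时可以用`math.exp`把它们转回概率。下面用三个假设的数值做一个不依赖 PyTorch 的示意:

```py
import math

log_probs = [-0.82, -1.06, -2.22]  # 三个类别的对数似然(假设数值,仅作演示)
probs = [math.exp(v) for v in log_probs]  # 转回 0~1 之间的概率
best = probs.index(max(probs))
print(best)  # 对数似然最大的项概率也最大,这里是第 0 项
```

这也解释了为什么后文可以直接用`Tensor.topk`在对数似然上取最大值来得到预测类别。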

## 训练

### 准备训练

在开始训练之前,我们先编写几个辅助函数。 第一个用于解释网络的输出,我们知道输出是每个类别的可能性。 可以使用`Tensor.topk`获得最大值的索引:

```py
def categoryFromOutput(output):
    top_n, top_i = output.topk(1)
    category_i = top_i[0].item()
    return all_categories[category_i], category_i

print(categoryFromOutput(output))

```

出:

```py
('Czech', 1)

```

我们还需要一种快速获取训练示例(一个名称及其所属语言)的方法:

```py
import random

def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

def randomTrainingExample():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    category_tensor = torch.tensor([all_categories.index(category)], dtype=torch.long)
    line_tensor = lineToTensor(line)
    return category, line, category_tensor, line_tensor

for i in range(10):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    print('category =', category, '/ line =', line)

```

出:

```py
category = Chinese / line = Jia
category = Korean / line = Son
category = Czech / line = Matocha
category = Dutch / line = Nifterik
category = German / line = Dreschner
category = Irish / line = Names
category = French / line = Charpentier
category = Italian / line = Carboni
category = Irish / line = Shannon
category = German / line = Adam

```

### 训练网络

现在,训练该网络所需要做的就是向它展示大量示例,让它进行猜测,并告诉它是否猜错了。

对于损失函数,`nn.NLLLoss`是合适的,因为 RNN 的最后一层是`nn.LogSoftmax`。

```py
criterion = nn.NLLLoss()

```

每个训练循环将:

* 创建输入和目标张量
* 创建归零的初始隐藏状态
* 读入每个字母,并
    * 保存隐藏状态供下一个字母使用
* 比较最终输出与目标
* 反向传播
* 返回输出和损失

```py
learning_rate = 0.005 # If you set this too high, it might explode. If too low, it might not learn

def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()

    rnn.zero_grad()

    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)

    loss = criterion(output, category_tensor)
    loss.backward()

    # Add parameters' gradients to their values, multiplied by learning rate
    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)

    return output, loss.item()

```

现在,我们只需要用大量示例运行它。 由于`train`函数返回输出和损失,因此我们可以打印它的猜测,并记录损失以便作图。 因为示例数以千计,所以我们每`print_every`个示例才打印一次,并对这段时间的损失取平均。

```py
import time
import math

n_iters = 100000
print_every = 5000
plot_every = 1000

# Keep track of losses for plotting
current_loss = 0
all_losses = []

def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

start = time.time()

for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    current_loss += loss

    # Print iter number, loss, name and guess
    if iter % print_every == 0:
        guess, guess_i = categoryFromOutput(output)
        correct = '✓' if guess == category else '✗ (%s)' % category
        print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))

    # Add current loss avg to list of losses
    if iter % plot_every == 0:
        all_losses.append(current_loss / plot_every)
        current_loss = 0

```

出:

```py
5000 5% (0m 15s) 2.5667 Ly / Chinese ✗ (Vietnamese)
10000 10% (0m 26s) 2.3171 Rocha / Japanese ✗ (Portuguese)
15000 15% (0m 37s) 2.2941 Gouveia / Spanish ✗ (Portuguese)
20000 20% (0m 49s) 1.3015 Lippi / Italian ✓
25000 25% (1m 1s) 0.7693 Thuy / Vietnamese ✓
30000 30% (1m 13s) 1.9341 Murray / Arabic ✗ (Scottish)
35000 35% (1m 25s) 2.3633 Busto / Scottish ✗ (Italian)
40000 40% (1m 38s) 1.0401 Chung / Chinese ✗ (Korean)
45000 45% (1m 50s) 0.0499 Filipowski / Polish ✓
50000 50% (2m 2s) 0.2598 Mccallum / Scottish ✓
55000 55% (2m 14s) 4.5375 Mozdzierz / German ✗ (Polish)
60000 60% (2m 26s) 1.7194 Talalihin / Irish ✗ (Russian)
65000 65% (2m 38s) 0.1150 Ziemniak / Polish ✓
70000 70% (2m 51s) 1.8548 Pharlain / Scottish ✗ (Irish)
75000 75% (3m 3s) 2.1362 Prehatney / Russian ✗ (Czech)
80000 80% (3m 15s) 0.4166 Leclerc / French ✓
85000 85% (3m 27s) 1.4189 Elford / English ✓
90000 90% (3m 39s) 2.1959 Gagnon / Scottish ✗ (French)
95000 95% (3m 51s) 0.1622 Bukoski / Polish ✓
100000 100% (4m 3s) 1.3180 Faucheux / French ✓

```

### 绘制结果

从`all_losses`绘制历史损失可显示网络的学习情况:

```py
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

plt.figure()
plt.plot(all_losses)

```



## 评估结果

为了查看网络在不同类别上的表现,我们将创建一个混淆矩阵,对每种实际语言(行)统计网络猜测了哪种语言(列)。 为了计算混淆矩阵,用`evaluate()`让一批样本通过网络,`evaluate()`与`train()`相同,只是去掉了反向传播步骤。

```py
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000

# Just return an output given a line
def evaluate(line_tensor):
    hidden = rnn.initHidden()

    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)

    return output

# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output = evaluate(line_tensor)
    guess, guess_i = categoryFromOutput(output)
    category_i = all_categories.index(category)
    confusion[category_i][guess_i] += 1

# Normalize by dividing every row by its sum
for i in range(n_categories):
    confusion[i] = confusion[i] / confusion[i].sum()

# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)

# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)

# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))

# sphinx_gallery_thumbnail_number = 2
plt.show()

```



您可以在主对角线之外找到一些亮点,它们显示了网络容易把哪些语言猜错,例如把朝鲜语猜成中文,把意大利语猜成西班牙语。 网络在希腊语上表现很好,而在英语上表现很差(可能是因为英语与其他语言重叠较多)。

### 在用户输入上运行

```py
def predict(input_line, n_predictions=3):
    print('\n> %s' % input_line)
    with torch.no_grad():
        output = evaluate(lineToTensor(input_line))

        # Get top N categories
        topv, topi = output.topk(n_predictions, 1, True)
        predictions = []

        for i in range(n_predictions):
            value = topv[0][i].item()
            category_index = topi[0][i].item()
            print('(%.2f) %s' % (value, all_categories[category_index]))
            predictions.append([value, all_categories[category_index]])

predict('Dovesky')
predict('Jackson')
predict('Satoshi')

```

出:

```py
> Dovesky
(-0.82) Russian
(-1.06) Czech
(-2.22) Polish

> Jackson
(-0.63) English
(-1.75) Scottish
(-1.75) Russian

> Satoshi
(-0.97) Japanese
(-1.50) Polish
(-2.13) Italian

```

实际 PyTorch 存储库中脚本的[最终版本](https://github.com/spro/practical-pytorch/tree/master/char-rnn-classification)将上述代码分成几个文件:

* `data.py`(加载文件)
* `model.py`(定义 RNN)
* `train.py`(进行训练)
* `predict.py`(使用命令行参数运行`predict()`)
* `server.py`(通过`bottle.py`将预测提供为 JSON API)

运行`train.py`训练并保存网络。

使用名称运行`predict.py`以查看预测:

```py
$ python predict.py Hazaki
(-0.42) Japanese
(-1.39) Polish
(-3.51) Czech

```

运行`server.py`并访问`http://localhost:5533/Yourname`以获取预测的 JSON 输出。

## 练习

* 尝试使用其他“行 -> 类别”形式的数据集,例如:
    * 任何单词 -> 语言
    * 名称 -> 性别
    * 角色名称 -> 作家
    * 页面标题 -> 博客或 subreddit
* 通过更大和/或形状更好的网络获得更好的结果
    * 添加更多线性层
    * 尝试`nn.LSTM`和`nn.GRU`层
    * 将多个这样的 RNN 组合成更高级别的网络

**脚本的总运行时间**:(4 分钟 15.239 秒)

[下载 Python 源码:`char_rnn_classification_tutorial.py`](../_downloads/ccb15f8365bdae22a0a019e57216d7c6/char_rnn_classification_tutorial.py)

[下载 Jupyter 笔记本:`char_rnn_classification_tutorial.ipynb`](../_downloads/977c14818c75427641ccb85ad21ed6dc/char_rnn_classification_tutorial.ipynb)

[由 Sphinx 画廊](https://sphinx-gallery.readthedocs.io)生成的画廊
pytorch/官方教程/29.md

# 从零开始的 NLP:使用字符级 RNN 生成名称

> 原文:<https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html>

**作者**: [Sean Robertson](https://github.com/spro/practical-pytorch)

这是我们关于“从零开始的 NLP”的三个教程中的第二个。 在第一个教程`/intermediate/char_rnn_classification_tutorial`中,我们用 RNN 把名称分类到其来源语言。 这次,我们将反过来,根据语言生成名称。

```py
> python sample.py Russian RUS
Rovakov
Uantov
Shavakov

> python sample.py German GER
Gerren
Ereng
Rosher

> python sample.py Spanish SPA
Salla
Parer
Allan

> python sample.py Chinese CHI
Chan
Hang
Iun

```

我们仍在手工构建带有几个线性层的小型 RNN。 最大的区别在于,我们不再是读入名称的所有字母后预测类别,而是输入类别并一次输出一个字母。 这种反复预测字符以生成文本的模型(也可以用单词或其他更高阶的结构来做)通常称为“语言模型”。

**推荐读物**:

我假设您至少已经安装了 PyTorch,并了解 Python 和张量:

* [安装说明](https://pytorch.org/)
* [使用 PyTorch 进行深度学习:60 分钟的突击](../beginner/deep_learning_60min_blitz.html)(从总体上入门 PyTorch)
* [使用示例学习 PyTorch](../beginner/pytorch_with_examples.html)
* [PyTorch(面向以前的 Torch 用户)](../beginner/former_torchies_tutorial.html)(如果您以前是 Lua Torch 用户)

了解 RNN 及其工作方式也将很有用:

* [《循环神经网络的不合理有效性》](https://karpathy.github.io/2015/05/21/rnn-effectiveness/)显示了许多现实生活中的例子
* [《了解 LSTM 网络》](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)特别是关于 LSTM 的,但一般来说也有关 RNN 的

我还建议先阅读上一个教程[《从零开始的 NLP:使用字符级 RNN 对名称进行分类》](char_rnn_classification_tutorial.html)。

## 准备数据

注意

从 <https://download.pytorch.org/tutorial/data.zip> 下载数据,并将其提取到当前目录。

有关此过程的更多详细信息,请参见上一教程。 简而言之,有一批纯文本文件`data/names/[Language].txt`,每行一个名称。 我们将行拆分成数组,将 Unicode 转换为 ASCII,最后得到一个字典`{language: [names ...]}`。

```py
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import os
import unicodedata
import string

all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1 # Plus EOS marker

def findFiles(path): return glob.glob(path)

# Turn a Unicode string to plain ASCII, thanks to https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

# Build the category_lines dictionary, a list of lines per category
category_lines = {}
all_categories = []
for filename in findFiles('data/names/*.txt'):
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    lines = readLines(filename)
    category_lines[category] = lines

n_categories = len(all_categories)

if n_categories == 0:
    raise RuntimeError('Data not found. Make sure that you downloaded data '
        'from https://download.pytorch.org/tutorial/data.zip and extract it to '
        'the current directory.')

print('# categories:', n_categories, all_categories)
print(unicodeToAscii("O'Néàl"))

```

出:

```py
# categories: 18 ['French', 'Czech', 'Dutch', 'Polish', 'Scottish', 'Chinese', 'English', 'Italian', 'Portuguese', 'Japanese', 'German', 'Russian', 'Korean', 'Arabic', 'Greek', 'Vietnamese', 'Spanish', 'Irish']
O'Neal

```

## 创建网络

该网络在[上一个教程](#Creating-the-Network)的 RNN 基础上进行了扩展,为类别张量增加了一个额外的参数,该张量与其他输入连接在一起。 类别张量与字母输入一样,是一个单热向量。

我们将输出解释为下一个字母的概率。 采样时,最可能的输出字母被用作下一个输入字母。

我添加了第二个线性层`o2o`(在合并隐藏状态和输出之后),以增强模型的表达能力。 还有一个丢弃层,它[以给定的概率](https://arxiv.org/abs/1207.0580)(此处为 0.1)将输入的部分元素随机归零,通常用于模糊输入以防止过拟合。 在这里,我们在网络末端使用它来故意增加一些随机性,从而提高采样的多样性。



```py
import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size

        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, category, input, hidden):
        input_combined = torch.cat((category, input, hidden), 1)
        hidden = self.i2h(input_combined)
        output = self.i2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.dropout(output)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

```

## 训练

### 准备训练

首先,编写辅助函数来获取随机的(类别,行)对:

```py
import random

# Random item from a list
def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

# Get a random category and random line from that category
def randomTrainingPair():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    return category, line

```

对于每个时间步(即训练单词中的每个字母),网络的输入为`(category, current letter, hidden state)`,输出为`(next letter, next hidden state)`。 因此,对于每个训练集,我们都需要类别、一组输入字母和一组输出/目标字母。

由于我们在每个时间步都根据当前字母预测下一个字母,因此字母对就是该行中相邻字母的组合,例如对于`"ABCD<EOS>"`,我们将创建`('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'EOS')`。
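
这种配对关系可以用一个小的纯 Python 函数来示意(`letter_pairs`是假设性的演示函数,并非教程代码的一部分):

```py
# 为一行文本生成 (当前字母, 下一个字母) 训练对,
# 最后一个字母与 EOS 标记配对
def letter_pairs(line, eos='<EOS>'):
    targets = list(line[1:]) + [eos]
    return list(zip(line, targets))

print(letter_pairs('ABCD'))
# [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', '<EOS>')]
```

下文的`inputTensor`和`targetTensor`做的正是这件事,只是分别以单热矩阵和索引张量的形式表示两列。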



类别张量是大小为`<1 x n_categories>`的[单热张量](https://en.wikipedia.org/wiki/One-hot)。 训练时,我们在每个时间步都将其馈送给网络。这是一种设计选择;它也可以作为初始隐藏状态的一部分,或采用其他策略。

```py
# One-hot vector for category
def categoryTensor(category):
    li = all_categories.index(category)
    tensor = torch.zeros(1, n_categories)
    tensor[0][li] = 1
    return tensor

# One-hot matrix of first to last letters (not including EOS) for input
def inputTensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li in range(len(line)):
        letter = line[li]
        tensor[li][0][all_letters.find(letter)] = 1
    return tensor

# LongTensor of second letter to end (EOS) for target
def targetTensor(line):
    letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))]
    letter_indexes.append(n_letters - 1) # EOS
    return torch.LongTensor(letter_indexes)

```

为了方便训练,我们编写一个`randomTrainingExample`函数,它获取随机的(类别,行)对,并将其转换为所需的(类别,输入,目标)张量。

```py
# Make category, input, and target tensors from a random category, line pair
def randomTrainingExample():
    category, line = randomTrainingPair()
    category_tensor = categoryTensor(category)
    input_line_tensor = inputTensor(line)
    target_line_tensor = targetTensor(line)
    return category_tensor, input_line_tensor, target_line_tensor

```

### 训练网络

与仅使用最后一个输出的分类任务不同,我们在每一步都进行预测,因此在每一步都计算损失。

autograd 的神奇之处在于,您只需在每一步将这些损失相加,并在最后调用`backward()`即可。

```py
criterion = nn.NLLLoss()

learning_rate = 0.0005

def train(category_tensor, input_line_tensor, target_line_tensor):
    target_line_tensor.unsqueeze_(-1)
    hidden = rnn.initHidden()

    rnn.zero_grad()

    loss = 0

    for i in range(input_line_tensor.size(0)):
        output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
        l = criterion(output, target_line_tensor[i])
        loss += l

    loss.backward()

    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)

    return output, loss.item() / input_line_tensor.size(0)

```

为了跟踪训练花费的时间,我添加了一个`timeSince(timestamp)`函数,它返回人类可读的字符串:

```py
import time
import math

def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

```

训练照常进行:多次调用`train`并等待几分钟,每`print_every`个示例打印当前时间和损失,并在`all_losses`中保存每`plot_every`个示例的平均损失,以供以后绘图。

```py
rnn = RNN(n_letters, 128, n_letters)

n_iters = 100000
print_every = 5000
plot_every = 500
all_losses = []
total_loss = 0 # Reset every plot_every iters

start = time.time()

for iter in range(1, n_iters + 1):
    output, loss = train(*randomTrainingExample())
    total_loss += loss

    if iter % print_every == 0:
        print('%s (%d %d%%) %.4f' % (timeSince(start), iter, iter / n_iters * 100, loss))

    if iter % plot_every == 0:
        all_losses.append(total_loss / plot_every)
        total_loss = 0

```

出:

```py
0m 26s (5000 5%) 3.2265
0m 51s (10000 10%) 3.0171
1m 16s (15000 15%) 2.1535
1m 41s (20000 20%) 2.0806
2m 7s (25000 25%) 2.3842
2m 32s (30000 30%) 2.5014
2m 57s (35000 35%) 2.2441
3m 22s (40000 40%) 2.2113
3m 47s (45000 45%) 2.1184
4m 13s (50000 50%) 1.3983
4m 38s (55000 55%) 2.5881
5m 3s (60000 60%) 1.8033
5m 29s (65000 65%) 2.4285
5m 54s (70000 70%) 2.4198
6m 20s (75000 75%) 2.9660
6m 45s (80000 80%) 1.9752
7m 11s (85000 85%) 3.7507
7m 36s (90000 90%) 2.2044
8m 2s (95000 95%) 2.8938
8m 27s (100000 100%) 2.2471

```

### 绘制损失图

绘制`all_losses`的历史损失可显示网络的学习情况:

```py
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

plt.figure()
plt.plot(all_losses)

```



## 网络采样

为了采样,我们给网络一个字母,询问下一个字母是什么,再将其作为下一个输入字母,如此重复,直到出现 EOS 标记。

* 为输入类别、起始字母和空的隐藏状态创建张量
* 用起始字母创建一个字符串`output_name`
* 在不超过最大输出长度的前提下,
    * 将当前字母输入网络
    * 从最高的输出中获取下一个字母,以及下一个隐藏状态
    * 如果该字母是`EOS`,就在此停止
    * 如果是普通字母,则添加到`output_name`并继续
* 返回最终的名称

注意

另一种策略是不必给定起始字母,而是在训练中包含一个“字符串开始”标记,让网络选择自己的起始字母。

```py
max_length = 20

# Sample from a category and starting letter
def sample(category, start_letter='A'):
    with torch.no_grad():  # no need to track history in sampling
        category_tensor = categoryTensor(category)
        input = inputTensor(start_letter)
        hidden = rnn.initHidden()

        output_name = start_letter

        for i in range(max_length):
            output, hidden = rnn(category_tensor, input[0], hidden)
            topv, topi = output.topk(1)
            topi = topi[0][0]
            if topi == n_letters - 1:
                break
            else:
                letter = all_letters[topi]
                output_name += letter
            input = inputTensor(letter)

        return output_name

# Get multiple samples from one category and multiple starting letters
def samples(category, start_letters='ABC'):
    for start_letter in start_letters:
        print(sample(category, start_letter))

samples('Russian', 'RUS')

samples('German', 'GER')

samples('Spanish', 'SPA')

samples('Chinese', 'CHI')

```

出:

```py
Rovanov
Uarinov
Santovov
Gangerten
Erer
Roure
Salla
Parera
Allan
Chin
Han
Iun

```

## 练习

* 尝试使用其他“类别 -> 行”形式的数据集,例如:
    * 虚构系列 -> 角色名称
    * 词性 -> 单词
    * 国家 -> 城市
* 使用“句子开头”标记,以便无需选择起始字母即可采样
* 通过更大和/或形状更好的网络获得更好的结果
    * 尝试`nn.LSTM`和`nn.GRU`层
    * 将多个这样的 RNN 组合成更高级别的网络

**脚本的总运行时间**:(8 分钟 27.431 秒)

[下载 Python 源码:`char_rnn_generation_tutorial.py`](../_downloads/8167177b6dd8ddf05bb9fe58744ac406/char_rnn_generation_tutorial.py)

[下载 Jupyter 笔记本:`char_rnn_generation_tutorial.ipynb`](../_downloads/a35c00bb5afae3962e1e7869c66872fa/char_rnn_generation_tutorial.ipynb)

[由 Sphinx 画廊](https://sphinx-gallery.readthedocs.io)生成的画廊
794
pytorch/官方教程/30.md
Normal file
@@ -0,0 +1,794 @@
|
||||
# NLP From Scratch: Translation with a Sequence to Sequence Network and Attention

> Original: <https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html>

**Author**: [Sean Robertson](https://github.com/spro/practical-pytorch)

This is the third and final tutorial on "NLP From Scratch", where we write our own classes and functions to preprocess the data for our NLP modeling tasks. We hope that after you complete this tutorial, you'll go on to the three tutorials immediately following it, which show how `torchtext` can handle much of this preprocessing for you.

In this project we will be teaching a neural network to translate from French to English.

```py
[KEY: > input, = target, < output]

> il est en train de peindre un tableau .
= he is painting a picture .
< he is painting a picture .

> pourquoi ne pas essayer ce vin delicieux ?
= why not try that delicious wine ?
< why not try that delicious wine ?

> elle n est pas poete mais romanciere .
= she is not a poet but a novelist .
< she not not a poet but a novelist .

> vous etes trop maigre .
= you re too skinny .
< you re all alone .
```

... to varying degrees of success.

This is made possible by the simple but powerful idea of the [sequence to sequence network](https://arxiv.org/abs/1409.3215), in which two recurrent neural networks work together to transform one sequence into another. An encoder network condenses an input sequence into a single vector, and a decoder network unfolds that vector into a new sequence.

![](img/b01274082109b1019682274a0d4ca4d8.png)

To improve upon this model we'll use an [attention mechanism](https://arxiv.org/abs/1409.0473), which lets the decoder learn to focus over a specific range of the input sequence.

**Recommended Reading**:

I assume you have at least installed PyTorch, know Python, and understand Tensors:

* [Installation instructions](https://pytorch.org/)
* [Deep Learning with PyTorch: A 60 Minute Blitz](../beginner/deep_learning_60min_blitz.html) to get started with PyTorch in general
* [Learning PyTorch with Examples](../beginner/pytorch_with_examples.html)
* [PyTorch for Former Torch Users](../beginner/former_torchies_tutorial.html) if you are a former Lua Torch user

It would also be useful to know about Sequence to Sequence networks and how they work; read the papers that introduced these topics:

* [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078)
* [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)
* [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473)
* [A Neural Conversational Model](https://arxiv.org/abs/1506.05869)

You will also find the previous tutorials on [NLP From Scratch: Classifying Names with a Character-Level RNN](char_rnn_classification_tutorial.html) and [NLP From Scratch: Generating Names with a Character-Level RNN](char_rnn_generation_tutorial.html) helpful, as the concepts there are very similar to the encoder and decoder models here.

**Requirements**

```py
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random

import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

## Loading data files

The data for this project is a set of many thousands of English to French translation pairs.

[This question on Open Data Stack Exchange](https://opendata.stackexchange.com/questions/3888/dataset-of-sentences-translated-into-many-languages) pointed me to the open translation site <https://tatoeba.org/>, which has downloads available at <https://tatoeba.org/eng/downloads>. Better yet, someone did the extra work of [splitting language pairs into individual text files](https://www.manythings.org/anki/).

The English to French pairs are too big to include in the repository, so download the data to `data/eng-fra.txt` before continuing. The file is a tab-separated list of translation pairs:

```py
I am cold.    J'ai froid.
```

Note

Download the data and extract it to the current directory.

Similar to the character encoding used in the character-level RNN tutorials, we will be representing each word in a language as a one-hot vector, i.e. a giant vector of zeros except for a single one at the index of the word. Compared to the dozens of characters that might exist in a language, there are many, many more words, so the encoding vector is much larger. We will cheat a bit, however, and trim the data to only use a few thousand words per language.
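As a quick aside (a minimal pure-Python sketch for illustration, not part of the tutorial's code), a one-hot vector for a given word index looks like this:

```python
def one_hot(index, size):
    # a vector of `size` zeros with a single 1 at position `index`
    vec = [0] * size
    vec[index] = 1
    return vec

# word at index 2 in a hypothetical 5-word vocabulary
print(one_hot(2, 5))  # [0, 0, 1, 0, 0]
```

With a real vocabulary of thousands of words, `size` is the vocabulary size, which is why these vectors get so large.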

![](img/a79d97c1b8bd6cf7fbbd0d97310ae8d4.png)

We'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we will use a helper class called `Lang`, which has word → index (`word2index`) and index → word (`index2word`) dictionaries, as well as a count of each word (`word2count`) to use when replacing rare words later.

```py
SOS_token = 0
EOS_token = 1

class Lang:
    def __init__(self, name):
        self.name = name
        self.word2index = {}
        self.word2count = {}
        self.index2word = {0: "SOS", 1: "EOS"}
        self.n_words = 2  # Count SOS and EOS

    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.n_words
            self.word2count[word] = 1
            self.index2word[self.n_words] = word
            self.n_words += 1
        else:
            self.word2count[word] += 1
```

The files are all in Unicode. To simplify, we turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.

```py
# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

# Lowercase, trim, and remove non-letter characters

def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    return s
```

To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English, I added the `reverse` flag to reverse the pairs.

```py
def readLangs(lang1, lang2, reverse=False):
    print("Reading lines...")

    # Read the file and split into lines
    lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\
        read().strip().split('\n')

    # Split every line into pairs and normalize
    pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]

    # Reverse pairs, make Lang instances
    if reverse:
        pairs = [list(reversed(p)) for p in pairs]
        input_lang = Lang(lang2)
        output_lang = Lang(lang1)
    else:
        input_lang = Lang(lang1)
        output_lang = Lang(lang2)

    return input_lang, output_lang, pairs
```

Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (including ending punctuation), and we filter to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes having been replaced earlier).

```py
MAX_LENGTH = 10

eng_prefixes = (
    "i am ", "i m ",
    "he is", "he s ",
    "she is", "she s ",
    "you are", "you re ",
    "we are", "we re ",
    "they are", "they re "
)

def filterPair(p):
    return len(p[0].split(' ')) < MAX_LENGTH and \
        len(p[1].split(' ')) < MAX_LENGTH and \
        p[1].startswith(eng_prefixes)

def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]
```

The full process for preparing the data is:

* Read the text file and split into lines, split lines into pairs
* Normalize text, filter by length and content
* Make word lists from sentences in pairs

```py
def prepareData(lang1, lang2, reverse=False):
    input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
    print("Read %s sentence pairs" % len(pairs))
    pairs = filterPairs(pairs)
    print("Trimmed to %s sentence pairs" % len(pairs))
    print("Counting words...")
    for pair in pairs:
        input_lang.addSentence(pair[0])
        output_lang.addSentence(pair[1])
    print("Counted words:")
    print(input_lang.name, input_lang.n_words)
    print(output_lang.name, output_lang.n_words)
    return input_lang, output_lang, pairs

input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
```

Out:

```py
Reading lines...
Read 135842 sentence pairs
Trimmed to 10599 sentence pairs
Counting words...
Counted words:
fra 4345
eng 2803
['il a l habitude des ordinateurs .', 'he is familiar with computers .']
```

## The Seq2Seq Model

A Recurrent Neural Network, or RNN, is a network that operates on a sequence and uses its own output as input for subsequent steps.

A [Sequence to Sequence network](https://arxiv.org/abs/1409.3215), or seq2seq network, or [Encoder Decoder network](https://arxiv.org/pdf/1406.1078v3.pdf), is a model consisting of two RNNs called the encoder and decoder. The encoder reads an input sequence and outputs a single vector, and the decoder reads that vector to produce an output sequence.

![](img/b01274082109b1019682274a0d4ca4d8.png)

Unlike sequence prediction with a single RNN, where every input corresponds to an output, the seq2seq model frees us from sequence length and order, which makes it ideal for translation between two languages.

Consider the sentence `Je ne suis pas le chat noir` → `I am not the black cat`. Most of the words in the input sentence have a direct translation in the output sentence, but they are in slightly different orders, e.g. `chat noir` and `black cat`. Because of the `ne/pas` construction, there is also one more word in the input sentence. It would be difficult to produce a correct translation directly from the sequence of input words.

With a seq2seq model the encoder creates a single vector which, in the ideal case, encodes the "meaning" of the input sequence into a single vector: a single point in some N-dimensional space of sentences.

### The Encoder

The encoder of a seq2seq network is an RNN that outputs some value for every word in the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.

![](img/7fa129004e942671707f8f2d4fb80a20.png)

```py
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size

        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output = embedded
        output, hidden = self.gru(output, hidden)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
```

### The Decoder

The decoder is another RNN that takes the encoder output vector and outputs a sequence of words to create the translation.

#### Simple Decoder

In the simplest seq2seq decoder we use only the last output of the encoder. This last output is sometimes called the *context vector* as it encodes context from the entire sequence. This context vector is used as the initial hidden state of the decoder.

At every step of decoding, the decoder is given an input token and a hidden state. The initial input token is the start-of-string `<SOS>` token, and the first hidden state is the context vector (the encoder's last hidden state).

![](img/34b376e0c7299810f7349ab99c2c5497.png)

```py
class DecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size

        self.embedding = nn.Embedding(output_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        output = self.embedding(input).view(1, 1, -1)
        output = F.relu(output)
        output, hidden = self.gru(output, hidden)
        output = self.softmax(self.out(output[0]))
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
```

I encourage you to train and observe the results of this model, but to save space we'll be going straight for the gold and introducing the attention mechanism.

#### Attention Decoder

If only the context vector is passed between the encoder and decoder, that single vector carries the burden of encoding the entire sentence.

Attention allows the decoder network to "focus" on a different part of the encoder's outputs for every step of the decoder's own outputs. First we calculate a set of *attention weights*. These will be multiplied by the encoder output vectors to create a weighted combination. The result (called `attn_applied` in the code) should contain information about that specific part of the input sequence, and thus help the decoder choose the right output words.

![](img/3313f4800c7d01049e2a2ef2079e5905.png)

Calculating the attention weights is done with another feed-forward layer `attn`, using the decoder's input and hidden state as inputs. Because there are sentences of all sizes in the training data, to actually create and train this layer we have to choose a maximum sentence length (input length, for encoder outputs) that it can apply to. Sentences of the maximum length will use all the attention weights, while shorter sentences will only use the first few.

![](img/32ec68a6e0d29efae32b0f50db877598.png)

```py
class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
        super(AttnDecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.dropout_p = dropout_p
        self.max_length = max_length

        self.embedding = nn.Embedding(self.output_size, self.hidden_size)
        self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
        self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
        self.dropout = nn.Dropout(self.dropout_p)
        self.gru = nn.GRU(self.hidden_size, self.hidden_size)
        self.out = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input, hidden, encoder_outputs):
        embedded = self.embedding(input).view(1, 1, -1)
        embedded = self.dropout(embedded)

        attn_weights = F.softmax(
            self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
        attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                                 encoder_outputs.unsqueeze(0))

        output = torch.cat((embedded[0], attn_applied[0]), 1)
        output = self.attn_combine(output).unsqueeze(0)

        output = F.relu(output)
        output, hidden = self.gru(output, hidden)

        output = F.log_softmax(self.out(output[0]), dim=1)
        return output, hidden, attn_weights

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)
```

Note

There are other forms of attention that work around the length limitation by using a relative position approach. Read about "local attention" in [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025).

## Training

### Preparing Training Data

To train, for each pair we will need an input tensor (indexes of the words in the input sentence) and a target tensor (indexes of the words in the target sentence). While creating these vectors we will append the `EOS` token to both sequences.

```py
def indexesFromSentence(lang, sentence):
    return [lang.word2index[word] for word in sentence.split(' ')]

def tensorFromSentence(lang, sentence):
    indexes = indexesFromSentence(lang, sentence)
    indexes.append(EOS_token)
    return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)

def tensorsFromPair(pair):
    input_tensor = tensorFromSentence(input_lang, pair[0])
    target_tensor = tensorFromSentence(output_lang, pair[1])
    return (input_tensor, target_tensor)
```

### Training the Model

To train, we run the input sentence through the encoder and keep track of every output and the latest hidden state. Then the decoder is given the `<SOS>` token as its first input, and the last hidden state of the encoder as its first hidden state.

"Teacher forcing" is the concept of using the real target outputs as each next input, instead of using the decoder's guess as the next input. Using teacher forcing causes the model to converge faster, but [when the trained network is exploited, it may exhibit instability](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.378.4095&rep=rep1&type=pdf).

You can observe outputs of teacher-forced networks that read with coherent grammar but wander far from the correct translation. Intuitively, the network has learned to represent the output grammar and can "pick up" the meaning once the teacher tells it the first few words, but it has not properly learned how to create the sentence from the translation in the first place.

Because of the freedom PyTorch's autograd gives us, we can randomly choose whether or not to use teacher forcing with a simple `if` statement. Turn `teacher_forcing_ratio` up to use more of it.

```py
teacher_forcing_ratio = 0.5

def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
    encoder_hidden = encoder.initHidden()

    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()

    input_length = input_tensor.size(0)
    target_length = target_tensor.size(0)

    encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)

    loss = 0

    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(
            input_tensor[ei], encoder_hidden)
        encoder_outputs[ei] = encoder_output[0, 0]

    decoder_input = torch.tensor([[SOS_token]], device=device)

    decoder_hidden = encoder_hidden

    use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False

    if use_teacher_forcing:
        # Teacher forcing: Feed the target as the next input
        for di in range(target_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            loss += criterion(decoder_output, target_tensor[di])
            decoder_input = target_tensor[di]  # Teacher forcing

    else:
        # Without teacher forcing: use its own predictions as the next input
        for di in range(target_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            topv, topi = decoder_output.topk(1)
            decoder_input = topi.squeeze().detach()  # detach from history as input

            loss += criterion(decoder_output, target_tensor[di])
            if decoder_input.item() == EOS_token:
                break

    loss.backward()

    encoder_optimizer.step()
    decoder_optimizer.step()

    return loss.item() / target_length
```

This is a helper function to print the elapsed time and estimated remaining time, given the current time and a progress percentage.

```py
import time
import math

def asMinutes(s):
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

def timeSince(since, percent):
    now = time.time()
    s = now - since
    es = s / (percent)
    rs = es - s
    return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
```

The whole training process looks like this:

* Start a timer
* Initialize optimizers and criterion
* Create the set of training pairs
* Start an empty losses array for plotting

Then we call `train` many times and occasionally print the progress (% of examples, time so far, estimated time) and average loss.

```py
def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01):
    start = time.time()
    plot_losses = []
    print_loss_total = 0  # Reset every print_every
    plot_loss_total = 0  # Reset every plot_every

    encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
    decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
    training_pairs = [tensorsFromPair(random.choice(pairs))
                      for i in range(n_iters)]
    criterion = nn.NLLLoss()

    for iter in range(1, n_iters + 1):
        training_pair = training_pairs[iter - 1]
        input_tensor = training_pair[0]
        target_tensor = training_pair[1]

        loss = train(input_tensor, target_tensor, encoder,
                     decoder, encoder_optimizer, decoder_optimizer, criterion)
        print_loss_total += loss
        plot_loss_total += loss

        if iter % print_every == 0:
            print_loss_avg = print_loss_total / print_every
            print_loss_total = 0
            print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters),
                                         iter, iter / n_iters * 100, print_loss_avg))

        if iter % plot_every == 0:
            plot_loss_avg = plot_loss_total / plot_every
            plot_losses.append(plot_loss_avg)
            plot_loss_total = 0

    showPlot(plot_losses)
```

### Plotting results

Plotting is done with matplotlib, using the array of loss values `plot_losses` saved while training.

```py
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import matplotlib.ticker as ticker
import numpy as np

def showPlot(points):
    plt.figure()
    fig, ax = plt.subplots()
    # this locator puts ticks at regular intervals
    loc = ticker.MultipleLocator(base=0.2)
    ax.yaxis.set_major_locator(loc)
    plt.plot(points)
```

## Evaluation

Evaluation is mostly the same as training, but there are no targets, so we simply feed the decoder's predictions back to itself for each step. Every time it predicts a word we add it to the output string, and if it predicts the `EOS` token we stop there. We also store the decoder's attention outputs for display later.

```py
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
    with torch.no_grad():
        input_tensor = tensorFromSentence(input_lang, sentence)
        input_length = input_tensor.size()[0]
        encoder_hidden = encoder.initHidden()

        encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)

        for ei in range(input_length):
            encoder_output, encoder_hidden = encoder(input_tensor[ei],
                                                     encoder_hidden)
            encoder_outputs[ei] += encoder_output[0, 0]

        decoder_input = torch.tensor([[SOS_token]], device=device)  # SOS

        decoder_hidden = encoder_hidden

        decoded_words = []
        decoder_attentions = torch.zeros(max_length, max_length)

        for di in range(max_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            decoder_attentions[di] = decoder_attention.data
            topv, topi = decoder_output.data.topk(1)
            if topi.item() == EOS_token:
                decoded_words.append('<EOS>')
                break
            else:
                decoded_words.append(output_lang.index2word[topi.item()])

            decoder_input = topi.squeeze().detach()

        return decoded_words, decoder_attentions[:di + 1]
```

We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements:

```py
def evaluateRandomly(encoder, decoder, n=10):
    for i in range(n):
        pair = random.choice(pairs)
        print('>', pair[0])
        print('=', pair[1])
        output_words, attentions = evaluate(encoder, decoder, pair[0])
        output_sentence = ' '.join(output_words)
        print('<', output_sentence)
        print('')
```

## Training and Evaluating

With all these helper functions in place (it looks like extra work, but it makes it easier to run multiple experiments), we can actually initialize a network and start training.

Remember that the input sentences were heavily filtered. For this small dataset we can use a relatively small network of 256 hidden nodes and a single GRU layer. After about 40 minutes on a MacBook CPU we'll get some reasonable results.

Note

If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized and run `trainIters` again.

```py
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size).to(device)
attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.1).to(device)

trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
```

* 
|
||||
* 
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
2m 6s (- 29m 28s) (5000 6%) 2.8538
|
||||
4m 7s (- 26m 49s) (10000 13%) 2.3035
|
||||
6m 10s (- 24m 40s) (15000 20%) 1.9812
|
||||
8m 13s (- 22m 37s) (20000 26%) 1.7083
|
||||
10m 15s (- 20m 31s) (25000 33%) 1.5199
|
||||
12m 17s (- 18m 26s) (30000 40%) 1.3580
|
||||
14m 18s (- 16m 20s) (35000 46%) 1.2002
|
||||
16m 18s (- 14m 16s) (40000 53%) 1.0832
|
||||
18m 21s (- 12m 14s) (45000 60%) 0.9719
|
||||
20m 22s (- 10m 11s) (50000 66%) 0.8879
|
||||
22m 23s (- 8m 8s) (55000 73%) 0.8130
|
||||
24m 25s (- 6m 6s) (60000 80%) 0.7509
|
||||
26m 27s (- 4m 4s) (65000 86%) 0.6524
|
||||
28m 27s (- 2m 1s) (70000 93%) 0.6007
|
||||
30m 30s (- 0m 0s) (75000 100%) 0.5699
|
||||
|
||||
```
|
||||
|
||||
```py
|
||||
evaluateRandomly(encoder1, attn_decoder1)
|
||||
|
||||
```
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
> nous sommes desolees .
|
||||
= we re sorry .
|
||||
< we re sorry . <EOS>
|
||||
|
||||
> tu plaisantes bien sur .
|
||||
= you re joking of course .
|
||||
< you re joking of course . <EOS>
|
||||
|
||||
> vous etes trop stupide pour vivre .
|
||||
= you re too stupid to live .
|
||||
< you re too stupid to live . <EOS>
|
||||
|
||||
> c est un scientifique de niveau international .
|
||||
= he s a world class scientist .
|
||||
< he is a successful person . <EOS>
|
||||
|
||||
> j agis pour mon pere .
|
||||
= i am acting for my father .
|
||||
< i m trying to my father . <EOS>
|
||||
|
||||
> ils courent maintenant .
|
||||
= they are running now .
|
||||
< they are running now . <EOS>
|
||||
|
||||
> je suis tres heureux d etre ici .
|
||||
= i m very happy to be here .
|
||||
< i m very happy to be here . <EOS>
|
||||
|
||||
> vous etes bonne .
|
||||
= you re good .
|
||||
< you re good . <EOS>
|
||||
|
||||
> il a peur de la mort .
|
||||
= he is afraid of death .
|
||||
< he is afraid of death . <EOS>
|
||||
|
||||
> je suis determine a devenir un scientifique .
|
||||
= i am determined to be a scientist .
|
||||
< i m ready to make a cold . <EOS>
|
||||
|
||||
```
|
||||
|
||||
### Visualizing Attention

A useful property of the attention mechanism is its highly interpretable outputs. Because it is used to weight specific encoder outputs of the input sequence, we can imagine looking where the network is focused most at each time step.

You could simply run `plt.matshow(attentions)` to see attention output displayed as a matrix, with the columns being input steps and the rows being output steps:

```py
output_words, attentions = evaluate(
    encoder1, attn_decoder1, "je suis trop froid .")
plt.matshow(attentions.numpy())
```

![](img/5412faceb18bc6fa2823be3ae1bdfd8d.png)

For a better viewing experience we will do the extra work of adding axes and labels:

```py
def showAttention(input_sentence, output_words, attentions):
    # Set up figure with colorbar
    fig = plt.figure()
    ax = fig.add_subplot(111)
    cax = ax.matshow(attentions.numpy(), cmap='bone')
    fig.colorbar(cax)

    # Set up axes
    ax.set_xticklabels([''] + input_sentence.split(' ') +
                       ['<EOS>'], rotation=90)
    ax.set_yticklabels([''] + output_words)

    # Show label at every tick
    ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))

    plt.show()

def evaluateAndShowAttention(input_sentence):
    output_words, attentions = evaluate(
        encoder1, attn_decoder1, input_sentence)
    print('input =', input_sentence)
    print('output =', ' '.join(output_words))
    showAttention(input_sentence, output_words, attentions)

evaluateAndShowAttention("elle a cinq ans de moins que moi .")

evaluateAndShowAttention("elle est trop petit .")

evaluateAndShowAttention("je ne crains pas de mourir .")

evaluateAndShowAttention("c est un jeune directeur plein de talent .")
```

* 
|
||||
* 
|
||||
* 
|
||||
* 
|
||||
|
||||
出:
|
||||
|
||||
```py
|
||||
input = elle a cinq ans de moins que moi .
|
||||
output = she s five years younger than i am . <EOS>
|
||||
input = elle est trop petit .
|
||||
output = she s too loud . <EOS>
|
||||
input = je ne crains pas de mourir .
|
||||
output = i m not scared to die . <EOS>
|
||||
input = c est un jeune directeur plein de talent .
|
||||
output = he s a talented young writer . <EOS>
|
||||
|
||||
```
|
||||
|
||||
## Exercises

* Try with a different dataset
    * Another language pair
    * Human → Machine (e.g. IOT commands)
    * Chat → Response
    * Question → Answer
* Replace the embeddings with pre-trained word embeddings such as word2vec or GloVe
* Try with more layers, more hidden units, and more sentences. Compare the training time and results.
* If you use a translation file where pairs have two of the same phrase (`I am test \t I am test`), you can use this as an autoencoder. Try this:
    * Train as an autoencoder
    * Save only the Encoder network
    * Train a new Decoder for translation from there

**Total running time of the script**: (30 minutes 37.929 seconds)

[Download Python source code: `seq2seq_translation_tutorial.py`](../_downloads/a96a2daac1918ec72f68233dfe3f2c47/seq2seq_translation_tutorial.py)

[Download Jupyter notebook: `seq2seq_translation_tutorial.ipynb`](../_downloads/a60617788061539b5449701ae76aee56/seq2seq_translation_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)

# Text Classification with `torchtext`

> Original: <https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html>

This tutorial shows how to use the text classification datasets in `torchtext`, including

```py
- AG_NEWS,
- SogouNews,
- DBpedia,
- YelpReviewPolarity,
- YelpReviewFull,
- YahooAnswers,
- AmazonReviewPolarity,
- AmazonReviewFull
```

This example shows how to train a supervised learning algorithm for classification using one of these `TextClassification` datasets.

## Load data with ngrams

A bag of ngrams feature is applied to capture some partial information about the local word order. In practice, bi-grams or tri-grams are applied as word groups to provide more benefit than only one word. An example:

```py
"load data with ngrams"
Bi-grams results: "load data", "data with", "with ngrams"
Tri-grams results: "load data with", "data with ngrams"
```
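
The example above can be reproduced with a short helper (a pure-Python sketch for illustration; when `ngrams` is set, `torchtext` builds these word groups for you):

```python
def make_ngrams(tokens, n):
    # all n-grams of a token list, joined back into strings
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "load data with ngrams".split()
print(make_ngrams(tokens, 2))  # ['load data', 'data with', 'with ngrams']
print(make_ngrams(tokens, 3))  # ['load data with', 'data with ngrams']
```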

The `TextClassification` datasets support the `ngrams` method. By setting `ngrams` to 2, the example text in the dataset will be a list of single words plus bi-gram strings.

```py
import torch
import torchtext
from torchtext.datasets import text_classification
NGRAMS = 2
import os
if not os.path.isdir('./.data'):
    os.mkdir('./.data')
train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](
    root='./.data', ngrams=NGRAMS, vocab=None)
BATCH_SIZE = 16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

## Define the Model

The model is composed of the [`EmbeddingBag`](https://pytorch.org/docs/stable/nn.html?highlight=embeddingbag#torch.nn.EmbeddingBag) layer and a linear layer (see the figure below). `nn.EmbeddingBag` computes the mean value of a "bag" of embeddings. The text entries here have different lengths, but `nn.EmbeddingBag` requires no padding because the text lengths are saved in offsets.

Additionally, since `nn.EmbeddingBag` accumulates the average across the embeddings on the fly, it can enhance the performance and memory efficiency of processing a sequence of tensors.
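
To make the offsets idea concrete, here is a minimal pure-Python sketch (the lengths are made up for illustration). If three texts of lengths 4, 2, and 3 are concatenated into one flat tensor, each text's starting index is the running sum of the lengths before it:

```python
lengths = [4, 2, 3]  # token counts of three concatenated texts

# each offset is the sum of all preceding lengths (a shifted cumulative sum)
offsets = [0]
for n in lengths[:-1]:
    offsets.append(offsets[-1] + n)

print(offsets)  # [0, 4, 6]
```

This is exactly the bookkeeping that `generate_batch()` does later with `torch.Tensor.cumsum`.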

![](img/30f766e7717c0e45a583a4f58ebc322a.png)

```py
import torch.nn as nn
import torch.nn.functional as F
class TextSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
```

## Initiate an instance

The `AG_NEWS` dataset has four labels, and therefore the number of classes is four.

```py
1 : World
2 : Sports
3 : Business
4 : Sci/Tec
```

The vocab size is equal to the length of the vocab (including the single words and ngrams). The number of classes is equal to the number of labels, which is four in the `AG_NEWS` case.

```py
VOCAB_SIZE = len(train_dataset.get_vocab())
EMBED_DIM = 32
NUM_CLASS = len(train_dataset.get_labels())
model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS).to(device)
```

## Functions used to generate batches

Since the text entries have different lengths, a custom function `generate_batch()` is used to generate data batches and offsets. The function is passed to `collate_fn` in `torch.utils.data.DataLoader`. The input to `collate_fn` is a list of tensors with the size of `batch_size`, and the `collate_fn` function packs them into a mini-batch. Pay attention here and make sure that `collate_fn` is declared as a top-level `def`. This ensures that the function is available in each worker.

The text entries in the original data batch input are packed into a list and concatenated as a single tensor, which is the input of `nn.EmbeddingBag`. The offsets is a tensor of delimiters representing the beginning index of each individual sequence in the text tensor. `Label` is a tensor saving the labels of the individual text entries.

```py
def generate_batch(batch):
    label = torch.tensor([entry[0] for entry in batch])
    text = [entry[1] for entry in batch]
    offsets = [0] + [len(entry) for entry in text]
    # torch.Tensor.cumsum returns the cumulative sum
    # of elements in the dimension dim.
    # torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0) -> tensor([1., 3., 6.])

    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text = torch.cat(text)
    return text, offsets, label
```

## Define functions to train the model and evaluate results

PyTorch users are recommended to use [`torch.utils.data.DataLoader`](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader), which makes parallel data loading easy ([a tutorial is here](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html)). We use `DataLoader` here to load the `AG_NEWS` dataset and send it to the model for training/validation.

```py
from torch.utils.data import DataLoader

def train_func(sub_train_):

    # Train the model
    train_loss = 0
    train_acc = 0
    data = DataLoader(sub_train_, batch_size=BATCH_SIZE, shuffle=True,
                      collate_fn=generate_batch)
    for i, (text, offsets, cls) in enumerate(data):
        optimizer.zero_grad()
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        output = model(text, offsets)
        loss = criterion(output, cls)
        train_loss += loss.item()
        loss.backward()
        optimizer.step()
        train_acc += (output.argmax(1) == cls).sum().item()

    # Adjust the learning rate
    scheduler.step()

    return train_loss / len(sub_train_), train_acc / len(sub_train_)

def test(data_):
    loss = 0
    acc = 0
    data = DataLoader(data_, batch_size=BATCH_SIZE, collate_fn=generate_batch)
    for text, offsets, cls in data:
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        with torch.no_grad():
            output = model(text, offsets)
            # accumulate the scalar batch loss into the running total
            # (using a separate name so the running total is not overwritten)
            batch_loss = criterion(output, cls)
            loss += batch_loss.item()
            acc += (output.argmax(1) == cls).sum().item()

    return loss / len(data_), acc / len(data_)
```

## 分割数据集并运行模型
|
||||
|
||||
由于原始的`AG_NEWS`没有有效的数据集,因此我们将训练数据集分为训练/有效集,其分割比率为 0.95(训练)和 0.05(有效)。 在这里,我们在 PyTorch 核心库中使用[`torch.utils.data.dataset.random_split`](https://pytorch.org/docs/stable/data.html?highlight=random_split#torch.utils.data.random_split)函数。
The [`CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss) criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in a single class. It is useful when training a classification problem with `C` classes. [`SGD`](https://pytorch.org/docs/stable/_modules/torch/optim/sgd.html) implements stochastic gradient descent as the optimizer. The initial learning rate is set to 4.0. [`StepLR`](https://pytorch.org/docs/master/_modules/torch/optim/lr_scheduler.html#StepLR) is used here to decay the learning rate across epochs.
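To make the LogSoftmax + NLLLoss combination concrete, here is a minimal pure-Python sketch (no `torch`) of the per-sample loss that `CrossEntropyLoss` computes from raw logits:

```python
import math

def cross_entropy(logits, target):
    # LogSoftmax: subtract log(sum(exp(x))), computed stably via the max trick.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_sum_exp for x in logits]
    # NLLLoss: negative log-probability of the target class.
    return -log_probs[target]

loss = cross_entropy([2.0, 0.5, -1.0], 0)  # small when the target logit dominates
```

Passing raw logits to a single combined criterion is numerically safer than applying a softmax yourself and then taking logs.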
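The learning-rate schedule itself is simple arithmetic; a sketch of the decay that `StepLR(optimizer, 1, gamma=0.9)` applies to the tutorial's initial rate of 4.0:

```python
def steplr(base_lr, gamma, step_size, epoch):
    # StepLR multiplies the base learning rate by gamma
    # once every step_size epochs.
    return base_lr * gamma ** (epoch // step_size)

# With lr=4.0, step_size=1, gamma=0.9, the rate decays roughly as
# 4.0, 3.6, 3.24, 2.916, 2.6244 over the five epochs.
lrs = [steplr(4.0, 0.9, 1, e) for e in range(5)]
```

A fairly aggressive initial rate with geometric decay is a common pairing for sparse `EmbeddingBag` models like this one.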
```py
import time
from torch.utils.data.dataset import random_split
N_EPOCHS = 5
min_valid_loss = float('inf')

criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=4.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)

train_len = int(len(train_dataset) * 0.95)
sub_train_, sub_valid_ = \
    random_split(train_dataset, [train_len, len(train_dataset) - train_len])

for epoch in range(N_EPOCHS):

    start_time = time.time()
    train_loss, train_acc = train_func(sub_train_)
    valid_loss, valid_acc = test(sub_valid_)

    secs = int(time.time() - start_time)
    mins = secs // 60
    secs = secs % 60

    print('Epoch: %d' % (epoch + 1), " | time in %d minutes, %d seconds" % (mins, secs))
    print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 100:.1f}%(train)')
    print(f'\tLoss: {valid_loss:.4f}(valid)\t|\tAcc: {valid_acc * 100:.1f}%(valid)')
```
Out:

```py
Epoch: 1 | time in 0 minutes, 11 seconds
Loss: 0.0262(train) | Acc: 84.7%(train)
Loss: 0.0002(valid) | Acc: 89.3%(valid)
Epoch: 2 | time in 0 minutes, 11 seconds
Loss: 0.0119(train) | Acc: 93.6%(train)
Loss: 0.0002(valid) | Acc: 89.6%(valid)
Epoch: 3 | time in 0 minutes, 11 seconds
Loss: 0.0069(train) | Acc: 96.3%(train)
Loss: 0.0000(valid) | Acc: 91.8%(valid)
Epoch: 4 | time in 0 minutes, 11 seconds
Loss: 0.0038(train) | Acc: 98.1%(train)
Loss: 0.0000(valid) | Acc: 91.5%(valid)
Epoch: 5 | time in 0 minutes, 11 seconds
Loss: 0.0022(train) | Acc: 99.0%(train)
Loss: 0.0000(valid) | Acc: 91.4%(valid)
```
Running the model on a GPU produces the following output:

```py
Epoch: 1 | time in 0 minutes, 11 seconds
Loss: 0.0263(train) | Acc: 84.5%(train)
Loss: 0.0001(valid) | Acc: 89.0%(valid)
Epoch: 2 | time in 0 minutes, 10 seconds
Loss: 0.0119(train) | Acc: 93.6%(train)
Loss: 0.0000(valid) | Acc: 89.6%(valid)
Epoch: 3 | time in 0 minutes, 9 seconds
Loss: 0.0069(train) | Acc: 96.4%(train)
Loss: 0.0000(valid) | Acc: 90.5%(valid)
Epoch: 4 | time in 0 minutes, 11 seconds
Loss: 0.0038(train) | Acc: 98.2%(train)
Loss: 0.0000(valid) | Acc: 90.4%(valid)
Epoch: 5 | time in 0 minutes, 11 seconds
Loss: 0.0022(train) | Acc: 99.0%(train)
Loss: 0.0000(valid) | Acc: 91.0%(valid)
```
## Evaluate the Model with the Test Dataset
```py
print('Checking the results of test dataset...')
test_loss, test_acc = test(test_dataset)
print(f'\tLoss: {test_loss:.4f}(test)\t|\tAcc: {test_acc * 100:.1f}%(test)')
```
Out:

```py
Checking the results of test dataset...
Loss: 0.0002(test) | Acc: 90.9%(test)
```

On a GPU, the test results are:

```py
Checking the results of test dataset...
Loss: 0.0237(test) | Acc: 90.5%(test)
```
## Test on a Random News Item

Use the best model so far and test it on a piece of golf news. The label information is [here](https://pytorch.org/text/datasets.html?highlight=ag_news#torchtext.datasets.AG_NEWS).
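The `ngrams_iterator` used below yields the original tokens followed by all higher-order n-grams joined by spaces; a simplified reimplementation, for illustration only:

```python
def ngrams_iterator_sketch(tokens, ngrams):
    # Yield each token, then every n-gram (n = 2..ngrams) joined by
    # single spaces, mirroring torchtext's ngrams_iterator.
    for tok in tokens:
        yield tok
    for n in range(2, ngrams + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

grams = list(ngrams_iterator_sketch(["jon", "rahm", "leads"], 2))
# → ['jon', 'rahm', 'leads', 'jon rahm', 'rahm leads']
```

Each yielded string is then looked up in the vocabulary, so bigrams like `"jon rahm"` get their own embedding rows alongside the unigrams.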
```py
from torchtext.data.utils import ngrams_iterator
from torchtext.data.utils import get_tokenizer

ag_news_label = {1 : "World",
                 2 : "Sports",
                 3 : "Business",
                 4 : "Sci/Tec"}

def predict(text, model, vocab, ngrams):
    tokenizer = get_tokenizer("basic_english")
    with torch.no_grad():
        text = torch.tensor([vocab[token]
                            for token in ngrams_iterator(tokenizer(text), ngrams)])
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1

ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season's worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday's first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he'd never played the \
front nine at TPC Southwind."

vocab = train_dataset.get_vocab()
model = model.to("cpu")

print("This is a %s news" % ag_news_label[predict(ex_text_str, model, vocab, 2)])
```
Out:

```py
This is a Sports news
```

[You can find the code examples shown in this note here](https://github.com/pytorch/text/tree/master/examples/text_classification).

**Total running time of the script**: (1 minutes 38.483 seconds)

[Download Python source code: `text_sentiment_ngrams_tutorial.py`](../_downloads/1824f32965271d21829e1739cc434729/text_sentiment_ngrams_tutorial.py)

[Download Jupyter notebook: `text_sentiment_ngrams_tutorial.ipynb`](../_downloads/27bd42079e7f46673b53e90153168529/text_sentiment_ngrams_tutorial.ipynb)

[Gallery generated by Sphinx-Gallery](https://sphinx-gallery.readthedocs.io)