mirror of
https://github.com/apachecn/ailearning.git
synced 2026-04-24 02:23:45 +08:00
2020-10-19 21:48:57
```py
import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt
```
Here we build two neural networks: one without dropout and one with it. The network without dropout overfits easily, so we name it `net_overfitting`; the other is `net_dropped`. In `torch.nn.Dropout(0.5)`, the 0.5 means that a random 50% of the neurons are switched off/dropped.
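The mechanics of `Dropout(0.5)` can be sketched without PyTorch. This is a minimal illustration (the `dropout` helper is made up for this sketch, not a library API) of *inverted* dropout, the scheme `torch.nn.Dropout` uses: during training each value is zeroed with probability `p` and the survivors are scaled by `1/(1-p)` so the expected activation stays the same; at evaluation time it is a no-op.

```python
import random

def dropout(values, p=0.5, training=True, rng=None):
    """Inverted dropout sketch: zero each value with probability p during
    training and scale survivors by 1/(1-p); do nothing at eval time."""
    if not training or p == 0.0:
        return list(values)
    rng = rng or random.Random()
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in values]

acts = [1.0, 2.0, 3.0, 4.0]
print(dropout(acts, p=0.5, training=False))  # eval mode -> [1.0, 2.0, 3.0, 4.0]
print(dropout(acts, p=0.5, rng=random.Random(0)))  # training: some zeros, rest doubled
```

Because survivors are rescaled during training, no compensation is needed at test time, which is why switching the network to eval mode simply disables the layer.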
```py
net_overfitting = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.ReLU(),
    ...
)

net_dropped = torch.nn.Sequential(
    ...
)
```
During training, the two networks are trained separately, but under identical conditions.
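The "separate but identical" training pattern can be sketched in plain Python. This toy example (the one-parameter linear model `y = w * x`, the data, and the learning rate are all made up for illustration, not taken from the tutorial) trains two models independently on the same data with the same loss and update rule, mirroring the two-optimizer setup below:

```python
def sgd_step(w, xs, ys, lr=0.1):
    """One gradient-descent step on MSE for the toy model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    return w - lr * grad

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0]  # data follows y = 2x
w_a, w_b = 0.0, 5.0          # two separate "networks" (different init)
for t in range(100):         # identical loop: same data, loss, lr, steps
    w_a = sgd_step(w_a, xs, ys)
    w_b = sgd_step(w_b, xs, ys)
```

Both parameters converge to the same solution (w ≈ 2) because everything except the model being updated is shared, which is exactly how a fair dropout-vs-no-dropout comparison is set up.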
```py
optimizer_ofit = torch.optim.Adam(net_overfitting.parameters(), lr=0.01)
optimizer_drop = torch.optim.Adam(net_dropped.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()

for t in range(500):
    ...
```
```py
...
optimizer_ofit.step()
```