diff --git a/docs/dl/CNN原理.md b/docs/dl/CNN原理.md
index ccb11305..7223b4aa 100644
--- a/docs/dl/CNN原理.md
+++ b/docs/dl/CNN原理.md
@@ -50,7 +50,7 @@
  As mentioned earlier, traditional neural networks are not well suited to the image domain. An image is made up of pixels, and each pixel has three channels for the RGB colors. If an image has shape (28, 28, 1), it is an image whose height and width are both 28 and whose channel is 1 (channel is also called depth; 1 here means grayscale). If we use a fully connected structure, i.e. every neuron is connected to every neuron in the adjacent layers, the input layer has `28 * 28 = 784` neurons; with 15 neurons in the hidden layer and 10 in the output layer, a quick count gives `784*15 + 15*10 + 15 + 10 = 11935` parameters (w and b). That is already a lot for such a tiny grayscale image, and the count blows up rapidly as images get larger, so from the standpoint of both computing resources and parameter tuning a traditional neural network is not recommended here. (Some readers were confused by this count, so briefly: an image is a matrix of pixels. A `28*28` matrix cannot be fed into the neurons directly, so we "flatten" it into a column vector of `28*28 = 784` values. Connecting this vector to the 15 hidden neurons gives `784*15 = 11760` weights w; connecting the 15 hidden neurons to the 10 output neurons gives `15*10 = 150` weights w; adding the 15 hidden-layer biases and the 10 output-layer biases yields 11935 parameters in total.)
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171031123650574-11330636.png)
+![](img/CNN原理/853467-20171031123650574-11330636.png)
                                    Figure 1: a three-layer neural network recognizing handwritten digits
@@ -62,7 +62,7 @@
  As noted above, a traditional three-layer network needs a huge number of parameters because every neuron connects to every neuron in the adjacent layer. But is that kind of connectivity really necessary? Full connectivity is not very friendly to image data, because images have "2D spatial structure", i.e. local features. When we look at a picture of a cat, seeing the cat's eyes or mouth is usually enough to tell it is a cat; we do not need to inspect every region before concluding "ah, so this is a cat". So if we can recognize a few characteristic local features of an image, we can determine its class. This is where the idea of convolution comes in. As an example, take a 4*4 image, design two convolution kernels, and see what the image becomes after applying them.
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171104142033154-1330878114.png)
+![](img/CNN原理/853467-20171104142033154-1330878114.png)
 Figure 2: result of applying two 2*2 kernels to a 4*4 image
@@ -532,7 +532,7 @@ class LayerHelper(object):
  After the 2*2 convolution in the previous layer, the original 4*4 image becomes a new 3*3 image. The main purpose of the pooling layer is to compress the image by downsampling, reducing parameters without hurting image quality. Simply put, suppose the pooling layer uses MaxPooling with size 2*2 and stride 1, taking the maximum value in each window; the image size then goes from 3*3 to 2*2: (3-2)+1 = 2. Applied to the example above, the transformation looks like this:
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171104142056685-2048616836.png)
+![](img/CNN原理/853467-20171104142056685-2048616836.png)
       Figure 3: Max Pooling result
@@ -553,7 +553,7 @@ class LayerHelper(object):
     So far our image has gone from 4*4, to 3*3 after the convolution layer, to 2*2 after the pooling layer. If we keep adding layers, won't the image keep shrinking? This is where "Zero Padding" comes in: it lets us keep the image size unchanged after each convolution or pooling step. For instance, if we add Zero Padding to the example above and use a 3*3 kernel, the transformed image has the same size as the original, as shown below:
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171031215017701-495180034.png)
+![](img/CNN原理/853467-20171031215017701-495180034.png)
  Figure 4: zero padding result
@@ -565,7 +565,7 @@ class LayerHelper(object):
At this point we have a complete "convolutional part". To stack more layers, one usually keeps stacking "Conv-MaxPooling" blocks, designing kernel sizes and kernel counts to extract more features and finally recognize different object classes. After Max Pooling, we "flatten" the data in a Flatten layer, feed the Flatten output into a fully connected layer, and use softmax to classify.
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171104142200763-1912037434.png)
+![](img/CNN原理/853467-20171104142200763-1912037434.png)
    Figure 5: the Flatten step
@@ -586,7 +586,7 @@ class LayerHelper(object):
  1. The kernel does not have to be square; rectangular kernels also work, though square is the usual choice. If you use a rectangular kernel, you must first make sure the layer's output shape is an integer, not a fraction. If your image is a square with side 28, the convolution layer's output satisfies [ (28 - kernel_size) / stride ] + 1, and this value must be an integer, otherwise it has no physical meaning: a feature map with side 3.6, say, makes no sense. The same goes for pooling layers. An FC layer's output shape is always an integer; its only requirement is that its input have a fixed length throughout training. If your image is not square, you can rescale all inputs to a common (non-square) size when preparing the data, and then choose a non-square kernel_size so that the convolution output is still an integer. In short, leaving aside how good the network structure is, this is essentially an arithmetic exercise: make every layer's output an integer. (A small sketch of this arithmetic follows.)
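To make the arithmetic above concrete, here is a tiny Python sketch (my own illustration, not part of the original article) that computes the fully connected parameter count and checks the integer-output rule for the conv and pooling examples used in this article:

```python
def fc_params(sizes):
    """Weights + biases of a fully connected net with the given layer sizes."""
    return sum(m * n + n for m, n in zip(sizes, sizes[1:]))

def conv_out(in_size, kernel, stride=1, padding=0):
    """Output side length of a conv/pooling layer; must come out an integer."""
    out = (in_size + 2 * padding - kernel) / stride + 1
    assert out == int(out), "no physical meaning: non-integer feature map"
    return int(out)

print(fc_params([784, 15, 10]))   # 11935 parameters for the 784-15-10 net
print(conv_out(4, 2))             # 4*4 image, 2*2 kernel      -> 3
print(conv_out(3, 2))             # 3*3 map, 2*2 max pooling   -> 2
print(conv_out(4, 3, padding=1))  # zero padding keeps 4*4     -> 4
```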
-  2. Determined by experience. Usually the convolution layers close to the input, e.g. the first layer, pick up common low-level features. In handwritten digit recognition, with 5 kernels in the first layer we typically find features such as "horizontal stroke", "vertical stroke" and "diagonal stroke", which we call basic features. After max pooling, the second convolution layer with 20 kernels can find somewhat more complex features such as "corner", "left half-circle" and "right half-circle". The deeper you go, the more kernels are used and the finer the label-specific features become, which makes classification easier. For example, to classify the digit "0": if all you saw were ![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171031231438107-1902818098.png), could you guess the digit? Only deeper in the network, once enough features are detected, say ![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171101085737623-1572944193.png), can we be sure the digit is "0".
+  2. Determined by experience. Usually the convolution layers close to the input, e.g. the first layer, pick up common low-level features. In handwritten digit recognition, with 5 kernels in the first layer we typically find features such as "horizontal stroke", "vertical stroke" and "diagonal stroke", which we call basic features. After max pooling, the second convolution layer with 20 kernels can find somewhat more complex features such as "corner", "left half-circle" and "right half-circle". The deeper you go, the more kernels are used and the finer the label-specific features become, which makes classification easier. For example, to classify the digit "0": if all you saw were ![](img/CNN原理/853467-20171031231438107-1902818098.png), could you guess the digit? Only deeper in the network, once enough features are detected, say ![](img/CNN原理/853467-20171101085737623-1572944193.png), can we be sure the digit is "0".
  3. There are stride_w and stride_h; the latter is the vertical (up-down) stride. If a single stride is given, then stride_h = stride_w = stride.
@@ -632,7 +632,7 @@ def convolutional_neural_network_org(img):
  The question I then considered was: now that we understand convolution kernels, does changing the kernel size affect my results? Does increasing the number of kernels improve accuracy? So I ran an experiment:
-![](http://data.apachecn.org/img/AiLearning/dl/CNN原理/853467-20171031232805748-157396975.png)
+![](img/CNN原理/853467-20171031232805748-157396975.png)
* First change: only the number of kernels in the first and second layers was modified, everything else unchanged. The result improved by 0.06%.
* Second change: keeping the 3*3 kernel size and only changing the number of kernels in the second layer, everything else unchanged, the result improved by 0.08% over the original parameters.
diff --git a/docs/dl/LSTM原理.md b/docs/dl/LSTM原理.md
index 1b254173..ca9293e1 100644
--- a/docs/dl/LSTM原理.md
+++ b/docs/dl/LSTM原理.md
@@ -4,35 +4,35 @@
**LSTM** (Long Short-Term Memory) networks are a kind of recurrent neural network **suited to processing and predicting important events separated by relatively long intervals and delays in a time series**. LSTM was proposed to address the "vanishing gradient" problem of the plain RNN architecture; it is a special kind of recurrent neural network. The classic example: when predicting "the clouds are in the (...)", the gap between the relevant information and the word to predict is small, and an RNN can use the preceding words to predict "sky". But to predict "I grew up in France ... I speak fluent (...)", the language model can infer that the next word is the name of a language, yet to know which language it needs "France" from much earlier in the text. In that case, because of the vanishing-gradient problem, an RNN cannot exploit information that far back. LSTM, however, is designed to avoid this long-term dependency problem, mainly thanks to its carefully designed "gate" structures (input, forget and output gates), which can remove or add information to the cell state, allowing LSTM to remember long-term information.
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180704173253439.jpg)  vs   ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180704173230785.jpg)
+![](img/LSTM原理/20180704173253439.jpg)  vs   ![](img/LSTM原理/20180704173230785.jpg)
A standard RNN has a chain of repeating neural-network modules, typically a single tanh layer doing the repeated learning (left figure above), while in an LSTM (right figure above) the repeating module contains four special structures. **The horizontal line running across the top is the cell state (cell); the yellow boxes are learned neural network layers; the pink circles are pointwise operations; the black arrows are vector transfers.** Overall, not only does h flow through time; the cell state c flows through time as well, and c carries the long-term memory.
-We mentioned above that LSTM's ability to remember long-term information lies in its "gate" design. A gate is a way to let information through selectively, consisting of a sigmoid neural-network layer and a pointwise multiplication, as shown below. As a refresher on the sigmoid function, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705153027598.jpg): sigmoid outputs a value between 0 and 1 and is commonly used in binary classification; an output near 0 means "let nothing through", and a value toward 1 means "let everything through".
+We mentioned above that LSTM's ability to remember long-term information lies in its "gate" design. A gate is a way to let information through selectively, consisting of a sigmoid neural-network layer and a pointwise multiplication, as shown below. As a refresher on the sigmoid function, ![](img/LSTM原理/20180705153027598.jpg): sigmoid outputs a value between 0 and 1 and is commonly used in binary classification; an output near 0 means "let nothing through", and a value toward 1 means "let everything through".
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705152515679.jpg)
+![](img/LSTM原理/20180705152515679.jpg)
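To make the "gate" idea concrete, here is a tiny numpy sketch (my own illustration, not from the article) of a sigmoid output acting as a gate: entries near 0 suppress the corresponding state values, entries near 1 pass them through almost unchanged:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([-6.0, 0.0, 6.0])   # pre-activation gate scores
gate = sigmoid(scores)                # ~[0.0025, 0.5, 0.9975]
state = np.array([10.0, 10.0, 10.0])  # some information to filter
print(gate * state)                   # ~[0.02, 5.0, 9.98]: forget vs. keep
```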
**In an LSTM, the first stage is the forget gate, which decides what information to discard from the cell state; the next stage is the input gate, which decides what new information to store in the cell state; the final stage is the output gate, which decides what to output.** Below we analyze the LSTM gate by gate, with each gate's sub-structure and mathematical expression.
-* Forget gate: the forget gate takes the previous time step's output ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705154943659.jpg) and this step's input sequence data ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705155022656.jpg), applies a sigmoid activation, and outputs ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/201807051551130.jpg). The output ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705155135748.jpg) takes values in [0,1] and represents the probability that the previous cell state is forgotten: 1 is "keep everything", 0 is "discard everything".
+* Forget gate: the forget gate takes the previous time step's output ![](img/LSTM原理/20180705154943659.jpg) and this step's input sequence data ![](img/LSTM原理/20180705155022656.jpg), applies a sigmoid activation, and outputs ![](img/LSTM原理/201807051551130.jpg). The output ![](img/LSTM原理/20180705155135748.jpg) takes values in [0,1] and represents the probability that the previous cell state is forgotten: 1 is "keep everything", 0 is "discard everything".
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705154117297.jpg)
+![](img/LSTM原理/20180705154117297.jpg)
-* Input gate: the input gate has two parts. The first uses a sigmoid activation and outputs ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705160829424.jpg); the second uses a tanh activation and outputs ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705161911316.jpg). **[My informal reading: ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705162106120.jpg) is what would be this step's output in a plain RNN; ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705162239540.jpg) takes values in [0,1] and expresses how much of the information in ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705162835994.jpg) is kept; ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705162518689.jpg) is the new information this step retains.]**
+* Input gate: the input gate has two parts. The first uses a sigmoid activation and outputs ![](img/LSTM原理/20180705160829424.jpg); the second uses a tanh activation and outputs ![](img/LSTM原理/20180705161911316.jpg). **[My informal reading: ![](img/LSTM原理/20180705162106120.jpg) is what would be this step's output in a plain RNN; ![](img/LSTM原理/20180705162239540.jpg) takes values in [0,1] and expresses how much of the information in ![](img/LSTM原理/20180705162835994.jpg) is kept; ![](img/LSTM原理/20180705162518689.jpg) is the new information this step retains.]**
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705154140100.jpg)
+![](img/LSTM原理/20180705154140100.jpg)
-So far, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705162951402.jpg) is the forget gate's output, controlling how much of the previous cell state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705163019968.jpg) is forgotten, and ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705163047274.jpg) is the product of the input gate's two outputs, expressing how much new information is kept. With these we can use the new information to update this step's cell state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705163146715.jpg).
+So far, ![](img/LSTM原理/20180705162951402.jpg) is the forget gate's output, controlling how much of the previous cell state ![](img/LSTM原理/20180705163019968.jpg) is forgotten, and ![](img/LSTM原理/20180705163047274.jpg) is the product of the input gate's two outputs, expressing how much new information is kept. With these we can use the new information to update this step's cell state ![](img/LSTM原理/20180705163146715.jpg).
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705154157781.jpg)
+![](img/LSTM原理/20180705154157781.jpg)
-* Output gate: the output gate controls how much of this step's cell state is filtered. First a sigmoid activation produces ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705163549770.jpg) with values in [0,1]; then the cell state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705164009353.jpg) is passed through a tanh activation and multiplied by ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705164029948.jpg), giving this step's output ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705164102617.jpg).
+* Output gate: the output gate controls how much of this step's cell state is filtered. First a sigmoid activation produces ![](img/LSTM原理/20180705163549770.jpg) with values in [0,1]; then the cell state ![](img/LSTM原理/20180705164009353.jpg) is passed through a tanh activation and multiplied by ![](img/LSTM原理/20180705164029948.jpg), giving this step's output ![](img/LSTM原理/20180705164102617.jpg).
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180705154210768.jpg)
+![](img/LSTM原理/20180705154210768.jpg)
With that, the LSTM structure is finally understood. There are many LSTM variants nowadays, but once you understand this parent structure, understanding the variants should not be much more trouble.
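Putting the three gates together, here is a minimal numpy sketch of a single LSTM step, assuming the usual formulation with weights applied to the concatenated [h_prev, x_t]; the weight and bias names are my own, not the article's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: forget gate, input gate, cell update, output gate."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate
    i = sigmoid(W["i"] @ z + b["i"])        # input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate new information
    c = f * c_prev + i * c_tilde            # update the cell state
    o = sigmoid(W["o"] @ z + b["o"])        # output gate
    h = o * np.tanh(c)                      # this step's output
    return h, c

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
W = {k: rng.normal(size=(n_h, n_h + n_x)) * 0.1 for k in "fico"}
b = {k: np.zeros(n_h) for k in "fico"}
h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.normal(size=(5, n_x)):         # run 5 time steps
    h, c = lstm_step(x, h, c, W, b)
print(h)
```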
@@ -41,21 +41,21 @@
A bidirectional RNN consists of two plain RNNs: a forward RNN that uses past information and a reversed RNN that uses future information, so that at time t we can use both the information from time t-1 and the information from time t+1. In general, because a bidirectional LSTM can exploit both past and future context, its final predictions are more accurate than a unidirectional LSTM's. The figure below shows the bidirectional LSTM structure.
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713200802779.jpg)
+![](img/LSTM原理/20180713200802779.jpg)
-* ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204707320.jpg) is the forward RNN, taking part in the forward pass; its input at time t is the sequence data at time t, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204850377.jpg), and the output at time t-1, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204838867.jpg)
-* ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204802532.jpg) is the reversed RNN, taking part in the backward pass; its input at time t is the sequence data at time t, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204852425.jpg), and the output at time t+1, ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204825347.jpg)
-* The final output at time t depends on ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204838867.jpg) and ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713204913638.jpg)
+* ![](img/LSTM原理/20180713204707320.jpg) is the forward RNN, taking part in the forward pass; its input at time t is the sequence data at time t, ![](img/LSTM原理/20180713204850377.jpg), and the output at time t-1, ![](img/LSTM原理/20180713204838867.jpg)
+* ![](img/LSTM原理/20180713204802532.jpg) is the reversed RNN, taking part in the backward pass; its input at time t is the sequence data at time t, ![](img/LSTM原理/20180713204852425.jpg), and the output at time t+1, ![](img/LSTM原理/20180713204825347.jpg)
+* The final output at time t depends on ![](img/LSTM原理/20180713204838867.jpg) and ![](img/LSTM原理/20180713204913638.jpg)
**GRU (Gated Recurrent Unit)** is the most popular LSTM variant, simpler than the LSTM model.
-![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713200829571.jpg)
+![](img/LSTM原理/20180713200829571.jpg)
-GRU has two gates, a reset gate ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713205653854.jpg) and an update gate ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713205710503.jpg). Both gates use sigmoid activations and take values in [0,1].
+GRU has two gates, a reset gate ![](img/LSTM原理/20180713205653854.jpg) and an update gate ![](img/LSTM原理/20180713205710503.jpg). Both gates use sigmoid activations and take values in [0,1].
-The candidate hidden state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713210203944.jpg) uses the reset gate ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713205653854.jpg) to control the inflow of information from time t-1; if the reset gate's output is 0, the previous hidden state's output ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713211322965.jpg) is discarded. In other words, **the reset gate decides how much of the past is forgotten, which helps capture short-term dependencies in time-series data**.
+The candidate hidden state ![](img/LSTM原理/20180713210203944.jpg) uses the reset gate ![](img/LSTM原理/20180713205653854.jpg) to control the inflow of information from time t-1; if the reset gate's output is 0, the previous hidden state's output ![](img/LSTM原理/20180713211322965.jpg) is discarded. In other words, **the reset gate decides how much of the past is forgotten, which helps capture short-term dependencies in time-series data**.
-The hidden state uses the update gate ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713205710503.jpg) to combine the previous hidden state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713210738834.jpg) and the candidate hidden state ![](http://data.apachecn.org/img/AiLearning/dl/LSTM原理/20180713210203944.jpg). The update gate controls how important the past hidden state is at the current step; **if the update gate stays close to 1, the hidden states from before time t are kept and passed all the way to time t; the update gate thus helps capture long-term dependencies in time-series data**.
+The hidden state uses the update gate ![](img/LSTM原理/20180713205710503.jpg) to combine the previous hidden state ![](img/LSTM原理/20180713210738834.jpg) and the candidate hidden state ![](img/LSTM原理/20180713210203944.jpg). The update gate controls how important the past hidden state is at the current step; **if the update gate stays close to 1, the hidden states from before time t are kept and passed all the way to time t; the update gate thus helps capture long-term dependencies in time-series data**.
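For comparison, a matching numpy sketch of one GRU step under the common formulation (again, the weight names are my own assumption, not the article's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, b):
    """One GRU step: reset gate, update gate, candidate state, blend."""
    z_in = np.concatenate([h_prev, x_t])
    r = sigmoid(W["r"] @ z_in + b["r"])   # reset gate: forget the past?
    u = sigmoid(W["u"] @ z_in + b["u"])   # update gate: keep the past?
    # candidate hidden state: the reset gate scales the previous state
    h_tilde = np.tanh(W["h"] @ np.concatenate([r * h_prev, x_t]) + b["h"])
    # update gate near 1 carries the old state along almost unchanged
    return u * h_prev + (1 - u) * h_tilde

rng = np.random.default_rng(1)
n_h, n_x = 4, 3
W = {k: rng.normal(size=(n_h, n_h + n_x)) * 0.1 for k in "ruh"}
b = {k: np.zeros(n_h) for k in "ruh"}
h = np.zeros(n_h)
for x in rng.normal(size=(5, n_x)):       # run 5 time steps
    h = gru_step(x, h, W, b)
print(h)
```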
diff --git a/docs/dl/RNN原理.md b/docs/dl/RNN原理.md
index 7fc142c7..4c187fd8 100644
--- a/docs/dl/RNN原理.md
+++ b/docs/dl/RNN原理.md
@@ -7,7 +7,7 @@
Recurrent neural networks have quite a few application scenarios; for now they can write papers, write programs and write poems. But (there is always a but) they cannot really be used in earnest yet: what they learn has no logic, so there is still a long way to go before they become truly useful.
This is the structure an ordinary neural network has:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171119130251741.jpg)
+![这里写图片描述](img/RNN原理/20171119130251741.jpg)
Since we already have artificial neural networks and convolutional neural networks, why do we still need recurrent neural networks?
The reason is simple: whether convolutional or plain artificial neural networks, their underlying assumption is that elements are independent of each other and that **inputs and outputs are independent too**, like cats and dogs.
@@ -16,18 +16,18 @@
## 2. The RNN network structure and how it works
Its network structure is as follows:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171129184524844.jpg)
+![这里写图片描述](img/RNN原理/20171129184524844.jpg)
Each circle can be seen as a unit, and every unit does exactly the same thing, so the network can be folded into the left half of the figure. In one sentence, an RNN is **a single unit structure used repeatedly**.
-RNN is a sequence-to-sequence model. Suppose ![-w88](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570321772488.jpg) is an input, "我是中国" ("I am Chin(a/ese)"); then ![-w54](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570322195709.jpg) should correspond to the two words "是" and "中国", and when predicting what the next word is most likely to be, ![-w31](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570322451341.jpg) should be "人" with high probability.
+RNN is a sequence-to-sequence model. Suppose ![-w88](img/RNN原理/15570321772488.jpg) is an input, "我是中国" ("I am Chin(a/ese)"); then ![-w54](img/RNN原理/15570322195709.jpg) should correspond to the two words "是" and "中国", and when predicting what the next word is most likely to be, ![-w31](img/RNN原理/15570322451341.jpg) should be "人" with high probability.
So we can make this definition:
-![-w416](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570322822857.jpg)
+![-w416](img/RNN原理/15570322822857.jpg)
The output at the current time is determined by the memory and the current input. It is like being a college senior: your knowledge consists of what you learned in senior year (the current input) combined with what you learned in and before junior year (the memory). An RNN is similar: what a neural network does best is integrate lots of content through a set of parameters and then learn those parameters, which gives the RNN its basic form:
-![-w200](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570322981095.jpg)
+![-w200](img/RNN原理/15570322981095.jpg)
You may wonder why an f() function is added. This function is the activation function of the neural network, but why add it? An example: if you learned excellent problem-solving methods in college, would you still use your middle-school methods? Obviously not. The RNN idea is the same: since it can remember, it should of course remember only the important information and forget the unimportant rest. And what is best at filtering information in a neural network? The activation function, of course. So an activation function is applied here as a nonlinear mapping to filter information; it might be tanh, or something else.
@@ -35,10 +35,10 @@
Suppose you are about to graduate and take the graduate entrance exam. Do you first memorize what you have learned and then take the exam, or do you just carry a few books into the exam room? Obviously the former; likewise, the RNN predicts while carrying the memory of the current moment. If you want to predict the probability of the next word after "我是中国", it is clear enough that softmax is the perfect tool to predict each word's probability; but prediction cannot be done with the memory matrix directly, so a weight matrix V is also needed, written as:
-![-w160](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570323546017.jpg)
+![-w160](img/RNN原理/15570323546017.jpg)
-where ![-w21](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570323768890.jpg) denotes the output at time t.
+where ![-w21](img/RNN原理/15570323768890.jpg) denotes the output at time t.
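A minimal numpy sketch of the step just defined, using the article's U, W, V names and assuming tanh as the activation f (the article notes f could be another function):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, s_prev, U, W, V):
    """One vanilla-RNN step: new memory from input + old memory, then output."""
    s_t = np.tanh(U @ x_t + W @ s_prev)   # memory s_t
    o_t = softmax(V @ s_t)                # probability of each word
    return s_t, o_t

rng = np.random.default_rng(2)
vocab, n_s = 6, 4
U = rng.normal(size=(n_s, vocab)) * 0.1
W = rng.normal(size=(n_s, n_s)) * 0.1
V = rng.normal(size=(vocab, n_s)) * 0.1
s = np.zeros(n_s)
for w in [0, 3, 2, 5]:                    # a toy word-id sequence
    x = np.eye(vocab)[w]                  # one-hot input
    s, o = rnn_step(x, s, U, W, V)
print(o, o.sum())                         # a distribution over the vocab
```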
Structural details of the RNN:
1. You can treat St as a hidden state that captures information from all previous time points. It is like taking the graduate entrance exam: at exam time you carry everything you managed to memorize.
@@ -50,10 +50,10 @@
## 3. RNN improvement 1: the bidirectional RNN
In some situations — say a TV series where a character first appears in episode 3 — if you are asked to predict that character's name within episode 3, you cannot do it from the first two episodes alone; you need episodes 4 and 5. That is the idea behind the bidirectional RNN. Here is a diagram of it:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/bi-directional-rnn.png)
-![-w347](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570324711246.jpg)
+![这里写图片描述](img/RNN原理/bi-directional-rnn.png)
+![-w347](img/RNN原理/15570324711246.jpg)
-What ![-w50](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570324937386.jpg) does here is a concatenation: if both are 1000x1-dimensional, concatenated together they are 1000x2-dimensional.
+What ![-w50](img/RNN原理/15570324937386.jpg) does here is a concatenation: if both are 1000x1-dimensional, concatenated together they are 1000x2-dimensional.
A bidirectional RNN needs twice the memory of a unidirectional one, because at any time point it must store the weight parameters for both directions, and at classification time it must feed in the outputs of both hidden layers at once.
@@ -61,18 +61,18 @@
A deep bidirectional RNN has several more hidden layers than a bidirectional RNN. The idea is that a lot of information cannot be memorized in one pass. If you are revising English vocabulary for the graduate entrance exam, you certainly do not memorize every required word after seeing it once; you carry the words memorized in earlier rounds and then focus on the ones you saw but do not know well, or have not memorized at all.
-The deep bidirectional RNN is based on exactly this idea. Its input has two parts: first, the information passed from the hidden layer at the previous time step, ![-w41](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570325271812.jpg), and second, the information passed from the previous hidden layer at the current time step, ![-w167](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570325458791.jpg), including both forward and backward directions.
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/deep-bi-directional-rnn.png)
+The deep bidirectional RNN is based on exactly this idea. Its input has two parts: first, the information passed from the hidden layer at the previous time step, ![-w41](img/RNN原理/15570325271812.jpg), and second, the information passed from the previous hidden layer at the current time step, ![-w167](img/RNN原理/15570325458791.jpg), including both forward and backward directions.
+![这里写图片描述](img/RNN原理/deep-bi-directional-rnn.png)
Expressed as formulas, it looks like this:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/deep-bi-directional-rnn-hidden-layer.png)
+![这里写图片描述](img/RNN原理/deep-bi-directional-rnn-hidden-layer.png)
Then the last layer is used for classification, with this classification formula:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/deep-bi-directional-rnn-classification.png)
+![这里写图片描述](img/RNN原理/deep-bi-directional-rnn-classification.png)
### 4.1 Pyramidal RNN
Other similar networks include the Pyramidal RNN:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171221152506461.jpg)
+![这里写图片描述](img/RNN原理/20171221152506461.jpg)
We now have a very long input sequence, and as you can see this is a bidirectional RNN. The figure above is an experiment by W. Chan of Google, originally aimed at speech recognition using a sequence-to-sequence model; sequence-to-sequence means a sequence goes in and a sequence comes out.
From the figure we see that the two outputs of the layer below serve as the input of the current layer. For very long sequences this makes each layer's sequence shorter than the one below, but the current layer's input f(x) grows correspondingly, so the two effects seem to cancel out and the amount of computation does not really improve.
@@ -84,55 +84,55 @@
As discussed earlier, to predict the output at time t we must first use the memory at the previous time (t-1) and the current input to obtain the memory at time t:
-![-w202](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570325921406.jpg)
+![-w202](img/RNN原理/15570325921406.jpg)
Then, using the memory at the current time, the softmax classifier outputs the probability of each word:
-![-w144](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570326059642.jpg)
+![-w144](img/RNN原理/15570326059642.jpg)
To find the model's best parameters U, W, V, we need to know how well the current parameters do, so we define a loss function, using the cross-entropy loss:
-![-w252](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570326336949.jpg)
+![-w252](img/RNN原理/15570326336949.jpg)
-where ![-w14](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570326853547.jpg)
- is the ground-truth answer at time t, a vector with a single 1 and all other entries 0, and ![-w19](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570326727679.jpg) is our predicted result, with the same dimension as ![-w14](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570327422935.jpg)
+where ![-w14](img/RNN原理/15570326853547.jpg)
+ is the ground-truth answer at time t, a vector with a single 1 and all other entries 0, and ![-w19](img/RNN原理/15570326727679.jpg) is our predicted result, with the same dimension as ![-w14](img/RNN原理/15570327422935.jpg)
but a probability vector holding the probability of each word. Since more than one time step affects the result, we add up the losses incurred at all time steps:
-![-w300](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570327570018.jpg)
+![-w300](img/RNN原理/15570327570018.jpg)
-![](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171130091040277.jpg)
+![](img/RNN原理/20171130091040277.jpg)
As the figure shows, every cell incurs a loss. With the loss function defined, the next step is the familiar one: use SGD on the loss to solve for the best parameters. In a CNN one uses the backpropagation (BP) algorithm, but an RNN needs BPTT. The essential difference between BPTT and BP is also the essential difference between CNNs and RNNs: a CNN has no memory, so its output depends only on its input; an RNN has memory, so its output depends not only on the current input but also on the current memory. This memory is sequence-to-sequence: the current time step is affected by the previous one, like changes in the stock market.
So when taking partial derivatives of the parameters, differentiating at the current time step necessarily involves the previous one. Let us look at an example:
-![](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171130091956686.jpg)
+![](img/RNN原理/20171130091956686.jpg)
-Suppose we differentiate E3 with respect to W: its loss first comes from the predicted output ![-w19](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570327881131.jpg)
-; the predicted output comes from the memory s3 at the current time; and the current memory comes from the current input and the memory accumulated up to the previous time: ![-w170](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570328132196.jpg)
+Suppose we differentiate E3 with respect to W: its loss first comes from the predicted output ![-w19](img/RNN原理/15570327881131.jpg)
+; the predicted output comes from the memory s3 at the current time; and the current memory comes from the current input and the memory accumulated up to the previous time: ![-w170](img/RNN原理/15570328132196.jpg)
So by the chain rule we have:
-![-w172](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570328255432.jpg)
+![-w172](img/RNN原理/15570328255432.jpg)
-But you will notice that ![-w145](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570328436386.jpg)
+But you will notice that ![-w145](img/RNN原理/15570328436386.jpg)
, i.e. the function inside s2 also contains W, so this chain rule has not reached the bottom yet, just as drawn in the figure; the real chain rule is this:
-![这里写图片描述](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171130094236429.jpg)
+![这里写图片描述](img/RNN原理/20171130094236429.jpg)
We must add the loss incurred at the current time step to the loss incurred at every earlier time step, because the weight parameter W is used at every single time step. Unlike earlier networks — in an ordinary artificial neural network the parameters are not shared — the recurrent neural network, like the CNN, uses a parameter-sharing mechanism to reduce the model's computation.
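As a tiny numeric illustration of the summed loss described in this section (toy numbers of my own): each time step t has a one-hot target and a predicted distribution, and the per-step cross-entropy losses are added up.

```python
import numpy as np

def total_cross_entropy(targets, preds):
    """Sum of per-time-step cross-entropy losses over a sequence."""
    return sum(-np.sum(y * np.log(yhat)) for y, yhat in zip(targets, preds))

targets = [np.array([0., 1., 0.]), np.array([1., 0., 0.])]   # one-hot answers
preds   = [np.array([.2, .7, .1]), np.array([.6, .3, .1])]   # predicted probs
print(total_cross_entropy(targets, preds))  # ~0.867 = -ln(0.7) - ln(0.6)
```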
## 6. Combining RNN and CNN: image captioning
In image processing the best results currently come from CNNs, while in natural language processing RNNs perform well. So can we combine them and use both together? That is image captioning ("look at a picture and tell its story"). The principle is fairly simple. A small example: suppose we have trained a CNN with some network structure, say this one
-![](http://data.apachecn.org/img/AiLearning/dl/RNN原理/20171129213601819.jpg)
+![](img/RNN原理/20171129213601819.jpg)
In the end we classify, right? So just before classifying, we have already obtained the image's features. Can we take those image features out, put them into the RNN's input, and let it learn from them?
The RNN used to be:
-![-w238](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570328705596.jpg)
+![-w238](img/RNN原理/15570328705596.jpg)
Adding the image features into it, we get:
-![-w266](http://data.apachecn.org/img/AiLearning/dl/RNN原理/15570328817086.jpg)
+![-w266](img/RNN原理/15570328817086.jpg)
where X is the image feature. With the CNN above, X would be a 4096X1 vector.
diff --git a/docs/dl/img/CNN原理/853467-20171031123650574-11330636.png b/docs/dl/img/CNN原理/853467-20171031123650574-11330636.png new file mode 100644 index 00000000..e0089883 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171031123650574-11330636.png differ diff --git a/docs/dl/img/CNN原理/853467-20171031215017701-495180034.png b/docs/dl/img/CNN原理/853467-20171031215017701-495180034.png new file mode 100644 index 00000000..33ffdd54 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171031215017701-495180034.png differ diff --git a/docs/dl/img/CNN原理/853467-20171031231438107-1902818098.png b/docs/dl/img/CNN原理/853467-20171031231438107-1902818098.png new file mode 100644 index 00000000..eab88a88 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171031231438107-1902818098.png differ diff --git a/docs/dl/img/CNN原理/853467-20171031232805748-157396975.png b/docs/dl/img/CNN原理/853467-20171031232805748-157396975.png new file mode 100644 index 00000000..7965e405 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171031232805748-157396975.png differ diff --git a/docs/dl/img/CNN原理/853467-20171101085737623-1572944193.png b/docs/dl/img/CNN原理/853467-20171101085737623-1572944193.png new file mode 100644 index 00000000..1ca09d01 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171101085737623-1572944193.png differ diff --git a/docs/dl/img/CNN原理/853467-20171104142033154-1330878114.png b/docs/dl/img/CNN原理/853467-20171104142033154-1330878114.png new file mode 100644 index 00000000..d9aead18 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171104142033154-1330878114.png differ diff --git a/docs/dl/img/CNN原理/853467-20171104142056685-2048616836.png b/docs/dl/img/CNN原理/853467-20171104142056685-2048616836.png new file mode 100644 index 00000000..5f7aa040 Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171104142056685-2048616836.png differ diff --git a/docs/dl/img/CNN原理/853467-20171104142200763-1912037434.png b/docs/dl/img/CNN原理/853467-20171104142200763-1912037434.png new file mode 100644 index 00000000..090db6bb Binary files /dev/null and b/docs/dl/img/CNN原理/853467-20171104142200763-1912037434.png differ diff --git a/docs/dl/img/LSTM原理/20180704173230785.jpg b/docs/dl/img/LSTM原理/20180704173230785.jpg new file mode 100644 index 00000000..0a1db25f Binary files /dev/null and b/docs/dl/img/LSTM原理/20180704173230785.jpg differ diff --git a/docs/dl/img/LSTM原理/20180704173253439.jpg b/docs/dl/img/LSTM原理/20180704173253439.jpg new file mode 100644 index 00000000..ece71431 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180704173253439.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705152515679.jpg b/docs/dl/img/LSTM原理/20180705152515679.jpg new file mode 100644 index 00000000..05e6737c Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705152515679.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705153027598.jpg b/docs/dl/img/LSTM原理/20180705153027598.jpg new file mode 100644 index 00000000..eb9cff60 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705153027598.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705154117297.jpg b/docs/dl/img/LSTM原理/20180705154117297.jpg new file mode 100644 index
00000000..b53f722f Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705154117297.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705154140100.jpg b/docs/dl/img/LSTM原理/20180705154140100.jpg new file mode 100644 index 00000000..f89c086b Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705154140100.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705154157781.jpg b/docs/dl/img/LSTM原理/20180705154157781.jpg new file mode 100644 index 00000000..db5085e9 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705154157781.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705154210768.jpg b/docs/dl/img/LSTM原理/20180705154210768.jpg new file mode 100644 index 00000000..56a1d6b0 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705154210768.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705154943659.jpg b/docs/dl/img/LSTM原理/20180705154943659.jpg new file mode 100644 index 00000000..7c729ae6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705154943659.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705155022656.jpg b/docs/dl/img/LSTM原理/20180705155022656.jpg new file mode 100644 index 00000000..fcde3bc6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705155022656.jpg differ diff --git a/docs/dl/img/LSTM原理/201807051551130.jpg b/docs/dl/img/LSTM原理/201807051551130.jpg new file mode 100644 index 00000000..db04d5bc Binary files /dev/null and b/docs/dl/img/LSTM原理/201807051551130.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705155135748.jpg b/docs/dl/img/LSTM原理/20180705155135748.jpg new file mode 100644 index 00000000..db04d5bc Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705155135748.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705160829424.jpg b/docs/dl/img/LSTM原理/20180705160829424.jpg new file mode 100644 index 00000000..db67ef59 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705160829424.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705161911316.jpg b/docs/dl/img/LSTM原理/20180705161911316.jpg new file mode 100644 index 00000000..d4c8dd30 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705161911316.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705162106120.jpg b/docs/dl/img/LSTM原理/20180705162106120.jpg new file mode 100644 index 00000000..d4c8dd30 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705162106120.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705162239540.jpg b/docs/dl/img/LSTM原理/20180705162239540.jpg new file mode 100644 index 00000000..db67ef59 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705162239540.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705162518689.jpg b/docs/dl/img/LSTM原理/20180705162518689.jpg new file mode 100644 index 00000000..d107cc29 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705162518689.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705162835994.jpg b/docs/dl/img/LSTM原理/20180705162835994.jpg new file mode 100644 index 00000000..d4c8dd30 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705162835994.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705162951402.jpg b/docs/dl/img/LSTM原理/20180705162951402.jpg new file mode 100644 index 00000000..db04d5bc Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705162951402.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705163019968.jpg b/docs/dl/img/LSTM原理/20180705163019968.jpg new file mode 100644 index 00000000..57e51a8c Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705163019968.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705163047274.jpg b/docs/dl/img/LSTM原理/20180705163047274.jpg new file mode 100644 index 00000000..d107cc29 Binary files 
/dev/null and b/docs/dl/img/LSTM原理/20180705163047274.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705163146715.jpg b/docs/dl/img/LSTM原理/20180705163146715.jpg new file mode 100644 index 00000000..25cf02d0 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705163146715.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705163549770.jpg b/docs/dl/img/LSTM原理/20180705163549770.jpg new file mode 100644 index 00000000..6fb49056 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705163549770.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705164009353.jpg b/docs/dl/img/LSTM原理/20180705164009353.jpg new file mode 100644 index 00000000..25cf02d0 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705164009353.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705164029948.jpg b/docs/dl/img/LSTM原理/20180705164029948.jpg new file mode 100644 index 00000000..6fb49056 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705164029948.jpg differ diff --git a/docs/dl/img/LSTM原理/20180705164102617.jpg b/docs/dl/img/LSTM原理/20180705164102617.jpg new file mode 100644 index 00000000..31b68219 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180705164102617.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713200802779.jpg b/docs/dl/img/LSTM原理/20180713200802779.jpg new file mode 100644 index 00000000..76b3a10f Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713200802779.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713200829571.jpg b/docs/dl/img/LSTM原理/20180713200829571.jpg new file mode 100644 index 00000000..f775961f Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713200829571.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204707320.jpg b/docs/dl/img/LSTM原理/20180713204707320.jpg new file mode 100644 index 00000000..575adfab Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204707320.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204802532.jpg b/docs/dl/img/LSTM原理/20180713204802532.jpg new file mode 100644 index 00000000..f18e4aed Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204802532.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204825347.jpg b/docs/dl/img/LSTM原理/20180713204825347.jpg new file mode 100644 index 00000000..30b46158 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204825347.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204838867.jpg b/docs/dl/img/LSTM原理/20180713204838867.jpg new file mode 100644 index 00000000..f398c783 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204838867.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204850377.jpg b/docs/dl/img/LSTM原理/20180713204850377.jpg new file mode 100644 index 00000000..fcde3bc6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204850377.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204852425.jpg b/docs/dl/img/LSTM原理/20180713204852425.jpg new file mode 100644 index 00000000..fcde3bc6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204852425.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713204913638.jpg b/docs/dl/img/LSTM原理/20180713204913638.jpg new file mode 100644 index 00000000..30b46158 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713204913638.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713205653854.jpg b/docs/dl/img/LSTM原理/20180713205653854.jpg new file mode 100644 index 00000000..740eea32 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713205653854.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713205710503.jpg b/docs/dl/img/LSTM原理/20180713205710503.jpg new file mode 100644 index 00000000..c8cfc488 Binary files /dev/null and 
b/docs/dl/img/LSTM原理/20180713205710503.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713210203944.jpg b/docs/dl/img/LSTM原理/20180713210203944.jpg new file mode 100644 index 00000000..68556777 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713210203944.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713210738834.jpg b/docs/dl/img/LSTM原理/20180713210738834.jpg new file mode 100644 index 00000000..7c729ae6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713210738834.jpg differ diff --git a/docs/dl/img/LSTM原理/20180713211322965.jpg b/docs/dl/img/LSTM原理/20180713211322965.jpg new file mode 100644 index 00000000..7c729ae6 Binary files /dev/null and b/docs/dl/img/LSTM原理/20180713211322965.jpg differ diff --git a/docs/dl/img/RNN原理/15570321772488.jpg b/docs/dl/img/RNN原理/15570321772488.jpg new file mode 100644 index 00000000..f1a364a3 Binary files /dev/null and b/docs/dl/img/RNN原理/15570321772488.jpg differ diff --git a/docs/dl/img/RNN原理/15570322195709.jpg b/docs/dl/img/RNN原理/15570322195709.jpg new file mode 100644 index 00000000..6b02b3b7 Binary files /dev/null and b/docs/dl/img/RNN原理/15570322195709.jpg differ diff --git a/docs/dl/img/RNN原理/15570322451341.jpg b/docs/dl/img/RNN原理/15570322451341.jpg new file mode 100644 index 00000000..cb07aa40 Binary files /dev/null and b/docs/dl/img/RNN原理/15570322451341.jpg differ diff --git a/docs/dl/img/RNN原理/15570322822857.jpg b/docs/dl/img/RNN原理/15570322822857.jpg new file mode 100644 index 00000000..aab623ef Binary files /dev/null and b/docs/dl/img/RNN原理/15570322822857.jpg differ diff --git a/docs/dl/img/RNN原理/15570322981095.jpg b/docs/dl/img/RNN原理/15570322981095.jpg new file mode 100644 index 00000000..b392a031 Binary files /dev/null and b/docs/dl/img/RNN原理/15570322981095.jpg differ diff --git a/docs/dl/img/RNN原理/15570323546017.jpg b/docs/dl/img/RNN原理/15570323546017.jpg new file mode 100644 index 00000000..7d612498 Binary files /dev/null and b/docs/dl/img/RNN原理/15570323546017.jpg differ diff --git a/docs/dl/img/RNN原理/15570323768890.jpg b/docs/dl/img/RNN原理/15570323768890.jpg new file mode 100644 index 00000000..0b457227 Binary files /dev/null and b/docs/dl/img/RNN原理/15570323768890.jpg differ diff --git a/docs/dl/img/RNN原理/15570324711246.jpg b/docs/dl/img/RNN原理/15570324711246.jpg new file mode 100644 index 00000000..96d8b8e6 Binary files /dev/null and b/docs/dl/img/RNN原理/15570324711246.jpg differ diff --git a/docs/dl/img/RNN原理/15570324937386.jpg b/docs/dl/img/RNN原理/15570324937386.jpg new file mode 100644 index 00000000..df1ff62a Binary files /dev/null and b/docs/dl/img/RNN原理/15570324937386.jpg differ diff --git a/docs/dl/img/RNN原理/15570325271812.jpg b/docs/dl/img/RNN原理/15570325271812.jpg new file mode 100644 index 00000000..5bd29b3b Binary files /dev/null and b/docs/dl/img/RNN原理/15570325271812.jpg differ diff --git a/docs/dl/img/RNN原理/15570325458791.jpg b/docs/dl/img/RNN原理/15570325458791.jpg new file mode 100644 index 00000000..09b69200 Binary files /dev/null and b/docs/dl/img/RNN原理/15570325458791.jpg differ diff --git a/docs/dl/img/RNN原理/15570325921406.jpg b/docs/dl/img/RNN原理/15570325921406.jpg new file mode 100644 index 00000000..4f32abc2 Binary files /dev/null and b/docs/dl/img/RNN原理/15570325921406.jpg differ diff --git a/docs/dl/img/RNN原理/15570326059642.jpg b/docs/dl/img/RNN原理/15570326059642.jpg new file mode 100644 index 00000000..6f26063f Binary files /dev/null and b/docs/dl/img/RNN原理/15570326059642.jpg differ diff --git a/docs/dl/img/RNN原理/15570326336949.jpg b/docs/dl/img/RNN原理/15570326336949.jpg new file mode 100644 index 00000000..36059c1b 
Binary files /dev/null and b/docs/dl/img/RNN原理/15570326336949.jpg differ diff --git a/docs/dl/img/RNN原理/15570326727679.jpg b/docs/dl/img/RNN原理/15570326727679.jpg new file mode 100644 index 00000000..73517e5a Binary files /dev/null and b/docs/dl/img/RNN原理/15570326727679.jpg differ diff --git a/docs/dl/img/RNN原理/15570326853547.jpg b/docs/dl/img/RNN原理/15570326853547.jpg new file mode 100644 index 00000000..d259f4fa Binary files /dev/null and b/docs/dl/img/RNN原理/15570326853547.jpg differ diff --git a/docs/dl/img/RNN原理/15570327422935.jpg b/docs/dl/img/RNN原理/15570327422935.jpg new file mode 100644 index 00000000..5b839572 Binary files /dev/null and b/docs/dl/img/RNN原理/15570327422935.jpg differ diff --git a/docs/dl/img/RNN原理/15570327570018.jpg b/docs/dl/img/RNN原理/15570327570018.jpg new file mode 100644 index 00000000..8649481a Binary files /dev/null and b/docs/dl/img/RNN原理/15570327570018.jpg differ diff --git a/docs/dl/img/RNN原理/15570327881131.jpg b/docs/dl/img/RNN原理/15570327881131.jpg new file mode 100644 index 00000000..a5154f27 Binary files /dev/null and b/docs/dl/img/RNN原理/15570327881131.jpg differ diff --git a/docs/dl/img/RNN原理/15570328132196.jpg b/docs/dl/img/RNN原理/15570328132196.jpg new file mode 100644 index 00000000..f500c53f Binary files /dev/null and b/docs/dl/img/RNN原理/15570328132196.jpg differ diff --git a/docs/dl/img/RNN原理/15570328255432.jpg b/docs/dl/img/RNN原理/15570328255432.jpg new file mode 100644 index 00000000..18921ccd Binary files /dev/null and b/docs/dl/img/RNN原理/15570328255432.jpg differ diff --git a/docs/dl/img/RNN原理/15570328436386.jpg b/docs/dl/img/RNN原理/15570328436386.jpg new file mode 100644 index 00000000..7213f9e3 Binary files /dev/null and b/docs/dl/img/RNN原理/15570328436386.jpg differ diff --git a/docs/dl/img/RNN原理/15570328705596.jpg b/docs/dl/img/RNN原理/15570328705596.jpg new file mode 100644 index 00000000..93396252 Binary files /dev/null and b/docs/dl/img/RNN原理/15570328705596.jpg differ diff --git a/docs/dl/img/RNN原理/15570328817086.jpg b/docs/dl/img/RNN原理/15570328817086.jpg new file mode 100644 index 00000000..dfc0c8c2 Binary files /dev/null and b/docs/dl/img/RNN原理/15570328817086.jpg differ diff --git a/docs/dl/img/RNN原理/20171119130251741.jpg b/docs/dl/img/RNN原理/20171119130251741.jpg new file mode 100644 index 00000000..90368cb5 Binary files /dev/null and b/docs/dl/img/RNN原理/20171119130251741.jpg differ diff --git a/docs/dl/img/RNN原理/20171129184524844.jpg b/docs/dl/img/RNN原理/20171129184524844.jpg new file mode 100644 index 00000000..4299f4ee Binary files /dev/null and b/docs/dl/img/RNN原理/20171129184524844.jpg differ diff --git a/docs/dl/img/RNN原理/20171129213601819.jpg b/docs/dl/img/RNN原理/20171129213601819.jpg new file mode 100644 index 00000000..0509c9cc Binary files /dev/null and b/docs/dl/img/RNN原理/20171129213601819.jpg differ diff --git a/docs/dl/img/RNN原理/20171130091040277.jpg b/docs/dl/img/RNN原理/20171130091040277.jpg new file mode 100644 index 00000000..dfb18f63 Binary files /dev/null and b/docs/dl/img/RNN原理/20171130091040277.jpg differ diff --git a/docs/dl/img/RNN原理/20171130091956686.jpg b/docs/dl/img/RNN原理/20171130091956686.jpg new file mode 100644 index 00000000..df303fba Binary files /dev/null and b/docs/dl/img/RNN原理/20171130091956686.jpg differ diff --git a/docs/dl/img/RNN原理/20171130094236429.jpg b/docs/dl/img/RNN原理/20171130094236429.jpg new file mode 100644 index 00000000..244dfd8a Binary files /dev/null and b/docs/dl/img/RNN原理/20171130094236429.jpg differ diff --git a/docs/dl/img/RNN原理/20171221152506461.jpg b/docs/dl/img/RNN原理/20171221152506461.jpg new 
file mode 100644 index 00000000..a157a5da Binary files /dev/null and b/docs/dl/img/RNN原理/20171221152506461.jpg differ diff --git a/docs/dl/img/RNN原理/bi-directional-rnn.png b/docs/dl/img/RNN原理/bi-directional-rnn.png new file mode 100644 index 00000000..c0f8e353 Binary files /dev/null and b/docs/dl/img/RNN原理/bi-directional-rnn.png differ diff --git a/docs/dl/img/RNN原理/deep-bi-directional-rnn-classification.png b/docs/dl/img/RNN原理/deep-bi-directional-rnn-classification.png new file mode 100644 index 00000000..4225be21 Binary files /dev/null and b/docs/dl/img/RNN原理/deep-bi-directional-rnn-classification.png differ diff --git a/docs/dl/img/RNN原理/deep-bi-directional-rnn-hidden-layer.png b/docs/dl/img/RNN原理/deep-bi-directional-rnn-hidden-layer.png new file mode 100644 index 00000000..82a6b7e4 Binary files /dev/null and b/docs/dl/img/RNN原理/deep-bi-directional-rnn-hidden-layer.png differ diff --git a/docs/dl/img/RNN原理/deep-bi-directional-rnn.png b/docs/dl/img/RNN原理/deep-bi-directional-rnn.png new file mode 100644 index 00000000..26632f8c Binary files /dev/null and b/docs/dl/img/RNN原理/deep-bi-directional-rnn.png differ diff --git a/docs/dl/img/反向传递/853467-20160630140644406-409859737.png b/docs/dl/img/反向传递/853467-20160630140644406-409859737.png new file mode 100644 index 00000000..d0d3d405 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630140644406-409859737.png differ diff --git a/docs/dl/img/反向传递/853467-20160630141449671-1058672778.png b/docs/dl/img/反向传递/853467-20160630141449671-1058672778.png new file mode 100644 index 00000000..64874848 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630141449671-1058672778.png differ diff --git a/docs/dl/img/反向传递/853467-20160630142019140-402363317.png b/docs/dl/img/反向传递/853467-20160630142019140-402363317.png new file mode 100644 index 00000000..38533470 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630142019140-402363317.png differ diff --git a/docs/dl/img/反向传递/853467-20160630142915359-294460310.png b/docs/dl/img/反向传递/853467-20160630142915359-294460310.png new file mode 100644 index 00000000..f526f66c Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630142915359-294460310.png differ diff --git a/docs/dl/img/反向传递/853467-20160630150115390-1035378028.png b/docs/dl/img/反向传递/853467-20160630150115390-1035378028.png new file mode 100644 index 00000000..4c2351ad Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630150115390-1035378028.png differ diff --git a/docs/dl/img/反向传递/853467-20160630150244265-1128303244.png b/docs/dl/img/反向传递/853467-20160630150244265-1128303244.png new file mode 100644 index 00000000..2a6505cb Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630150244265-1128303244.png differ diff --git a/docs/dl/img/反向传递/853467-20160630150517109-389457135.png b/docs/dl/img/反向传递/853467-20160630150517109-389457135.png new file mode 100644 index 00000000..a27172d3 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630150517109-389457135.png differ diff --git a/docs/dl/img/反向传递/853467-20160630150638390-1210364296.png b/docs/dl/img/反向传递/853467-20160630150638390-1210364296.png new file mode 100644 index 00000000..c72d8f62 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630150638390-1210364296.png differ diff --git a/docs/dl/img/反向传递/853467-20160630151201812-1014280864.png b/docs/dl/img/反向传递/853467-20160630151201812-1014280864.png new file mode 100644 index 00000000..1acfcc68 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630151201812-1014280864.png differ diff --git 
a/docs/dl/img/反向传递/853467-20160630151457593-1250510503.png b/docs/dl/img/反向传递/853467-20160630151457593-1250510503.png new file mode 100644 index 00000000..a5add0ff Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630151457593-1250510503.png differ diff --git a/docs/dl/img/反向传递/853467-20160630151508999-1967746600.png b/docs/dl/img/反向传递/853467-20160630151508999-1967746600.png new file mode 100644 index 00000000..3ae6b973 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630151508999-1967746600.png differ diff --git a/docs/dl/img/反向传递/853467-20160630151516093-1257166735.png b/docs/dl/img/反向传递/853467-20160630151516093-1257166735.png new file mode 100644 index 00000000..6373f15e Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630151516093-1257166735.png differ diff --git a/docs/dl/img/反向传递/853467-20160630151916796-1001638091.png b/docs/dl/img/反向传递/853467-20160630151916796-1001638091.png new file mode 100644 index 00000000..2cc221e1 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630151916796-1001638091.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152018906-1524325812.png b/docs/dl/img/反向传递/853467-20160630152018906-1524325812.png new file mode 100644 index 00000000..203a0ce6 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152018906-1524325812.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152206781-7976168.png b/docs/dl/img/反向传递/853467-20160630152206781-7976168.png new file mode 100644 index 00000000..d25b6528 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152206781-7976168.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152258437-1960839452.png b/docs/dl/img/反向传递/853467-20160630152258437-1960839452.png new file mode 100644 index 00000000..8b775631 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152258437-1960839452.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152417109-711077078.png b/docs/dl/img/反向传递/853467-20160630152417109-711077078.png new file mode 100644 index 00000000..d445fe13 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152417109-711077078.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152511937-1667481051.png b/docs/dl/img/反向传递/853467-20160630152511937-1667481051.png new file mode 100644 index 00000000..e491938d Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152511937-1667481051.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152625593-2083321635.png b/docs/dl/img/反向传递/853467-20160630152625593-2083321635.png new file mode 100644 index 00000000..7183ae14 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152625593-2083321635.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152658109-214239362.png b/docs/dl/img/反向传递/853467-20160630152658109-214239362.png new file mode 100644 index 00000000..522734c9 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152658109-214239362.png differ diff --git a/docs/dl/img/反向传递/853467-20160630152811640-888140287.png b/docs/dl/img/反向传递/853467-20160630152811640-888140287.png new file mode 100644 index 00000000..0805975d Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630152811640-888140287.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153103187-515052589.png b/docs/dl/img/反向传递/853467-20160630153103187-515052589.png new file mode 100644 index 00000000..732ab27b Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153103187-515052589.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153202812-585186566.png 
b/docs/dl/img/反向传递/853467-20160630153202812-585186566.png new file mode 100644 index 00000000..6a89e1e7 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153202812-585186566.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153251234-1144531293.png b/docs/dl/img/反向传递/853467-20160630153251234-1144531293.png new file mode 100644 index 00000000..2f3d4600 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153251234-1144531293.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153405296-436656179.png b/docs/dl/img/反向传递/853467-20160630153405296-436656179.png new file mode 100644 index 00000000..dbc43523 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153405296-436656179.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153514734-1544628024.png b/docs/dl/img/反向传递/853467-20160630153514734-1544628024.png new file mode 100644 index 00000000..0ec3df1c Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153514734-1544628024.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153614374-1624035276.png b/docs/dl/img/反向传递/853467-20160630153614374-1624035276.png new file mode 100644 index 00000000..9bc920e9 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153614374-1624035276.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153700093-743859667.png b/docs/dl/img/反向传递/853467-20160630153700093-743859667.png new file mode 100644 index 00000000..95ecc920 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153700093-743859667.png differ diff --git a/docs/dl/img/反向传递/853467-20160630153807624-1231975059.png b/docs/dl/img/反向传递/853467-20160630153807624-1231975059.png new file mode 100644 index 00000000..7a229bc1 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630153807624-1231975059.png differ diff --git a/docs/dl/img/反向传递/853467-20160630154317562-311369571.png b/docs/dl/img/反向传递/853467-20160630154317562-311369571.png new file mode 100644 index 00000000..792b2a2e Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630154317562-311369571.png differ diff --git a/docs/dl/img/反向传递/853467-20160630154712202-1906007645.png b/docs/dl/img/反向传递/853467-20160630154712202-1906007645.png new file mode 100644 index 00000000..9ef3de99 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630154712202-1906007645.png differ diff --git a/docs/dl/img/反向传递/853467-20160630154758531-934861299.png b/docs/dl/img/反向传递/853467-20160630154758531-934861299.png new file mode 100644 index 00000000..6658b3d0 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630154758531-934861299.png differ diff --git a/docs/dl/img/反向传递/853467-20160630154958296-1922097086.png b/docs/dl/img/反向传递/853467-20160630154958296-1922097086.png new file mode 100644 index 00000000..23826c03 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630154958296-1922097086.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155015546-1106216279.png b/docs/dl/img/反向传递/853467-20160630155015546-1106216279.png new file mode 100644 index 00000000..23953197 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155015546-1106216279.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155036406-964647962.png b/docs/dl/img/反向传递/853467-20160630155036406-964647962.png new file mode 100644 index 00000000..690a02ca Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155036406-964647962.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155117656-1905928379.png b/docs/dl/img/反向传递/853467-20160630155117656-1905928379.png new file mode 100644 index 
00000000..50f4e27a Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155117656-1905928379.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155158468-157032005.png b/docs/dl/img/反向传递/853467-20160630155158468-157032005.png new file mode 100644 index 00000000..4d6c5117 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155158468-157032005.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155310937-2103938446.png b/docs/dl/img/反向传递/853467-20160630155310937-2103938446.png new file mode 100644 index 00000000..66222a2a Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155310937-2103938446.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155435218-396769942.png b/docs/dl/img/反向传递/853467-20160630155435218-396769942.png new file mode 100644 index 00000000..cb8d760c Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155435218-396769942.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155555562-1422254830.png b/docs/dl/img/反向传递/853467-20160630155555562-1422254830.png new file mode 100644 index 00000000..7fa1b2be Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155555562-1422254830.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155628046-229505495.png b/docs/dl/img/反向传递/853467-20160630155628046-229505495.png new file mode 100644 index 00000000..e2f974fe Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155628046-229505495.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155706437-964861747.png b/docs/dl/img/反向传递/853467-20160630155706437-964861747.png new file mode 100644 index 00000000..4e2c81a0 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155706437-964861747.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155731421-239852713.png b/docs/dl/img/反向传递/853467-20160630155731421-239852713.png new file mode 100644 index 00000000..0b508a8b Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155731421-239852713.png differ diff --git a/docs/dl/img/反向传递/853467-20160630155827718-189457408.png b/docs/dl/img/反向传递/853467-20160630155827718-189457408.png new file mode 100644 index 00000000..7c51363b Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630155827718-189457408.png differ diff --git a/docs/dl/img/反向传递/853467-20160630160345281-679307550.png b/docs/dl/img/反向传递/853467-20160630160345281-679307550.png new file mode 100644 index 00000000..ca8ae241 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630160345281-679307550.png differ diff --git a/docs/dl/img/反向传递/853467-20160630160523437-1906004593.png b/docs/dl/img/反向传递/853467-20160630160523437-1906004593.png new file mode 100644 index 00000000..ddacb1f1 Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630160523437-1906004593.png differ diff --git a/docs/dl/img/反向传递/853467-20160630160603484-1471434475.png b/docs/dl/img/反向传递/853467-20160630160603484-1471434475.png new file mode 100644 index 00000000..4f3caaba Binary files /dev/null and b/docs/dl/img/反向传递/853467-20160630160603484-1471434475.png differ
diff --git a/docs/dl/反向传递.md b/docs/dl/反向传递.md
index bb4aa223..056f125f 100644
--- a/docs/dl/反向传递.md
+++ b/docs/dl/反向传递.md
@@ -8,7 +8,7 @@
  Speaking of neural networks, this figure should look familiar:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630140644406-409859737.png)
+![](img/反向传递/853467-20160630140644406-409859737.png)
  This is the basic makeup of a typical three-layer neural network: Layer L1 is the input layer, Layer L2 is the hidden layer, and Layer L3 is the output layer. We have a pile of input data {x1,x2,x3,...,xn} and a pile of output data {y1,y2,y3,...,yn}, and we want the hidden layer to perform some transformation so that, after feeding the data in, we get the output we expect. If you want the output to equal the original input, that is the common auto-encoder model (Auto-Encoder). Some may ask: why would input and output be the same? What is that good for? It is actually widely used, in image recognition, text classification and so on; I will write a separate article on Auto-Encoders, including some variants. If your output differs from the original input, then it is the familiar artificial neural network, which is equivalent to passing the original data through a mapping to obtain the output data we want — the topic of today.
@@ -16,13 +16,13 @@
  Suppose you have a network layer like this:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630141449671-1058672778.png)
+![](img/反向传递/853467-20160630141449671-1058672778.png)
  The first layer is the input layer with two neurons i1, i2 and a bias term b1; the second layer is the hidden layer with two neurons h1, h2 and a bias term b2; the third layer is the output o1, o2. The wi marked on each edge is the weight connecting the layers, and the activation function defaults to the sigmoid function.
  Now assign them initial values, as in the figure below:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630142019140-402363317.png)
+![](img/反向传递/853467-20160630142019140-402363317.png)
  Here the input data are i1=0.05, i2=0.10;
@@ -40,23 +40,23 @@
  Compute the weighted input sum of neuron h1:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630142915359-294460310.png)
+![](img/反向传递/853467-20160630142915359-294460310.png)
The output o1 of neuron h1 (the activation function used here is the sigmoid function):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630150115390-1035378028.png)
+![](img/反向传递/853467-20160630150115390-1035378028.png)
  Similarly, we can compute the output o2 of neuron h2:
-  ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630150244265-1128303244.png)
+  ![](img/反向传递/853467-20160630150244265-1128303244.png)
  2. Hidden layer ----> output layer:
  Compute the values of output neurons o1 and o2:
-  ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630150517109-389457135.png)
+  ![](img/反向传递/853467-20160630150517109-389457135.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630150638390-1210364296.png)
+![](img/反向传递/853467-20160630150638390-1210364296.png)
This completes the forward-propagation pass. We get the output [0.75136079, 0.772928465], still far from the target values [0.01, 0.99]; so now we backpropagate the error, update the weights, and recompute the output.
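Here is a short numpy sketch of this forward pass. The weight values are read off the figures of this classic worked example, so treat them as assumptions if your copy of the figure differs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# inputs, weights, biases and targets as they appear in the figures
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input  -> hidden
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output
b1, b2 = 0.35, 0.60
t1, t2 = 0.01, 0.99                       # target outputs

# input -> hidden
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)

# hidden -> output
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
print(out_o1, out_o2)   # ~0.751365, ~0.772928

# total squared error, as defined in the next step
E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(E_total)          # ~0.298371109
```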
@@ -66,125 +66,125 @@
Total error: (square error)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630151201812-1014280864.png)
+![](img/反向传递/853467-20160630151201812-1014280864.png)
But there are two outputs, so we compute the errors of o1 and o2 separately; the total error is their sum:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630151457593-1250510503.png)
+![](img/反向传递/853467-20160630151457593-1250510503.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630151508999-1967746600.png)
+![](img/反向传递/853467-20160630151508999-1967746600.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630151516093-1257166735.png)
+![](img/反向传递/853467-20160630151516093-1257166735.png)
2. Hidden-layer ----> output-layer weight update:
Taking the weight parameter w5 as an example: to know how much w5 contributes to the total error, take the partial derivative of the total error with respect to w5 (chain rule):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630151916796-1001638091.png)
+![](img/反向传递/853467-20160630151916796-1001638091.png)
The figure below gives a more intuitive view of how the error propagates backwards:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152018906-1524325812.png)
+![](img/反向传递/853467-20160630152018906-1524325812.png)
Now we compute the value of each factor in turn:
-Compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152206781-7976168.png):
+Compute ![](img/反向传递/853467-20160630152206781-7976168.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152258437-1960839452.png)
+![](img/反向传递/853467-20160630152258437-1960839452.png)
-Compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152417109-711077078.png):
+Compute ![](img/反向传递/853467-20160630152417109-711077078.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152511937-1667481051.png)
+![](img/反向传递/853467-20160630152511937-1667481051.png)
(This step is really just differentiating the sigmoid function; it is fairly simple and you can derive it yourself.)
-Compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152625593-2083321635.png):
+Compute ![](img/反向传递/853467-20160630152625593-2083321635.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152658109-214239362.png)
+![](img/反向传递/853467-20160630152658109-214239362.png)
Finally, multiply the three together:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630152811640-888140287.png)
+![](img/反向传递/853467-20160630152811640-888140287.png)
This gives the partial derivative of the total error E(total) with respect to w5.
Looking back at the formula above, we notice:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153103187-515052589.png)
+![](img/反向传递/853467-20160630153103187-515052589.png)
-For convenience, use ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153202812-585186566.png) to denote the output layer's error:
+For convenience, use ![](img/反向传递/853467-20160630153202812-585186566.png) to denote the output layer's error:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153251234-1144531293.png)
+![](img/反向传递/853467-20160630153251234-1144531293.png)
So the partial derivative of the total error E(total) with respect to w5 can be written as:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153405296-436656179.png)
+![](img/反向传递/853467-20160630153405296-436656179.png)
If the output layer's error is taken with a negative sign, it can also be written as:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153514734-1544628024.png)
+![](img/反向传递/853467-20160630153514734-1544628024.png)
Finally we update the value of w5:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153614374-1624035276.png)
+![](img/反向传递/853467-20160630153614374-1624035276.png)
-(where ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153700093-743859667.png) is the learning rate; here we take 0.5)
+(where ![](img/反向传递/853467-20160630153700093-743859667.png) is the learning rate; here we take 0.5)
Similarly, we can update w6, w7, w8:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630153807624-1231975059.png)
+![](img/反向传递/853467-20160630153807624-1231975059.png)
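And here is a sketch of the chain rule just applied to w5, continuing the forward-pass snippet above (same assumed weights from the figure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
t1 = 0.01

out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)

dE_dout_o1   = -(t1 - out_o1)          # dE_total / d out(o1)
dout_dnet_o1 = out_o1 * (1 - out_o1)   # sigmoid derivative
dnet_dw5     = out_h1                  # d net(o1) / d w5

dE_dw5 = dE_dout_o1 * dout_dnet_o1 * dnet_dw5
print(dE_dw5)                          # ~0.082167041

eta = 0.5                              # learning rate from the text
print(w5 - eta * dE_dw5)               # updated w5, ~0.35891648
```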
3. Hidden-layer ----> hidden-layer weight update:
 The method is much the same as above, but one spot changes. When computing the total error's derivative with respect to w5 above, the path was out(o1) ----> net(o1) ----> w5; for the weight update between the hidden layers it is out(h1) ----> net(h1) ----> w1, and out(h1) receives error passed in from both E(o1) and E(o2), so both must be computed here.
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630154317562-311369571.png)
+![](img/反向传递/853467-20160630154317562-311369571.png)
-Compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630154712202-1906007645.png):
+Compute ![](img/反向传递/853467-20160630154712202-1906007645.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630154758531-934861299.png)
+![](img/反向传递/853467-20160630154758531-934861299.png)
-First compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630154958296-1922097086.png):
+First compute ![](img/反向传递/853467-20160630154958296-1922097086.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155015546-1106216279.png)
+![](img/反向传递/853467-20160630155015546-1106216279.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155036406-964647962.png)
+![](img/反向传递/853467-20160630155036406-964647962.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155117656-1905928379.png)
+![](img/反向传递/853467-20160630155117656-1905928379.png)
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155158468-157032005.png)
+![](img/反向传递/853467-20160630155158468-157032005.png)
Similarly, compute:
-          ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155310937-2103938446.png)
+          ![](img/反向传递/853467-20160630155310937-2103938446.png)
Adding the two gives the total:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155435218-396769942.png)
+![](img/反向传递/853467-20160630155435218-396769942.png)
-Next compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155555562-1422254830.png):
+Next compute ![](img/反向传递/853467-20160630155555562-1422254830.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155628046-229505495.png)
+![](img/反向传递/853467-20160630155628046-229505495.png)
-Then compute ![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155731421-239852713.png):
+Then compute ![](img/反向传递/853467-20160630155731421-239852713.png):
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155706437-964861747.png)
+![](img/反向传递/853467-20160630155706437-964861747.png)
Finally, multiply the three together:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630155827718-189457408.png)
+![](img/反向传递/853467-20160630155827718-189457408.png)
 To simplify the formula, use sigma(h1) to denote the error of hidden-layer unit h1:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630160345281-679307550.png)
+![](img/反向传递/853467-20160630160345281-679307550.png)
Finally, update the weight w1:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630160523437-1906004593.png)
+![](img/反向传递/853467-20160630160523437-1906004593.png)
Similarly, we can update the weights w2, w3, w4:
-![](http://data.apachecn.org/img/AiLearning/dl/反向传递/853467-20160630160603484-1471434475.png)
+![](img/反向传递/853467-20160630160603484-1471434475.png)
  With that, the error backpropagation method is complete. We then recompute with the updated weights and keep iterating. In this example, after the first iteration the total error E(total) drops from 0.298371109 to 0.291027924. After 10000 iterations the total error is 0.000035085 and the output is [0.015912196, 0.984065734] (the target being [0.01, 0.99]), which shows the method works quite well.
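Finally, a compact sketch of the whole loop, iterating the forward pass and weight updates. It assumes all gradients within an iteration are computed from the old weights before any update is applied, and that the biases stay fixed; under those assumptions it reproduces errors close to the figures quoted above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

i = np.array([0.05, 0.10])                    # inputs
t = np.array([0.01, 0.99])                    # targets
W1 = np.array([[0.15, 0.20], [0.25, 0.30]])   # input  -> hidden (w1..w4)
W2 = np.array([[0.40, 0.45], [0.50, 0.55]])   # hidden -> output (w5..w8)
b1, b2, eta = 0.35, 0.60, 0.5

for step in range(10000):
    out_h = sigmoid(W1 @ i + b1)              # forward pass
    out_o = sigmoid(W2 @ out_h + b2)
    if step in (0, 1):
        print(step, 0.5 * np.sum((t - out_o) ** 2))  # 0.298371..., then ~0.291027...
    delta_o = (out_o - t) * out_o * (1 - out_o)        # output-layer error
    delta_h = (W2.T @ delta_o) * out_h * (1 - out_h)   # hidden-layer error
    W2 -= eta * np.outer(delta_o, out_h)               # update hidden->output
    W1 -= eta * np.outer(delta_h, i)                   # update input->hidden

print(sigmoid(W2 @ sigmoid(W1 @ i + b1) + b2))  # approaches [0.01, 0.99]
```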