Neural Networks: Learning (Part 2)

Unrolling Parameters

In Octave, to use a function such as fminunc to find the weights that minimize the cost function, we need to unroll the parameter matrices into one long vector, and reshape that vector back into matrices when the algorithms need them.

For example:

  1. Create the Θ(1), Θ(2), Θ(3) matrices
octave:1> Theta1 = ones(10, 11);
octave:2> Theta2 = ones(10, 11);
octave:3> Theta3 = ones(1, 11);
octave:4> whos
Variables in the current scope:

   Attr Name        Size                     Bytes  Class
   ==== ====        ====                     =====  =====
        Theta1     10x11                       880  double
        Theta2     10x11                       880  double
        Theta3      1x11                        88  double

Total is 231 elements using 1848 bytes
  2. Unroll the parameter matrices
octave:5> thetaVec = [Theta1(:); Theta2(:); Theta3(:)];
octave:6> whos
Variables in the current scope:

   Attr Name          Size                     Bytes  Class
   ==== ====          ====                     =====  =====
        Theta1       10x11                       880  double
        Theta2       10x11                       880  double
        Theta3        1x11                        88  double
        thetaVec    231x1                       1848  double

Total is 462 elements using 3696 bytes
  3. When computing, reshape back into the Θ(1), Θ(2), Θ(3) matrices
octave:7> Theta1_r = reshape(thetaVec(1:110), 10, 11);
octave:8> Theta2_r = reshape(thetaVec(111:220), 10, 11);
octave:9> Theta3_r = reshape(thetaVec(221:231), 1, 11);
octave:10> whos
Variables in the current scope:

   Attr Name          Size                     Bytes  Class
   ==== ====          ====                     =====  =====
        Theta1       10x11                       880  double
        Theta1_r     10x11                       880  double
        Theta2       10x11                       880  double
        Theta2_r     10x11                       880  double
        Theta3        1x11                        88  double
        Theta3_r      1x11                        88  double
        thetaVec    231x1                       1848  double

Total is 693 elements using 5544 bytes
  4. Use forward propagation and backpropagation to compute the cost function and its gradient, as sketched below
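A minimal sketch of what such a cost function can look like, assuming the 10x11 / 10x11 / 1x11 dimensions from the session above; computeCostAndGradients is a hypothetical helper standing in for the actual forward/back propagation code:

function [jVal, gradientVec] = costFunction(thetaVec)
  % reshape the unrolled vector back into weight matrices
  Theta1 = reshape(thetaVec(1:110),   10, 11);
  Theta2 = reshape(thetaVec(111:220), 10, 11);
  Theta3 = reshape(thetaVec(221:231), 1, 11);

  % forward and back propagation (hypothetical helper, not shown)
  [jVal, D1, D2, D3] = computeCostAndGradients(Theta1, Theta2, Theta3);

  % unroll the gradient matrices so the optimizer sees one long vector
  gradientVec = [D1(:); D2(:); D3(:)];
end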
Supplementary Notes
Implementation Note: Unrolling Parameters

With neural networks, we are working with sets of matrices: the weight matrices Θ(1), Θ(2), Θ(3), … and the gradient matrices D(1), D(2), D(3), …

In order to use optimizing functions such as "fminunc()", we will want to "unroll" all the elements and put them into one long vector:

thetaVector = [ Theta1(:); Theta2(:); Theta3(:) ]
deltaVector = [ D1(:); D2(:); D3(:) ]

If the dimensions of Theta1 are 10x11, Theta2 is 10x11, and Theta3 is 1x11, then we can get back our original matrices from the "unrolled" versions as follows:

Theta1 = reshape(thetaVector(1:110),10,11)
Theta2 = reshape(thetaVector(111:220),10,11)
Theta3 = reshape(thetaVector(221:231),1,11)

To summarize: unroll the initial parameter matrices into a single vector initialTheta, pass it to an optimizer such as fminunc together with a cost function that reshapes its input back into matrices, runs forward and back propagation to obtain J(Θ) and the gradient matrices D(l), and returns the unrolled gradient vector.
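A sketch of that optimizer call, reusing the costFunction wrapper sketched earlier (variable names and option values here are illustrative, not prescriptive):

% unroll the initial weight matrices into one long vector
initialTheta = [Theta1(:); Theta2(:); Theta3(:)];

% tell fminunc that costFunction also returns the gradient
options = optimset('GradObj', 'on', 'MaxIter', 100);
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options);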

Gradient Checking

When we use gradient descent on a complex model such as a neural network, there can be subtle, hard-to-spot bugs: the cost function appears to decrease steadily, yet the final result may not be optimal.

To avoid such problems, we introduce numerical gradient checking. Recall from the introduction of gradient descent that the partial derivative of the cost function J(θ) with respect to θj is simply the slope of the tangent line at that point. We can therefore approximate that slope with a two-sided difference:

∂J(θ)/∂θ ≈ (J(θ + ε) − J(θ − ε)) / (2ε)

where ε is sufficiently small, typically around 10⁻⁴. We then compare this approximation with the gradient obtained from backpropagation.

For a parameter vector θ, each component θj is checked the same way:

∂J(θ)/∂θj ≈ (J(θ1, …, θj + ε, …, θn) − J(θ1, …, θj − ε, …, θn)) / (2ε)

In Octave this amounts to a short loop over the components of θ; see the gradApprox snippet in the supplementary notes below.

Supplementary Notes
Gradient Checking

Gradient checking will assure that our backpropagation works as intended. We can approximate the derivative of our cost function with:

∂J(Θ)/∂Θ ≈ (J(Θ + ϵ) − J(Θ − ϵ)) / (2ϵ)

With multiple theta matrices, we can approximate the derivative with respect to Θj as follows:

∂J(Θ)/∂Θj ≈ (J(Θ1, …, Θj + ϵ, …, Θn) − J(Θ1, …, Θj − ϵ, …, Θn)) / (2ϵ)

A small value for ϵ (epsilon), such as ϵ = 10⁻⁴, guarantees that the math works out properly. If the value for ϵ is too small, we can end up with numerical problems.
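As a quick, self-contained sanity check of the approximation itself (not part of the original notes), one can apply it to a toy function with a known derivative, here J(x) = x³ with J′(1) = 3:

J = @(x) x.^3;   % toy cost with known derivative 3*x^2
epsilon = 1e-4;
approx = (J(1 + epsilon) - J(1 - epsilon)) / (2*epsilon)
% prints approximately 3.0000, matching the analytic derivative at x = 1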

Hence, we are only adding epsilon to, or subtracting it from, the Θj matrix. In Octave we can do it as follows:

epsilon = 1e-4;
gradApprox = zeros(n, 1);    % preallocate the approximation vector
for i = 1:n,
  thetaPlus = theta;
  thetaPlus(i) += epsilon;   % perturb only the i-th parameter upward
  thetaMinus = theta;
  thetaMinus(i) -= epsilon;  % and downward
  gradApprox(i) = (J(thetaPlus) - J(thetaMinus)) / (2*epsilon);
end;

We previously saw how to calculate the deltaVector. So once we compute our gradApprox vector, we can check that gradApprox ≈ deltaVector.
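One concrete way to make that comparison (a sketch, assuming gradApprox and deltaVector are column vectors of the same length) is a relative-difference test:

relDiff = norm(gradApprox - deltaVector) / norm(gradApprox + deltaVector)
% values around 1e-9 or smaller suggest the backpropagation gradients are correct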

Once you have verified that your backpropagation algorithm is correct, you don't need to compute gradApprox again; the code that computes gradApprox can be very slow.

Random Initialization

In linear regression and logistic regression, we usually initialize the parameters θ to zero before running gradient descent or another advanced optimization algorithm. In a neural network model, however, initializing all the weights Θ to zero does not produce a correct result.

Even initializing every weight to the same nonzero value is still wrong: all the activation units in a hidden layer then compute identical values, and they remain identical after every update, so the network cannot learn distinct features.
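A minimal illustration of this symmetry problem, assuming a hypothetical layer with a bias unit plus 3 inputs and 3 hidden units, every weight set to the same constant:

Theta1 = 0.5 * ones(3, 4);            % every weight identical
x = [1; 0.3; -0.7; 2.1];              % bias unit followed by 3 features
a2 = 1 ./ (1 + exp(-(Theta1 * x)))    % all three hidden activations come out equal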

Therefore, we need to initialize the weights Θ randomly.

Each weight is drawn uniformly from the interval [−ε, ε]; note that this ε is unrelated to the ε used in gradient checking.

Supplementary Notes
Random Initialization

Initializing all theta weights to zero does not work with neural networks. When we backpropagate, all nodes will update to the same value repeatedly. Instead we can randomly initialize the weights for our Θ matrices, drawing each Θij(l) uniformly from [−ϵ, ϵ]:

Θ(l) = 2ϵ · rand(m, n) − ϵ

Hence, we initialize each Θij(l) to a random value between [−ϵ, ϵ]. Using the above formula guarantees that we get the desired bound. The same procedure applies to all the Θ's. Below is some working code you could use to experiment.

If the dimensions of Theta1 are 10x11, Theta2 is 10x11, and Theta3 is 1x11:

Theta1 = rand(10,11) * (2 * INIT_EPSILON) - INIT_EPSILON;
Theta2 = rand(10,11) * (2 * INIT_EPSILON) - INIT_EPSILON;
Theta3 = rand(1,11) * (2 * INIT_EPSILON) - INIT_EPSILON;

rand(x,y) is just a function in octave that will initialize a matrix of random real numbers between 0 and 1.
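INIT_EPSILON must be defined before the code above runs. One common heuristic (used, for example, in the course's programming exercises) scales it with the number of units on either side of the weight matrix; treat the exact constant as a rule of thumb rather than a requirement:

L_in  = 10;                                  % units feeding into the layer
L_out = 10;                                  % units in the layer itself
INIT_EPSILON = sqrt(6) / sqrt(L_in + L_out); % keeps the initial weights small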

(Note: the epsilon used above is unrelated to the epsilon from Gradient Checking)

Neural Network Algorithm Summary

First, we need to choose the structure of the neural network model: the number of layers and the number of activation units in each layer.

  • The number of units in the input layer (the first layer) equals the number of features in the training set
  • The number of units in the output layer (the last layer) equals the number of classes
  • When there is more than one hidden layer, every hidden layer should have the same number of units; in general, more hidden units is better (at a higher computational cost)

Steps for training a neural network model:

  1. Randomly initialize the weight matrices Θ;
  2. Implement forward propagation to compute the hypothesis hΘ(x);
  3. Implement code that computes the cost function J(Θ);
  4. Implement backpropagation to compute the partial derivatives of J(Θ), i.e. the gradient;
  5. Verify the gradient with gradient checking (the numerical method), then disable the check;
  6. Use an optimization algorithm (gradient descent or another advanced method) to find the weight matrices Θ that minimize J(Θ).

Note: the cost function J(Θ) of a neural network model is non-convex (some Chinese textbooks use the opposite convention and call this non-concave), so the weights Θ we obtain may be only a local optimum of J(Θ).

补充笔记
Putting it Together

First, pick a network architecture; choose the layout of your neural network, including how many hidden units in each layer and how many layers in total you want to have.

  • Number of input units = dimension of features x(i)
  • Number of output units = number of classes
  • Number of hidden units per layer = usually, the more the better (must balance with cost of computation as it increases with more hidden units)
  • Defaults: 1 hidden layer. If you have more than 1 hidden layer, then it is recommended that you have the same number of units in every hidden layer.

Training a Neural Network

  1. Randomly initialize the weights
  2. Implement forward propagation to get hΘ(x(i)) for any x(i)
  3. Implement the cost function
  4. Implement backpropagation to compute partial derivatives
  5. Use gradient checking to confirm that your backpropagation works. Then disable gradient checking.
  6. Use gradient descent or a built-in optimization function to minimize the cost function with the weights in theta.

When we perform forward and back propagation, we loop over every training example:

for i = 1:m,
   % Perform forward propagation and backpropagation using example (x(i), y(i))
   % (getting activations a(l) and delta terms d(l) for l = 2, ..., L)
end;

Ideally you want hΘ(x(i)) ≈ y(i), which minimizes the cost function. Keep in mind, however, that J(Θ) is not convex, so we can end up in a local minimum instead.
