Exploring Learning-Rate Techniques to Improve Model Performance in Keras | Training Tricks

The learning rate is a hyperparameter that controls how much the model's weights are adjusted in response to the estimated error each time they are updated. Choosing a learning rate is challenging: a value that is too small can make training extremely slow or even cause it to stall, while a value that is too large can make the model converge too quickly to a suboptimal set of weights or make training unstable.

Transfer Learning

Transfer learning applies a trained machine-learning model to a different but related task. It works particularly well in deep learning, where neural networks are built from stacked layers. In computer vision tasks especially, the early layers of these networks tend to learn simple features such as edges and gradients.

This is a mature approach that has been shown to produce better results on computer vision tasks. Most pretrained models (ResNet, VGG, Inception, etc.) are trained on ImageNet, and depending on how similar the target data is to ImageNet, the pretrained weights need to be changed to a greater or lesser extent.

In the fast.ai course, Jeremy Howard explores different learning-rate strategies for transfer learning to improve models in terms of both speed and accuracy.

1. Differential Learning Rates

The motivation for differential learning rates comes from the observation that, when fine-tuning a pretrained model, the layers closer to the input are more likely to have learned simple, general features. We therefore want to change their weights very little, while modifying the deeper layers more aggressively to adapt them to the target task/data.

"Differential learning rates" means using different learning rates in different parts of the network: a low rate for the initial layers and progressively higher rates for the later layers.

Example of a CNN using differential learning rates
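As a rough sketch of the idea (the three rates and the equal-thirds grouping below are arbitrary choices for illustration, not part of any library), each layer can be mapped to one of three learning-rate groups:

# Minimal sketch: assign each layer group its own learning rate,
# lowest for the earliest layers. Group boundaries are arbitrary here.
lr_early, lr_middle, lr_final = 1e-7, 1e-4, 1e-2

def lr_for_layer(layer_index, n_layers):
    """Map a layer index to one of three learning-rate groups."""
    if layer_index < n_layers // 3:
        return lr_early
    elif layer_index < 2 * n_layers // 3:
        return lr_middle
    return lr_final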

Implementing differential learning rates in Keras

To implement differential learning rates in Keras, we need to modify the optimizer source code. Taking the Adam optimizer as an example, the Keras implementation of Adam is as follows:

# Imports needed to run this snippet standalone (Keras 2.x)
from keras import backend as K
from keras.legacy import interfaces
from keras.optimizers import Optimizer


class Adam(Optimizer):
  
    """Adam optimizer.
    Default parameters follow those provided in the original paper.
    # Arguments
        lr: float >= 0. Learning rate.
        beta_1: float, 0 < beta < 1. Generally close to 1.
        beta_2: float, 0 < beta < 1. Generally close to 1.
        epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
        decay: float >= 0. Learning rate decay over each update.
        amsgrad: boolean. Whether to apply the AMSGrad variant of this
            algorithm from the paper "On the Convergence of Adam and
            Beyond".
    """

    def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
                 epsilon=None, decay=0., amsgrad=False, **kwargs):
        super(Adam, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
        if epsilon is None:
            epsilon = K.epsilon()
        self.epsilon = epsilon
        self.initial_decay = decay
        self.amsgrad = amsgrad

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * K.cast(self.iterations,
                                                      K.dtype(self.decay))))

        t = K.cast(self.iterations, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) /
                     (1. - K.pow(self.beta_1, t)))

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        if self.amsgrad:
            vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        else:
            vhats = [K.zeros(1) for _ in params]
        self.weights = [self.iterations] + ms + vs + vhats

        for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
            m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
            if self.amsgrad:
                vhat_t = K.maximum(vhat, v_t)
                p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
                self.updates.append(K.update(vhat, vhat_t))
            else:
                p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)

            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr)),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon,
                  'amsgrad': self.amsgrad}
        base_config = super(Adam, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

We modify the source code above to include the following:

  • The __init__ function is modified to include:

    1. Split layers: split_1 and split_2 are the names of the layers at which the first and second splits are made.
    2. The lr argument is changed to take a list of learning rates - three rates in total, one per group (since the differential-learning setup divides the network into three stages).
  • When updating each layer, the original code loops over all layers and assigns them the same learning rate. We change this so that different layers receive different learning rates.

# Imports needed to run this snippet standalone (Keras 2.x)
from keras import backend as K
from keras import optimizers
from keras.legacy import interfaces


class Adam_dlr(optimizers.Optimizer):

    """Adam optimizer.
    Default parameters follow those provided in the original paper.
    # Arguments
        split_1: split layer 1
        split_2: split layer 2
        lr: float >= 0. List of Learning rates. [Early layers, Middle layers, Final Layers]
        beta_1: float, 0 < beta < 1. Generally close to 1.
        beta_2: float, 0 < beta < 1. Generally close to 1.
        epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
        decay: float >= 0. Learning rate decay over each update.
        amsgrad: boolean. Whether to apply the AMSGrad variant of this
            algorithm from the paper "On the Convergence of Adam and
            Beyond".
    """

    def __init__(self, split_1, split_2, lr=[1e-7, 1e-4, 1e-2], beta_1=0.9, beta_2=0.999,
                 epsilon=None, decay=0., amsgrad=False, **kwargs):
        super(Adam_dlr, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
            # Extracting name of the split layers
            self.split_1 = split_1.weights[0].name
            self.split_2 = split_2.weights[0].name
        if epsilon is None:
            epsilon = K.epsilon()
        self.epsilon = epsilon
        self.initial_decay = decay
        self.amsgrad = amsgrad

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * K.cast(self.iterations,
                                                      K.dtype(self.decay))))

        t = K.cast(self.iterations, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) /
                     (1. - K.pow(self.beta_1, t)))

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        if self.amsgrad:
            vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        else:
            vhats = [K.zeros(1) for _ in params]
        self.weights = [self.iterations] + ms + vs + vhats
        
        # Setting lr of the initial layers
        lr_grp = lr_t[0]
        for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
            
            # Updating lr when the split layer is encountered
            if p.name == self.split_1:
                lr_grp = lr_t[1]
            if p.name == self.split_2:
                lr_grp = lr_t[2]
                
            m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
            if self.amsgrad:
                vhat_t = K.maximum(vhat, v_t)
                p_t = p - lr_grp * m_t / (K.sqrt(vhat_t) + self.epsilon)  # use the learning rate of the current group
                self.updates.append(K.update(vhat, vhat_t))
            else:
                p_t = p - lr_grp * m_t / (K.sqrt(v_t) + self.epsilon)

            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
        return self.updates

    def get_config(self):
        config = {'lr': K.get_value(self.lr).tolist(),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon,
                  'amsgrad': self.amsgrad}
        base_config = super(Adam_dlr, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
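A usage sketch, assuming the Adam_dlr class above is in scope. The split points block4_conv1 and block5_conv1 are real VGG16 layer names, but choosing them as group boundaries is an arbitrary decision for illustration:

# Usage sketch: fine-tune VGG16 with three learning-rate groups.
# split_1/split_2 mark the first layers of the middle and final groups.
from keras.applications import VGG16

model = VGG16(weights='imagenet')
split_1 = model.get_layer('block4_conv1')  # middle group starts here
split_2 = model.get_layer('block5_conv1')  # final group starts here

model.compile(optimizer=Adam_dlr(split_1, split_2, lr=[1e-7, 1e-4, 1e-2]),
              loss='categorical_crossentropy',
              metrics=['accuracy'])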

2. Stochastic Gradient Descent with Warm Restarts (SGDR)

Ideally, with each mini-batch of stochastic gradient descent (SGD), the network should move closer and closer to the global minimum of the loss. It therefore makes sense to lower the learning rate as training progresses, so that the algorithm does not overshoot the minimum and can settle as close to it as possible. With cosine annealing, we use a cosine function to decay the learning rate.

Gradually lowering the learning rate over the first 200 iterations
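Concretely, for a cycle of T iterations with maximum rate lr_max, the annealed rate at iteration t can be sketched as follows (this mirrors the setRate logic implemented later in this post):

import numpy as np

def cosine_anneal(t, T, lr_max, lr_min=0.0):
    """Cosine-annealed learning rate at iteration t of a T-iteration cycle."""
    return lr_min + (lr_max - lr_min) / 2 * (1 + np.cos(np.pi * t / T))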

SGDR is a recent variant of learning-rate annealing introduced by Loshchilov & Hutter in their paper "SGDR: Stochastic Gradient Descent with Warm Restarts". In this technique, the learning rate is abruptly raised from time to time. Below is an example of resetting the learning rate at three evenly spaced intervals, with cosine annealing in between.

Resetting the learning rate to its maximum every 100 iterations

The rationale behind suddenly raising the learning rate is that doing so keeps gradient descent from getting stuck in a local minimum: the jump may let it "hop out" of a local minimum on its way toward the global minimum.

Each span in which the learning rate decays to its minimum (every 100 iterations in the figure above) is called a cycle. The authors also suggest making each successive cycle longer than the previous one by some constant factor; with a factor of 2 and an initial cycle of 100 iterations, for example, cycles end at iterations 100, 300, 700, and so on.

Each cycle is twice as long as the previous one

Implementing SGDR in Keras

Using Keras callbacks, we can update the learning rate so that it follows a specific formula. For a reference implementation, see the official cyclical learning rates repository on GitHub.

# Imports needed to run this snippet standalone (Keras 2.x)
import numpy as np
import matplotlib.pyplot as plt
from keras import backend as K
from keras.callbacks import Callback


class LR_Updater(Callback):
    '''This callback is utilized to log learning rates every iteration (batch cycle)
    it is not meant to be directly used as a callback but extended by other callbacks
    ie. LR_Cycle
    '''
    
    def __init__(self, iterations):
        '''
        iterations = dataset size / batch size
        epochs = pass through full training dataset
        '''
        self.epoch_iterations = iterations
        self.trn_iterations = 0.
        self.history = {}
        
    def on_train_begin(self, logs={}):
        self.trn_iterations = 0.
        logs = logs or {}
    
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.trn_iterations += 1
        K.set_value(self.model.optimizer.lr, self.setRate())
        self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))
        self.history.setdefault('iterations', []).append(self.trn_iterations)
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)
    
    def plot_lr(self):
        plt.xlabel("iterations")
        plt.ylabel("learning rate")
        plt.plot(self.history['iterations'], self.history['lr'])
    
    def plot(self, n_skip=10):
        # Skip the first n_skip iterations, where the loss is still settling.
        plt.xlabel("learning rate (log scale)")
        plt.ylabel("loss")
        plt.plot(self.history['lr'][n_skip:], self.history['loss'][n_skip:])
        plt.xscale('log')

        
class LR_Cycle(LR_Updater):
    '''This callback is utilized to implement cyclical learning rates
    it is based on this pytorch implementation https://github.com/fastai/fastai/blob/master/fastai
    and adopted from this keras implementation https://github.com/bckenstler/CLR
    '''
    
    def __init__(self, iterations, cycle_mult=1):
        '''
        iterations = number of iterations in one annealing cycle
                     (typically dataset size / batch size, i.e. one epoch)
        cycle_mult = used to increase the cycle length cycle_mult times after every cycle
        for example: cycle_mult = 2 doubles the length of the cycle at the end of each cycle
        '''
        self.min_lr = 0
        self.cycle_mult = cycle_mult
        self.cycle_iterations = 0.
        super().__init__(iterations)
    
    def setRate(self):
        self.cycle_iterations += 1
        if self.cycle_iterations == self.epoch_iterations:
            self.cycle_iterations = 0.
            self.epoch_iterations *= self.cycle_mult
        cos_out = np.cos(np.pi * self.cycle_iterations / self.epoch_iterations) + 1
        return self.min_lr + (self.max_lr - self.min_lr) / 2 * cos_out
    
    def on_train_begin(self, logs={}):
        super().on_train_begin(logs={}) #changed to {} to fix plots after going from 1 to mult. lr
        self.cycle_iterations = 0.
        self.max_lr = K.get_value(self.model.optimizer.lr)
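A usage sketch, assuming a compiled model and training arrays x_train/y_train are already defined; the batch size and dataset size below are placeholder values:

# Usage sketch: one annealing cycle per epoch to start with, and each
# cycle twice as long as the previous one (cycle_mult=2).
batch_size = 64
n_train = 50000  # placeholder training-set size

sgdr = LR_Cycle(iterations=n_train // batch_size, cycle_mult=2)
model.fit(x_train, y_train, batch_size=batch_size, epochs=10,
          callbacks=[sgdr])
sgdr.plot_lr()  # inspect the resulting learning-rate curve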

The complete code for differential learning rates and SGDR is available in the GitHub repository. It also contains a test file for trying these techniques on a sample dataset.

GitHub link
