Multi-class AdaBoost learning experience

This is my learning experience from reading the paper Multi-class AdaBoost. To learn this new algorithm, I first reviewed AdaBoost and its relationship with Forward Stagewise Additive Modeling. Then I realized that multi-class AdaBoost is a natural extension of AdaBoost.

AdaBoost

Let T(x) denote a weak multi-class classifier that assigns a class label to x; then the AdaBoost algorithm proceeds as follows:

  1. Initialize the observation weights w_i = 1/N, i = 1, ..., N
  2. For m = 1 to M:
    a) Fit a classifier T_m(x) to the training data using weights w_i
    b) Compute err_m = Σ_i w_i I[c_i ≠ T_m(x_i)] / Σ_i w_i
    c) Compute α_m = log[(1 − err_m) / err_m]
    d) Set w_i ← w_i · exp(α_m · I[c_i ≠ T_m(x_i)]) for all i, then re-normalize
  3. Output C(x) = arg max_k Σ_m α_m · I[T_m(x) = k]

Pay attention to the weak classifier weight α: for α > 0 we need to ensure that err < 0.5. This is easy in a binary classification problem, because the error rate of random guessing is 0.5, but it is hard to guarantee in the multi-class case, so AdaBoost can easily fail there.
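
As a quick illustration (my own numeric sketch, not from the paper), the weight α = log[(1 − err)/err] flips sign as soon as the weighted error crosses 0.5, which is exactly the level a multi-class weak learner often cannot beat:

```python
import numpy as np

# alpha = log[(1 - err) / err]: positive only while err < 0.5
for err in [0.30, 0.45, 0.55, 0.67]:
    alpha = np.log((1 - err) / err)
    print(f"err={err:.2f} -> alpha={alpha:+.3f}")
# With K = 3 classes, random guessing already gives err around 0.67,
# so a weak learner can easily sit above 0.5 and get a negative alpha.
```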

Forward Stagewise Additive Modeling

First let's take a look at the additive model:

$$ f(x) = \sum_{m=1}^{M} \beta_m\, b(x;\gamma_m) $$

where the β_m are expansion coefficients and the b(x; γ) are base functions characterized by parameters γ.

Typically, {β_m, γ_m} are estimated by minimizing some loss function L, which measures the prediction errors over the training data:

$$ \min_{\{\beta_m,\gamma_m\}_{1}^{M}} \sum_{i=1}^{N} L\Big[y_i,\ \sum_{m=1}^{M} \beta_m\, b(x_i;\gamma_m)\Big] $$

Directly optimizing such a loss function is often difficult. However, since the model is additive, a simple greedy method can be used: we sequentially optimize the following loss function, adding new base functions to the expansion f(x) without changing the parameters that have already been fitted.

$$ \min_{\beta,\gamma} \sum_{i=1}^{N} L\big[y_i,\ f_{m-1}(x_i) + \beta\, b(x_i;\gamma)\big] $$

Thus the Forward Stagewise Additive Modeling (FSAM) algorithm is (a small toy sketch in code follows the steps below):

  1. Initialize f_0(x)=0
  2. For m = 1 to M :
    a) Compute
    $$ (\beta_m,\gamma_m) = \arg\min_{\beta,\gamma} \sum_{i=1}^{N} L\big[y_i,\ f_{m-1}(x_i) + \beta\, b(x_i;\gamma)\big] \qquad (2.1) $$

    b) Set
    $$ f_m(x) = f_{m-1}(x) + \beta_m\, b(x;\gamma_m) \qquad (2.2) $$
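
To make this concrete, here is a small toy sketch (my own, assuming scikit-learn's DecisionTreeRegressor as the base learner and squared-error loss, for which step (2.1) reduces to fitting the current residual; a fixed step size β is used for simplicity):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.randn(200)

M, beta = 20, 0.5          # number of rounds and a fixed step size beta_m
f = np.zeros_like(y)       # f_0(x) = 0
bases = []
for m in range(M):
    # (2.1) with squared loss: the best b(x; gamma) fits the residual y - f_{m-1}
    stump = DecisionTreeRegressor(max_depth=1).fit(X, y - f)
    bases.append(stump)
    # (2.2): f_m(x) = f_{m-1}(x) + beta_m * b(x; gamma_m)
    f = f + beta * stump.predict(X)

print("training MSE:", np.mean((y - f) ** 2))
```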

AdaBoost and Forward Stagewise Additive Modeling

In fact AdaBoost is equivalent to Forward Stagewise Additive Modeling using the exponential loss function
$$ L[y, f(x)] = \exp[-y f(x)]. $$

Again let T(x) denote a classifier that assigns a class label to x; in this binary case, T(x) = 1 or -1.

Then (2.1) is:
$$ (\beta_m, T_m) = \arg\min_{\beta, T} \sum_{i=1}^{N} \exp\big[-y_i\big(f_{m-1}(x_i) + \beta\, T(x_i)\big)\big] $$

To simplify the expression, let
$$ w_i^{(m)} = \exp\big[-y_i f_{m-1}(x_i)\big] $$
Notice that w depends neither on β nor T(x).

Then the expression is
$$ (\beta_m, T_m) = \arg\min_{\beta, T} \sum_{i=1}^{N} w_i^{(m)} \exp\big[-\beta\, y_i T(x_i)\big] $$

It can be rewritten as
$$ (\beta_m, T_m) = \arg\min_{\beta, T}\ e^{-\beta}\sum_{y_i = T(x_i)} w_i^{(m)} + e^{\beta}\sum_{y_i \neq T(x_i)} w_i^{(m)} = \arg\min_{\beta, T}\ (e^{\beta} - e^{-\beta}) \sum_{i=1}^{N} w_i^{(m)} I[y_i \neq T(x_i)] + e^{-\beta} \sum_{i=1}^{N} w_i^{(m)} $$
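
To fill in the step to the solution below: for any fixed β > 0 the optimal T_m just minimizes the weighted error, and setting the derivative with respect to β to zero then determines β_m:

$$ T_m = \arg\min_{T} \sum_{i=1}^{N} w_i^{(m)} I[y_i \neq T(x_i)], \qquad \frac{\partial}{\partial \beta}\Big[(e^{\beta}-e^{-\beta})\sum_{i} w_i^{(m)} I[y_i \neq T_m(x_i)] + e^{-\beta}\sum_{i} w_i^{(m)}\Big] = 0 \;\Longrightarrow\; e^{2\beta} = \frac{\sum_{i} w_i^{(m)}\big(1 - I[y_i \neq T_m(x_i)]\big)}{\sum_{i} w_i^{(m)} I[y_i \neq T_m(x_i)]}. $$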

Now we can get that
$$ \beta_m = \frac{1}{2} \log \frac{\sum_{i=1}^{N} w_i^{(m)} \big(1 - I[y_i \neq T_m(x_i)]\big)}{\sum_{i=1}^{N} w_i^{(m)} I[y_i \neq T_m(x_i)]} $$

Denote err_m as
$$ \mathrm{err}_m = \frac{\sum_{i=1}^{N} w_i^{(m)} I[y_i \neq T_m(x_i)]}{\sum_{i=1}^{N} w_i^{(m)}} $$

Then
$$ \beta_m = \frac{1}{2} \log \frac{1-\mathrm{err}_m}{\mathrm{err}_m} $$

Looks familiar, right? It is exactly AdaBoost's weak classifier weight α_m, except for the factor 1/2, which I take as a learning rate.

Also recall the weight
$$ w_i^{(m)} = \exp\big[-y_i f_{m-1}(x_i)\big] $$
Then
$$ w_i^{(m+1)} = w_i^{(m)} \exp\big[-\beta_m y_i T_m(x_i)\big] = w_i^{(m)} \cdot \exp\big(2\beta_m I[y_i \neq T_m(x_i)]\big) \cdot \exp(-\beta_m) $$
using the fact that
$$ -y_i T_m(x_i) = 2 \cdot I[y_i \neq T_m(x_i)] - 1 $$
Compare this to the weight update in AdaBoost,

$$ w_i \leftarrow w_i \cdot \exp\big(\alpha_m\, I[y_i \neq T_m(x_i)]\big). $$

The factor exp(−β_m) is the same for every sample, so it cancels once the weights are normalized; since α_m = 2β_m, we can see that the two updates are equivalent.

As mentioned before, AdaBoost can easily fail in the multi-class case because of the err < 0.5 requirement. A natural way to avoid this is to modify α, and that is exactly what SAMME does. SAMME is short for Stagewise Additive Modeling using a Multi-class Exponential loss function.

SAMME proceeds as follows:

  1. Initialize the observation weights w_i = 1/N, i = 1, ..., N
  2. For m = 1 to M:
    a) Fit a classifier T_m(x) to the training data using weights w_i
    b) Compute err_m = Σ_i w_i I[c_i ≠ T_m(x_i)] / Σ_i w_i
    c) Compute α_m = log[(1 − err_m) / err_m] + log(K − 1)
    d) Set w_i ← w_i · exp(α_m · I[c_i ≠ T_m(x_i)]) for all i, then re-normalize
  3. Output C(x) = arg max_k Σ_m α_m · I[T_m(x) = k]


Note that SAMME is very similar to AdaBoost, the only difference being the extra term log(K−1). Obviously, when K = 2, SAMME reduces to AdaBoost. In SAMME, for α to be positive we only need err < (K−1)/K, i.e. the weak learner only has to be a little better than random guessing.
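
A quick numeric check (my own, not from the paper): with the extra log(K−1) term, α is exactly zero at the random-guessing error rate (K−1)/K and positive for anything better.

```python
import numpy as np

K = 5
for err in [0.60, 0.80, 0.85]:   # random guessing gives err = (K - 1) / K = 0.8
    alpha = np.log((1 - err) / err) + np.log(K - 1)
    print(f"K={K}, err={err:.2f} -> alpha={alpha:+.3f}")
# alpha is exactly 0 at err = 0.8 and positive for anything better than random.
```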

SAMME and Forward Stagewise Additive Modeling

We can mimic the steps in AdaBoost and Forward Stagewise Additive Modeling to prove SAMME is equivalent to Forward Stagewise Additive Modeling using a Multi-class Exponential loss function.

In the multi-class classification setting, we can recode the output c with a K-dimensional vector y whose entries are

$$ y_k = \begin{cases} 1, & \text{if } c = k \\ -\dfrac{1}{K-1}, & \text{if } c \neq k. \end{cases} $$
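
For instance (a minimal sketch of this recoding, using 0-based labels), with K = 3:

```python
import numpy as np

def encode(c, K):
    """Recode a 0-based class label c as the K-dimensional SAMME vector y."""
    y = np.full(K, -1.0 / (K - 1))
    y[c] = 1.0
    return y

K = 3
for c in range(K):
    print(c, encode(c, K))   # class 0 -> [ 1.  -0.5 -0.5], etc.
```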

Given the training data, we wish to find f(x) = (f_1(x), ..., f_K(x))^T such that
$$ \min_{f(x)} \sum_{i=1}^{N} L\big[y_i, f(x_i)\big] \quad \text{subject to} \quad f_1(x) + f_2(x) + \cdots + f_K(x) = 0. $$
We consider f(x) of the following form:
$$ f(x) = \sum_{m=1}^{M} \beta_m\, g_m(x) $$
where the β_m are weights and the g_m(x) are (vector-valued) base classifiers.

Let's start from the loss function. The multi-class exponential loss function is

$$ L(y, f) = \exp\Big[-\frac{1}{K}\big(y_1 f_1 + \cdots + y_K f_K\big)\Big] = \exp\Big(-\frac{1}{K}\, y^{T} f\Big) $$
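
A quick sanity check: for K = 2 the recoding gives y = (1, −1) or (−1, 1), and the symmetry constraint forces f = (f_1, −f_1), so the multi-class loss reduces exactly to the binary exponential loss used earlier:

$$ L(y,f) = \exp\Big[-\tfrac{1}{2}\big(y_1 f_1 + y_2 f_2\big)\Big] = \exp\Big[-\tfrac{1}{2}\cdot 2\, y_1 f_1\Big] = \exp\big(-y_1 f_1\big). $$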

Then (2.1) in the multi-class case is:
$$ (\beta_m, g_m) = \arg\min_{\beta, g} \sum_{i=1}^{N} \exp\Big[-\frac{1}{K}\, y_i^{T}\big(f_{m-1}(x_i) + \beta\, g(x_i)\big)\Big] = \arg\min_{\beta, g} \sum_{i=1}^{N} w_i^{(m)} \exp\Big[-\frac{1}{K}\,\beta\, y_i^{T} g(x_i)\Big] \qquad (3.1) $$
where
$$ w_i^{(m)} = \exp\Big[-\frac{1}{K}\, y_i^{T} f_{m-1}(x_i)\Big] $$
Notice that every g(x) has a one-to-one correspondence with a multi-class classifier T(x) in the following way:
$$ T(x) = k, \quad \text{if } g_k(x) = 1. $$

Hence, solving for g_m(x) is equivalent to finding the multi-class classifier T_m(x) that can generate g_m(x).

Recall that in AdaBoost and Forward Stagewise Additive Modeling, we rewrote the expression as
$$ (\beta_m, T_m) = \arg\min_{\beta, T}\ e^{-\beta}\sum_{y_i = T(x_i)} w_i^{(m)} + e^{\beta}\sum_{y_i \neq T(x_i)} w_i^{(m)} = \arg\min_{\beta, T}\ (e^{\beta} - e^{-\beta}) \sum_{i=1}^{N} w_i^{(m)} I[y_i \neq T(x_i)] + e^{-\beta} \sum_{i=1}^{N} w_i^{(m)} $$

Similarly, we rewrite (3.1) as
$$ (\beta_m, T_m) = \arg\min_{\beta, T}\ e^{-\frac{\beta}{K-1}}\sum_{c_i = T(x_i)} w_i^{(m)} + e^{\frac{\beta}{(K-1)^2}}\sum_{c_i \neq T(x_i)} w_i^{(m)} = \arg\min_{\beta, T}\ \Big[e^{\frac{\beta}{(K-1)^2}} - e^{-\frac{\beta}{K-1}}\Big] \sum_{i=1}^{N} w_i^{(m)} I[c_i \neq T(x_i)] + e^{-\frac{\beta}{K-1}} \sum_{i=1}^{N} w_i^{(m)} $$

Minimizing over β as before (setting the derivative with respect to β to zero), the solution is
$$ \beta_m = \frac{(K-1)^2}{K} \Big[\log \frac{1-\mathrm{err}_m}{\mathrm{err}_m} + \log(K-1)\Big] $$
where
$$ \mathrm{err}_m = \frac{\sum_{i=1}^{N} w_i^{(m)} I[c_i \neq T_m(x_i)]}{\sum_{i=1}^{N} w_i^{(m)}}. $$

Looking at β_m, if we ignore the constant factor (K−1)²/K (it can again be absorbed into the learning rate), what remains is exactly the α used in SAMME. This proves that the extra term log(K−1) in SAMME is not artificial.

SAMME.R

The SAMME algorithm expects the weak learner to deliver a classifier T(x) ∈ {1, ..., K}. An alternative is to use real-valued confidence-rated predictions, such as weighted probability estimates, to update the additive model, rather than the classifications themselves.
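
The key quantity in SAMME.R, as implemented in _samme_proba in the code below, builds each weak learner's contribution h_k(x) from its weighted class-probability estimates p_k(x):

$$ h_k^{(m)}(x) = (K-1)\Big(\log p_k^{(m)}(x) - \frac{1}{K}\sum_{k'=1}^{K} \log p_{k'}^{(m)}(x)\Big), $$

and the final prediction is C(x) = arg max_k Σ_m h_k^{(m)}(x).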




In my opinion, SAMME.R uses more information than SAMME and is more robust.

Python3 Implementation of Multi-class AdaBoost using SAMME and SAMME.R

__author__ = 'Xin'

import numpy as np
from copy import deepcopy
# numpy.core.umath_tests.inner1d is gone from recent NumPy, so np.einsum is used below instead


class AdaBoostClassifier(object):
    '''
    Parameters
    -----------
    base_estimator: object
        The base model from which the boosted ensemble is built.

    n_estimators: integer, optional(default=50)
        The maximum number of estimators

    learning_rate: float, optional(default=1)
        Shrinks the contribution of each weak classifier.

    algorithm: {'SAMME','SAMME.R'}, optional(default='SAMME.R')
        SAMME.R uses predicted class probabilities to update the sample weights, while SAMME uses the classifications and the weighted error rate

    random_state: int or None, optional(default=None)


    Attributes
    -------------
    estimators_: list of base estimators

    estimator_weights_: array of floats
        Weights for each base_estimator

    estimator_errors_: array of floats
        Classification error for each estimator in the boosted ensemble.
    
    Reference:
    1. [multi-adaboost](https://web.stanford.edu/~hastie/Papers/samme.pdf)

    2. [scikit-learn:weight_boosting](https://github.com/scikit-learn/
    scikit-learn/blob/51a765a/sklearn/ensemble/weight_boosting.py#L289)
    
    '''

    def __init__(self, *args, **kwargs):
        if args:
            raise ValueError(
                '''AdaBoostClassifier can only be called with keyword
                   arguments for the following keywords: base_estimator ,n_estimators,
                    learning_rate,algorithm,random_state''')
        allowed_keys = ['base_estimator', 'n_estimators', 'learning_rate', 'algorithm', 'random_state']
        keywords_used = kwargs.keys()
        for keyword in keywords_used:
            if keyword not in allowed_keys:
                raise ValueError(keyword + ":  Wrong keyword used --- check spelling")

        n_estimators = 50
        learning_rate = 1
        algorithm = 'SAMME.R'
        random_state = None

        if kwargs and not args:
            if 'base_estimator' in kwargs:
                base_estimator = kwargs.pop('base_estimator')
            else:
                raise ValueError('''base_estimator can not be None''')
            if 'n_estimators' in kwargs: n_estimators = kwargs.pop('n_estimators')
            if 'learning_rate' in kwargs: learning_rate = kwargs.pop('learning_rate')
            if 'algorithm' in kwargs: algorithm = kwargs.pop('algorithm')
            if 'random_state' in kwargs: random_state = kwargs.pop('random_state')

        self.base_estimator_ = base_estimator
        self.n_estimators_ = n_estimators
        self.learning_rate_ = learning_rate
        self.algorithm_ = algorithm
        self.random_state_ = random_state
        self.estimators_ = list()
        self.estimator_weights_ = np.zeros(self.n_estimators_)
        self.estimator_errors_ = np.ones(self.n_estimators_)


    def _samme_proba(self, estimator, n_classes, X):
        """Calculate algorithm 4, step 2, equation c) of Zhu et al [1].

        References
        ----------
        .. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.

        """
        proba = estimator.predict_proba(X)

        # Displace zero probabilities so the log is defined.
        # Also fix negative elements which may occur with
        # negative sample weights.
        proba[proba < np.finfo(proba.dtype).eps] = np.finfo(proba.dtype).eps
        log_proba = np.log(proba)

        return (n_classes - 1) * (log_proba - (1. / n_classes)
                                  * log_proba.sum(axis=1)[:, np.newaxis])


    def fit(self, X, y):
        self.n_samples = X.shape[0]
        # Note: the classes are sorted here; boost() must produce predictions
        # that follow this same class ordering.
        self.classes_ = np.array(sorted(list(set(y))))
        self.n_classes_ = len(self.classes_)
        for iboost in range(self.n_estimators_):
            if iboost == 0:
                sample_weight = np.ones(self.n_samples) / self.n_samples

            sample_weight, estimator_weight, estimator_error = self.boost(X, y, sample_weight)

            # early stop
            if estimator_error is None:
                break

            # append error and weight
            self.estimator_errors_[iboost] = estimator_error
            self.estimator_weights_[iboost] = estimator_weight

            if estimator_error <= 0:
                break

        return self


    def boost(self, X, y, sample_weight):
        if self.algorithm_ == 'SAMME':
            return self.discrete_boost(X, y, sample_weight)
        elif self.algorithm_ == 'SAMME.R':
            return self.real_boost(X, y, sample_weight)

    def real_boost(self, X, y, sample_weight):
        estimator = deepcopy(self.base_estimator_)
        if self.random_state_ is not None:
            estimator.set_params(random_state=self.random_state_)

        estimator.fit(X, y, sample_weight=sample_weight)

        y_pred = estimator.predict(X)
        incorrect = y_pred != y
        estimator_error = np.dot(incorrect, sample_weight) / np.sum(sample_weight, axis=0)

        # if worse than random guess, stop boosting
        if estimator_error >= 1.0 - 1 / self.n_classes_:
            return None, None, None

        y_predict_proba = estimator.predict_proba(X)
        # replace zero probabilities so the log below is defined
        y_predict_proba[y_predict_proba < np.finfo(y_predict_proba.dtype).eps] = np.finfo(y_predict_proba.dtype).eps

        y_codes = np.array([-1. / (self.n_classes_ - 1), 1.])
        y_coding = y_codes.take(self.classes_ == y[:, np.newaxis])

        # for sample weight update
        # row-wise inner product y_i^T log p(x_i); np.einsum replaces the removed numpy inner1d helper
        intermediate_variable = (-1. * self.learning_rate_ * (((self.n_classes_ - 1) / self.n_classes_) *
                                                              np.einsum('ij,ij->i', y_coding,
                                                                        np.log(y_predict_proba))))

        # update sample weight
        sample_weight *= np.exp(intermediate_variable)

        sample_weight_sum = np.sum(sample_weight, axis=0)
        if sample_weight_sum <= 0:
            return None, None, None

        # normalize sample weight
        sample_weight /= sample_weight_sum

        # append the estimator
        self.estimators_.append(estimator)

        return sample_weight, 1, estimator_error


    def discrete_boost(self, X, y, sample_weight):
        estimator = deepcopy(self.base_estimator_)
        if self.random_state_ is not None:
            estimator.set_params(random_state=self.random_state_)

        estimator.fit(X, y, sample_weight=sample_weight)

        y_pred = estimator.predict(X)
        incorrect = y_pred != y
        estimator_error = np.dot(incorrect, sample_weight) / np.sum(sample_weight, axis=0)

        # if worse than random guess, stop boosting
        if estimator_error >= 1 - 1 / self.n_classes_:
            return None, None, None

        # estimator weight: alpha_m = learning_rate * [log((1 - err_m) / err_m) + log(K - 1)]
        estimator_weight = self.learning_rate_ * (np.log((1 - estimator_error) / estimator_error) +
                                                  np.log(self.n_classes_ - 1))

        if estimator_weight <= 0:
            return None, None, None

        # update sample weight
        sample_weight *= np.exp(estimator_weight * incorrect)

        sample_weight_sum = np.sum(sample_weight, axis=0)
        if sample_weight_sum <= 0:
            return None, None, None

        # normalize sample weight
        sample_weight /= sample_weight_sum

        # append the estimator
        self.estimators_.append(estimator)

        return sample_weight, estimator_weight, estimator_error

    def predict(self, X):
        n_classes = self.n_classes_
        classes = self.classes_[:, np.newaxis]
        pred = None

        if self.algorithm_ == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            pred = sum(self._samme_proba(estimator, n_classes, X) for estimator in self.estimators_)
        else:  # self.algorithm == "SAMME"
            pred = sum((estimator.predict(X) == classes).T * w
                       for estimator, w in zip(self.estimators_,
                                               self.estimator_weights_))

        pred /= self.estimator_weights_.sum()
        if n_classes == 2:
            pred[:, 0] *= -1
            pred = pred.sum(axis=1)
            return self.classes_.take(pred > 0, axis=0)

        return self.classes_.take(np.argmax(pred, axis=1), axis=0)


    def predict_proba(self, X):
        if self.algorithm_ == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            proba = sum(self._samme_proba(estimator, self.n_classes_, X)
                        for estimator in self.estimators_)
        else:  # self.algorithm == "SAMME"
            proba = sum(estimator.predict_proba(X) * w
                        for estimator, w in zip(self.estimators_,
                                                self.estimator_weights_))

        proba /= self.estimator_weights_.sum()
        proba = np.exp((1. / (self.n_classes_ - 1)) * proba)
        normalizer = proba.sum(axis=1)[:, np.newaxis]
        normalizer[normalizer == 0.0] = 1.0
        proba /= normalizer

        return proba
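
A small usage sketch (assuming the AdaBoostClassifier class above is in scope, e.g. when this file is run as a script, and that scikit-learn is installed for the base estimator and data):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# AdaBoostClassifier here is the class defined above
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50,
                         learning_rate=1,
                         algorithm='SAMME.R',
                         random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```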