CrowdFlower

https://www.kaggle.com/c/crowdflower-search-relevance

What preprocessing and supervised learning methods did you use?
The documentation and code for my approach are available here. Below is a high-level overview of my method.

FIGURE 1. FLOWCHART OF MY METHOD

For preprocessing, I mainly performed HTML tag removal, word replacement, and stemming. For the supervised learning method, I used ensemble selection to build an ensemble from a model library. The model library was built from models trained with various algorithms, various parameter settings, and various feature sets. I used Hyperopt (usually used for parameter tuning) to choose parameter settings from a pre-defined parameter space for training the different models.
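
For illustration, here is a minimal sketch of this idea: Hyperopt samples parameter settings from a predefined space, and each evaluated trial yields one candidate model for the library. The toy data, Ridge model, and MSE objective below are stand-ins, not the competition pipeline.

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Toy data standing in for the competition features (assumption).
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = 2.0 * X[:, 0] + 0.1 * rng.randn(200)

# Pre-defined parameter space to sample from.
space = {"alpha": hp.loguniform("alpha", np.log(1e-3), np.log(1e2))}

trials = Trials()  # records every evaluated setting: one model per trial

def objective(params):
    model = Ridge(alpha=params["alpha"])
    # Minimize CV MSE here; the competition pipeline scored decoded kappa.
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=20, trials=trials)
print(best)  # best setting found; `trials` holds the sampled library
```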

I have tried various objectives, e.g., MSE, softmax, and pairwise ranking. MSE turned out to be the best with an appropriate decoding method. The following is the decoding method I used for MSE (i.e., regression):

  • Calculate the empirical pdf/cdf of each median relevance level in the training data: level 1 is about 7.6%, levels 1 + 2 about 22%, levels 1 + 2 + 3 about 40%, and levels 1 + 2 + 3 + 4 are 100%.
  • Rank the raw predictions in ascending order.
  • Map the first 7.6% to 1, the next 7.6%–22% to 2, 22%–40% to 3, and the rest to 4.

In CV, the pdf/cdf is calculated using the training fold only, and in the final model training, it is computed using the whole training data.
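
A minimal sketch of this CDF decoding (an assumed helper, not the author's exact code): compute the empirical CDF of the four relevance levels on the training fold, sort the raw predictions, and cut at the corresponding quantiles.

```python
import numpy as np

def cdf_decode(raw_pred, train_labels):
    """Map raw regression outputs to relevance levels 1..4 using the
    empirical CDF of the levels in the training fold."""
    labels = np.asarray(train_labels)
    cdf = np.cumsum([np.mean(labels == k) for k in (1, 2, 3, 4)])
    order = np.argsort(raw_pred)               # ascending raw predictions
    decoded = np.empty(len(raw_pred), dtype=int)
    prev = 0
    for level, cut in enumerate((cdf * len(raw_pred)).astype(int), start=1):
        decoded[order[prev:cut]] = level
        prev = cut
    decoded[order[prev:]] = 4                  # guard against float rounding
    return decoded

# Toy check matching the 7.6% / 22% / 40% / 100% cut points quoted above.
train_labels = [1] * 8 + [2] * 14 + [3] * 18 + [4] * 60
print(cdf_decode(np.random.randn(50), train_labels))
```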

FIGURE 2. HISTOGRAMS OF RAW PREDICTIONS AND PREDICTIONS USING VARIOUS DECODING METHODS, GROUPED BY TRUE RELEVANCE.

Figure 2 shows some histograms from my reproduced best single model for one run of CV (only one validation fold is used). Specifically, I plotted histograms of 1) the raw predictions, 2) rounding decoding, 3) ceiling decoding, and 4) the above CDF decoding, grouped by the true relevance. It is clear that both the rounding and ceiling decoding methods struggle to predict relevance 4.

The following are the kappa scores for each decoding method (using all 3 runs of 3-fold CV). The above CDF decoding method exhibits the best performance among the three methods considered.

Method     CV Mean     CV Std
Rounding   0.404277    0.005069
Ceiling    0.513138    0.006485
CDF        0.681876    0.005259

What was your most important insight into the data?

I found that the most important features for predicting search result relevance are the correlations or distances between the query and the product title/description. In my solution, I have features like intersection word count features, Jaccard coefficients, Dice distance, and co-occurrence word TF-IDF features, etc. It is also important to perform some word replacements/alignments, e.g., spelling correction and synonym replacement, to align words with the same or similar meaning.
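
As a hedged illustration of such intersection-based features (a toy implementation, not the solution's actual feature code):

```python
def jaccard(a, b):
    """Jaccard coefficient between the word sets of two strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / max(len(a | b), 1)

def dice(a, b):
    """Dice coefficient between the word sets of two strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return 2 * len(a & b) / max(len(a) + len(b), 1)

print(jaccard("led tv", "samsung led tv 40 inch"))  # 0.4
print(dice("led tv", "samsung led tv 40 inch"))     # ~0.57
```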

While I didn't have much time to explore word embedding methods, they are very promising for this problem. During the competition, I came across a paper entitled "From Word Embeddings to Document Distances". The authors used the Word Mover's Distance (WMD) metric together with word2vec embeddings to measure the distance between text documents. This metric is shown to outperform BOW and TF-IDF features.
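
A small sketch of WMD using gensim's implementation (the pretrained model path below is a placeholder, and gensim's WMD additionally requires the pyemd/POT package):

```python
from gensim.models import KeyedVectors

# Placeholder path: any pretrained embeddings in word2vec format work.
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

query = "led tv".split()
title = "samsung 40 inch led television".split()
# Word Mover's Distance: smaller means the two texts are more similar.
print(wv.wmdistance(query, title))
```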

Were you surprised by any of your findings?

I tried optimizing kappa directly using XGBoost (see below), but it performed a bit worse than plain regression. This might have something to do with the hessian, which I couldn't get to work unless I applied some scaling and took its absolute value (see here).

Which tools did you use?

I used Python for this competition. For the feature engineering part, I relied heavily on pandas and Numpy for data manipulation, and on TfidfVectorizer and SVD in Sklearn for extracting text features. For the model training part, I mostly used XGBoost, Sklearn, keras, and rgf.

I would like to say a few more words about XGBoost, which I have been using very often. It is great: accurate, fast, and easy to use. Most importantly, it supports customized objectives. To use this functionality, you provide the gradient and hessian of your objective, which was quite helpful in my case. During the competition, I tried to optimize quadratic weighted kappa directly using XGBoost. I also implemented two ordinal regression algorithms within the XGBoost framework (both by specifying a customized objective). These models contributed to the final winning submission too.
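
To make the customized-objective hook concrete, here is a minimal sketch using plain squared error (the kappa and ordinal-regression objectives mentioned above would supply their own gradient/hessian in the same way; the toy data is an assumption):

```python
import numpy as np
import xgboost as xgb

def squared_error_obj(preds, dtrain):
    """Custom objective: gradient and hessian of 0.5 * (pred - label)^2."""
    labels = dtrain.get_label()
    grad = preds - labels          # first derivative w.r.t. prediction
    hess = np.ones_like(preds)     # second derivative
    return grad, hess

# Toy regression data (assumption, not competition data).
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X[:, 0] + 0.1 * rng.randn(100)
dtrain = xgb.DMatrix(X, label=y)

bst = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                num_boost_round=50, obj=squared_error_obj)
print(bst.predict(dtrain)[:5])
```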

How did you spend your time on this competition?

Where I spent my time changed as the competition progressed.

  • In the early stage, I mostly focused on data preprocessing. I spent quite a lot of time researching and coding up methods for text cleaning. I should mention that a lot of effort also went into exploring the data (e.g., figuring out misspellings and synonyms, etc.).
  • Then I spent most of my time on feature extraction, trying to figure out which features would be useful for this task. The time was split pretty equally between researching and coding.
  • In the same period, I decided to build a model using ensemble selection and realized my implementation was not flexible enough for that goal, so I spent most of my time refactoring it.
  • After that, most of my time went into coding up the training and prediction parts of the various models. I didn't spend much time tuning each model's performance; I utilized Hyperopt for parameter tuning and model library building.
  • With the ensemble selection pipeline in place, most of my time was spent figuring out new features and exploring the provided data.

In short, I would say I did a lot of researching and coding during this competition.

What was the run time for both training and prediction of your winning solution?

Since the dataset is rather small and kappa is not very stable, I utilized bagged ensemble selection from a model library containing hundreds or thousands of models to combat overfitting and stabilize my results. I don't have an exact number of hours or days, but training and prediction take quite a large amount of time. Furthermore, this also depends on the trade-off between the size of the model library (computational burden) and the performance.
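
For reference, a hedged sketch of greedy ensemble selection (in the spirit of Caruana et al.; the function name, toy data, and metric below are assumptions, not the actual implementation):

```python
import numpy as np

def greedy_ensemble(preds, y_valid, metric, n_iter=10):
    """preds: dict of model name -> hold-out predictions.
    Greedily add (with replacement) the library model whose inclusion
    most improves `metric` (higher = better) of the averaged prediction."""
    chosen = []
    ens = np.zeros_like(y_valid, dtype=float)
    for _ in range(n_iter):
        best_name, best_score = None, -np.inf
        for name, p in preds.items():
            cand = (ens * len(chosen) + p) / (len(chosen) + 1)
            score = metric(y_valid, cand)
            if score > best_score:
                best_name, best_score = name, score
        chosen.append(best_name)
        ens = (ens * (len(chosen) - 1) + preds[best_name]) / len(chosen)
    return chosen, ens

# Toy usage with negative MSE as the (higher-is-better) metric;
# the competition scored decoded quadratic weighted kappa instead.
y = np.array([1.0, 2.0, 3.0, 4.0])
preds = {"m1": np.array([1.2, 2.1, 2.8, 3.7]),
         "m2": np.array([0.9, 2.4, 3.2, 4.2])}
names, blended = greedy_ensemble(preds, y, lambda t, p: -np.mean((t - p) ** 2))
print(names, blended)
```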

That being said, you should be able to train the best single model (i.e., XGBoost with a linear booster) in a few hours. It gives a kappa score of about 0.708 (Private LB), which should be enough for a top-15 place. For this model, feature extraction occupied most of the time; the training part (using the best parameters I found) takes only a few minutes with multiple threads (e.g., 8).

Words of Wisdom

What have you taken away from this competition?

Ensembling a bunch of diverse models helps a lot. Figure 3 shows the CV mean, Public LB, and Private LB scores of my 35 best Public LB submissions generated using ensemble selection. As time went by, I trained more and more different models, which turned out to be helpful for ensemble selection in both CV and Private LB.

Do not ever underestimate the power of linear models. With raw TF-IDF features they can be much better than tree-based models or SVR with RBF/poly kernels, and even better if you introduce appropriate nonlinearities.
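
A minimal sketch of this combination, with toy strings and targets standing in for the competition's query/title data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy texts and relevance targets (assumptions, not competition data).
texts = ["led tv 40 inch", "coffee maker", "samsung led television"]
relevance = [4.0, 1.0, 3.0]

# Linear model fit directly on raw TF-IDF features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, relevance)
print(model.predict(["led tv"]))
```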

Hyperopt is very useful for parameter tuning, and it can be used to build a model library for ensemble selection.

Keep your implementation flexible and scalable. I was lucky to refactor my implementation early on, which allowed me to add new models to the model library very easily.


FIGURE 3. CV MEAN, PUBLIC LB, AND PRIVATE LB SCORES OF MY 35 BEST PUBLIC LB SUBMISSIONS. ONE STANDARD DEVIATION OF THE CV SCORE IS PLOTTED VIA ERROR BAR.

Do you have any advice for those just getting started in data science?

  • Use things like Google to find a few relevant research papers. Especially if you are not a domain expert.
  • Read the winning solutions of previous competitions. They contain lots of insights and tricks that are quite inspiring and useful.
  • Practice makes perfect. Choose one competition that you are interested in on Kaggle and start Kaggling today (and every day)!