
L1-regularized logistic regression

Let me dig a hole here first and fill it in over time. I just read a survey paper on L1-regularized logistic regression (L1LR) written by several students at National Taiwan University: "A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification". I benefited a lot from it: they lay out the different lines of attack and methods, so I can follow their map and try the approaches one by one instead of fumbling around on my own. So this post starts as a placeholder that I will update gradually. For background on the problem itself, see my Logistic Regression notes.

The way I see it, the core of L1-regularized logistic regression is minimizing a non-smooth, nonlinear convex function (the objective is written out at the end of this post). Mathematically that is a very broad topic, enough to fill a thick book; here I am just using L1LR as an example to scout the terrain.

Subgradients and BFGS: the relationship between the two is explained well in the paper "A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning".

Boyd's interior-point method: http://stanford.edu/~boyd/l1_logreg/ Paper: "An Interior-Point Method for Large-Scale ℓ1-Regularized Logistic Regression" (the barrier reformulation it solves is sketched below).

Glmnet (coordinate descent; a minimal sketch of the idea appears below). Paper: Jerome H. Friedman, Trevor Hastie, and Robert Tibshirani. "Regularization paths for generalized linear models via coordinate descent". Journal of Statistical Software, 33(1):1–22, 2010.

L-BFGS-B: a fairly classic algorithm that can also be applied to this problem, via the variable-splitting trick sketched below.

OWL-QN (Orthant-Wise Limited-memory Quasi-Newton; a pseudo-gradient sketch appears below). Paper: "Scalable Training of L1-Regularized Log-Linear Models" (2007). Video:

Scalable…
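
For concreteness, here is the optimization problem that all of the methods above attack, written in one common form, with training examples $(x_i, y_i)$, $y_i \in \{-1, +1\}$, and regularization weight $\lambda > 0$:

$$\min_{w \in \mathbb{R}^d} \; f(w) = \sum_{i=1}^{n} \log\left(1 + e^{-y_i w^{\top} x_i}\right) + \lambda \|w\|_1$$

The logistic loss is smooth and convex, but $\|w\|_1 = \sum_j |w_j|$ is not differentiable wherever some $w_j = 0$: the subdifferential of $\lambda |w_j|$ there is the whole interval $[-\lambda, \lambda]$. This is exactly the non-smoothness that the subgradient and quasi-Newton approaches above have to work around.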
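
Boyd's solver does not attack the non-smooth problem head-on. As I understand the l1_logreg paper, it first rewrites the problem with auxiliary bound variables $u_j$:

$$\min_{w,\, u} \; \sum_{i=1}^{n} \log\left(1 + e^{-y_i w^{\top} x_i}\right) + \lambda \sum_{j=1}^{d} u_j \quad \text{subject to} \quad -u_j \le w_j \le u_j,$$

which is smooth, and then handles the constraints with a logarithmic barrier $-\frac{1}{t}\sum_j \log\left(u_j^2 - w_j^2\right)$, following the central path as $t$ grows; for large problems the Newton search directions are computed approximately with preconditioned conjugate gradients.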
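
To make the coordinate-descent idea behind glmnet concrete, here is a minimal sketch, simplified to squared loss (the lasso); glmnet itself handles the logistic loss by wrapping an inner loop like this in iteratively reweighted least squares. The function names and fixed iteration count are my own illustration, not glmnet's API, and the columns of X are assumed to be nonzero.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|.|: shrink z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=100):
    # Cyclic coordinate descent for (1/(2n))*||y - X w||^2 + lam*||w||_1.
    n, d = X.shape
    w = np.zeros(d)
    sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature, assumed > 0
    r = y - X @ w                   # residual, updated incrementally
    for _ in range(n_iters):
        for j in range(d):
            r = r + X[:, j] * w[j]        # partial residual without coordinate j
            rho = X[:, j] @ r / n         # correlation with that residual
            w[j] = soft_threshold(rho, lam) / sq[j]
            r = r - X[:, j] * w[j]        # fold the updated coordinate back in
    return w
```

Each one-dimensional subproblem is a quadratic plus an absolute value, so it has a closed-form minimizer (the soft-thresholding step); that closed form is what makes coordinate descent such a natural fit for L1 problems.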
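
The reason a smooth, box-constrained solver like L-BFGS-B applies at all is a standard variable-splitting trick: write $w = w^{+} - w^{-}$ with $w^{+}, w^{-} \ge 0$. At any optimum at most one of $w_j^{+}, w_j^{-}$ is nonzero, so $\sum_j (w_j^{+} + w_j^{-}) = \|w\|_1$, and the problem becomes smooth with simple bound constraints, at the price of doubling the number of variables:

$$\min_{w^{+} \ge 0,\; w^{-} \ge 0} \; \sum_{i=1}^{n} \log\left(1 + e^{-y_i (w^{+} - w^{-})^{\top} x_i}\right) + \lambda \sum_{j=1}^{d} \left(w_j^{+} + w_j^{-}\right)$$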
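
OWL-QN's key modification to L-BFGS is to replace the gradient with a pseudo-gradient built from the one-sided directional derivatives of the L1 term at $w_j = 0$ (the full algorithm also projects each line-search iterate back onto the orthant selected by the pseudo-gradient). A minimal sketch of just the pseudo-gradient computation, with my own function and variable names:

```python
import numpy as np

def pseudo_gradient(w, grad_loss, lam):
    # Pseudo-gradient of L(w) + lam*||w||_1 in the style of OWL-QN.
    # Away from zero the L1 term is differentiable; at w_j == 0 take
    # the one-sided derivative that allows descent, or 0 if neither does.
    pg = np.empty_like(w)
    pos, neg = w > 0, w < 0
    pg[pos] = grad_loss[pos] + lam
    pg[neg] = grad_loss[neg] - lam
    zero = ~(pos | neg)
    right = grad_loss[zero] + lam   # derivative approaching from w_j > 0
    left = grad_loss[zero] - lam    # derivative approaching from w_j < 0
    pg[zero] = np.where(right < 0, right, np.where(left > 0, left, 0.0))
    return pg
```

When both one-sided derivatives point inward (right non-negative and left non-positive), the coordinate is already at a one-dimensional minimum and the pseudo-gradient is zero, which is what lets OWL-QN keep and exploit sparsity during optimization.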