WIP: AdaptiveLasso and AdaptiveLassoCV #169
Status: Closed · +613 −96 · 29 commits
Commits (all by mathurinm):

- 1971397 Add AdaptiveLasso and AdaptiveLassoCV classes
- 86287c5 doc and docstrings
- 82e1fbe Add a test, disable warm starting theta
- 6833e7a forgot to commit test
- 3684efa Use reweighting in LassoCV, add example
- fbc527b example
- 95cc5f0 fix self.model
- a92b530 Improve example
- 98b0766 Better description + Readme
- ddd12bb invert loop over alphas and reweightings
- f96e3e3 messed up rebase
- ad93c8d Ignore infinity weights in primal, recompute theta always
- 5ec3776 other messup in rebase
- 0a8dcf6 make docstring test pass
- 3922fdc Flake
- 7332a54 avoid division by zero warning
- 587bec8 a broken test that is not fixed by disabling screening
- e683519 even simpler failing case
- 1fe0aff Fix: mismatch between infinite weights and non zero w[j]
- 2ec58ff script to play with AdaptiveLassoCv path
- f62a7d0 no screening for non zero feature
- 0c5eb40 better example adaptive
- d764302 fix new example
- 498b887 improve example
- 49c99da Merge branch 'master' of github.com:mathurinm/celer into adaptivelassocv
- bc5bd52 flake8
- 28b18db Merge branch 'master' of github.com:mathurinm/celer into adaptivelassocv
- 98d834f flexible reweighting function
- 93963af rm print, fix missing abs in sqrt weights
Changes from commit 0a8dcf61ecf1b78edde19f88ae6f49106dfc7687 (make docstring test pass)
I am now thinking this is a bad name: the adaptive Lasso takes weights equal to 1 / w^ols_j and performs a single Lasso fit. What we implement is rather iterative l1 reweighting (the Candès, Wakin & Boyd paper).
IterativeReweightedLasso seems more correct to me, but it does not pop up on Google, and I don't know if it is good for visibility. When people discuss this in scikit-learn, they call it adaptive lasso: scikit-learn/scikit-learn#4912
@agramfort @josephsalmon do you have an opinion?
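For concreteness, the iterative l1 reweighting scheme discussed above can be sketched as follows. This is an illustrative toy version built on scikit-learn's plain `Lasso`, not the celer implementation; the choices of `alpha`, `eps` and the iteration count are arbitrary:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: 5 nonzero coefficients out of 100, noiseless.
rng = np.random.RandomState(0)
n_samples, n_features = 50, 100
X = rng.randn(n_samples, n_features)
w_true = np.zeros(n_features)
w_true[:5] = 1.0
y = X @ w_true

alpha = 0.01
eps = 1e-3  # stabilizer: avoids division by zero on zero coefficients
weights = np.ones(n_features)
for _ in range(5):  # a few reweighting iterations
    # min ||y - X w||^2 / (2 n) + alpha * sum_j weights_j |w_j|
    # is equivalent to a plain Lasso on X / weights, then rescaling.
    X_w = X / weights
    clf = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
    clf.fit(X_w, y)
    w = clf.coef_ / weights
    weights = 1.0 / (np.abs(w) + eps)  # Candès–Wakin–Boyd update

print(np.flatnonzero(np.abs(w) > 1e-6))  # estimated support
```

Folding the weights into the design matrix is just a way to reuse an unweighted solver; a solver with native per-feature penalties (as in this PR) does the same thing directly.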
The main naming issue for me is that an estimator should be well defined mathematically; the implementation is then something under the hood for the user.
Here, the algorithm you are proposing is a DC-programming approach (à la Gasso et al. 2009) for solving sparse regression with \ell_0.5 regularization (also referred to as reweighted l1 by Candès et al.).
Hence, I would be in favor of separating the "theoretical" estimator from the algorithms used to fit it (for instance, a coordinate descent alternative could be considered as another solver for sparse regression with \ell_0.5 regularization).
I agree that the AdaptiveLasso is originally a two-step procedure, but the original article is vague enough on this point (any consistent estimator can be used in the first step, not only OLS) that we can reuse the name.
Potentially, the exponent \gamma (corresponding to the \ell_q norm used in the Adaptive Lasso paper) could be exposed as an optional parameter, something like `lq_norm=0.5` (with the possibility to add more variants later on).
So in the end, I wouldn't worry too much about the naming and would stick to AdaptiveLasso as a good shortcut.
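If such a parameter were added, the idea could be sketched as below. The names `lq_norm` and `eps` are hypothetical (taken from the comment above, not actual celer arguments); the point is that linearizing the concave penalty |w_j|^q at the current estimate yields per-feature l1 weights proportional to |w_j|^(q - 1):

```python
import numpy as np

def make_reweighting(lq_norm=0.5, eps=1e-10):
    """Return a weight-update function for reweighted l1 targeting an
    l_q penalty: weights ~ |w| ** (lq_norm - 1).

    `lq_norm` and `eps` are hypothetical parameter names used for
    illustration only.
    """
    def reweight(w):
        # eps avoids division by zero on exactly-zero coefficients;
        # those still receive a huge weight, as intended.
        return 1.0 / (np.abs(w) ** (1.0 - lq_norm) + eps)
    return reweight

reweight = make_reweighting(lq_norm=0.5)
w = np.array([0.0, 0.25, 4.0])
print(reweight(w))  # zero coefficients get a (near-)infinite weight
```

With `lq_norm=0.5` this recovers the 1 / sqrt(|w_j|) weights used in this PR, and other exponents (or entirely different reweighting functions) could be added as variants later.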