Reconsider penalty scaling for SLOPE #11
Comments
As a default I would use the same as glmnet?
I updated the post with a couple of references, but I'm having a hard time finding references on this.
Could you start an overleaf of this also?
Yes, absolutely.
Not exactly sure what you mean here:

> `scaling = "l1"`, no scaling is applied
In SLOPE version 0.3.0 and above, the penalty in the SLOPE objective is scaled depending on the type of scaling that is used in the call to `SLOPE()`. The behavior is:

- `scaling = "l1"`: no scaling is applied
- `scaling = "l2"`: the penalty is scaled with `sqrt(n)`
- `scaling = "sd"`: the penalty is scaled with `n`

There are advantages and disadvantages to doing this kind of scaling, and I think a discussion is warranted regarding what the correct behavior should be.
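For concreteness, here is a minimal sketch of the scaling rule described above; the helper function is illustrative, not the package's actual internals:

```r
# Illustrative helper mirroring the behavior described above;
# a sketch, not the SLOPE package's internal code.
penalty_scale_factor <- function(scaling, n) {
  switch(scaling,
    l1 = 1,        # scaling = "l1": no scaling is applied
    l2 = sqrt(n),  # scaling = "l2": penalty scaled with sqrt(n)
    sd = n,        # scaling = "sd": penalty scaled with n
    stop("unknown scaling type")
  )
}

penalty_scale_factor("l2", n = 100)  # 10
penalty_scale_factor("sd", n = 100)  # 100
```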
Pros

Cons

- The interpretation of the `alpha` parameter as variance in the orthogonal X case is lost.
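To spell out the interpretation at stake in the item above (this gloss is mine, following the orthogonal-design analysis in Bogdan et al. (2015), and is not text from this thread): when $X^\top X = I$, the SLOPE problem reduces to a separable one,

$$
\min_{b \in \mathbb{R}^p} \; \frac{1}{2} \lVert X^\top y - b \rVert_2^2 + \alpha \sum_{i=1}^{p} \lambda_i \lvert b \rvert_{(i)},
$$

so $\alpha$ directly multiplies the $\lambda$ sequence and can be matched to the noise level; an extra factor of $n$ or $\sqrt{n}$ on the penalty breaks that correspondence.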
Possible solutions

Whichever way we go with this, I think we should keep the other option available as a toggle, i.e. add an argument along the lines of `penalty_scaling` to turn penalty scaling on or off, or even to provide a more fine-grained type of penalty scaling. That way it would be possible to achieve either behavior, which means this discussion is really about what the default should be. A sketch of what such an interface could look like is below.

Thoughts? Ideas?
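To make the toggle concrete, a hypothetical sketch of the interface; the `penalty_scaling` argument (and its values) are placeholders for the proposal, not part of the current `SLOPE()` signature:

```r
# Hypothetical interface sketch: `penalty_scaling` is the proposed
# argument and does not exist in the current SLOPE() signature.

# Keep today's behavior (penalty scaled according to `scaling`):
fit_default <- SLOPE(x, y, scaling = "sd", penalty_scaling = "default")

# Opt out of penalty scaling while keeping "sd" feature scaling:
fit_unscaled <- SLOPE(x, y, scaling = "sd", penalty_scaling = "none")
```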
References
Hastie et al. (2015) mention that scaling with n is "useful for cross-validation" and makes lambda values comparable across different sample sizes, but otherwise don't seem to discuss it.
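As a quick illustration of that comparability point (my example, not from Hastie et al.): glmnet divides its loss by n, so the automatically generated lambda sequence stays on the same scale when the sample size changes:

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(200 * 10), 200, 10)
y <- as.numeric(x %*% rnorm(10)) + rnorm(200)

# Fit on half the data and on all of it
fit_small <- glmnet(x[1:100, ], y[1:100])
fit_large <- glmnet(x, y)

# Because the loss is scaled by 1/n, the lambda sequences are on a
# comparable scale despite the doubled sample size.
max(fit_small$lambda)
max(fit_large$lambda)
```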
scikit-learn has a brief article covering these things here: https://scikit-learn.org/stable/auto_examples/svm/plot_svm_scale_c.html