
training with lora? #30

Closed · xiexing0916 opened this issue Sep 29, 2024 · 3 comments
@xiexing0916

Is there a way (LoRA?) to train with a lower memory footprint? Full fine-tuning does not fit in 48 GB of GPU memory.

@ChrisLiu6
Contributor

The released code does not support LoRA yet, but it is easy to implement yourself. You can also implement the get_trainable_params method on the model class to train only part of the model parameters (see the sketch below).

if hasattr(unwrapped_model, "get_trainable_params"):

The following issues may be helpful:

#17
#18
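
For concreteness, here is a minimal sketch of what such a get_trainable_params hook could look like, mirroring the hasattr check quoted above. It assumes a standard PyTorch nn.Module; ToyModel, the "lora_"/"norm"/"bias" name filters, and the optimizer settings are illustrative placeholders, not this repository's actual implementation.

```python
import torch
import torch.nn as nn


class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(512, 512)
        self.norm = nn.LayerNorm(512)
        # Hypothetical low-rank adapter parameters standing in for LoRA weights.
        self.lora_a = nn.Parameter(torch.zeros(8, 512))
        self.lora_b = nn.Parameter(torch.zeros(512, 8))

    def get_trainable_params(self):
        # Return only the parameters that should receive gradients, keyed by
        # name (here: LoRA adapters, norms, and biases; the filters are placeholders).
        return {
            name: p
            for name, p in self.named_parameters()
            if "lora_" in name or "norm" in name or name.endswith("bias")
        }


unwrapped_model = ToyModel()

# Mirrors the hasattr check referenced above: freeze everything except the
# parameters the model itself declares trainable.
if hasattr(unwrapped_model, "get_trainable_params"):
    trainable = unwrapped_model.get_trainable_params()
    for name, p in unwrapped_model.named_parameters():
        p.requires_grad = name in trainable

# Only the unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in unwrapped_model.parameters() if p.requires_grad), lr=1e-4
)
```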

@xiexing0916
Author

Thanks for the answer! I would also like to ask how much memory is needed to fine-tune all the parameters. It currently does not fit on 3× 48 GB A6000 GPUs.

@ChrisLiu6
Contributor

ChrisLiu6 commented Sep 29, 2024

We have not tested the memory lower bound. 8× A100 (80 GB) should be enough, but I have no idea about the minimum 😅.
