This repository has been archived by the owner on May 20, 2023. It is now read-only.

Benchmark for LLaMA pruning #2

Open
dercaft opened this issue Apr 17, 2023 · 2 comments
Comments

dercaft commented Apr 17, 2023

Would it be possible to provide a benchmark of pruned LLaMA?

horseee (Owner) commented Apr 17, 2023

Hi.

It's on our waitlist, but it requires a large amount of time and resources to conduct post-training on the pruned model. Without post-training, the pruned model performs poorly.

We are still trying to find a practical and efficient solution for post-training the pruned model, since the original training dataset is not public. If you have any ideas, we would be grateful for any suggestions or contributions.

horseee (Owner) commented May 20, 2023

We have updated the evaluation results in https://github.com/horseee/LLM-Pruner. Please refer to the new repo.
