Update requirements.txt #40

Open · wants to merge 1 commit into main
Conversation

RDXiaoLu commented

Solves some requirements problems, such as the peft version and the accelerate version.
[screenshot]
I can run it successfully.
solve some requirement problems
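
For anyone verifying the fix locally, a quick way to compare a working environment against the updated requirements.txt is to query the installed package metadata. This is a minimal sketch; the PR does not state the exact pinned versions, so none are assumed here:

```python
# Print the installed versions of the packages this PR re-pins,
# so a run can be compared against the updated requirements.txt.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("peft", "accelerate", "transformers"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```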
kellenf commented Feb 16, 2025

@RDXiaoLu Hello, I see a problem when I run the experiment:
Some weights of the model checkpoint at ckpts/pretrain_qformer/ were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.blocks.17.attn.rope.freqs_sin', 'model.vision_tower.vision_tower.blocks.22.mlp.ffn_ln.bias', 'model.mm_projector.query_decoder.layers.4.transformer_layers.5.bias', 'model.vision_tower.vision_tower.blocks.19.attn.rope.freqs_sin', 'model.vision_tower.vision_tower.blocks.18.attn.q_bias', 'model.vision_tower.vision_tower.blocks.1.attn.proj.weight', 'model.vision_tower.vision_tower.blocks.8.attn.v_proj.weight', 'model.vision_tower.vision_tower.blocks.23.attn.q_bias', 'model.vision_tower.vision_tower.blocks.13.mlp.w2.bias', 'model.vision_tower.vision_tower.blocks.18.mlp.w2.bias', 'model.mm_projector.query_decoder._layers.3.transformer_layers.0.attn.out_proj.weight', 'model.mm_projector.query_decoder._layers.4.transformer_layers.4._layers.0.bias', 'model.vision_tower.vision_tower.blocks.12.attn.proj.bias', 'model.mm_projector.query_decoder._layers.5.transformer_layers.2.attn.out_proj.bias', 'model.vision_tower.vision_tower.blocks. [log truncated]
Are you seeing the same thing on your side?
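
For debugging, the "were not used when initializing" message is the standard transformers warning for unexpected checkpoint keys. Assuming the repo's LlavaLlamaForCausalLM follows the standard transformers from_pretrained interface (the import path below is a guess), the skipped keys can be listed explicitly:

```python
# Sketch: list which checkpoint weights were skipped at load time.
# Assumes LlavaLlamaForCausalLM exposes the standard transformers
# from_pretrained interface; the import path is a guess for this repo.
from llava.model import LlavaLlamaForCausalLM

model, loading_info = LlavaLlamaForCausalLM.from_pretrained(
    "ckpts/pretrain_qformer/", output_loading_info=True
)
print("unexpected (unused) keys:", len(loading_info["unexpected_keys"]))
print("missing keys:", len(loading_info["missing_keys"]))
print(loading_info["unexpected_keys"][:10])
```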

RDXiaoLu (Author) commented

Hello, I guess the files under your ckpts/pretrain_qformer/ may not have been fully downloaded, and the eva02_petr_proj.pth file needs to be placed under ckpts/.
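
One way to rule out a partial download is to check that the expected files exist and have plausible sizes before launching training. This sketch only checks the paths named in this thread; the full file set depends on the release:

```python
# Sanity-check the checkpoint layout described above. Only paths named
# in this thread are listed; the complete set depends on the release.
from pathlib import Path

ckpts = Path("ckpts")
for f in [ckpts / "eva02_petr_proj.pth", ckpts / "pretrain_qformer"]:
    if not f.exists():
        print(f"{f}: MISSING")
    elif f.is_file():
        print(f"{f}: {f.stat().st_size / 1e6:.1f} MB")
    else:
        print(f"{f}: directory with {sum(1 for _ in f.iterdir())} entries")
```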
