Error loading the pretrained model #29
It might be a torch version issue.
My demo does run: https://huggingface.co/spaces/gyrojeff/YuzuMarker.FontDetection. You can take a look there at how I run it with Docker.
Thanks for the quick reply. I also tried running from the image, but hit a similar problem. I'm not sure whether the weights I downloaded are the issue; there are 8 checkpoints and I only tried 2 of them.
Don't build it locally yourself. To save effort back then I didn't pip freeze and wrote the requirements by hand, so by now there are probably version mismatches everywhere.
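Since version drift is the suspected cause here, a quick way to rule it in or out is to print the versions actually installed in the environment and compare them against requirements.txt / the Dockerfile. This is a minimal check, not part of the repo:

import torch
import pytorch_lightning as pl

# Print the installed versions to compare against what the checkpoint was trained with.
print("torch:", torch.__version__)
print("pytorch_lightning:", pl.__version__)
print("CUDA available:", torch.cuda.is_available())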
That is exactly the image I pulled. I'm not very familiar with Docker, so I'm not sure whether I did something wrong. After entering the container I saw that the workspace only contains three items: demo_fonts, font_demo_cache.bin, and requirements.txt. I then copied my local code and the weights into the workspace and put demo_fonts and font_demo_cache.bin into the corresponding directories. When I tried to run python demo.py -d -1 -c name=4x-epoch=84-step=1649340.ckpt, I got the error.
Can you share the error log?
If it really doesn't work, you can use that Dockerfile and clone the code from Hugging Face; that one is guaranteed to run. The last restart was this month:

===== Application Startup at 2024-10-16 06:01:11 =====
Downloading (…)=18-step=368676.ckpt: 100%|██████████| 434M/434M [00:13<00:00, 32.1MB/s]
Preparing fonts ...
Running on local URL: http://0.0.0.0:7860
OK, thank you very much. I'll try a few more times.
Hello, I've run into the same problem and would like to know whether it has been solved.
Hello, I downloaded your weight files and ran
python demo.py -c name=4x-epoch=84-step=1649340.ckpt
and got the following error:
Traceback (most recent call last):
  File "demo.py", line 116, in <module>
    FontDetector.load_from_checkpoint(
  File "/home/algroup/anaconda3/envs/pytorch2/lib/python3.8/site-packages/pytorch_lightning/utilities/model_helpers.py", line 125, in wrapper
    return self.method(cls, *args, **kwargs)
  File "/home/algroup/anaconda3/envs/pytorch2/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1586, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "/home/algroup/anaconda3/envs/pytorch2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 91, in _load_from_checkpoint
    model = _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/home/algroup/anaconda3/envs/pytorch2/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 187, in _load_state
    keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/home/algroup/anaconda3/envs/pytorch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FontDetector:
Unexpected key(s) in state_dict: "model._orig_mod.model.layer1.2.conv1.weight", "model._orig_mod.model.layer1.2.bn1.weight", "model._orig_mod.model.layer1.2.bn1.bias", "model._orig_mod.model.layer1.2.bn1.running_mean", "model._orig_mod.model.layer1.2.bn1.running_var", "model._orig_mod.model.layer1.2.bn1.num_batches_tracked", "model._orig_mod.model.layer1.2.conv2.weight", "model._orig_mod.model.layer1.2.bn2.weight", "model._orig_mod.model.layer1.2.bn2.bias", "model._orig_mod.model.layer1.2.bn2.running_mean", "model._orig_mod.model.layer1.2.bn2.running_var", "model._orig_mod.model.layer1.2.bn2.num_batches_tracked", "model._orig_mod.model.layer1.2.conv3.weight", "model._orig_mod.model.layer1.2.bn3.weight", "model._orig_mod.model.layer1.2.bn3.bias", "model._orig_mod.model.layer1.2.bn3.running_mean", "model._orig_mod.model.layer1.2.bn3.running_var", "model._orig_mod.model.layer1.2.bn3.num_batches_tracked", "model._orig_mod.model.layer1.0.conv3.weight", "model._orig_mod.model.layer1.0.bn3.weight", "model._orig_mod.model.layer1.0.bn3.bias", "model._orig_mod.model.layer
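For reference, the "_orig_mod." segment in the unexpected keys is the prefix torch.compile() adds to a wrapped module, and the extra layer1.2 / conv3 / bn3 entries look like they belong to a larger bottleneck-style ResNet backbone than the one being instantiated locally. A hedged diagnostic sketch (not from the repo; the checkpoint filename is taken from the command above) that inspects the stored keys and strips the compile prefix:

import torch

# Load only the raw checkpoint dictionary; no model construction involved.
ckpt = torch.load("name=4x-epoch=84-step=1649340.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"]

# Strip the torch.compile() wrapper prefix so keys line up with an uncompiled model.
cleaned = {k.replace("._orig_mod.", "."): v for k, v in state_dict.items()}

# Print a sample of keys and tensor shapes to see which backbone the checkpoint was trained with.
for k in list(cleaned)[:10]:
    print(k, tuple(cleaned[k].shape))

If the cleaned keys still do not line up with the locally built model, the mismatch is in the backbone variant chosen when constructing FontDetector rather than in the prefix itself.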