The pre-trained checkpoint generates very short output #38
Comments
Should probably upgrade transformers: huggingface/transformers#22903
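If an older release is the cause, checking the installed version is the first step. A minimal sketch, assuming a pip-managed environment; the thread does not state which release actually contains the referenced fix:

```python
# Print the installed transformers version to see whether an upgrade is needed.
import transformers

print(transformers.__version__)  # the original report used 4.28.0

# If the version is old, upgrade from the command line:
#   pip install --upgrade transformers
```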
Hi, I ran into the same problem. I took a screenshot of the left subgraph of Figure 1 in the pix2struct paper, and the pix2struct-large model only outputs the same '<>'. This is severely inconsistent with my expectations, and I am quite confused. I look forward to a response from the authors. Thanks a lot. PS: my transformers version is 4.31.0.
+1 |
@kentonl, is there a prompt for pretraining? |
+1 |
Thanks for your awesome work!
I want to use the model to generate the HTML of an image, so I chose the pre-trained checkpoint without fine-tuning. However, the generated output is very short. For example, my code only generates
<img_src=image>
without any detailed structure. My transformers version is 4.28.0. Do you know how to solve this problem? Thanks in advance :)
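For reference, a minimal sketch of this kind of call, assuming the Hugging Face Pix2StructProcessor and Pix2StructForConditionalGeneration classes and the google/pix2struct-base pre-trained checkpoint (screenshot.png is a placeholder path); this is not the reporter's actual snippet:

```python
# Minimal sketch with assumed checkpoint name and input path, not the reporter's code.
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")

image = Image.open("screenshot.png")  # placeholder screenshot of a web page
inputs = processor(images=image, return_tensors="pt")

# Set max_new_tokens explicitly so the small default generation length
# is not what cuts the output short.
generated_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```

Even with a larger generation budget, the comments above suggest that older transformers releases can still produce very short parses, which is why upgrading is the first suggestion in this thread.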