Why is the prompt limited to 77 tokens? #20
Comments
The clip-vit-large-patch14 (https://huggingface.co/openai/clip-vit-large-patch14) model used by SD can only handle sequences of 77 tokens. It works like that in the original PyTorch implementation as well. Anything longer than that gets silently truncated.
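A minimal sketch, assuming the Hugging Face `transformers` package, that illustrates where the 77-token limit comes from: the CLIP text encoder has only 77 position embeddings, and its tokenizer truncates longer prompts to fit.

```python
# Sketch: inspect the 77-token limit of the CLIP text encoder used by SD.
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

print(tokenizer.model_max_length)                 # 77
print(text_model.config.max_position_embeddings)  # 77

# A prompt far longer than 77 tokens gets cut down when truncation is enabled.
long_prompt = "a photo of " + "a very detailed scene " * 50
tokens = tokenizer(long_prompt, truncation=True, max_length=77, return_tensors="pt")
print(tokens.input_ids.shape)                     # torch.Size([1, 77])
```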
Thank you for the kind reply. In that case, I would suggest truncating the sequence to 77 tokens and emitting a warning, instead of throwing an assertion error at that point in the code.
Any tips on how to truncate or where to get started?
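A hypothetical helper (not part of this repository) sketching the suggestion above: count the tokens, warn if the prompt is too long, and let the tokenizer truncate to 77 tokens rather than raising an assertion error. Note that the 77-token budget includes the start-of-text and end-of-text tokens the tokenizer adds.

```python
# Sketch: truncate over-long prompts with a warning instead of asserting.
import warnings
from transformers import CLIPTokenizer

MAX_LENGTH = 77  # position-embedding limit of the CLIP text encoder

def encode_prompt(prompt: str, tokenizer: CLIPTokenizer):
    # Tokenize once without truncation just to measure the real length.
    ids = tokenizer(prompt).input_ids
    if len(ids) > MAX_LENGTH:
        warnings.warn(
            f"Prompt is {len(ids)} tokens but the text encoder only supports "
            f"{MAX_LENGTH}; the excess will be truncated."
        )
    # Tokenize again with truncation and padding to the fixed length.
    return tokenizer(
        prompt,
        truncation=True,
        max_length=MAX_LENGTH,
        padding="max_length",
        return_tensors="pt",
    ).input_ids

# Usage: feed the returned input_ids to the CLIP text encoder as usual.
```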
In a pipeline I replaced the PyTorch version with this implementation, but found that the maximum prompt length is limited to 77 tokens. Is this a compromise for some reason?