Thank you for your wonderful project.
Can this language-binding model handle scenarios where a modality is missing? Specifically, is it possible to perform inference without the audio modality, and if so, would that significantly impact performance?
Lastly, is there a model checkpoint that was trained without a specific modality? I'm planning to use this model architecture for my project, but I don't intend to use the audio modality.
Thank you so much. Sincerely
Hello, LanguageBind has a separate model for each modality, i.e., it's not a single block that processes everything at once. You can load only the modalities you need by setting the `clip_type` variable in the README inference example; see the sketch below.
Each modality's embeddings are computed individually, and then you combine them.
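For example, a minimal sketch adapted from the README inference example, with the `audio` entry simply left out of `clip_type` (checkpoint names, helper imports such as `to_device`/`transform_dict`, and the asset paths follow that example and may differ in your version):

```python
import torch
from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer

device = torch.device('cuda:0')

# Only the modalities listed here are instantiated; omit 'audio' to skip the audio tower.
clip_type = {
    'video': 'LanguageBind_Video_FT',
    'image': 'LanguageBind_Image',
}

model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir').to(device)
model.eval()

tokenizer = LanguageBindImageTokenizer.from_pretrained(
    'LanguageBind/LanguageBind_Image', cache_dir='./cache_dir/tokenizer_cache_dir')
modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type}

# Hypothetical inputs for illustration.
video = ['assets/video/0.mp4']
image = ['assets/image/0.jpg']
language = ['a dog playing in the snow']

inputs = {
    'video': to_device(modality_transform['video'](video), device),
    'image': to_device(modality_transform['image'](image), device),
    'language': to_device(tokenizer(language, max_length=77, padding='max_length',
                                    truncation=True, return_tensors='pt'), device),
}

with torch.no_grad():
    embeddings = model(inputs)

# Each modality is embedded independently, so similarities can be computed pairwise.
print(torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1))
```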
A quote from the related ImageBind paper; I guess the same applies here: #38
Embedding arithmetic. For arithmetic, we again use the embedding features after temperature scaling. We ℓ2 normalize the features and sum the embeddings after scaling them by 0.5.
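If you do combine embeddings from two modalities, that recipe translates roughly to the following sketch (the helper name is made up, and temperature scaling is assumed to have already been applied to both inputs):

```python
import torch
import torch.nn.functional as F

def combine_embeddings(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    """Embedding arithmetic per the ImageBind description:
    l2-normalize each (temperature-scaled) embedding, scale by 0.5, and sum."""
    emb_a = F.normalize(emb_a, p=2, dim=-1)
    emb_b = F.normalize(emb_b, p=2, dim=-1)
    return 0.5 * emb_a + 0.5 * emb_b
```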
They used OpenCLIP weights to initialize each modality's encoder and fine-tuned it, together with an additional linear projection layer, in a multi-modal joint manner. So there are no intermediate checkpoints trained on a reduced set of modalities.