Combination of multiple modalities #38
Comments
Yes, just average the two modalities' logits.
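For illustration, here is a minimal sketch of what averaging the logits of two modalities might look like, assuming each modality tower has already been scored against the same set of text candidates (the tensors below are placeholders, not output of the repository's actual API):

```python
import torch

# Hypothetical per-modality logits against the same text candidates,
# shape (batch, num_texts). In practice these would come from e.g.
# the RGB and infrared encoders scored against the text encoder.
logits_rgb = torch.randn(4, 100)
logits_infrared = torch.randn(4, 100)

# Combine the two modalities by simply averaging their logits,
# then rank / predict texts from the combined scores.
combined_logits = (logits_rgb + logits_infrared) / 2
predictions = combined_logits.argmax(dim=-1)  # best-matching text per sample
```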
Is the code for this available? I cannot seem to locate it in the repository. If not, could you perhaps provide it? For example, for the Infrared+RGB -> Text task. Thanks in advance,
And is there a specific reason to average the logits rather than the produced embeddings of the modalities directly? Especially for the retrieval task, no logits are computed if I understand correctly. How would this be done without the logits?
I'm not the paper author, but in my experience you can just sum/average regular CLIP model features to achieve similar things, e.g. the text embedding for …
@damian0815 Hey, thanks for the answer. The fact that the scenario you describe works is only due to CLIP using cosine similarity as a distance metric. That similarity function has a very specific property: the scale of the vectors does not matter, only their angle in high-dimensional space. As a result, adding or averaging produces a vector with an average angle that can then be compared to a database. However, for tasks that do not use cosine similarity, this argument does not hold.
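To make the scale-invariance point concrete, here is a small sketch (not from the repository; the embeddings are random stand-ins for CLIP-style unit-norm features) showing why the average of two normalized embeddings works as a query under cosine similarity:

```python
import torch
import torch.nn.functional as F

# Hypothetical unit-norm embeddings from two modalities (e.g. audio and video)
# and a database of candidate text embeddings, all of dimension 512.
emb_a = F.normalize(torch.randn(512), dim=-1)
emb_b = F.normalize(torch.randn(512), dim=-1)
database = F.normalize(torch.randn(1000, 512), dim=-1)

# The average of two unit vectors points in an intermediate direction;
# its reduced length is irrelevant because cosine similarity ignores scale,
# so the averaged vector acts as a joint query over both modalities.
query = (emb_a + emb_b) / 2
scores = F.cosine_similarity(query.unsqueeze(0), database, dim=-1)
top5 = scores.topk(5).indices  # indices of the closest database entries
```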
@anthony-mendil I would assume, based on my experience working with CLIP and with the understanding that the LanguageBind models are, more or less, CLIP (or CLIP-like) models, that the answer to the question of how the embeddings are combined is literally just to average them (assuming, as you point out, that the downstream comparison is cosine similarity). What are you doing with the feature vectors that is not cosine similarity? Again, they're not logits and should not be treated as such, unless you're using the hidden states or you've added a classifier layer onto the end or something, in which case I guess you're on your own..?
@damian0815 I was investigating how this approach could be used in various downstream tasks, which involve adding a classifier on top of the combined embedding space. As you also mention at the end, averaging the embeddings causes undesired behavior in such cases.
First of all, congrats on the paper and thanks for providing the code!
In the paper, under 'Zero-shot language-based multi-modal joint retrieval', you mention that integrating/combining multiple embeddings improves the performance. I am specifically referring to the sentence:
'Similar trends have been observed in other modalities, where each modality has the potential to enhance the performance when combined with other modalities.'
However, the paper does not clarify how the embeddings for different modalities are actually combined. If, for instance, the input modalities are text, audio, video, and depth, the model would produce individual embeddings for each modality. How do you then combine these embeddings to obtain the results you report?
Do you simply average the different embeddings?
Thanks in advance,
Anthony Mendil.