Thank you for your attention and for raising this thoughtful point! We appreciate your highlighting these related works. Given the rapidly growing body of work on language-image alignment and time constraints during writing, we did not include a discussion of these papers in the initial version; we will add them in the revision to give readers a more comprehensive view of the field.
One key distinction between these works and ours lies in the focus and setup of our study. As reflected in our title, we investigate whether the original, fixed text embeddings from large language models (LLMs) can directly benefit language-image alignment. In contrast, all three works you mentioned apply some form of post-processing to the LLM text embeddings (e.g., alignment or projection layers), which makes it difficult to attribute the observed performance gains to the LLMs themselves rather than to the added components or fine-tuning. We also train the image encoder entirely from scratch rather than building on a pre-trained model, so that the strengths or limitations of a particular base model do not confound our analysis.
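To make this distinction concrete, here is a minimal, PyTorch-style sketch of the setup described above. It is illustrative only: the encoder, loss, and names (e.g., `contrastive_step`, `frozen_llm_text_embeds`) are simplified placeholders rather than our actual implementation. The key point is that the LLM text embeddings enter as a fixed, pre-computed tensor with no projection or alignment layer, while the image encoder is randomly initialized and trained from scratch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageEncoder(nn.Module):
    """Placeholder image encoder, randomly initialized (trained from scratch)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Stand-in for a real backbone (e.g., a ViT); kept tiny for illustration.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=16, stride=16),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)


def contrastive_step(image_encoder, images, frozen_llm_text_embeds, temperature=0.07):
    """One training step: align image features to fixed LLM text embeddings.

    `frozen_llm_text_embeds` are pre-computed from the LLM and used as-is:
    no projection layer, no alignment module, no fine-tuning of the LLM.
    """
    img = F.normalize(image_encoder(images), dim=-1)
    txt = F.normalize(frozen_llm_text_embeds.detach(), dim=-1)  # no gradients on the text side
    logits = img @ txt.t() / temperature
    targets = torch.arange(images.size(0), device=images.device)
    # Symmetric InfoNCE loss; only the from-scratch image encoder is updated.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```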
Again, thanks for your insightful comment! We’ll definitely expand our related work section to include these contributions in the next version. Wishing you a great weekend!