<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# InstructBLIP

## Overview

The InstructBLIP model was proposed in [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the [BLIP-2](blip2) architecture for visual instruction tuning.

The abstract from the paper is the following:

*General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg" alt="drawing" width="600"/>

<small> InstructBLIP architecture. Taken from the <a href="https://arxiv.org/abs/2305.06500">original paper.</a> </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip).

## Usage tips

InstructBLIP uses the same architecture as [BLIP-2](blip2) with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
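
Below is a minimal sketch of instruction-following generation with the processor and model. It assumes the `Salesforce/instructblip-vicuna-7b` checkpoint and an example image from the LAVIS repository; any other InstructBLIP checkpoint or RGB image works the same way.

```python
import torch
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load an InstructBLIP checkpoint (Vicuna-7B language backbone assumed here)
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b").to(device)

# Example image and instruction
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"

# The processor prepares both the pixel values and the tokenized instruction;
# the instruction is passed to the Q-Former as well as to the language model.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    do_sample=False,
    num_beams=5,
    max_new_tokens=256,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```

For the larger checkpoints, loading in half precision (for example with `torch_dtype=torch.float16`) or with `device_map="auto"` helps keep memory usage manageable.
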
## InstructBlipConfig

[[autodoc]] InstructBlipConfig
    - from_vision_qformer_text_configs
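
As a short sketch, the composite configuration can be assembled from its three sub-configurations with `from_vision_qformer_text_configs`; default sub-configs and an OPT text backbone are assumed here purely for illustration.

```python
from transformers import (
    InstructBlipConfig,
    InstructBlipForConditionalGeneration,
    InstructBlipQFormerConfig,
    InstructBlipVisionConfig,
    OPTConfig,
)

# Default sub-configurations; in practice these would mirror a released checkpoint
vision_config = InstructBlipVisionConfig()
qformer_config = InstructBlipQFormerConfig()
text_config = OPTConfig()  # language-model backbone config (OPT assumed for illustration)

config = InstructBlipConfig.from_vision_qformer_text_configs(vision_config, qformer_config, text_config)

# A randomly initialized model built from this configuration
model = InstructBlipForConditionalGeneration(config)
```
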
## InstructBlipVisionConfig

[[autodoc]] InstructBlipVisionConfig

## InstructBlipQFormerConfig

[[autodoc]] InstructBlipQFormerConfig

## InstructBlipProcessor

[[autodoc]] InstructBlipProcessor

## InstructBlipVisionModel

[[autodoc]] InstructBlipVisionModel
    - forward

## InstructBlipQFormerModel

[[autodoc]] InstructBlipQFormerModel
    - forward

## InstructBlipForConditionalGeneration

[[autodoc]] InstructBlipForConditionalGeneration
    - forward
    - generate