<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Swin Transformer V2

## Overview
The Swin Transformer V2 model was proposed in [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.

The abstract from the paper is the following:

*Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.*

This model was contributed by [nandwalritik](https://huggingface.co/nandwalritik).
The original code can be found [here](https://github.com/microsoft/Swin-Transformer).
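
To get a feel for the model before the reference sections below, here is a minimal image-classification sketch. The checkpoint and test image are examples; any Swin Transformer V2 classification checkpoint from the Hub should work the same way:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification

# Example input: the COCO test image commonly used in the docs
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Example checkpoint; substitute any Swinv2 image-classification checkpoint
checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Swinv2ForImageClassification.from_pretrained(checkpoint)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# This checkpoint is fine-tuned on ImageNet-1k, so there are 1,000 labels
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
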
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer V2.

<PipelineTag pipeline="image-classification"/>

- [`Swinv2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb); a quick `pipeline` sketch follows this list.
- See also: [Image classification task guide](../tasks/image_classification)
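
For quick experiments, such a checkpoint can also be driven through the high-level `pipeline` API. A short sketch (the checkpoint name is an example):

```python
from transformers import pipeline

# Example checkpoint; any Swinv2 image-classification checkpoint works
classifier = pipeline("image-classification", model="microsoft/swinv2-tiny-patch4-window8-256")
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(predictions[0])  # top prediction, e.g. {'label': ..., 'score': ...}
```
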
Besides that:

- [`Swinv2ForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining); a short usage sketch follows this list.
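
A minimal sketch of the masked-image-modeling flow, assuming the `microsoft/swinv2-tiny-patch4-window8-256` checkpoint; the random boolean mask here is only for illustration (SimMIM pre-training uses its own masking strategy):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"  # example checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Swinv2ForMaskedImageModeling.from_pretrained(checkpoint)

pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# One boolean flag per patch: shape (batch_size, num_patches)
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
print(reconstructed_pixel_values.shape)  # (1, 3, 256, 256) for this checkpoint
```
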
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## Swinv2Config

[[autodoc]] Swinv2Config
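
A short sketch of the usual configuration-to-model flow; the defaults are meant to resemble the tiny variant, and fields such as `embed_dim`, `depths`, or `num_heads` can be overridden to scale the architecture:

```python
from transformers import Swinv2Config, Swinv2Model

# Default configuration (tiny-style architecture)
configuration = Swinv2Config()

# Build a randomly initialized model from the configuration
model = Swinv2Model(configuration)

# The configuration can always be read back from the model
configuration = model.config
```
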
## Swinv2Model

[[autodoc]] Swinv2Model
    - forward
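
The base model returns patch-level hidden states rather than task logits; a minimal feature-extraction sketch (the checkpoint name is an example):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2Model

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"  # example checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Swinv2Model.from_pretrained(checkpoint)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final-stage patch features: (batch_size, num_patches, hidden_size)
print(outputs.last_hidden_state.shape)
```
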
## Swinv2ForMaskedImageModeling

[[autodoc]] Swinv2ForMaskedImageModeling
    - forward

## Swinv2ForImageClassification

[[autodoc]] Swinv2ForImageClassification
    - forward