[Grounding DINO] Add resources (#30232)
* Add resources
* Address comments
* Apply suggestions from code review

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
parent d2cec09baa
commit 8c12690cec
@@ -70,6 +70,20 @@ results = processor.post_process_grounded_object_detection(
)
```
## Grounded SAM
One can combine Grounding DINO with the [Segment Anything](sam) model for text-based mask generation as introduced in [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). You can refer to this [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb) 🌎 for details.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grounded_sam.png"
alt="drawing" width="900"/>
<small> Grounded SAM overview. Taken from the <a href="https://github.com/IDEA-Research/Grounded-Segment-Anything">original repository</a>. </small>
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Grounding DINO. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- Demo notebooks regarding inference with Grounding DINO as well as combining it with [SAM](sam) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Grounding%20DINO). 🌎
## GroundingDinoImageProcessor
@@ -109,6 +109,15 @@ SlimSAM, a pruned version of SAM, was proposed in [0.1% Data Makes Segment Anyth
Checkpoints can be found on the [hub](https://huggingface.co/models?other=slimsam), and they can be used as a drop-in replacement of SAM.
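Because SlimSAM only prunes the weights and keeps SAM's architecture, a SlimSAM checkpoint loads through the same `SamModel` and `SamProcessor` classes; only the checkpoint id changes. A short sketch, where the checkpoint id is an assumption for illustration (pick an actual one from the hub link above):

```python
# SlimSAM as a drop-in replacement for SAM: same classes, different checkpoint.
from transformers import SamModel, SamProcessor

# Illustrative checkpoint id; see https://huggingface.co/models?other=slimsam
# for the released SlimSAM checkpoints.
SLIMSAM_CHECKPOINT = "Zigeng/SlimSAM-uniform-50"


def load_slimsam(checkpoint_id=SLIMSAM_CHECKPOINT):
    """Load a SlimSAM checkpoint with the standard SAM classes."""
    model = SamModel.from_pretrained(checkpoint_id)
    processor = SamProcessor.from_pretrained(checkpoint_id)
    return model, processor
```

Any downstream code written against `SamModel` (point or box prompts, `post_process_masks`, and so on) then works unchanged.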
## Grounded SAM
One can combine [Grounding DINO](grounding-dino) with SAM for text-based mask generation as introduced in [Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks](https://arxiv.org/abs/2401.14159). You can refer to this [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb) 🌎 for details.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/grounded_sam.png"
alt="drawing" width="900"/>
<small> Grounded SAM overview. Taken from the <a href="https://github.com/IDEA-Research/Grounded-Segment-Anything">original repository</a>. </small>
## SamConfig
[[autodoc]] SamConfig