# FAQ
We list some common issues faced by many users and their corresponding solutions here.
Feel free to enrich the list if you find any frequent issues and know how to help others solve them.
If the contents here do not cover your issue, please create an issue using the [provided templates](/.github/ISSUE_TEMPLATE/error-report.md) and make sure you fill in all required information in the template.
## Installation
- **Compatibility issue between MMCV and MMPose; "AssertionError: MMCV==xxx is used but incompatible. Please install mmcv>=xxx, \<=xxx."**
Here are the version correspondences between `mmdet`, `mmcv` and `mmpose`:
- mmdet 2.x \<=> mmpose 0.x \<=> mmcv 1.x
- mmdet 3.x \<=> mmpose 1.x \<=> mmcv 2.x
Detailed compatible MMPose and MMCV versions are shown below. Please choose the correct version of MMCV to avoid installation issues.
### MMPose 1.x
| MMPose version | MMCV/MMEngine version           |
| :------------: | :-----------------------------: |
|     1.3.2      |  mmcv>=2.0.1, mmengine>=0.9.0   |
|     1.3.1      |  mmcv>=2.0.1, mmengine>=0.9.0   |
|     1.3.0      |  mmcv>=2.0.1, mmengine>=0.9.0   |
|     1.2.0      |  mmcv>=2.0.1, mmengine>=0.8.0   |
|     1.1.0      |  mmcv>=2.0.1, mmengine>=0.8.0   |
|     1.0.0      |  mmcv>=2.0.0, mmengine>=0.7.0   |
|    1.0.0rc1    | mmcv>=2.0.0rc4, mmengine>=0.6.0 |
|    1.0.0rc0    | mmcv>=2.0.0rc0, mmengine>=0.0.1 |
|    1.0.0b0     | mmcv>=2.0.0rc0, mmengine>=0.0.1 |
### MMPose 0.x
| MMPose version | MMCV version              |
| :------------: | :-----------------------: |
|      0.x       | mmcv-full>=1.3.8, \<1.8.0 |
|     0.29.0     | mmcv-full>=1.3.8, \<1.7.0 |
|     0.28.1     | mmcv-full>=1.3.8, \<1.7.0 |
|     0.28.0     | mmcv-full>=1.3.8, \<1.6.0 |
|     0.27.0     | mmcv-full>=1.3.8, \<1.6.0 |
|     0.26.0     | mmcv-full>=1.3.8, \<1.6.0 |
|     0.25.1     | mmcv-full>=1.3.8, \<1.6.0 |
|     0.25.0     | mmcv-full>=1.3.8, \<1.5.0 |
|     0.24.0     | mmcv-full>=1.3.8, \<1.5.0 |
|     0.23.0     | mmcv-full>=1.3.8, \<1.5.0 |
|     0.22.0     | mmcv-full>=1.3.8, \<1.5.0 |
|     0.21.0     | mmcv-full>=1.3.8, \<1.5.0 |
|     0.20.0     | mmcv-full>=1.3.8, \<1.4.0 |
|     0.19.0     | mmcv-full>=1.3.8, \<1.4.0 |
|     0.18.0     | mmcv-full>=1.3.8, \<1.4.0 |
|     0.17.0     | mmcv-full>=1.3.8, \<1.4.0 |
|     0.16.0     | mmcv-full>=1.3.8, \<1.4.0 |
|     0.14.0     | mmcv-full>=1.1.3, \<1.4.0 |
|     0.13.0     | mmcv-full>=1.1.3, \<1.4.0 |
|     0.12.0     | mmcv-full>=1.1.3, \<1.3   |
|     0.11.0     | mmcv-full>=1.1.3, \<1.3   |
|     0.10.0     | mmcv-full>=1.1.3, \<1.3   |
|     0.9.0      | mmcv-full>=1.1.3, \<1.3   |
|     0.8.0      | mmcv-full>=1.1.1, \<1.2   |
|     0.7.0      | mmcv-full>=1.1.1, \<1.2   |
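
To find out which versions are installed in your environment, a quick check (a minimal sketch; `mmengine` only applies to MMPose 1.x):

```python
# Print the installed versions to compare against the tables above.
import mmcv
import mmpose

print('mmpose:', mmpose.__version__)
print('mmcv:', mmcv.__version__)

try:
    import mmengine  # only required by MMPose 1.x
    print('mmengine:', mmengine.__version__)
except ImportError:
    pass
```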
- **Unable to install xtcocotools**
1. Try to install it from PyPI manually: `pip install xtcocotools`.
2. If step 1 does not work, try to install it from [source](https://github.com/jin-s13/xtcocoapi):
```shell
git clone https://github.com/jin-s13/xtcocoapi
cd xtcocoapi
python setup.py install
```
- **No matching distribution found for xtcocotools>=1.6**
1. Install Cython with `pip install cython`.
2. Install xtcocotools from [source](https://github.com/jin-s13/xtcocoapi):
```shell
git clone https://github.com/jin-s13/xtcocoapi
cd xtcocoapi
python setup.py install
```
- **"No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'"**
1. Uninstall existing mmcv in the environment using `pip uninstall mmcv`.
2. Install mmcv following the [mmcv installation instructions](https://mmcv.readthedocs.io/en/2.x/get_started/installation.html).
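
After reinstalling, you can verify that the compiled extensions are present (a minimal check; `nms` is one of the compiled ops in mmcv 2.x):

```python
# This import fails with "No module named 'mmcv._ext'" if the compiled
# extensions are missing, and succeeds after a correct installation.
from mmcv.ops import nms  # noqa: F401

print('mmcv ops are available')
```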
## Data
- **What if my custom dataset does not have bounding box labels?**
We can estimate the bounding box of a person as the minimal box that tightly bounds all the keypoints.
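
A minimal sketch of this idea, assuming `keypoints` is an `(N, 2)` NumPy array of visible keypoint coordinates (the function name and padding factor are illustrative):

```python
import numpy as np


def bbox_from_keypoints(keypoints, padding=1.25):
    """Estimate a person bbox as the minimal box around the keypoints.

    `padding` slightly enlarges the box to include some context.
    Returns the bbox in (x1, y1, x2, y2) format.
    """
    x1, y1 = keypoints.min(axis=0)
    x2, y2 = keypoints.max(axis=0)
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * padding, (y2 - y1) * padding
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
```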
- **What is `COCO_val2017_detections_AP_H_56_person.json`? Can I train pose models without it?**
`COCO_val2017_detections_AP_H_56_person.json` contains the "detected" human bounding boxes for the COCO validation set, generated by Faster R-CNN. You can evaluate models with ground-truth bounding boxes by setting `bbox_file=None` in `val_dataloader.dataset` in the config, or evaluate the generalizability of models with detected boxes by setting `bbox_file='COCO_val2017_detections_AP_H_56_person.json'`.
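
For example, the relevant part of a config might look like this (a sketch of only the fields involved, not a complete config):

```python
# Evaluate with ground-truth boxes by disabling the detection file.
val_dataloader = dict(
    dataset=dict(
        bbox_file=None,
        # ...or evaluate with detected boxes instead:
        # bbox_file='COCO_val2017_detections_AP_H_56_person.json',
    ))
```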
## Training
- **RuntimeError: Address already in use**
Set the environment variable `MASTER_PORT=XXX`. For example:
```shell
MASTER_PORT=29517 GPUS=16 GPUS_PER_NODE=8 CPUS_PER_TASK=2 ./tools/slurm_train.sh train res50 configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_8xb64-210e_coco-256x192.py work_dirs/res50_coco_256x192
```
- **"Unexpected keys in source state dict" when loading pre-trained weights**
It is normal that some layers of the pretrained model are not used in the pose model. An ImageNet-pretrained classification network and the pose network may have different architectures (e.g. no classification head), so some unexpected keys in the source state dict are actually expected.
- **How to use trained models for backbone pre-training?**
Refer to [Migration - Step3: Model - Backbone](../migration.md).
When training, the unexpected keys will be ignored.
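
A sketch of the idea from the migration guide: initialize the backbone from a trained pose model's checkpoint via `init_cfg`, keeping only the backbone weights (the checkpoint path here is hypothetical):

```python
model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',
            # Load only the keys prefixed with 'backbone.' from the checkpoint.
            prefix='backbone.',
            checkpoint='work_dirs/my_pose_model/epoch_210.pth')))
```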
- **How to visualize the training accuracy/loss curves in real-time?**
Modify `vis_backends` in the config file like:
```python
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend')
]
```
You can refer to [user_guides/visualization.md](../user_guides/visualization.md).
- **Log info is NOT printed**
Use a smaller log interval. For example, change `interval=50` to `interval=1` in the config:
```python
# hooks
default_hooks = dict(logger=dict(interval=1))
```
## Evaluation
- **How to evaluate on MPII test dataset?**
Since we do not have the ground truth for the test dataset, we cannot evaluate it locally.
If you would like to evaluate the performance on the test set, you have to upload `pred.mat` (which is generated during testing) to the official server via email, according to [the MPII guideline](http://human-pose.mpi-inf.mpg.de/#evaluation).
- **For top-down 2D pose estimation, why can the predicted joint coordinates fall outside the bounding box (bbox)?**
We do not directly use the bbox to crop the image. The bbox is first transformed to a center and scale, and the scale is multiplied by a factor (1.25) to include some context. If the width/height ratio differs from that of the model input (e.g. 192/256), the bbox is adjusted to match it, so the crop (and hence the predicted keypoints) can extend beyond the original bbox.
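
A sketch of this transform (the function name is illustrative; MMPose's own bbox utilities implement the real version):

```python
import numpy as np


def bbox_xywh_to_center_scale(bbox, aspect_ratio=192 / 256, padding=1.25):
    """Convert an (x, y, w, h) bbox to the center & scale described above."""
    x, y, w, h = bbox
    center = np.array([x + w / 2, y + h / 2])
    # Match the model input's aspect ratio by enlarging the shorter side,
    # so the crop (and the predicted keypoints) may extend beyond the bbox.
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    else:
        w = h * aspect_ratio
    scale = np.array([w, h]) * padding
    return center, scale
```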
## Inference
- **How to run mmpose on CPU?**
Run demos with `--device=cpu`.
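
If you call the Python API instead of the demo scripts, the device can be set the same way (a minimal sketch using the high-level inferencer; 'human' is a built-in model alias and the image path is a placeholder):

```python
from mmpose.apis import MMPoseInferencer

# Build the inferencer on CPU and run it on a single image.
inferencer = MMPoseInferencer('human', device='cpu')
result = next(inferencer('path/to/image.jpg'))
```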
- **How to speed up inference?**
A few approaches may help to improve the inference speed:
1. Set `flip_test=False` in `model.test_cfg` in the config file (see the sketch after this list).
2. For top-down models, use a faster human bounding box detector; see the [MMDetection model zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html).
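
For example, to disable flip testing, the relevant config field looks like this (a sketch showing only this field; the rest of the `model` config is unchanged):

```python
model = dict(
    # Disable test-time flip augmentation to roughly halve inference cost.
    test_cfg=dict(flip_test=False))
```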
- **What is the definition of each keypoint index?**
Check the [meta information file](https://github.com/open-mmlab/mmpose/tree/main/configs/_base_/datasets) for the dataset used to train the model you are using. The key `keypoint_info` includes the definition of each keypoint.
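
For example, to print the keypoint indices and names defined in the COCO meta file (a minimal sketch; it assumes you run it from the MMPose repository root):

```python
from mmengine.config import Config

# The meta file defines `dataset_info`, whose `keypoint_info` maps each
# keypoint index to its name, color, type and swap partner.
meta = Config.fromfile('configs/_base_/datasets/coco.py')
for idx, info in meta.dataset_info['keypoint_info'].items():
    print(idx, info['name'])
```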
|