# Testing mixed int8 quantization

*(Image: HFxbitsandbytes.png)*

The following is a recipe for effectively debugging the `bitsandbytes` integration in Hugging Face `transformers`.

## Library requirements

- `transformers>=4.22.0`
- `accelerate>=0.12.0`
- `bitsandbytes>=0.31.5`

## Hardware requirements

The following instructions were tested with 2 NVIDIA Tesla T4 GPUs. To run `bitsandbytes` successfully you need a GPU with 8-bit tensor core support. Note that Turing, Ampere, and newer architectures (e.g. T4, RTX 20 and RTX 30 series, A40-A100, A6000) are supported.
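As a quick sanity check, the sketch below uses `torch.cuda.get_device_capability` to report whether your GPU's compute capability is new enough (the helper name is ours, not part of `transformers`; the assumption is that Turing corresponds to compute capability 7.5, so anything at or above 7.5 qualifies):

```python
def supports_int8_tensor_cores(major: int, minor: int) -> bool:
    """True for Turing (7.5), Ampere (8.x) and newer compute capabilities."""
    return (major, minor) >= (7, 5)

if __name__ == "__main__":
    try:
        import torch

        if torch.cuda.is_available():
            major, minor = torch.cuda.get_device_capability(0)
            print(f"GPU 0 capability {major}.{minor}: "
                  f"int8-capable={supports_int8_tensor_cores(major, minor)}")
        else:
            print("No CUDA device detected.")
    except ImportError:
        print("torch is not installed.")
```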

## Virtual envs

```bash
conda create --name int8-testing python==3.8
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"
```

(Quoting the version specifiers prevents the shell from interpreting `>=` as a redirection.)

If `transformers>=4.23.0` is not released yet, then use:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
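To confirm the installed versions meet the minimums listed above, a small standard-library sketch (the helper names and the `MINIMUMS` table are ours; the naive tuple comparison assumes plain numeric versions without pre-release tags):

```python
from importlib.metadata import version, PackageNotFoundError

# Minimum versions from the requirements list above.
MINIMUMS = {"transformers": "4.23.0", "accelerate": "0.12.0", "bitsandbytes": "0.31.5"}

def as_tuple(v: str) -> tuple:
    """'4.23.0' -> (4, 23, 0); assumes purely numeric dotted versions."""
    return tuple(int(part) for part in v.split(".")[:3])

def check(pkg: str, minimum: str) -> str:
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return f"{pkg}: NOT INSTALLED"
    ok = as_tuple(installed) >= as_tuple(minimum)
    return f"{pkg}: {installed} ({'ok' if ok else 'too old, need >= ' + minimum})"

if __name__ == "__main__":
    for pkg, minimum in MINIMUMS.items():
        print(check(pkg, minimum))
```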

## Troubleshooting

A list of common errors:

### Torch does not correctly do the operations on GPU

First check that the following runs without any error:

```python
import torch

vec = torch.randn(1, 2, 3).to(0)
```

If it does not, install torch using conda:

```bash
conda create --name int8-testing python==3.8
conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"
```

For the latest PyTorch install instructions please see the official installation page (https://pytorch.org/get-started/locally/), and the snippet above should work.

### `bitsandbytes` operations are not supported under CPU!

This happens when some `Linear` weights are placed on the CPU when using `accelerate`. Please check `model.hf_device_map` carefully and make sure that no `Linear` module is assigned to the CPU. It is fine to have the last module (usually the `lm_head`) on the CPU.
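This check can be sketched as a small helper that scans an `accelerate`-style device map for CPU-placed modules (the function name is ours, not a `transformers` API; the toy device map is purely illustrative):

```python
def cpu_assigned_modules(device_map: dict) -> list:
    """Return the names of modules mapped to 'cpu' or 'disk'."""
    return [name for name, device in device_map.items() if device in ("cpu", "disk")]

# With a real model you would pass model.hf_device_map.
# Toy example for illustration:
toy_map = {"transformer.h.0": 0, "transformer.h.1": 0, "lm_head": "cpu"}
print(cpu_assigned_modules(toy_map))  # ['lm_head'] -- fine if it is only the lm_head
```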

### To use the type as a Parameter, please correct the detach() semantics defined by __torch_dispatch__() implementation

Use the latest version of `accelerate`, e.g. `pip install -U accelerate`, and the problem should be solved.

### `Parameter` has no attribute `.CB`

Same solution as above.

### RuntimeError: CUDA error: an illegal memory access was encountered ... consider passing CUDA_LAUNCH_BLOCKING=1

Run your script with `CUDA_LAUNCH_BLOCKING=1` prepended (e.g. `CUDA_LAUNCH_BLOCKING=1 python my_script.py`) and you should observe the error described in the next section.

### CUDA illegal memory error: an illegal memory access at line...

Check the CUDA versions with:

```bash
nvcc --version
```

and confirm it is the same version as the one detected by `bitsandbytes`. If not, run:

```bash
ls -l $CONDA_PREFIX/lib/libcudart.so
```

or

```bash
ls -l $LD_LIBRARY_PATH
```

Check whether `libcudart.so` has the correct symlink set. Sometimes `nvcc` detects the correct CUDA version but `bitsandbytes` doesn't. You have to make sure that the symlink for `libcudart.so` points to the correct CUDA runtime file.
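A small sketch of this check in Python (the helper name is ours; it assumes the usual `libcudart.so.<major>.<minor>...` naming convention for the resolved target):

```python
import os
import re
from typing import Optional

def cuda_version_from_symlink(path: str) -> Optional[str]:
    """Follow the symlink chain for a libcudart.so path and extract the
    CUDA version encoded in the resolved filename, e.g.
    libcudart.so.11.3.109 -> '11.3'. Returns None if no version is found."""
    target = os.path.realpath(path)
    match = re.search(r"libcudart\.so\.(\d+)\.(\d+)", target)
    return f"{match.group(1)}.{match.group(2)}" if match else None
```

Compare the result against what `nvcc --version` reports; a mismatch points at the badly configured installation described below.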

Here is an example of a badly configured CUDA installation:

`nvcc --version` gives:

*(screenshot: `nvcc` reporting CUDA version 11.3)*

which means that the detected CUDA version is 11.3, but `bitsandbytes` outputs:

*(screenshot: `bitsandbytes` reporting a different CUDA version)*

First check:

```bash
echo $LD_LIBRARY_PATH
```

If this contains multiple paths separated by `:`, then you have to make sure that the correct CUDA version is set, by running:

```bash
ls -l $path/libcudart.so
```

on each path (`$path`) separated by `:`. If there is only one path, simply run:

```bash
ls -l $LD_LIBRARY_PATH/libcudart.so
```

and you should see something like:

*(screenshot: `libcudart.so` symlinked to a CUDA 10.2 runtime)*

If you see that the file is linked to the wrong CUDA version (here 10.2), find the correct location of `libcudart.so` (e.g. `find / -name libcudart.so`) and replace the environment variable `LD_LIBRARY_PATH` with the path containing the correct `libcudart.so` file.
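The per-path check above can be automated with a short sketch (the function name is ours; the `exists` parameter is injectable only so the logic can be exercised without a real CUDA installation):

```python
import os

def paths_with_libcudart(ld_library_path: str, exists=os.path.exists) -> list:
    """Return the colon-separated entries of an LD_LIBRARY_PATH-style string
    that actually contain a libcudart.so file."""
    return [
        p for p in ld_library_path.split(":")
        if p and exists(os.path.join(p, "libcudart.so"))
    ]

if __name__ == "__main__":
    hits = paths_with_libcudart(os.environ.get("LD_LIBRARY_PATH", ""))
    print("libcudart.so found in:", hits or "none of the LD_LIBRARY_PATH entries")
```

If more than one entry is reported, make sure the one listed first points at the CUDA version that `nvcc` reports.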