
Testing mixed int8 quantization


The following is a recipe for effectively debugging the bitsandbytes integration in Hugging Face transformers.

Library requirements

  • transformers>=4.22.0
  • accelerate>=0.12.0
  • bitsandbytes>=0.31.5

Hardware requirements

The following instructions were tested with 2 NVIDIA Tesla T4 GPUs. To run bitsandbytes successfully you need a GPU that supports 8-bit tensor cores. Turing, Ampere, or newer architectures (e.g. T4, RTX 20/30 series, A40-A100, A6000) should be supported.
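
If you are unsure whether your GPU qualifies, a quick check of the compute capability can help. Here is a minimal sketch (assuming Turing corresponds to compute capability 7.5 and Ampere to 8.x):

import torch

# Turing (e.g. T4, RTX 20xx) has compute capability 7.5; Ampere (e.g. A100, RTX 30xx) is 8.x
major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: {major}.{minor}")
print("int8 tensor cores likely supported:", (major, minor) >= (7, 5))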

Virtual envs

conda create --name int8-testing python==3.8
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"

If transformers>=4.23.0 is not yet released, install it from source instead:

pip install git+https://github.com/huggingface/transformers.git
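
Once the environment is ready, a small end-to-end generation test is a quick way to confirm that 8-bit loading works. This is a minimal sketch; bigscience/bloom-560m is only an example checkpoint, any small causal LM will do:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # example checkpoint, swap in the model you are debugging
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))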

Troubleshooting

A list of common errors:

Torch does not correctly perform operations on the GPU

First check that:

import torch

vec = torch.randn(1, 2, 3).to(0)

works without any error. If it does not, reinstall torch using conda:

conda create --name int8-testing python==3.8
conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"

For the latest PyTorch installation instructions, please see the official guide at https://pytorch.org/get-started/locally/, and the snippet above should work.

bitsandbytes operations are not supported under CPU!

This happens when some Linear weights are offloaded to the CPU by accelerate. Carefully check model.hf_device_map and make sure that no Linear module is assigned to the CPU. It is fine to have only the last module (usually the lm_head) on the CPU.
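
A quick way to spot such offloaded modules is to print the device map after loading. A minimal sketch (the checkpoint name is only an example):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # example checkpoint
    device_map="auto",
    load_in_8bit=True,
)

# any module mapped to "cpu" or "disk" (other than the lm_head) will trigger this error
for name, device in model.hf_device_map.items():
    if device in ("cpu", "disk"):
        print(f"offloaded to {device}: {name}")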

To use the type as a Parameter, please correct the detach() semantics defined by __torch_dispatch__() implementation.

Upgrade accelerate to the latest version, for example with pip install -U accelerate, and the problem should be solved.

Parameter has no attribute .CB

Same solution as above.

RuntimeError: CUDA error: an illegal memory access was encountered ... consider passing CUDA_LAUNCH_BLOCKING=1

Run your script with CUDA_LAUNCH_BLOCKING=1 prepended (e.g. CUDA_LAUNCH_BLOCKING=1 python your_script.py) and you should observe the error described in the next section.

CUDA illegal memory error: an illegal memory access at line...:

Check the CUDA versions with:

nvcc --version

and confirm it is the same version as the one detected by bitsandbytes. If not, run:

ls -l $CONDA_PREFIX/lib/libcudart.so

or

ls -l $LD_LIBRARY_PATH

Check whether libcudart.so has a correct symlink set. Sometimes nvcc detects the correct CUDA version but bitsandbytes does not; in that case you have to make sure that the libcudart.so symlink points to the correct CUDA runtime library.
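
A complementary check from the Python side is to compare the CUDA version PyTorch was built against with what nvcc and bitsandbytes report. A minimal sketch:

import torch

# CUDA version PyTorch was compiled with; compare it with the output of `nvcc --version`
# and with the CUDA setup that bitsandbytes prints at import time
print(torch.version.cuda)
print(torch.cuda.is_available())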

Here is an example of a badly configured CUDA installation:

nvcc --version gives:

[screenshot: nvcc --version reporting CUDA 11.3]

which means that the detected CUDA version is 11.3 but bitsandbytes outputs:

[screenshot: bitsandbytes reporting a different CUDA version]

First check:

echo $LD_LIBRARY_PATH

If it contains multiple paths separated by :, then you have to check which CUDA runtime is actually picked up by running:

ls -l $path/libcudart.so

for each path ($path) in the list. If there is only a single path, simply run

ls -l $LD_LIBRARY_PATH/libcudart.so

and you should see something like:

[screenshot: ls -l output showing libcudart.so symlinked to CUDA 10.2]

If you see that the file is linked to the wrong CUDA version (here 10.2), find the correct location of libcudart.so (for example with find / -name libcudart.so 2>/dev/null) and update the LD_LIBRARY_PATH environment variable so that it contains the directory with the correct libcudart.so file.
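
To check every entry of LD_LIBRARY_PATH at once, a small script can resolve where each libcudart.so actually points. A minimal sketch:

import os

# resolve libcudart.so in every LD_LIBRARY_PATH entry to its real target
for path in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
    candidate = os.path.join(path, "libcudart.so")
    if os.path.exists(candidate):
        print(candidate, "->", os.path.realpath(candidate))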