- Did you update? - No need, this was a fresh install
- Colab or Kaggle or local / cloud - Local (WSL)
- Number of GPUs used (`nvidia-smi`) - 1
- Which notebook? Please link! - None
- Which Unsloth version, TRL version, transformers version, PyTorch version? - Unsloth 2025.7.3, TRL 0.19.1, Transformers 4.53.2, PyTorch 2.7.1+cu126
- Which trainer? `SFTTrainer`, `GRPOTrainer`, etc. - None; the error occurs when importing `FastLanguageModel`
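Since importing unsloth is what crashes, here is a quick sketch of how the versions above can be confirmed without triggering the import; reading package metadata avoids running any unsloth code (the package list is my own choice, not from the template):

```python
# Read installed package metadata without importing the packages,
# so the crashing `import unsloth` is never executed.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("unsloth", "unsloth_zoo", "trl", "transformers", "torch", "vllm"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```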
How to reproduce:
- Create a new venv based on Python 3.12.3.
- Open VS Code.
- Create a new Jupyter notebook and open it.
- Select the new venv as the notebook kernel (a quick sanity check is sketched below the list).
- Run `%pip install unsloth` in a cell.
- Run `from unsloth import FastLanguageModel` in a cell.
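As a sanity check that the notebook kernel really is the fresh venv (the `~/unslothfasz/...` paths in the traceback below suggest it is), this cell can be run first; the expected values are assumptions based on my setup:

```python
# Confirm the kernel runs from the new venv, not the system Python.
import sys

print(sys.executable)  # expected: something like ~/unslothfasz/bin/python
print(sys.version)     # expected: 3.12.3
```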
The import in the last step produces:

```
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 1
----> 1 from unsloth import FastLanguageModel
2 model, tokenizer = FastLanguageModel.from_pretrained(
3 model_name = "lora_model_loss3.0",
4 max_seq_length= 2048, # Choose any for long context!
(...) 7 # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
8 )
File ~/unslothfasz/lib/python3.12/site-packages/unsloth/__init__.py:243
240 raise ImportError("Unsloth: Please install unsloth_zoo via `pip install unsloth_zoo`")
241 pass
--> 243 from .models import *
244 from .models import __version__
245 from .save import *
File ~/unslothfasz/lib/python3.12/site-packages/unsloth/models/__init__.py:15
1 # Copyright 2023-present Daniel Han-Chen & the Unsloth team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...) 12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 15 from .llama import FastLlamaModel
16 from .loader import FastLanguageModel, FastVisionModel, FastTextModel, FastModel
17 from .mistral import FastMistralModel
File ~/unslothfasz/lib/python3.12/site-packages/unsloth/models/llama.py:50
48 if HAS_FLASH_ATTENTION:
49 from flash_attn import flash_attn_func
---> 50 from .vision import FastBaseModel
52 # Final patching code
53 from transformers.models.llama.modeling_llama import (
54 LlamaAttention,
55 LlamaDecoderLayer,
56 LlamaModel,
57 LlamaForCausalLM,
58 )
File ~/unslothfasz/lib/python3.12/site-packages/unsloth/models/vision.py:83
76 _compile_config = CompileConfig(
77 fullgraph = False,
78 dynamic = None,
79 mode = "reduce-overhead",
80 )
81 _compile_config.disable = True # Must set manually
---> 83 from unsloth_zoo.vllm_utils import (
84 convert_lora_modules,
85 return_lora_modules,
86 )
88 def unsloth_base_fast_generate(
89 self,
90 *args,
91 **kwargs,
92 ):
93 if len(args) != 0:
File ~/unslothfasz/lib/python3.12/site-packages/unsloth_zoo/vllm_utils.py:95
92 return vllm_check or unsloth_check
93 pass
---> 95 import vllm.model_executor.layers.quantization.bitsandbytes
97 if not hasattr(
98 vllm.model_executor.layers.quantization.bitsandbytes,
99 "apply_bnb_4bit"
100 ):
101 # Fix force using torch.bfloat16 all the time and make it dynamic
102 def _apply_4bit_weight(
103 self,
104 layer: torch.nn.Module,
(...) 107 ) -> torch.Tensor:
108 # only load the bitsandbytes module when needed
ModuleNotFoundError: No module named 'vllm.model_executor'
```
By the way, I noticed the notebooks on Colab do NOT use vLLM, and I do not want to use vLLM either. I have no idea why this happens.
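For what it's worth, the sketch below is the diagnostic I would run next. It rests on an assumption I cannot confirm from the traceback alone: that `unsloth_zoo/vllm_utils.py` only reaches the failing `import vllm.model_executor...` line when a top-level `vllm` package is importable, so a partial or broken vLLM install would explain the crash:

```python
# Hedged diagnostic: is a partial vllm install what trips unsloth_zoo?
import importlib.util

spec = importlib.util.find_spec("vllm")
print("vllm package found:", spec is not None)

if spec is not None:
    import vllm

    print("vllm location:", vllm.__file__)
    print("vllm version:", getattr(vllm, "__version__", "unknown"))
    # The exact submodule unsloth_zoo tries to import at line 95:
    sub = importlib.util.find_spec("vllm.model_executor")
    print("vllm.model_executor found:", sub is not None)
```

If `vllm` shows up as installed but `vllm.model_executor` does not, then `pip uninstall -y vllm` (or reinstalling a current vLLM build) may be a workaround; again an assumption, but it would match the Colab notebooks running fine without vLLM.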