/opt/conda/envs/py310/bin/python -m mlc_llm gen_config /models/Qwen2-1.5B-Instruct --quantization q4f32_1 --conv-template chatml --output /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
[2024-06-06 23:39:44] INFO auto_config.py:116: Found model configuration: /models/Qwen2-1.5B-Instruct/config.json
[2024-06-06 23:39:44] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-06-06 23:39:44] INFO qwen2_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 23:39:44] INFO qwen2_model.py:66: prefill_chunk_size defaults to 2048
[2024-06-06 23:39:44] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 151643
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting pad_token_id: 151643
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: [151645, 151643]
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting repetition_penalty: 1.1
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting temperature: 0.7
[2024-06-06 23:39:44] INFO gen_config.py:143: [generation_config.json] Setting top_p: 0.8
[2024-06-06 23:39:44] INFO gen_config.py:157: Not found tokenizer config: /models/Qwen2-1.5B-Instruct/tokenizer.model
[2024-06-06 23:39:44] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-1.5B-Instruct/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/tokenizer.json
[2024-06-06 23:39:44] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-1.5B-Instruct/vocab.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/vocab.json
[2024-06-06 23:39:44] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-1.5B-Instruct/merges.txt. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/merges.txt
[2024-06-06 23:39:44] INFO gen_config.py:157: Not found tokenizer config: /models/Qwen2-1.5B-Instruct/added_tokens.json
[2024-06-06 23:39:44] INFO gen_config.py:155: Found tokenizer config: /models/Qwen2-1.5B-Instruct/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/tokenizer_config.json
[2024-06-06 23:39:44] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_level', 'prepend_space_in_encode': False, 'strip_space_in_decode': False}
[2024-06-06 23:39:44] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-06 23:39:44] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-06 23:39:44] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/mlc-chat-config.json
/opt/conda/envs/py310/bin/python -m mlc_llm convert_weight /models/Qwen2-1.5B-Instruct --quantization q4f32_1 --output /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
[2024-06-06 23:39:46] INFO auto_config.py:116: Found model configuration: /models/Qwen2-1.5B-Instruct/config.json
[2024-06-06 23:39:47] INFO auto_device.py:79: Found device: cuda:0
[2024-06-06 23:39:49] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-06 23:39:50] INFO auto_device.py:88: Not found device: metal:0
[2024-06-06 23:39:52] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-06 23:39:52] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-06 23:39:52] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-06 23:39:52] INFO auto_device.py:79: Found device: vulkan:3
[2024-06-06 23:39:54] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-06 23:39:54] INFO auto_device.py:35: Using device: cuda:0
[2024-06-06 23:39:54] INFO auto_weight.py:71: Finding weights in: /models/Qwen2-1.5B-Instruct
[2024-06-06 23:39:54] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-06 23:39:54] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /models/Qwen2-1.5B-Instruct/model.safetensors.index.json
[2024-06-06 23:39:54] INFO auto_weight.py:107: Using source weight configuration: /models/Qwen2-1.5B-Instruct/model.safetensors.index.json. Use `--source` to override.
[2024-06-06 23:39:54] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-06 23:39:54] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-06-06 23:39:54] INFO qwen2_model.py:49: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 23:39:54] INFO qwen2_model.py:66: prefill_chunk_size defaults to 2048
Traceback (most recent call last):
  File "/opt/conda/envs/py310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/py310/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/__main__.py", line 64, in <module>
    main()
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/__main__.py", line 37, in main
    cli.main(sys.argv[2:])
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/cli/convert_weight.py", line 88, in main
    convert_weight(
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 181, in convert_weight
    _convert_args(args)
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 145, in _convert_args
    tvmjs.dump_ndarray_cache(
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/tvm/contrib/tvmjs.py", line 272, in dump_ndarray_cache
    for k, origin_v in param_generator:
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/interface/convert_weight.py", line 122, in _param_generator
    loader = LOADER[args.source_format](
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/loader/huggingface_loader.py", line 99, in __init__
    check_parameter_usage(extern_param_map, set(self.torch_to_path.keys()))
  File "/opt/conda/envs/py310/lib/python3.10/site-packages/mlc_llm/loader/utils.py", line 33, in check_parameter_usage
    raise ValueError(
ValueError: The following extern parameters do not exist in the weight files:
  lm_head.weight
Weight conversion with arguments:
  --config          /models/Qwen2-1.5B-Instruct/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      qwen2
  --device          cuda:0
  --source          /models/Qwen2-1.5B-Instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
Start storing to cache /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
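Why the first run fails, in brief: the loader maps the model's output projection to an extern parameter named lm_head.weight, but the Qwen2-1.5B-Instruct checkpoint ties the output projection to the input embedding (tie_word_embeddings is true in its config.json), so no lm_head.weight tensor exists in the safetensors shards. The second run below, from a different environment, gets past this point. A minimal way to confirm the missing tensor from the checkpoint alone (a sketch using only the stdlib; paths as in the first run of this log):

import json

model_dir = "/models/Qwen2-1.5B-Instruct"

with open(f"{model_dir}/config.json") as f:
    config = json.load(f)
# True for this checkpoint: the output projection reuses
# model.embed_tokens.weight, so no separate lm_head tensor is exported.
print("tie_word_embeddings:", config.get("tie_word_embeddings"))

# The safetensors index enumerates every tensor across all shards.
with open(f"{model_dir}/model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]
print("lm_head.weight in shards:", "lm_head.weight" in weight_map)          # False
print("embed_tokens in shards:", "model.embed_tokens.weight" in weight_map)  # True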
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm gen_config /ssd2/models/Qwen2-1.5B-Instruct --quantization q4f32_1 --conv-template chatml --output /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
[2024-06-06 22:38:49] INFO auto_config.py:116: Found model configuration: /ssd2/models/Qwen2-1.5B-Instruct/config.json
[2024-06-06 22:38:49] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-06-06 22:38:49] INFO qwen2_model.py:50: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 22:38:49] INFO qwen2_model.py:67: prefill_chunk_size defaults to 2048
[2024-06-06 22:38:49] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting bos_token_id: 151643
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting pad_token_id: 151643
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting eos_token_id: [151645, 151643]
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting repetition_penalty: 1.1
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting temperature: 0.7
[2024-06-06 22:38:49] INFO gen_config.py:143: [generation_config.json] Setting top_p: 0.8
[2024-06-06 22:38:49] INFO gen_config.py:157: Not found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/tokenizer.model
[2024-06-06 22:38:49] INFO gen_config.py:155: Found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/tokenizer.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/tokenizer.json
[2024-06-06 22:38:49] INFO gen_config.py:155: Found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/vocab.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/vocab.json
[2024-06-06 22:38:49] INFO gen_config.py:155: Found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/merges.txt. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/merges.txt
[2024-06-06 22:38:49] INFO gen_config.py:157: Not found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/added_tokens.json
[2024-06-06 22:38:49] INFO gen_config.py:155: Found tokenizer config: /ssd2/models/Qwen2-1.5B-Instruct/tokenizer_config.json. Copying to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/tokenizer_config.json
[2024-06-06 22:38:49] INFO gen_config.py:216: Detected tokenizer info: {'token_postproc_method': 'byte_level', 'prepend_space_in_encode': False, 'strip_space_in_decode': False}
[2024-06-06 22:38:49] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-06-06 22:38:49] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-06-06 22:38:49] INFO gen_config.py:223: Dumping configuration file to: /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/mlc-chat-config.json
/home/rickzhou/miniconda3/envs/mlc/bin/python -m mlc_llm convert_weight /ssd2/models/Qwen2-1.5B-Instruct --quantization q4f32_1 --output /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
[2024-06-06 22:38:50] INFO auto_config.py:116: Found model configuration: /ssd2/models/Qwen2-1.5B-Instruct/config.json
[2024-06-06 22:38:51] INFO auto_device.py:79: Found device: cuda:0
[2024-06-06 22:38:51] INFO auto_device.py:79: Found device: cuda:1
[2024-06-06 22:38:53] INFO auto_device.py:88: Not found device: rocm:0
[2024-06-06 22:38:54] INFO auto_device.py:88: Not found device: metal:0
[2024-06-06 22:38:55] INFO auto_device.py:79: Found device: vulkan:0
[2024-06-06 22:38:55] INFO auto_device.py:79: Found device: vulkan:1
[2024-06-06 22:38:55] INFO auto_device.py:79: Found device: vulkan:2
[2024-06-06 22:38:56] INFO auto_device.py:88: Not found device: opencl:0
[2024-06-06 22:38:56] INFO auto_device.py:35: Using device: cuda:0
[2024-06-06 22:38:56] INFO auto_weight.py:71: Finding weights in: /ssd2/models/Qwen2-1.5B-Instruct
[2024-06-06 22:38:56] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-06-06 22:38:56] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /ssd2/models/Qwen2-1.5B-Instruct/model.safetensors.index.json
[2024-06-06 22:38:56] INFO auto_weight.py:107: Using source weight configuration: /ssd2/models/Qwen2-1.5B-Instruct/model.safetensors.index.json. Use `--source` to override.
[2024-06-06 22:38:56] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-06-06 22:38:56] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-06-06 22:38:56] INFO qwen2_model.py:50: context_window_size not found in config.json. Falling back to max_position_embeddings (32768)
[2024-06-06 22:38:56] INFO qwen2_model.py:67: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
  --config          /ssd2/models/Qwen2-1.5B-Instruct/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      qwen2
  --device          cuda:0
  --source          /ssd2/models/Qwen2-1.5B-Instruct/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
Start storing to cache /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
  0%|          | 0/198 [00:00<?, ?it/s]
/home/rickzhou/miniconda3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:518: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/rickzhou/miniconda3/envs/mlc/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
  1%|          | 1/198 [00:03<10:59, 3.35s/it]
[2024-06-06 22:39:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:03] INFO group_quantization.py:217: Compiling quantize function for key: ((1536, 8960), float32, cuda, axis=1, output_transpose=False)
[2024-06-06 22:39:03] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:03] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
  2%|▏         | 3/198 [00:03<03:14, 1.00it/s]
[2024-06-06 22:39:03] INFO group_quantization.py:217: Compiling quantize function for key: ((17920, 1536), float32, cuda, axis=1, output_transpose=False)
[2024-06-06 22:39:03] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:03] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
  2%|▏         | 4/198 [00:04<02:33, 1.26it/s]
[2024-06-06 22:39:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:03] INFO group_quantization.py:217: Compiling quantize function for key: ((2048, 1536), float32, cuda, axis=1, output_transpose=False)
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
  4%|▎         | 7/198 [00:04<01:12, 2.63it/s]
[2024-06-06 22:39:04] INFO group_quantization.py:217: Compiling quantize function for key: ((1536, 1536), float32, cuda, axis=1, output_transpose=False)
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
  4%|▍         | 8/198 [00:04<01:09, 2.73it/s]
[2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
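The shapes in these [Quantized] entries follow directly from the GroupQuantize settings echoed above: each uint32 word packs 8 int4 values (num_elem_per_storage=8), and every group of 32 elements along axis 1 (group_size=32) gets one float32 scale. A quick sanity check of the arithmetic only, not MLC's actual packing code:

# Expected q4f32_1 output shapes, per the GroupQuantize settings in this log.
GROUP_SIZE = 32      # elements sharing one float32 scale
ELEMS_PER_WORD = 8   # int4 values packed into each uint32 storage word

def q4f32_1_shapes(rows, cols):
    """(q_weight, q_scale) shapes for a (rows, cols) float32 weight, quantized along axis 1."""
    return (rows, cols // ELEMS_PER_WORD), (rows, cols // GROUP_SIZE)

print(q4f32_1_shapes(1536, 8960))   # down_proj:    ((1536, 1120), (1536, 280))
print(q4f32_1_shapes(17920, 1536))  # gate_up_proj: ((17920, 192), (17920, 48))
print(q4f32_1_shapes(2048, 1536))   # c_attn:       ((2048, 192), (2048, 48))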
Parameter: "model.layers.1.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 4%|▍ | 8/198 [00:04<01:09, 2.73it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 4%|▍ | 8/198 [00:04<01:09, 2.73it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 4%|▍ | 8/198 [00:04<01:09, 2.73it/s] 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (1536,), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.c_attn.bias", shape: (2048,), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (1536,), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 6%|▌ | 11/198 [00:04<00:38, 4.85it/s] 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (1536,), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.c_attn.bias", shape: (2048,), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: 
[Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (1536,), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 9%|▉ | 18/198 [00:04<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 9%|▉ | 18/198 [00:05<00:16, 11.22it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 9%|▉ | 18/198 [00:05<00:16, 11.22it/s] 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (1536,), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.c_attn.bias", shape: (2048,), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (1536,), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] [2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 13%|█▎ | 25/198 [00:05<00:09, 18.19it/s] 16%|█▌ | 32/198 [00:05<00:06, 25.35it/s] 
[2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.12.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:04] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 20%|█▉        | 39/198 [00:05<00:04, 32.30it/s]
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.13.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 23%|██▎       | 46/198 [00:05<00:03, 38.67it/s]
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.14.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 27%|██▋       | 53/198 [00:05<00:03, 44.16it/s]
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.c_attn.bias", shape: (2048,), dtype: float32
"model.layers.15.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.15.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (1536,), dtype: float32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 27%|██▋ | 53/198 [00:05<00:03, 44.16it/s] 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (1536,), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.c_attn.bias", shape: (2048,), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.16.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (1536,), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO 
huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 30%|███ | 60/198 [00:05<00:02, 48.61it/s] 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (1536,), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.c_attn.bias", shape: (2048,), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.17.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (1536,), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 34%|███▍ | 67/198 [00:05<00:02, 52.20it/s] 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (1536,), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.c_attn.bias", shape: (2048,), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: 
"model.layers.18.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.18.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (1536,), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 37%|███▋ | 74/198 [00:05<00:02, 54.96it/s] 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (1536,), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.c_attn.bias", shape: (2048,), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.19.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (1536,), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 41%|████ | 81/198 [00:05<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 41%|████ | 81/198 [00:06<00:02, 57.01it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 41%|████ | 81/198 [00:06<00:02, 57.01it/s] 44%|████▍ | 88/198 
[00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (1536,), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.c_attn.bias", shape: (2048,), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (1536,), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 44%|████▍ | 88/198 [00:06<00:01, 58.52it/s] 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (1536,), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.c_attn.bias", shape: (2048,), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.20.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: 
"model.layers.21.input_layernorm.weight", shape: (1536,), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:05] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 48%|████▊ | 95/198 [00:06<00:01, 59.39it/s] 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (1536,), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.c_attn.bias", shape: (2048,), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.21.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (1536,), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 52%|█████▏ | 102/198 [00:06<00:01, 60.23it/s] 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (1536,), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.c_attn.bias", 
shape: (2048,), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.22.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (1536,), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 55%|█████▌ | 109/198 [00:06<00:01, 60.81it/s] 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (1536,), dtype: float32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.c_attn.bias", shape: (2048,), dtype: float32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.23.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.input_layernorm.weight", shape: (1536,), dtype: float32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 59%|█████▊ | 116/198 [00:06<00:01, 61.24it/s] 
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 62%|██████▏   | 123/198 [00:06<00:01, 61.37it/s]
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.24.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 66%|██████▌   | 130/198 [00:06<00:01, 61.67it/s]
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.25.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32
 69%|██████▉   | 137/198 [00:06<00:00, 61.89it/s]
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.post_attention_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.c_attn.bias", shape: (2048,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.26.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.input_layernorm.weight", shape: (1536,), dtype: float32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32
[2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32
"model.layers.27.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 69%|██████▉ | 137/198 [00:06<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 69%|██████▉ | 137/198 [00:06<00:00, 61.89it/s] 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.post_attention_layernorm.weight", shape: (1536,), dtype: float32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.c_attn.bias", shape: (2048,), dtype: float32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.27.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (1536,), dtype: float32 73%|███████▎ | 144/198 [00:06<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 73%|███████▎ | 144/198 [00:07<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 73%|███████▎ | 144/198 [00:07<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 73%|███████▎ | 144/198 [00:07<00:00, 62.01it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 73%|███████▎ | 144/198 [00:07<00:00, 62.01it/s] 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (1536,), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.c_attn.bias", shape: (2048,), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: 
"model.layers.3.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (1536,), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 76%|███████▋ | 151/198 [00:07<00:00, 61.96it/s] 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (1536,), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.c_attn.bias", shape: (2048,), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (1536,), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:06] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_scale", shape: (17920, 48), 
dtype: float32 80%|███████▉ | 158/198 [00:07<00:00, 61.89it/s] 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (1536,), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.c_attn.bias", shape: (2048,), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (1536,), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 83%|████████▎ | 165/198 [00:07<00:00, 61.75it/s] 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (1536,), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.6.self_attn.c_attn.bias", shape: (2048,), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_scale", shape: 
(1536, 48), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (1536,), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 87%|████████▋ | 172/198 [00:07<00:00, 61.93it/s] 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (1536,), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.7.self_attn.c_attn.bias", shape: (2048,), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (1536,), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 90%|█████████ | 179/198 [00:07<00:00, 61.71it/s] 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: 
"model.layers.8.post_attention_layernorm.weight", shape: (1536,), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.8.self_attn.c_attn.bias", shape: (2048,), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (1536,), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_weight", shape: (1536, 1120), dtype: uint32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_scale", shape: (1536, 280), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_weight", shape: (17920, 192), dtype: uint32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_scale", shape: (17920, 48), dtype: float32 94%|█████████▍| 186/198 [00:07<00:00, 61.80it/s] 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (1536,), dtype: float32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.self_attn.c_attn.bias", shape: (2048,), dtype: float32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.self_attn.c_attn.q_weight", shape: (2048, 192), dtype: uint32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.self_attn.c_attn.q_scale", shape: (2048, 48), dtype: float32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_weight", shape: (1536, 192), dtype: uint32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:167: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_scale", shape: (1536, 48), dtype: float32 97%|█████████▋| 193/198 [00:07<00:00, 61.95it/s] [2024-06-06 22:39:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.norm.weight", shape: (1536,), 
100%|██████████| 198/198 [00:07<00:00, 25.43it/s]
[2024-06-06 22:39:07] INFO huggingface_loader.py:197: Unloading HF weight file: /ssd2/models/Qwen2-1.5B-Instruct/model.safetensors
[2024-06-06 22:39:07] INFO stats.py:77: Time usage: HF loading: 1.829 sec; Pre-quantization mapping: 1.099 sec; Quantization: 2.448 sec
[2024-06-06 22:39:07] INFO stats.py:91: RAM usage: Peak RAM: 5.751 GB. Total bytes loaded from disk: 5.751 GB
[2024-06-06 22:39:07] INFO convert_weight.py:155: Parameter size after quantization: 0.899 GB
[2024-06-06 22:39:07] INFO convert_weight.py:160: Total parameters: 1,543,714,304
[2024-06-06 22:39:07] INFO convert_weight.py:161: Bits per parameter: 5.003
[2024-06-06 22:39:07] INFO convert_weight.py:166: Saved to directory: /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC
All finished, 30 total shards committed, record saved to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/ndarray-cache.json
Also saved a bf16 record to /models/mlc-delivery/hf/mlc-ai/Qwen2-1.5B-Instruct-q4f32_1-MLC/ndarray-cache-b16.json
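Note: the logged "Bits per parameter: 5.003" is self-consistent with the layout above: each group of 32 4-bit weights carries one 32-bit float32 scale, i.e. 4 + 32/32 = 5 bits per quantized weight, and the unquantized layernorm weights and biases (a tiny fraction of the 1.54B parameters, stored at 32 bits each) nudge the average just past 5. A rough check, assuming the log reports GB as 2^30 bytes:

    # Back-of-envelope check of the convert_weight summary above.
    # Assumption: "GB" here means 2**30 bytes; 0.899 is rounded in the log.
    total_params = 1_543_714_304
    packed_bytes = 0.899 * 2**30              # "Parameter size after quantization"
    bits_per_param = packed_bytes * 8 / total_params
    print(f"{bits_per_param:.3f}")            # ~5.002, within rounding of 5.003

The ~0.9 GB on-disk size versus 5.751 GB of float32 source weights is the expected ~6.4x reduction for 4-bit weights plus scale overhead.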