---
base_model: meta-llama/Llama-2-7b-chat
---

Converted from meta-llama/Llama-2-7b-chat and quantized to 4 bits.
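A minimal usage sketch for loading a 4-bit quantized Llama-2 chat model with `transformers` and `bitsandbytes`. This is an assumption about how the quantization was done (NF4 via `BitsAndBytesConfig` is a common choice, but the card does not specify the method), and the repo id shown is the base model from the header, not necessarily this repository:

```python
# Hedged sketch: load a 4-bit quantized Llama-2-7b-chat.
# Assumes bitsandbytes-style 4-bit quantization; NF4 settings
# below are a common default, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization settings (assumed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "meta-llama/Llama-2-7b-chat"  # base model named in the header

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # applies 4-bit loading
    device_map="auto",
)

# Simple generation example
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the base Llama-2 repositories are gated, so access must be granted on Hugging Face before the download will succeed.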