Issue with response

#6
by terminator33 - opened

Code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
device = 'cuda'
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
input_ids = input_ids.to(device) # Move input tensor to GPU

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
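
As an aside, to print only the model's reply rather than the full templated prompt, the decode step can slice off the prompt tokens (a minimal variant of the decode line above, using the same input_ids and gen_tokens):

reply = tokenizer.decode(gen_tokens[0][input_ids.shape[-1]:], skip_special_tokens=True)  # keep only newly generated tokens
print(reply)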

RESPONSE:

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|████████████████████████████████████| 44/44 [03:10<00:00, 4.33s/it]
Some weights of the model checkpoint at CohereForAI/c4ai-command-r-plus were not used when initializing CohereForCausalLM: ['model.layers.0.self_attn.k_norm.weight', 'model.layers.0.self_attn.q_norm.weight', 'model.layers.1.self_attn.k_norm.weight', 'model.layers.1.self_attn.q_norm.weight', …, 'model.layers.63.self_attn.k_norm.weight', 'model.layers.63.self_attn.q_norm.weight'] (the self-attention q_norm and k_norm weights for all 64 layers)

  • This IS expected if you are initializing CohereForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing CohereForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
    <|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>beginnsectionKE-sectionauthor-sectionauthor-section-sectionsub-begin-sectionauthor-sectionn-begin-author-begin-begin a-usectionauthor-documentauthor.KEn -KEauthor-begin-authorauthor-author-author-authorauthor-n-allauthor-author-allauthor-rauthor-authorn-n-allalln
    --newauthor--author--KEauthorn-n -KEauthor

You have to install the Transformers library from source.

Cohere For AI org

Hi, this seems to be an older version of transformers. For this model, you should install it from the latest transformers repository using pip install git+https://github.com/huggingface/transformers.git
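
The unused q_norm/k_norm weights in the log above are the telltale sign: a transformers build that predates Command R+'s QK-norm layers drops them at load time, which is what produces the garbled generation. A quick way to confirm the source install took effect (a hypothetical check, not part of the original reply; the exact .dev version will vary):

import transformers
print(transformers.__version__)  # a source install reports a ".dev0" suffix; with it, the checkpoint should load without the unused-weights warning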

Indeed it is working now!

terminator33 changed discussion status to closed
