
RWKV 14B one-state model

Fine-tuned on instruction datasets; it can do role play. Built for the Open LLM Leaderboard, with improved MMLU training data.

This is a Hugging Face formatted model.

The original checkpoint can be found here: https://huggingface.co/xiaol/Model_zoo/blob/main/rwkv-raven-14B-v4-one-state.pth. It requires the new vocabulary file: https://huggingface.co/xiaol/Model_zoo/blob/main/20B_tokenizer_new_inference.json
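
To run inference directly from the .pth checkpoint, the standalone rwkv pip package (ChatRWKV) can be used instead of transformers. The following is a minimal sketch, assuming the checkpoint and tokenizer files linked above have been downloaded to the working directory; the file paths and sampling values are illustrative, not settings confirmed by the model author.

import os
os.environ["RWKV_JIT_ON"] = "1"  # enable JIT kernels before importing rwkv

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Pass the checkpoint path without the .pth extension, per the package's convention
model = RWKV(model="rwkv-raven-14B-v4-one-state", strategy="cuda fp16")
pipeline = PIPELINE(model, "20B_tokenizer_new_inference.json")

prompt = "### Instruction: Tell me about ravens\n### Response:"
args = PIPELINE_ARGS(temperature=1.0, top_p=0.85)  # illustrative sampling values
print(pipeline.generate(prompt, token_count=100, args=args))

Usage with the Hugging Face transformers library: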

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# model_id = "xiaol/Huggingface-RWKV-claude-for-mobile-v4-world-1.5B-16k"
model_id = "xiaol/RWKV-raven-14B-one-state"

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# model = model.half()  # do not halve the 1.5B model; it needs fp32
# model = torch.compile(model)  # requires PyTorch 2.0 and Linux
model.to(0)  # move the model to GPU 0

tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Tell me about ravens"
prompt = f"### Instruction: {question}\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=100)

print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
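
For role play or more varied outputs, sampling can be enabled through the standard generate() arguments. This is a sketch with illustrative values, not settings tuned by the model author:

output = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    do_sample=True,   # sample instead of greedy decoding
    temperature=1.0,  # illustrative, not author-tuned
    top_p=0.85,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))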

Training details

https://wandb.ai/one-/out14B-one/runs/uhomhbgg/workspace

Test cases

https://rwkv.ai-creator.net/st

https://rwkv-next-web.ai-creator.net/

Model size: 14.1B params (Safetensors, BF16)
