
korean-gpt-neox-125M

Model Details

Model Description

Uses

Direct Use
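The example below loads the tokenizer and model from the Hugging Face Hub and generates a short continuation of a Korean prompt with beam search: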

# Import the model and tokenizer classes from the transformers library
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cateto/korean-gpt-neox-125M")

model = AutoModelForCausalLM.from_pretrained("cateto/korean-gpt-neox-125M")

# Get user input: a Korean prompt meaning roughly
# "Going forward, we ... a better future" (left unfinished for the model to complete)
user_input = "์šฐ๋ฆฌ๋Š” ์•ž์œผ๋กœ ๋”๋‚˜์€ ๋ฏธ๋ž˜๋ฅผ"

# Encode the prompt using the tokenizer
input_ids = tokenizer.encode(user_input, return_tensors="pt")

# Generate a continuation of the prompt (beam search with repetition penalties)
output_ids = model.generate(
    input_ids,
    num_beams=4,
    repetition_penalty=1.5,
    no_repeat_ngram_size=3
)

# Decode the generated token ids back into text
bot_output = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Print the generated text ("์ถœ๋ ฅ" means "output")
print(f"์ถœ๋ ฅ ## {bot_output}")

# ์ถœ๋ ฅ ## ์šฐ๋ฆฌ๋Š” ์•ž์œผ๋กœ ๋”๋‚˜์€ ๋ฏธ๋ž˜๋ฅผ ํ–ฅํ•ด ๋‚˜์•„๊ฐˆ ์ˆ˜ ์žˆ๋‹ค.
# (roughly: "We can move forward toward a better future.")
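
Beam search with a repetition penalty gives deterministic, fairly conservative completions. If you want more varied output, sampling-based decoding is a common alternative. The snippet below is a minimal sketch using standard generate arguments; the specific values for top_p, temperature, and max_new_tokens are illustrative rather than tuned for this model.

# Sampling-based generation (illustrative settings, not tuned for this model)
sampled_ids = model.generate(
    input_ids,
    do_sample=True,          # sample from the token distribution instead of beam search
    top_p=0.92,              # nucleus sampling: keep the smallest token set covering 92% of the mass
    temperature=0.8,         # soften the distribution slightly
    max_new_tokens=32,       # limit the length of the continuation
    no_repeat_ngram_size=3,  # same repetition guard as the example above
)
print(tokenizer.decode(sampled_ids[0], skip_special_tokens=True))

Because sampling is stochastic, each run can return a different continuation.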