## Model Usage
The model is designed for use as a conversational chatbot in the Javanese language. It can be deployed in applications that require natural language understanding and generation in Javanese, and it can be used through the standard Hugging Face Transformers text-generation workflow.
Example usage in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("ravialdy/llama2-javanese-chat")
model = AutoModelForCausalLM.from_pretrained("ravialdy/llama2-javanese-chat")

# Example prompt ("You are a general chatbot that always answers in Javanese.")
prompt = "Sampeyan minangka chatbot umum sing tansah mangsuli nganggo basa Jawa."

inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens avoids truncation at generate()'s short default length
outputs = model.generate(inputs["input_ids"], max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
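Since the model is meant to be used through the Transformers text-generation workflow, the checkpoint can also be wrapped in the high-level `pipeline` API instead of loading the model and tokenizer separately. A minimal sketch (the sampling settings below are illustrative, not tuned values from this model card):

```python
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint;
# the pipeline handles tokenization and decoding internally.
generator = pipeline("text-generation", model="ravialdy/llama2-javanese-chat")

prompt = "Sampeyan minangka chatbot umum sing tansah mangsuli nganggo basa Jawa."
# max_new_tokens / do_sample / temperature are illustrative choices
result = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```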