MaziyarPanahi committed
Commit 4490293
1 Parent(s): e597d9c

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -16,14 +16,14 @@ inference: false
 model_creator: MaziyarPanahi
 quantized_by: MaziyarPanahi
 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
-model_name: calme-2.1-legalkit-8b
+model_name: calme-2.3-legalkit-8b
 datasets:
 - MaziyarPanahi/legalkit_cot_reasoning_nous_hermes
 ---
 
 <img src="./calme-2-legalkit.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
-# MaziyarPanahi/calme-2.1-legalkit-8b
+# MaziyarPanahi/calme-2.3-legalkit-8b
 
 This model is an advanced iteration of the powerful `meta-llama/Meta-Llama-3.1-8B-Instruct`, specifically fine-tuned to enhance its capabilities in the legal domain. The fine-tuning process utilized a synthetically generated dataset derived from the French [LegalKit](https://huggingface.co/datasets/louisbrulenaudet/legalkit), a comprehensive legal language resource.
 
@@ -34,7 +34,7 @@ The resulting model combines the robust foundation of `Llama-3.1-8B` with tailor
 
 # ⚡ Quantized GGUF
 
-All GGUF models are available here: [MaziyarPanahi/calme-2.1-legalkit-8b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-legalkit-8b-GGUF)
+All GGUF models are available here: [MaziyarPanahi/calme-2.3-legalkit-8b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b-GGUF)
 
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
@@ -138,7 +138,7 @@ from transformers import pipeline
 messages = [
     {"role": "user", "content": "Who are you?"},
 ]
-pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-legalkit-8b")
+pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.3-legalkit-8b")
 pipe(messages)
 
 
@@ -146,8 +146,8 @@ pipe(messages)
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-legalkit-8b")
-model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-legalkit-8b")
+tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.3-legalkit-8b")
+model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.3-legalkit-8b")
 ```
 
 
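The Transformers snippets touched by this diff only show the lines whose repo id changed; the rest of the usage code lives elsewhere in the README. As a hedged end-to-end sketch (not part of the commit), loading the renamed repo and running one chat turn might look like the following, assuming the `MaziyarPanahi/calme-2.3-legalkit-8b` id from the diff, a bundled chat template, and enough memory for an 8B model in half precision:

```python
# Sketch: load the renamed model and run one chat turn with transformers.
# Assumes the repo id from the diff ("MaziyarPanahi/calme-2.3-legalkit-8b"),
# a chat template shipped with the tokenizer, and accelerate for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/calme-2.3-legalkit-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 8B weights manageable
    device_map="auto",           # place layers on GPU(s) if available
)

messages = [{"role": "user", "content": "Who are you?"}]

# Render the chat messages with the model's chat template and tokenize.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a short reply and strip the prompt tokens before decoding.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```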
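The commit also repoints the GGUF link to the `MaziyarPanahi/calme-2.3-legalkit-8b-GGUF` repository. A rough sketch of running one of those quantized files locally with llama-cpp-python follows; the quant filename below is hypothetical, so substitute whichever `.gguf` file that repo actually lists:

```python
# Sketch: run a quantized GGUF build of the model with llama-cpp-python.
# The filename "calme-2.3-legalkit-8b.Q4_K_M.gguf" is a guess at the naming
# scheme; replace it with a file that exists in the GGUF repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/calme-2.3-legalkit-8b-GGUF",
    filename="calme-2.3-legalkit-8b.Q4_K_M.gguf",  # hypothetical quant name
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # modest context window

# llama-cpp-python exposes an OpenAI-style chat completion helper.
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```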