Update README.md
README.md CHANGED
@@ -1,3 +1,9 @@
+---
+datasets:
+- Skepsun/lawyer_llama_data
+language:
+- zh
+---
 # Model Summary
 This model is a legal-statute model obtained by further training the pretrained llama3.1-8B-Chinese-Chat model.
 * Base model: llama3.1-8B-Chinese-Chat
@@ -35,7 +41,7 @@ snapshot_download(repo_id="basuo/llama-law", ignore_patterns=["*.gguf"]) # Down
 import torch
 from unsloth import FastLanguageModel
 model, tokenizer = FastLanguageModel.from_pretrained(
-    model_name = "
+    model_name = "/Your/Local/Path/to/llama-law",
     max_seq_length = 2048,
     dtype = torch.float16,
     load_in_4bit = True,
@@ -73,4 +79,4 @@ tokenizer.batch_decode(outputs)
 ```
 ## GGUF Model
 1. Download the GGUF file from the model files;
-2. Use the GGUF model with LM Studio or Ollama;
+2. Use the GGUF model with LM Studio or Ollama;