RichardErkhov committed on
Commit 5cefb7a
1 Parent(s): 614cf10

uploaded readme

Files changed (1):
  1. README.md +135 -0

README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Phi-1.5-Tele - GGUF
- Model creator: https://huggingface.co/AliMaatouk/
- Original model: https://huggingface.co/AliMaatouk/Phi-1.5-Tele/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-1.5-Tele.Q2_K.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q2_K.gguf) | Q2_K | 0.54GB |
| [Phi-1.5-Tele.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.IQ3_XS.gguf) | IQ3_XS | 0.59GB |
| [Phi-1.5-Tele.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.IQ3_S.gguf) | IQ3_S | 0.61GB |
| [Phi-1.5-Tele.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q3_K_S.gguf) | Q3_K_S | 0.61GB |
| [Phi-1.5-Tele.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.IQ3_M.gguf) | IQ3_M | 0.64GB |
| [Phi-1.5-Tele.Q3_K.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q3_K.gguf) | Q3_K | 0.69GB |
| [Phi-1.5-Tele.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q3_K_M.gguf) | Q3_K_M | 0.69GB |
| [Phi-1.5-Tele.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q3_K_L.gguf) | Q3_K_L | 0.75GB |
| [Phi-1.5-Tele.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.IQ4_XS.gguf) | IQ4_XS | 0.74GB |
| [Phi-1.5-Tele.Q4_0.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q4_0.gguf) | Q4_0 | 0.77GB |
| [Phi-1.5-Tele.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.IQ4_NL.gguf) | IQ4_NL | 0.78GB |
| [Phi-1.5-Tele.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q4_K_S.gguf) | Q4_K_S | 0.78GB |
| [Phi-1.5-Tele.Q4_K.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q4_K.gguf) | Q4_K | 0.83GB |
| [Phi-1.5-Tele.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q4_K_M.gguf) | Q4_K_M | 0.83GB |
| [Phi-1.5-Tele.Q4_1.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q4_1.gguf) | Q4_1 | 0.85GB |
| [Phi-1.5-Tele.Q5_0.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q5_0.gguf) | Q5_0 | 0.92GB |
| [Phi-1.5-Tele.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q5_K_S.gguf) | Q5_K_S | 0.92GB |
| [Phi-1.5-Tele.Q5_K.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q5_K.gguf) | Q5_K | 0.96GB |
| [Phi-1.5-Tele.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q5_K_M.gguf) | Q5_K_M | 0.96GB |
| [Phi-1.5-Tele.Q5_1.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q5_1.gguf) | Q5_1 | 1.0GB |
| [Phi-1.5-Tele.Q6_K.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q6_K.gguf) | Q6_K | 1.09GB |
| [Phi-1.5-Tele.Q8_0.gguf](https://huggingface.co/RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf/blob/main/Phi-1.5-Tele.Q8_0.gguf) | Q8_0 | 1.41GB |

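Below is a minimal sketch of how one of the quantized files above could be downloaded and run locally. It assumes `huggingface_hub` and `llama-cpp-python` are installed; the quant choice (Q4_K_M) and the prompt are illustrative, not part of the original model card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M chosen as an example)
model_path = hf_hub_download(
    repo_id="RichardErkhov/AliMaatouk_-_Phi-1.5-Tele-gguf",
    filename="Phi-1.5-Tele.Q4_K_M.gguf",
)

# Load the GGUF file; 2048 matches the context length the original model was trained on
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm("Write me a poem about telecommunications.\nAnswer:", max_tokens=100)
print(output["choices"][0]["text"])
```
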
Original model description:
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- nlp
---

# Phi-1.5-Tele Model Card

## Model Summary

The language model Phi-1.5-Tele is a Transformer with **1.3 billion** parameters, specialized in telecommunications. It is based on Microsoft [phi-1.5](https://huggingface.co/microsoft/phi-1_5) and was continually pretrained on [Tele-Data](https://huggingface.co/datasets/AliMaatouk/Tele-Data), a large-scale dataset of approximately 2.5 billion tokens of telecommunications material, including articles, standards, and general web content related to the telecommunications domain.

When assessed against telecommunications benchmarks such as [Tele-Eval](https://huggingface.co/datasets/AliMaatouk/Tele-Eval), Phi-1.5-Tele outperforms [phi-1.5](https://huggingface.co/microsoft/phi-1_5) by several percentage points. Additionally, Phi-1.5-Tele matches [phi-1.5](https://huggingface.co/microsoft/phi-1_5) across benchmarks related to common sense, language understanding, and logical reasoning. Thus, the domain adaptation was achieved with minimal compromise in performance relative to the original model.

### Context Length

The model was trained on a context length of 2048 tokens.

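As a quick sanity check, the context window can be read from the model config. This small sketch is not part of the original card; it assumes the config exposes the standard `max_position_embeddings` field of the phi architecture.

```python
from transformers import AutoConfig

# Read the maximum context length from the published config
config = AutoConfig.from_pretrained("AliMaatouk/Phi-1.5-Tele")
print(config.max_position_embeddings)  # expected: 2048
```
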
## Usage

Phi-1.5-Tele is a base model best suited for fine-tuning on applications related to telecommunications. Although it has not been specifically fine-tuned to follow instructions, it can be prompted to answer questions and follow instructions using the following format:

```markdown
Write me a poem about telecommunications.

Answer: Our world is a network of digital streams,
Connecting every voice and thought,
Through the wires and fibers that transmit,
Bringing us closer to the end of the road.
```

where the model generates the text after "Answer:".

## Sample Code

Below we share some code snippets on how to quickly get started with running the model. First, make sure to `pip install transformers`, then copy the snippet corresponding to your hardware and adapt it to your use case.

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("AliMaatouk/Phi-1.5-Tele", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/Phi-1.5-Tele")

prompt = "Write me a poem about telecommunications.\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
```

#### Running the model on a single / multi GPU

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model, letting transformers place weights across available GPUs
model = AutoModelForCausalLM.from_pretrained("AliMaatouk/Phi-1.5-Tele", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/Phi-1.5-Tele")

prompt = "Write me a poem about telecommunications.\nAnswer:"
# Tokenize the prompt and move it to the GPU
input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
```

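For quick experiments, the same generation can also be run through the `transformers` pipeline helper. This sketch is not part of the original card; the dtype, device, and generation settings are assumptions.

```python
from transformers import pipeline

# Build a text-generation pipeline; device_map="auto" places the model on available hardware
generator = pipeline(
    "text-generation",
    model="AliMaatouk/Phi-1.5-Tele",
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Write me a poem about telecommunications.\nAnswer:"
# return_full_text=False keeps only the newly generated continuation
output = generator(prompt, max_new_tokens=100, return_full_text=False)
print(output[0]["generated_text"])
```
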
## Citation

You can find the paper with all details about the model at https://arxiv.org/abs/2409.05314. Please cite it as follows:

```bib
@misc{maatouk2024telellmsseriesspecializedlarge,
      title={Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications},
      author={Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
      year={2024},
      eprint={2409.05314},
      archivePrefix={arXiv},
      primaryClass={cs.IT},
      url={https://arxiv.org/abs/2409.05314},
}
```