AliMaatouk committed on
Commit 71e7ff4
1 Parent(s): 2c279d5

Update README.md

Files changed (1)
  1. README.md +84 -3
README.md CHANGED
@@ -1,3 +1,84 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - nlp
+ ---
+
+ # TinyLlama-1.1B-Tele-it Model Card
+
+ ## Model Summary
+
+ The language model TinyLlama-1.1B-Tele-it is an instruct version of [TinyLlama-1.1B-Tele](https://huggingface.co/AliMaatouk/TinyLlama-1.1B-Tele), which is based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and specialized in telecommunications. It was fine-tuned to follow instructions using Supervised Fine-tuning (SFT) with a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Open-instruct](https://huggingface.co/datasets/VMware/open-instruct) datasets.
+
+ ### Context Length
+
+ The context length of the model is 2048 tokens.
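+
+ As a minimal sketch, assuming the checkpoint exposes the standard Llama `max_position_embeddings` config field, you can read this limit from the config and truncate over-long prompts before generation:
+
+ ```python
+ from transformers import AutoConfig, AutoTokenizer
+
+ # Read the configured context window from the model config (expected: 2048)
+ config = AutoConfig.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it")
+ tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it")
+ print(config.max_position_embeddings)
+
+ # Truncate any prompt that would otherwise exceed the context window
+ prompt = "Explain to me Shannon capacity.\n"
+ inputs = tokenizer(prompt, truncation=True, max_length=config.max_position_embeddings, return_tensors="pt")
+ ```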
+
+ ## Usage
+
+ TinyLlama-1.1B-Tele-it has been fine-tuned using pairs of instructions and responses from the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Open-instruct](https://huggingface.co/datasets/VMware/open-instruct) datasets, separated by the "\n" delimiter. Below is an example of how to query the model using this format:
+
+ ```markdown
+ Prompt: Explain to me Shannon capacity.\n
+
+ Model: The Shannon capacity of a communication channel is the maximum amount of information that can be transmitted over the channel in a single transmission. It is a measure of the maximum amount of information that can be transmitted over a channel with a given noise level. The Shannon capacity is a fundamental limit on the amount of information that can be transmitted over a communication channel.
+ ```
+
+ ## Sample Code
+
+ Below we share some code snippets on how to get started quickly with running the model. First, make sure to `pip install transformers`, then copy the snippet corresponding to your hardware and adapt it to your use case.
+
+ #### Running the model on a CPU
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it", torch_dtype="auto")
+ tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it")
+
+ prompt = "Explain to me Shannon capacity.\n"
+ input_ids = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**input_ids, max_new_tokens=100)
+
+ generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
+ response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
+ print(response)
+ ```
+
+ #### Running the model on a single / multi GPU
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it", torch_dtype="auto", device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained("AliMaatouk/TinyLlama-1.1B-Tele-it")
+
+ prompt = "Explain to me Shannon capacity.\n"
+ input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")
+ outputs = model.generate(**input_ids, max_new_tokens=100)
+
+ generated_tokens = outputs[0, len(input_ids['input_ids'][0]):]
+ response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
+ print(response)
+ ```
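+
+ As an alternative sketch, the same "\n"-terminated prompt can also be run through the `transformers` `pipeline` API (assuming `accelerate` is installed so that `device_map="auto"` can place the model on a GPU when one is available):
+
+ ```python
+ from transformers import pipeline
+
+ # Build a text-generation pipeline around the model
+ pipe = pipeline(
+     "text-generation",
+     model="AliMaatouk/TinyLlama-1.1B-Tele-it",
+     torch_dtype="auto",
+     device_map="auto",
+ )
+
+ prompt = "Explain to me Shannon capacity.\n"
+ # return_full_text=False strips the prompt from the returned completion
+ result = pipe(prompt, max_new_tokens=100, return_full_text=False)
+ print(result[0]["generated_text"])
+ ```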
+
+ ## Citation
+
+ You can find the paper with all details about the model at https://arxiv.org/abs/2409.05314. Please cite it as follows:
+
+ ```bibtex
+ @misc{maatouk2024telellmsseriesspecializedlarge,
+       title={Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications},
+       author={Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
+       year={2024},
+       eprint={2409.05314},
+       archivePrefix={arXiv},
+       primaryClass={cs.IT},
+       url={https://arxiv.org/abs/2409.05314},
+ }
+ ```