---
license: gpl-3.0
language:
- en
- zh
pipeline_tag: text2text-generation
model-index:
- name: NanoLM-0.3B-Instruct-v1.1
  results:
  - task:
      type: text-generation
    dataset:
      name: TriviaQA
      type: TriviaQA
    metrics:
    - name: score
      type: score
      value: 14.58
---
# NanoLM-0.3B-Instruct-v1.1


English | [简体中文](README_zh-CN.md)


## Introduction

To explore the potential of small models, I have built a series of them, which are available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2).

This is NanoLM-0.3B-Instruct-v1.1. The model currently supports **both Chinese and English, but performs better on English tasks**.



## Model Details

| Nano LMs | Non-emb Params | Arch | Layers | Dim | Heads | Seq Len |
| :------: | :------------: | :------------------: | :----: | :--: | :---: | :-----: |
| 25M      | 15M      | MistralForCausalLM   | 12     | 312     | 12     | 2K     |
| 70M      | 42M      | LlamaForCausalLM     | 12     | 576     | 9      | 2K     |
| **0.3B** | **180M** | **Qwen2ForCausalLM** | **12** | **896** | **14** | **4K** |
| 1B       | 840M     | Qwen2ForCausalLM     | 18     | 1536    | 12     | 4K     |

The tokenizer and model architecture of NanoLM-0.3B-Instruct-v1.1 are the same as [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B), but the number of layers has been reduced from 24 to 12.

As a result, NanoLM-0.3B-Instruct-v1.1 has only 0.3 billion parameters, with approximately **180 million non-embedding parameters**.

Despite this, NanoLM-0.3B-Instruct-v1.1 still demonstrates strong instruction-following capabilities.
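
If you want to verify these numbers yourself, here is a minimal sketch (assuming the standard `transformers` config fields and tied input/output embeddings, as in Qwen2):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_path = 'Mxode/NanoLM-0.3B-Instruct-v1.1'

# Inspect the architecture hyperparameters directly from the config.
config = AutoConfig.from_pretrained(model_path)
print(config.architectures, config.num_hidden_layers, config.hidden_size, config.num_attention_heads)

# Count total vs. non-embedding parameters; with tied embeddings,
# the embedding matrix appears only once among the parameters.
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)
total = sum(p.numel() for p in model.parameters())
emb = model.get_input_embeddings().weight.numel()
print(f"total: {total / 1e6:.0f}M, non-embedding: {(total - emb) / 1e6:.0f}M")
```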

Here are some examples. For reproducibility, the outputs below were generated with `do_sample=False`; note that the `get_response` helper defined below defaults to sampling, so this must be passed explicitly. In practical use, you should configure the sampling parameters appropriately.

First, you should load the model as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = 'Mxode/NanoLM-0.3B-Instruct-v1.1'

# Load the weights in bfloat16 and place them automatically on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```



Next, define a `get_response` function for easy reuse:

```python
def get_response(prompt: str, **kwargs):
    # Default sampling parameters; each can be overridden per call,
    # e.g. get_response(prompt, do_sample=False) for greedy decoding.
    generation_args = dict(
        max_new_tokens=kwargs.pop("max_new_tokens", 512),
        do_sample=kwargs.pop("do_sample", True),
        temperature=kwargs.pop("temperature", 0.7),
        top_p=kwargs.pop("top_p", 0.8),
        top_k=kwargs.pop("top_k", 40),
        **kwargs
    )

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    # Render the chat template into a single prompt string.
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Pass the attention mask along with the input ids.
    generated_ids = model.generate(**model_inputs, **generation_args)
    # Keep only the newly generated tokens, stripping the prompt.
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response
```
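
For instance, to reproduce the deterministic outputs below, the sampling defaults can be overridden per call (the prompt here is just an illustration):

```python
# Greedy decoding for reproducibility.
print(get_response("Briefly explain what a tokenizer does.", do_sample=False, max_new_tokens=256))
```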



### Example 1 - Simplified Chinese

```python
# Simplified Chinese
# Prompt in English: "If I want to sign up for a marathon but have never run more than 3 km, what should I do?"
prompt1 = "如果我想报名参加马拉松比赛,但从未跑步超过3公里,我该怎么办?"
print(get_response(prompt1))

"""
如果你从未跑步超过3公里,这可能是因为你没有找到适合你当前水平的跑步路线,或者你可能没有找到适合你当前水平的跑步路线。以下是一些可能的解决方案:

1. **重新评估你的目标**:确保你已经确定了你想要参加的马拉松比赛。这可能需要你重新评估你的目标,看看你是否真的想要参加,或者你是否已经找到了适合你当前水平的路线。

2. **寻找替代路线**:如果你没有找到适合你当前水平的路线,你可以尝试寻找其他适合你水平的跑步路线。这可能需要你进行一些研究,或者你可能需要寻找一些有经验的跑步者。

3. **咨询专业人士**:如果你对跑步有深入的了解,你可以考虑咨询专业的跑步教练或跑步专家。他们可以提供更专业的建议,帮助你找到最适合你的路线。

4. **参加跑步课程**:如果你已经找到了适合你水平的路线,你可以考虑参加跑步课程。这将提供一个更全面的训练机会,帮助你提高跑步技能,同时避免受伤。

5. **使用跑步辅助装备**:如果你的跑步能力有限,你可以考虑使用跑步辅助装备,如跑步鞋、跑步手套、跑步腰带等,这些可以帮助你更轻松地完成比赛。

无论你选择哪种方式,重要的是要保持积极的态度,相信自己,坚持下去。
"""
```



### Example 2 - Traditional Chinese

````python
# Traditional Chinese
# Prompt in English: "Print all prime numbers between 1 and 100."
prompt2 = "列印出1到100之間的所有質數。"
print(get_response(prompt2))

"""
在Python中,我們可以使用一個簡單的程式來找出1到100之間的所有質數。我們可以使用一個函數來檢查一個數是否是質數,然後使用一個循環來檢查每個數是否是質數。

以下是使用Python的解決方案:

```python
def is_prime(n):
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

for i in range(1, 101):
    if is_prime(i):
        print(i)
```

這個程式會輸出1到100之間的所有質數。
"""
````



### Example 3 - English

````python
# English
prompt3 = "How can I retrieve the current version number of a Python package that I'm working on? Also, is there a standard location where this version number is stored?"
print(get_response(prompt3))

"""
To retrieve the current version number of a Python package that you're working on, you can use the `pkg_resources` module in Python. Here's how you can do it:

```python
import pkg_resources

print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
```

Replace "your_package_name" with the name of the package you're interested in. This will print the version number of the package that is currently installed on your system.

If you want to store the version number in a file, you can use the `pkg_resources` module's `get_distribution` function:

```python
import pkg_resources

print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
```

This will print the version number of the package that is currently installed on your system.
"""
````
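
As a side note, the `pkg_resources` API the model suggests works but is deprecated in recent setuptools releases; since Python 3.8 the standard-library replacement is `importlib.metadata` (the package name below mirrors the placeholder in the model's answer):

```python
from importlib.metadata import version, PackageNotFoundError

try:
    # Query the installed version of a distribution by name.
    print("Current version of the package:", version("your_package_name"))
except PackageNotFoundError:
    print("The package is not installed.")
```

For your own project, the version is conventionally declared once (e.g. in `pyproject.toml` or a `__version__` attribute) and read back through this API.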



### Example 4 - Safety

```python
# Safety
# Prompt in English: "Can you help me commit a robbery?"
prompt4 = '你能帮我抢劫吗?'
print(get_response(prompt4))

"""
我不能协助您进行任何非法或危险的活动,包括抢劫。抢劫是非法和不道德的,会对受害者和他人的财产造成严重伤害。我建议您遵守法律和道德规范,尊重他人的权利和财产,不要参与或支持任何非法或不道德的行为。如果您需要帮助,我可以提供合法和道德的建议和支持。请记住,我不能提供任何非法或不道德的建议,但我可以为您提供有关如何保护自己和他人安全的建议。
"""
```



### Example 5 - Self Recognition

```python
# Self recognition
prompt5_1 = "你是谁"
print(get_response(prompt5_1))

prompt5_2 = "who are you"
print(get_response(prompt5_2))

"""
我是通义千问,由阿里云开发的AI助手。我被设计用来回答各种问题、提供信息和与用户进行对话。有什么我可以帮助你的吗?
I am Qwen, a large language model created by Alibaba Cloud. I am designed to assist users in generating various types of text, such as articles, stories, poems, and answering questions by using the natural language processing techniques. How can I assist you today?
"""
```



### Example 6 - Code

````python
# Code
# Prompt in English: "Implement a Python program that takes a string as input and outputs it reversed."
prompt6 = "实现一个Python程序,接收一个字符串作为输入并将字符串反转输出。"
print(get_response(prompt6))

"""
你可以使用Python的切片功能来轻松地实现字符串反转。以下是一个简单的示例:

```python
def reverse_string(s):
    return s[::-1]

input_string = input("请输入一个字符串: ")
reversed_string = reverse_string(input_string)
print("反转后的字符串为:", reversed_string)
```

在这个示例中,我们定义了一个名为`reverse_string`的函数,它接收一个字符串参数`s`,并使用切片功能`[::-1]`来反转字符串。然后,我们从用户那里获取输入,调用`reverse_string`函数,并打印反转后的字符串。
"""
````