Commit e489190 by Xenova (HF staff) · 1 parent: 993d8a0

Update README.md

Files changed (1): README.md (+47 −0)

README.md (after this commit):
---
library_name: transformers.js
base_model:
- ibm-granite/granite-3.0-2b-instruct
---

https://huggingface.co/ibm-granite/granite-3.0-2b-instruct with ONNX weights to be compatible with Transformers.js.


## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```

**Example:** Text generation with `onnx-community/granite-3.0-2b-instruct`.

```js
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/granite-3.0-2b-instruct",
  { dtype: "q4" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
```

<details>
<summary>Example output</summary>

```
Why don't scientists trust atoms?

Because they make up everything!
```

</details>

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).