davidmezzetti committed on
Commit
59925dc
1 Parent(s): a5e6a5f

Add code example

Files changed (1)
  1. README.md +41 -0
 
txtai has a built-in Text to Speech (TTS) pipeline that makes using this model easy.

```python
import soundfile as sf

from txtai.pipeline import TextToSpeech

# Build pipeline
tts = TextToSpeech("NeuML/ljspeech-jets-onnx")

# Generate speech
speech = tts("Say something here")

# Write to file
sf.write("out.wav", speech, 22050)
```

## Usage with ONNX

This model can also be run directly with ONNX, provided the input text is tokenized. Tokenization can be done with [ttstokenizer](https://github.com/neuml/ttstokenizer).

Note that the txtai pipeline has additional functionality, such as batching large inputs together, that would need to be duplicated with this method.

```python
import onnxruntime
import soundfile as sf
import yaml

from ttstokenizer import TTSTokenizer

# This example assumes the files have been downloaded locally
with open("ljspeech-jets-onnx/config.yaml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

# Create model
model = onnxruntime.InferenceSession(
    "ljspeech-jets-onnx/model.onnx",
    providers=["CPUExecutionProvider"]
)

# Create tokenizer
tokenizer = TTSTokenizer(config["token"]["list"])

# Tokenize inputs
inputs = tokenizer("Say something here")

# Generate speech
outputs = model.run(None, {"text": inputs})

# Write to file
sf.write("out.wav", outputs[0], 22050)
```
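One way to approximate the batching the note above mentions is to split a long input into sentence-sized chunks, synthesize each chunk, and concatenate the audio. The sketch below is a hypothetical helper, not the txtai implementation: `speak_long_text` and `max_chars` are made-up names, and `synthesize` stands in for the tokenize-and-`model.run` steps from the example above.

```python
import re

import numpy as np

def speak_long_text(text, synthesize, max_chars=200):
    """Hypothetical sketch: split text into sentence chunks of at most
    max_chars characters, synthesize each chunk, and join the waveforms."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())

    # Pack sentences into chunks without exceeding max_chars
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)

    # Synthesize each chunk and concatenate the audio end to end
    return np.concatenate([synthesize(chunk) for chunk in chunks])
```

With the ONNX example above, `synthesize` could be `lambda t: model.run(None, {"text": tokenizer(t)})[0]`, and the result is written out with `sf.write` as before.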

## How to export