---
license: agpl-3.0
---

This repo catalogs my weights for use with my [VALL-E](https://github.com/e-c-k-e-r/vall-e) implementation as I try and iron out the kinks.

The models are currently in a *semi-usable* state, and I'm releasing them now in hopes that they also help jumpstart anyone else who wants to use them.

To reiterate, this is ***by no means*** complete. I am not passing this off as competitive.

## Models

* `config.retnet.yaml` / `ar+nar-retnet-8`: The previously released weights.
	+ This configuration utilizes a RetNet (retention-based transformer) as the underlying architecture, a choice made, for better or for worse, on the back of some misleading interpretations of comparisons.
		+ Prompt and response embeddings are summed (further RVQ levels get the previous RVQ levels' embeddings factored in; see the sketch after this item).
		+ The tokenizer is a homebrewed "naive" implementation.
	+ This model received the most training time, split between my 4070Ti, my 7900XTX, and a few rental rigs to push training further, entirely at `bfloat16` with `prodigyopt` (and a few optimizer restarts).
	+ The later part of training shuffled between speakers rather than sampling from the global pool of utterances, to better focus on zero-shot performance; because of this, I feel it achieved *decent* zero-shot performance.
	+ However, because the dataset was aggressively trimmed to under 12 seconds for memory savings during training, it struggles to inference non-short utterances. Additional training may fix this; the following models seemed to adapt well to longer utterances.
	+ Prior testing showed that longer prompt durations result in better utterances.
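
For illustration, here's a minimal sketch of how I understand the summed-embedding scheme; the class and names below are hypothetical stand-ins, not the repo's actual modules:

```python
import torch
import torch.nn as nn

class SummedRVQEmbedding(nn.Module):
    """Hypothetical sketch: one embedding table per RVQ level; the input
    for a given level sums the embeddings of all levels up to it, so
    higher RVQ levels factor in the lower levels' codes."""
    def __init__(self, n_levels: int = 8, codebook_size: int = 1024, d_model: int = 1024):
        super().__init__()
        self.tables = nn.ModuleList(
            nn.Embedding(codebook_size, d_model) for _ in range(n_levels)
        )

    def forward(self, codes: torch.Tensor, level: int) -> torch.Tensor:
        # codes: [batch, seq_len, n_levels] of RVQ token ids
        return sum(self.tables[l](codes[..., l]) for l in range(level + 1))
```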

* `config.llama.yaml` / `ar+nar-llama-8`: The most recent-ishly trained weights after learning from my mistakes.
	+ This configuration utilizes Llama's attention-based transformer as the underlying architecture, making use of creature comforts like RoPE, GQA, and memory-efficient attention (trained under `xformers`, though it shouldn't really affect things).
		+ Prompt and response embeddings are NOT summed (each RVQ level only attends to the current RVQ level's embeddings).
		+ Utilizes a HF tokenizer for "optimal" vocab.
		+ The current RVQ level is also included as a token to better help guide NAR tasks (both details are shown in the sketch after this item).
	+ This model received a few days of training on my 4xV100s, stepping up the duration window to *try* and get the model to better inference longer utterances.
		+ Some sessions ended up training on the current duration window for a few epochs, but I don't know how much that affected things.
	+ However, it seems to *only* do well with long utterances; short utterances fumble. I believe further training with a variety of durations should allow the AR to handle both.
		- I believe the "slowly stepping up the context length" only works for text, and not audio.
	+ Zero-shot performance leaves a bit to be desired, as it did not receive the special training prioritizing shuffling between speakers rather than the global pool of utterances.
	+ Testing showed that, despite also stepping up the prompt duration, it *really* likes three second prompts.
	+ Definitely needs additional training.
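
As a contrast with the summed scheme above, here's a minimal hypothetical sketch of per-level embeddings plus a learned RVQ-level token; again, the names are stand-ins, not the repo's actual modules:

```python
import torch
import torch.nn as nn

class PerLevelRVQEmbedding(nn.Module):
    """Hypothetical sketch: each RVQ level is embedded on its own (no
    summing), and the active level is injected as a learned token to
    help guide NAR tasks."""
    def __init__(self, n_levels: int = 8, codebook_size: int = 1024, d_model: int = 1024):
        super().__init__()
        self.tables = nn.ModuleList(
            nn.Embedding(codebook_size, d_model) for _ in range(n_levels)
        )
        # One learned token per RVQ level, prepended to the sequence.
        self.level_token = nn.Embedding(n_levels, d_model)

    def forward(self, codes: torch.Tensor, level: int) -> torch.Tensor:
        # codes: [batch, seq_len, n_levels]; embed only the current level
        x = self.tables[level](codes[..., level])
        tok = self.level_token.weight[level].expand(x.shape[0], 1, -1)
        return torch.cat([tok, x], dim=1)
```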

* `config.llama.split.yaml` / `ar-llama-1` + `nar-llama-8`: The above model, but split and trained a little bit more.
	+ This experiment tests whether the AR and NAR benefit from being split up after enough pretraining, to un-"lobotomize" any penalties from one model attending to two different tasks (the AR predicts the next token, while the NAR predicts the token at the same position but at a different RVQ level; see the sketch after this item).
	+ I believe I trained each model for roughly one extra day, stepping up to another audio-duration window, for similar overall training lengths.
	+ I don't think the audio quality differs enough to warrant splitting the model.
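
To make the two tasks concrete, here's a rough sketch of the two objectives as described above; `ar_loss` and `nar_loss` are illustrative helpers, not functions from the repo:

```python
import torch
import torch.nn.functional as F

def ar_loss(logits: torch.Tensor, codes: torch.Tensor) -> torch.Tensor:
    # AR task: shift by one and predict the *next* token at RVQ level 0.
    # logits: [batch, seq_len, vocab]; codes: [batch, seq_len, n_levels]
    return F.cross_entropy(
        logits[:, :-1].transpose(1, 2),  # [batch, vocab, seq_len - 1]
        codes[:, 1:, 0],                 # next-token targets, level 0
    )

def nar_loss(logits: torch.Tensor, codes: torch.Tensor, level: int) -> torch.Tensor:
    # NAR task: predict the token at the *same* position, but at a
    # higher RVQ level, conditioned on the levels beneath it.
    return F.cross_entropy(
        logits.transpose(1, 2),  # [batch, vocab, seq_len]
        codes[:, :, level],      # same position, target level
    )
```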

There's a bunch of additional configurations (between the underlying arch, embedding modes, interleaving, and even a NAR-"only" model) to explore further, but current experiments showed they are either not worth the additional performance penalties (interleaving, sketched below) or fall flat (NAR-"only", chunked interleaving).
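
For reference, a minimal sketch of what "interleaving" refers to here, assuming a frame-major flattening of the RVQ codes:

```python
import torch

def interleave(codes: torch.Tensor) -> torch.Tensor:
    # codes: [seq_len, n_levels] -> flat [seq_len * n_levels] sequence,
    # emitting every RVQ level of frame t before moving on to frame t+1.
    # A single AR pass must then decode n_levels tokens per audio frame,
    # which is where the inference-time penalty comes from.
    return codes.reshape(-1)

def deinterleave(flat: torch.Tensor, n_levels: int) -> torch.Tensor:
    # Inverse: recover the [seq_len, n_levels] code matrix.
    return flat.reshape(-1, n_levels)
```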