ecker committed
Commit: ad71a7a
Parent: 03504d1

Update README.md


Added additional observations from recent experimentation.

Files changed (1): README.md (+34, -4)

## Models

This repo contains the following configurations:

* `config.retnet.yaml` / `ar+nar-retnet-8`: The previously released weights.
  + This configuration utilizes a RetNet (retention-based "transformer") as the underlying architecture, for better or for worse, due to a number of misleading interpretations in comparisons.
  + Prompt and response embeddings are summed (each further RVQ level gets the previous RVQ levels' embeddings factored in; see the sketch after this list).
  + The tokenizer is a homebrewed "naive" implementation.
  + This model received the most training time, split between my 4070Ti, my 7900XTX, and a few rental rigs, entirely at `bfloat16` with `prodigyopt` (and a few optimizer restarts).
  + The later part of training shuffled between speakers rather than sampling from the global pool of utterances, to better focus on zero-shot performance. Due to this, I feel it achieved *decent* zero-shot performance.
  + However, because the dataset was aggressively trimmed to under 12 seconds for memory savings during training, it struggles to inference non-short utterances. Additional training may fix this; the following models seemed to adapt well to longer utterances.
  + From the `ar+nar-llama-8` experiment, I believe this can be "fixed" with additional training on the currently processed dataset.
  + Prior testing showed that longer prompt durations result in better utterances.
  + It *can* benefit from additional training, but I recall the average loss being around `1.9` to `2.1`.
  + However, due to regressions (or bias from working under `llama`), I don't think I can optimally train with a RetNet again (both in terms of VRAM consumption and throughput).
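
To make the summed-embedding scheme above concrete, here is a minimal PyTorch sketch of one way it could work; `SummedAudioEmbedding` and its argument names are illustrative assumptions, not this repo's actual classes:

```python
import torch
import torch.nn as nn

class SummedAudioEmbedding(nn.Module):
    # Illustrative sketch (hypothetical class, not the repo's API): one
    # embedding table per RVQ level. A sequence at quantizer level
    # `quant_level` is embedded as the sum of that level's embeddings and
    # those of all previous levels' codes, so deeper levels get the
    # shallower levels "factored in".
    def __init__(self, n_levels: int = 8, n_tokens: int = 1024, d_model: int = 1024):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(n_tokens, d_model) for _ in range(n_levels))

    def forward(self, codes: torch.Tensor, quant_level: int) -> torch.Tensor:
        # codes: [batch, time, n_levels] RVQ codebook indices
        x = self.embs[0](codes[..., 0])
        for q in range(1, quant_level + 1):
            x = x + self.embs[q](codes[..., q])
        return x
```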
 
* `config.llama.yaml` / `ar+nar-llama-8`: The most recent-ishly trained weights after learning from my mistakes.
  + This configuration utilizes Llama's attention-based transformer as the underlying architecture, making use of creature comforts like RoPE, GQA, and memory-efficient attention (trained under `xformers`, which shouldn't really affect things).
  + Zero-shot performance leaves a bit to be desired, as it did not receive the special training that prioritizes shuffling between speakers rather than the global pool of utterances.
    - Addendum: Additional brief training with sampling based on speaker per "epoch" (per dataloader, not dataset) seemed to slightly improve it.
  + Testing showed that, despite also stepping up the prompt duration, it *really* likes three-second prompts.
  + Definitely needs additional training, but where to go next is unknown.
  + Naturally, training it on a "next RVQ level is half as likely" distribution introduces some crust, as the later RVQ levels are less accurate, introducing noise and artifacts (see the sketch after this list).
  + Naively training it on equally distributed RVQ levels *does* lobotomize the AR.
  + Additional training on the AR will see huge diminishing returns, so I don't know if it's worth doing.
  + Seems to be a decent foundation for "distillation", at the very least for LoRA training.
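
As a concrete reading of the two RVQ-level training distributions mentioned above, here is a small illustrative sketch (the function and its names are hypothetical, not from this repo):

```python
import random

def sample_rvq_level(n_levels: int = 8, halved: bool = True) -> int:
    # Pick which RVQ level a training step targets.
    # halved=True: the "next RVQ level is half as likely" scheme
    # (weights 1, 1/2, 1/4, ...), which keeps level 0 (the AR) dominant
    # but leaves later levels under-trained.
    # halved=False: the "equally distributed" scheme, which per the note
    # above lobotomizes the AR.
    if halved:
        weights = [0.5 ** level for level in range(n_levels)]
        return random.choices(range(n_levels), weights=weights, k=1)[0]
    return random.randrange(n_levels)
```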
 
* `config.llama.split.yaml` / `ar-llama-1` + `nar-llama-8`: The above model, but split and trained a little bit more.
  + This experiment is to see whether the AR and NAR benefited from being split up after enough pretraining, to un-"lobotomize" any penalties from attending to two different tasks (the AR predicts the next token, while the NAR predicts the same token but at a different RVQ level; see the sketch after this list).
  + I believe I trained each separate model for an extra day on another audio-duration window, for similar training lengths.
  + ~~I don't think audio quality differs a non-trivial amount to warrant splitting the model.~~
    - From recent experiments, it does seem a NAR-only model is beneficial.
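
To spell out the two tasks being disentangled above, a minimal sketch of each model's inputs and targets (the tensor layout here is an assumption for illustration):

```python
import torch

def ar_pairs(codes: torch.Tensor):
    # AR task: on RVQ level 0 only, predict the *next* token (causal shift).
    # codes: [batch, time, n_levels] RVQ codebook indices.
    level0 = codes[..., 0]
    return level0[:, :-1], level0[:, 1:]  # inputs, targets

def nar_pairs(codes: torch.Tensor, quant_level: int):
    # NAR task: given levels 0..quant_level-1, predict the *same*
    # timestep's token at `quant_level` (no shift, all timesteps at once).
    return codes[..., :quant_level], codes[..., quant_level]
```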

Some additional configurations have been explored, but the experiments have not been fruitful:

* Exotic wrappers like `BitNet` seemed to yield little gains in inferencing, somehow.

* Mamba / Mamba2-based models have shown that it's ***really*** hard to have an AR+NAR model.

* A NAR-only model has been experimented with, but seemed utterly useless in practice (a sketch of the intended flow follows this list).
  + The underlying architecture will query the model for the duration, and then inference *all* RVQ levels (every timestep of a level predicted in parallel, one level at a time).
  + Despite working in the overfitting test trainer and showing decent training metrics, inferencing has the model fall completely flat.
  + I have zero ideas for which path to go with for further experimentation.
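
The intended NAR-only flow, as described above, in a hypothetical sketch (`predict_duration` and `predict_level` are made-up stand-ins, not this repo's API):

```python
import torch

@torch.no_grad()
def nar_only_infer(model, text, prompt, n_levels: int = 8):
    # First query the model for how many frames to generate, then fill in
    # every RVQ level for all timesteps at once, one level at a time.
    # `predict_duration` / `predict_level` are hypothetical method names.
    n_frames = model.predict_duration(text, prompt)
    codes = torch.zeros(1, n_frames, n_levels, dtype=torch.long)
    for quant_level in range(n_levels):
        codes[..., quant_level] = model.predict_level(text, prompt, codes, quant_level)
    return codes
```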
 
* A [Descript-Audio-Codec](https://github.com/descriptinc/descript-audio-codec/)-based model has been experimented with, but has not seemed fruitful.
  + This model would make use of 16 layers instead of the default 12. I feel the performance hit is negligible, even with DAC's increased tokens-per-frame.
  + This utilizes DAC's 44KHz model (erroneously at an actual 44KHz instead of 44.1KHz), as audio quantized through the 24KHz model will *always* diverge (a sketch of quantizing through DAC follows below).
  + I imagine that, because DAC leaves very little room for error (a testament to how "optimized" its codes are), it's ***really*** hard to model an LM with it.
  + Output audio is rather crunchy and crusty, from the later RVQ levels being inaccurate.
  + I'm not sure which path to take for further experimentation:
    + Utilizing the original model's embeddings or last hidden state as the input embeddings for the prompt/response.
      + I don't think this is the way to go; it seems negligible for the additional complexity.
    + Training a dedicated NAR model in hopes of bolstering the later RVQ levels' performance, as the issues come from the later RVQ levels.
    + Utilizing an interleaved pattern instead, to make better use of attending to past tokens across all levels.
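
For reference, quantizing audio through DAC's 44KHz model looks roughly like the following, going by the descript-audio-codec README (the exact API may differ across versions, and the input filename is hypothetical):

```python
import dac
from audiotools import AudioSignal

# Download and load the 44KHz model noted above.
model_path = dac.utils.download(model_type="44khz")
model = dac.DAC.load(model_path)

signal = AudioSignal("utterance.wav")  # hypothetical input file
x = model.preprocess(signal.audio_data, signal.sample_rate)
# encode() also returns latents and quantizer losses, ignored here.
z, codes, *_ = model.encode(x)
# codes: [batch, n_codebooks, frames] -- the discrete RVQ indices the LM models.
```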