The model was trained in 4-bit precision for 5070 steps on the verbalized subset of the [EurekaRebus](https://huggingface.co/datasets/gsarti/eureka-rebus) dataset using QLoRA via [Unsloth](https://github.com/unslothai/unsloth) and [TRL](https://github.com/huggingface/trl). This version has merged adapter weights in half precision, enabling out-of-the-box usage with the `transformers` library.
We also provide [adapter checkpoints through training](https://huggingface.co/gsarti/phi3-mini-rebus-solver-adapters) and [8-bit GGUF](https://huggingface.co/gsarti/phi3-mini-rebus-solver-Q8_0-GGUF) versions of this model for analysis and local execution.
## Using the Model
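The body of this section is truncated in the diff, so as a minimal sketch, loading the merged half-precision weights with `transformers` might look like the following. The repository id `gsarti/phi3-mini-rebus-solver-fp16` and the prompt wording in `build_prompt` are assumptions for illustration only and are not confirmed by this diff; check the model card for the exact repo id and expected prompt format.

```python
# Hypothetical usage sketch. The repo id and prompt format below are
# ASSUMPTIONS, since the "Using the Model" section is cut off in the diff.

MODEL_ID = "gsarti/phi3-mini-rebus-solver-fp16"  # assumed merged-weights repo id


def build_prompt(rebus: str) -> str:
    """Wrap a verbalized rebus in a simple instruction prompt (assumed format)."""
    return f"Solve the following verbalized rebus:\n{rebus}\nSolution:"


if __name__ == "__main__":
    # Imports kept inside the entry point so the helper above is usable
    # without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(
        build_prompt("U N [berry] ..."), return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the GGUF version linked above, the same prompt can be fed to a local runtime such as llama.cpp instead of `transformers`.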