---
license: mit
language:
- pl
---

## Model Overview
Quantized version of the model below, made with exllamav2: https://github.com/turboderp/exllamav2/

Original model: https://huggingface.co/Nondzu/zephyr-7b-beta-pl

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/2.5">2.5 bits per weight</a>

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/3.0">3.0 bits per weight</a>
 
<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/3.5">3.5 bits per weight</a>

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/4.0">4.0 bits per weight</a>

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/5.0">5.0 bits per weight</a>

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/6.0">6.0 bits per weight</a>

<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/7.0">7.0 bits per weight</a>
 
<a href="https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2/tree/8.0">8.0 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 4.0 https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2
```

With the Hugging Face Hub CLI (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just need `measurement.json`) to a folder called `zephyr-7b-beta-pl-exl2`:

```shell
mkdir zephyr-7b-beta-pl-exl2
huggingface-cli download Nondzu/zephyr-7b-beta-pl-exl2 --local-dir zephyr-7b-beta-pl-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir zephyr-7b-beta-pl-exl2
huggingface-cli download Nondzu/zephyr-7b-beta-pl-exl2 --revision 8.0 --local-dir zephyr-7b-beta-pl-exl2 --local-dir-use-symlinks False
```
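If you are scripting downloads for several branches, the CLI call above can be generated programmatically. This is a minimal sketch, not part of the repo's tooling; `download_command` is a hypothetical helper name, and the branch list is taken from the links above.

```python
# Hypothetical helper: build the huggingface-cli command shown above
# for any of the published quantization branches.
REPO_ID = "Nondzu/zephyr-7b-beta-pl-exl2"
BRANCHES = ("2.5", "3.0", "3.5", "4.0", "5.0", "6.0", "7.0", "8.0")

def download_command(revision: str, local_dir: str = "zephyr-7b-beta-pl-exl2") -> str:
    """Return the CLI invocation for one quantization branch."""
    if revision not in BRANCHES:
        raise ValueError(f"unknown branch: {revision}")
    return (
        f"huggingface-cli download {REPO_ID} "
        f"--revision {revision} --local-dir {local_dir} "
        "--local-dir-use-symlinks False"
    )
```

Run the resulting string in a shell (after `mkdir`-ing the target folder) to fetch the branch you want.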




## Current Status: Alpha
- **Stage**: Alpha-Alpaca

## Training Details

I trained the model on 3x RTX 3090 GPUs for 163 hours.
[![Built with Axolotl](https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png)](https://github.com/OpenAccess-AI-Collective/axolotl)
## Quantized Model Links:


1. https://huggingface.co/Nondzu/zephyr-7b-beta-pl-exl2
2. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-GGUF
3. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-AWQ
4. https://huggingface.co/TheBloke/zephyr-7B-beta-pl-GPTQ


## Model Specifics
- **Base Model**: HuggingFaceH4/zephyr-7b-beta
- **Fine-Tuning Method**: QLoRA
- **Primary Focus**: Polish language datasets

## Datasets:
- [Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish](https://huggingface.co/datasets/Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish)
- [klima7/polish-prose](https://huggingface.co/datasets/klima7/polish-prose)

## Usage Warning
As this is an experimental model, users should be aware of the following:
- **Reliability**: The model has not been fully tested and may exhibit unexpected behaviors or performance issues.
- **Updates**: The model is subject to change based on ongoing testing and feedback.
- **Data Sensitivity**: Users should exercise caution when using sensitive or private data, as the model's output and behavior are not fully predictable at this stage.

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
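If you build prompts programmatically, the template above can be filled in with a small helper. This is an illustrative sketch; `build_prompt` is a hypothetical name, not part of the model's tooling, and the template string is copied verbatim from the block above.

```python
# Alpaca-style prompt template, copied from the model card above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "\n"
    "### Instruction:\n"
    "{prompt}\n"
    "\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Insert the user instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)
```

Pass the returned string to your inference frontend as the raw prompt; the model's completion should follow the `### Response:` marker.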

## Example

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63729f35acef705233c87909/1WYp9Su1NYvYCIU-2J7TG.png)

## Feedback and Contribution
User feedback is crucial during this testing phase. We encourage users to provide feedback on model performance, issues encountered, and any suggestions for improvements. Contributions in terms of shared test results, datasets, or code improvements are also welcome.

---

**Disclaimer**: This experimental model is provided 'as is', without warranty of any kind. Users should use the model at their own risk. The creators or maintainers of the model are not responsible for any consequences arising from its use.


![image/png](https://cdn-uploads.huggingface.co/production/uploads/63729f35acef705233c87909/CPClYNIMp3Qswt2F0Y9B3.png)