---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
pipeline_tag: conversational
---

# Model Card for TinyLlama-1.1B-Chat-v1.0 Alpaca LoRA

A LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0, produced by supervised fine-tuning (SFT) on the vicgalle/alpaca-gpt4 dataset.
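
The adapter can be loaded on top of the base model with PEFT. The snippet below is a minimal sketch: the adapter repo id is a placeholder, and the Alpaca-style prompt format is an assumption based on the training dataset, not something stated on this card.

```python
# Minimal inference sketch. The adapter id is a placeholder; substitute the
# actual repo id of this adapter. The Alpaca-style prompt is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "<adapter-repo-id>")  # placeholder id

prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```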


## Model Details

### Model Sources


- **Repository:** [https://github.com/bytebarde/llm_alpaca](https://github.com/bytebarde/llm_alpaca)


## Training Details

### Training Procedure 



#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Per-device train batch size:** 4
- **Epochs:** 10
- **Final training loss:** 0.9044
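
The sketch below reconstructs a training setup consistent with the hyperparameters above, assuming a trl/transformers stack contemporary with PEFT 0.7.1. The LoRA rank, alpha, target modules, and the dataset text column are assumptions, not stated on this card; see the linked repository for the actual script.

```python
# Hedged reconstruction of the LoRA SFT run; only the batch size, epoch count,
# and fp16 regime come from this card. Everything else is an assumption.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

peft_config = LoraConfig(  # hypothetical LoRA settings, not from the card
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="tinyllama-alpaca-lora",
    per_device_train_batch_size=4,  # from the card
    num_train_epochs=10,            # from the card
    fp16=True,                      # fp16 mixed precision, from the card
    logging_steps=50,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("vicgalle/alpaca-gpt4", split="train"),
    dataset_text_field="text",      # assumed column name in alpaca-gpt4
    peft_config=peft_config,
    tokenizer=tokenizer,
)
trainer.train()
```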

### Framework versions

- PEFT 0.7.1