---
language:
- it
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- unsloth
- gemma
- gemma2
- trl
- word-game
- rebus
- italian
- word-puzzle
- crossword
datasets:
- gsarti/eureka-rebus
base_model: unsloth/gemma-2-2b-bnb-4bit

model-index:
- name: gsarti/gemma-2-2b-rebus-solver-fp16
  results:
  - task:
      type: verbalized-rebus-solving
      name: Verbalized Rebus Solving
    dataset:
      type: gsarti/eureka-rebus
      name: EurekaRebus
      config: llm_sft
      split: test
      revision: 0f24ebc3b66cd2f8968077a5eb058be1d5af2f05
    metrics:
      - type: exact_match
        value: 0.43
        name: First Pass Exact Match
      - type: exact_match
        value: 0.36
        name: Solution Exact Match
---

# Gemma-2 2B Verbalized Rebus Solver - GGUF Q8_0 🇮🇹

This model is a parameter-efficient fine-tuned version of Gemma-2 2B trained for verbalized rebus solving in Italian, as part of the [release](https://huggingface.co/collections/gsarti/verbalized-rebus-clic-it-2024-66ab8f11cb04e68bdf4fb028) for our paper [Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses](https://arxiv.org/abs/2408.00584). The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.
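To make the solution-key constraint concrete, here is a minimal sketch of a checker that verifies a candidate phrase against a key such as `1 ' 5 6 5 3 3 1 14` (the `matches_key` helper is hypothetical, written for illustration, and is not part of the released code):

```python
import re

def matches_key(solution: str, key: str) -> bool:
    """Check whether a solution phrase matches a rebus solution key.

    Each token in the key is either a word length (digits) or a literal
    punctuation mark (e.g. the apostrophe in "1 ' 5 ...").
    """
    # Split the solution into word/punctuation tokens,
    # e.g. "L'avaro lesina" -> ["L", "'", "avaro", "lesina"]
    tokens = re.findall(r"\w+|[^\w\s]", solution)
    key_tokens = key.split()
    if len(tokens) != len(key_tokens):
        return False
    for tok, k in zip(tokens, key_tokens):
        if k.isdigit():
            if len(tok) != int(k):  # word must have the specified length
                return False
        elif tok != k:  # literal punctuation must match exactly
            return False
    return True
```

For instance, `matches_key("L'avaro lesina anche ciò che è indispensabile", "1 ' 5 6 5 3 3 1 14")` holds, since the phrase splits into a one-letter word, an apostrophe, and words of lengths 5, 6, 5, 3, 3, 1 and 14.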

The model was trained in 4-bit precision for 5070 steps on the verbalized subset of [EurekaRebus](https://huggingface.co/datasets/gsarti/eureka-rebus) using QLoRA via [Unsloth](https://github.com/unslothai/unsloth) and [TRL](https://github.com/huggingface/trl). This repository contains the GGUF exported checkpoint of the model in `Q8_0` format, along with the `Modelfile` for usage with [Ollama](https://ollama.com/) (see below).

We also provide [adapter checkpoints throughout training](https://huggingface.co/gsarti/gemma-2-2b-rebus-solver-adapters) and an [FP16 merged](https://huggingface.co/gsarti/gemma-2-2b-rebus-solver-fp16) version of this model for analysis and local execution.

## Local usage with Ollama

A ready-to-use local version of this model is hosted on the [Ollama Hub](https://ollama.com/gsarti/gemma2-2b-rebus-solver) and can be used as follows:

```shell
ollama run gsarti/gemma2-2b-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"

# EXAMPLE GENERATION:
#
# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
# 
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
# 
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
# 
# Soluzione: L'avaro lesina anche ciò che è indispensabile
```
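For programmatic use, the final answer can be parsed out of the step-by-step generation. Below is a minimal sketch (the `extract_solution` helper is hypothetical; it assumes the model ends its reasoning with a `Soluzione:` line, as in the example above):

```python
from typing import Optional

def extract_solution(generation: str) -> Optional[str]:
    """Return the phrase following the last 'Soluzione:' marker, if any."""
    # Scan lines from the end, since the solution closes the generation.
    for line in reversed(generation.strip().splitlines()):
        if line.strip().startswith("Soluzione:"):
            return line.split("Soluzione:", 1)[1].strip()
    return None  # no solution line found
```

This kind of post-processing is what makes it possible to score outputs automatically, e.g. against the test-set references.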

## Limitations

**Lexical overfitting**: As noted in the related publication, the model overfits the set of definitions/answers for first-pass words. As a result, words that were [explicitly withheld](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_words.txt) from the training set cause significant performance degradation when used as solutions to the definitions of verbalized rebuses. You can compare model performance on [in-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/id_test.jsonl) and [out-of-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_test.jsonl) test examples to verify this limitation.
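Such a comparison can be run with the exact-match metric reported in the model metadata. A minimal sketch, assuming predictions and references are plain solution strings (the light normalization here is our own choice, not necessarily the paper's exact protocol):

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions identical to their reference after normalization."""
    assert len(predictions) == len(references)

    def norm(s: str) -> str:
        # Lowercase and collapse runs of whitespace before comparing.
        return " ".join(s.lower().split())

    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(predictions)
```

Running this separately on the in-domain and out-of-domain test files should reproduce the performance gap described above.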

## Model curators

For problems or updates on this model, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).

### Citation Information

If you use this model in your work, please cite our paper as follows:

```bibtex
@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = {abs/2408.00584},
    url = {https://arxiv.org/abs/2408.00584},
}
```

## Acknowledgements

We are grateful to the [Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini"](http://www.enignet.it/home) for making its rebus collection freely accessible on the [Eureka5 platform](http://www.eureka5.it).

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)