---
license: mit
language:
- en
---
# NPC Model

This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3** using LoRA.

This model parses a text description of a game scene and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` pair you add to the prompt! We call these "skills" (see the sketch below).
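A minimal sketch of what declaring such a skill looks like with the client library introduced below; the `Greet` skill and its description here are purely illustrative, only the `Skill`/`ParameterType` usage mirrors the Usage section:

```py
from gigax.scene import ParameterType, Skill

# Illustrative custom skill: exposing it in the prompt lets the model
# emit `greet <character>` commands for the NPC.
greet_skill = Skill(
    name="Greet",
    description="Wave at a character and greet them",
    parameter_types=[ParameterType.character],
)
```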


⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️


## Usage

**Make your life easier: use our [Python client library](https://github.com/GigaxGames/gigax)**

* Instantiating the model using outlines:
```py
from outlines import models
from transformers import AutoModelForCausalLM, AutoTokenizer

from gigax.step import NPCStepper

# Download model from the Hub
model_name = "Gigax/NPC-LLM-3_8B"
llm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Our stepper takes an Outlines model to enable guided generation,
# which forces the model to follow our output format
model = models.Transformers(llm, tokenizer)

# Instantiate a stepper: handles prompting + output parsing
stepper = NPCStepper(model=model)
```

* Calling the model on your game's data:

```py
from gigax.parse import CharacterAction
from gigax.scene import (
    Character,
    Item,
    Location,
    ProtagonistCharacter,
    Skill,
    ParameterType,
)
# Use sample data
context = "Medieval world"
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location] # you can add more locations to the scene
NPCs = [
    Character(
        name="John the Brave",
        description="A fearless warrior",
        current_location=current_location,
    )
]
protagonist = ProtagonistCharacter(
    name="Aldren",
    description="Brave and curious",
    current_location=current_location,
    memories=["Saved the village", "Lost a friend"],
    quests=["Find the ancient artifact", "Defeat the evil warlock"],
    skills=[
        Skill(
            name="Attack",
            description="Deliver a powerful blow",
            parameter_types=[ParameterType.character],
        )
    ],
    psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
    CharacterAction(
        command="Say",
        protagonist=protagonist,
        parameters=[items[0], "What a fine sword!"],
    )
]

action = stepper.get_action(
    context=context,
    locations=locations,
    NPCs=NPCs,
    protagonist=protagonist,
    items=items,
    events=events,
)
```
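`get_action` returns the NPC's chosen action. A minimal sketch of inspecting it, assuming the result carries the same fields as the `CharacterAction` objects built for `events` above (that attribute layout is an assumption, not documented here):

```py
# Field names assumed to mirror the CharacterAction constructor used above.
print(action.command)     # e.g. "Attack"
print(action.parameters)  # parsed parameters (characters, items, strings)
```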

## Input prompt

Here's a sample input prompt, showing you the format on which the model has been trained:
```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
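If you would rather skip the client library, you can feed this prompt to the model directly with `transformers`. A minimal sketch, reusing the `llm` and `tokenizer` instantiated above; note that without the stepper's guided generation the `<action> <param>` output format is not enforced:

```py
# The sample prompt shown above, ending with "Aldren:" so the model
# completes the protagonist's next command.
prompt = """- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = llm.generate(**inputs, max_new_tokens=32)
# Decode only the newly generated tokens, i.e. the predicted command.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```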

### 🤗 We are currently working hard on fine-tuning the latest SoTA models (Phi-3, Llama, etc.) and on better data! 🤗


## Model info

- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!

## How to Cite

```bibtex
@misc{NPC-LLM-3_8B,
      url={https://huggingface.co/Gigax/NPC-LLM-3_8B},
      title={NPC-LLM-3_8B},
      author={Gigax team}
}
```