---
datasets:
- bitext/Bitext-retail-banking-llm-chatbot-training-dataset
language:
- en
base_model:
- google-t5/t5-small
pipeline_tag: text2text-generation
tags:
- fintech
- retail-banking
- fine-tuning
- chatbot
- llm
license: cdla-sharing-1.0
---
# fintech-chatbot-t5

## Model Description
This model was fine-tuned on a [retail banking chatbot dataset](https://huggingface.co/datasets/bitext/Bitext-retail-banking-llm-chatbot-training-dataset/tree/main). It is based on the T5-small architecture and can answer common banking queries such as account balances, transaction details, and card activation.

The model generates responses to banking-related customer queries and is suited for automated customer service systems and virtual assistants.

## Model Details
- **Model Type:** T5-small
- **Training Dataset:** [retail banking chatbot dataset](https://huggingface.co/datasets/bitext/Bitext-retail-banking-llm-chatbot-training-dataset/tree/main)
- **Tasks:** Natural Language Generation (NLG)
- **Languages Supported:** English

## Training Details
- **Number of Epochs:** 3
- **Training Loss:** 0.79
- **Evaluation Loss:** 0.46
- **Evaluation Metric:** Mean Squared Error
- **Batch Size:** 8
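The exact preprocessing script is not published with this card. The sketch below shows one plausible way the Bitext instruction/response pairs could be mapped to T5 input/target strings: the field names `instruction` and `response` follow the dataset card, while the `question: ` task prefix is a common T5 convention and an assumption here, not something the card confirms.

```python
def to_t5_pair(example, prefix="question: "):
    """Map one Bitext chatbot record to a T5 (input, target) string pair.

    The dataset stores the customer utterance in `instruction` and the
    agent reply in `response`; the task prefix is a T5 convention, not
    something documented for this particular model.
    """
    source = prefix + example["instruction"].strip()
    target = example["response"].strip()
    return source, target


example = {
    "instruction": "How can I activate my credit card?",
    "response": "You can activate your card from the mobile app.",
}
src, tgt = to_t5_pair(example)
print(src)  # question: How can I activate my credit card?
```

Pairs produced this way would then be tokenized and fed to a seq2seq trainer with the hyperparameters listed above.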

## How to Use the Model
You can load and use this model with the following code:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = T5Tokenizer.from_pretrained("cuneytkaya/fintech-chatbot-t5")
model = T5ForConditionalGeneration.from_pretrained("cuneytkaya/fintech-chatbot-t5")

# Encode a banking question and generate an answer
input_text = "How can I activate my credit card?"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)

# Decode the answer, skipping special tokens such as <pad> and </s>
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```