---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B
---
# Empathetic teacher model
## Overview
This is an LLM fine-tuned on real-life, ideally empathetic teacher-student conversations.
The model takes the recent conversation history as input and suggests how a teacher might respond to the student's latest utterance.
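As a sketch of intended usage (the repo id, prompt format, and generation settings below are illustrative assumptions, not a documented interface):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this model; replace with the actual one.
model_id = "your-org/empathetic-teacher-llama-3.1-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Recent conversation history; the plain "Student:/Teacher:" turn format
# is an assumption about how the training data was serialized.
history = (
    "Student: I still don't get why we flip the second fraction when dividing.\n"
    "Teacher:"
)
inputs = tokenizer(history, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```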
To fine-tune an open-weight LLM to act as this generic teacher, we are using the following datasets:

- the Teacher-Student Chatroom Corpus, TSCCv2 [Caines et al., 2022](https://aclanthology.org/2022.nlp4call-1.3),
- CIMA [Stasaski et al., 2020](https://aclanthology.org/2020.bea-1.5),
- the Multicultural Classroom Discourse Dataset [Rapanta et al., 2021](https://www.sciencedirect.com/science/article/pii/S2352340921007940),
- MathDial [Macina et al., 2023](https://aclanthology.org/2023.findings-emnlp.372), and
- Conversational Uptake [Demszky et al., 2021](https://aclanthology.org/2021.acl-long.130).
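How these corpora are serialized for training is not specified above; as an illustrative sketch, one dialogue could be converted into the sharegpt-style conversation schema that common fine-tuning toolkits (including the one used below) accept. The turn contents and file name here are invented:

```python
import json

# One toy dialogue in a generic (speaker, text) form; the turns are invented.
dialogue = [
    {"speaker": "student", "text": "I'm not sure when to use the past perfect."},
    {"speaker": "teacher", "text": "Good question! Can you try it in a sentence first?"},
]

# Map student turns to "human" and teacher turns to "gpt", the role names
# used by the sharegpt-style schema.
record = {
    "conversations": [
        {"from": "human" if turn["speaker"] == "student" else "gpt",
         "value": turn["text"]}
        for turn in dialogue
    ]
}

with open("teacher_sft.json", "w") as f:
    json.dump([record], f, indent=2, ensure_ascii=False)
```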
We are evaluating Llama-3, Phi-3, and Gemma-2 for this task.
Instead of using programmable fine-tuning libraries such as Axolotl ([link](https://github.com/OpenAccess-AI-Collective/axolotl))
or Hugging Face TRL ([link](https://github.com/huggingface/trl)),
we are employing the more general command-line toolkit LLaMA-Factory ([link](https://github.com/hiyouga/LLaMA-Factory)),
which facilitates fine-tuning various well-known LLMs on custom data; a minimal configuration sketch is shown below.
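A minimal sketch of such a LLaMA-Factory training config; the field names follow LLaMA-Factory's YAML schema, but the dataset name, paths, and hyperparameter values are illustrative assumptions rather than our exact recipe:

```yaml
# train_teacher_qlora.yaml -- illustrative values, not our exact recipe
model_name_or_path: meta-llama/Meta-Llama-3.1-8B
stage: sft                    # supervised fine-tuning
do_train: true
finetuning_type: lora
lora_target: all
quantization_bit: 4           # 4-bit base weights -> QLoRA
dataset: teacher_sft          # must be registered in dataset_info.json
template: llama3
cutoff_len: 2048
output_dir: saves/empathetic-teacher
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```

Training would then be launched with `llamafactory-cli train train_teacher_qlora.yaml`.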
Parameter-efficient fine-tuning is achieved via the QLoRA method [Dettmers et al., 2023](https://proceedings.neurips.cc/paper_files/paper/2023/file/1feb87871436031bdc0f2beaa62a049b-Paper-Conference.pdf).
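Conceptually, the `quantization_bit: 4` setting above corresponds to loading the frozen base model in 4-bit NF4 precision and training low-rank adapters on top. A rough sketch with `transformers` and `peft` (the rank, alpha, and target modules are illustrative, not our tuned values):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base weights to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable low-rank adapters (the "LoRA" part).
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```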