
Fine-tuning GPT-2 with energy and medical datasets

Fine-tuning a pre-trained language model for text generation.

A GPT-2 model pretrained on Chinese text with a causal language modeling objective (GPT2LMHeadModel).

Model description

This model applies transfer learning from DavidLanz/uuu_fine_tune_taipower and is further fine-tuned on a medical dataset, using the GPT-2 architecture.
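The card does not include the training script; as a rough sketch, the medical fine-tuning step might look like the following (the corpus file name, sequence length, and hyperparameters are illustrative assumptions, not the actual training recipe):

>>> # Hypothetical fine-tuning sketch: dataset path and hyperparameters are assumptions.
>>> from transformers import (GPT2LMHeadModel, BertTokenizer, Trainer,
...                           TrainingArguments, DataCollatorForLanguageModeling)
>>> from datasets import load_dataset

>>> base = "DavidLanz/uuu_fine_tune_taipower"
>>> tokenizer = BertTokenizer.from_pretrained(base)
>>> model = GPT2LMHeadModel.from_pretrained(base)

>>> # Load a plain-text medical corpus (the file name is hypothetical).
>>> dataset = load_dataset("text", data_files={"train": "medical_corpus.txt"})
>>> def tokenize(batch):
...     return tokenizer(batch["text"], truncation=True, max_length=512)
>>> tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

>>> # Causal LM objective: mlm=False makes the collator shift labels for next-token prediction.
>>> collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
>>> args = TrainingArguments(output_dir="uuu_fine_tune_gpt2",
...                          num_train_epochs=3,
...                          per_device_train_batch_size=8)
>>> trainer = Trainer(model=model, args=args,
...                   train_dataset=tokenized["train"], data_collator=collator)
>>> trainer.train()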

How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

>>> from transformers import GPT2LMHeadModel, BertTokenizer, TextGenerationPipeline, set_seed

>>> set_seed(42)  # fix the seed so sampled output is reproducible

>>> model_path = "DavidLanz/uuu_fine_tune_gpt2"
>>> model = GPT2LMHeadModel.from_pretrained(model_path)
>>> tokenizer = BertTokenizer.from_pretrained(model_path)

>>> max_length = 200
>>> prompt = "歐洲能源政策"  # "European energy policy"
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generated = text_generator(prompt, max_length=max_length, do_sample=True)
>>> print(text_generated[0]["generated_text"].replace(" ", ""))  # strip spaces inserted by the BERT tokenizer

The same pipeline can be reused for a medical prompt:

>>> prompt = "蕁麻疹過敏"  # "urticaria (hives) allergy"
>>> text_generated = text_generator(prompt, max_length=max_length, do_sample=True)
>>> print(text_generated[0]["generated_text"].replace(" ", ""))
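
Sampling behavior can be adjusted through the standard generation parameters; the values below are illustrative, not settings recommended by this card:

>>> # Illustrative sampling settings (top_k/top_p/temperature values are assumptions).
>>> text_generated = text_generator(prompt, max_length=max_length, do_sample=True,
...                                 top_k=50, top_p=0.95, temperature=0.8)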
Model details

Model size: 102M parameters
Tensor type: F32 (Safetensors)