---
datasets:
- hdogrukan/test4
pipeline_tag: text-generation
license: mit
language:
- tr
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---

# ENGPT - v0

Our model, fine-tuned from Llama 3.1 8B Instruct, was trained on a dataset generated from the EPDK and TEDAŞ regulations governing the Turkish electricity distribution system.

## Model Details

The model can answer questions about these regulations and provide detailed information on specific materials.

### Model Description

- **Developed by:** hdogrukan, ecokumus
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct

### Model Sources

- **Datasets:**
  - https://www.tedas.gov.tr/tr/1/sartnameler/RoutePage/63c658e3d27de36b22f9cef6
  - https://www.mevzuat.gov.tr/

### Metrics

- https://wandb.ai/hdogrukan/Fine-tune%20llama-3.1-8b-it%20on%20Turkish%20Energy%20Sector%20V0/reports/Engpt-v0--Vmlldzo5MjIzMTIx

## Uses

```
pip install --upgrade transformers
```

```python
import transformers
import torch

model_id = "hdogrukan/Llama-3.1-8B-Instruct-Energy"

# Load the fine-tuned model as a chat-style text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # "Which institution published the TEDAŞ-MLZ/99-032.E specification?"
    {"role": "user", "content": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
    temperature=0.2,
)

# For more varied, sampled outputs, try:
# outputs = pipeline(
#     messages,
#     max_new_tokens=512,
#     num_return_sequences=3,
#     do_sample=True,
#     top_k=50,
#     top_p=0.95,
#     temperature=0.5,
# )

print(outputs[0]["generated_text"][-1])
```

Output:

"TEDAŞ-MLZ/99-032.E şartnamesini Türkiye Elektrik Dağıtım A.Ş. yayınlamıştır."
("The TEDAŞ-MLZ/99-032.E specification was published by Türkiye Elektrik Dağıtım A.Ş.")