doberst committed
Commit e3f7f60
1 Parent(s): b89bf42

Update README.md

Files changed (1)
1. README.md: +7 -9
README.md CHANGED
@@ -60,14 +60,16 @@ Any model can provide inaccurate or incomplete information, and should be used i

The fastest way to get started with BLING is through direct import in transformers:

- from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llmware/bling-cerebras-1.3b-0.1")
- model = AutoModelForCausalLM.from_pretrained("llmware/bling-cerebras-1.3b-0.1")
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-cerebras-1.3b-0.1")
+ model = AutoModelForCausalLM.from_pretrained("llmware/bling-cerebras-1.3b-0.1")
+
+ Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and retrieval to swap out the test set for a RAG workflow consisting of business documents.


The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

- full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
+ full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"

The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

@@ -76,7 +78,7 @@ The BLING model was fine-tuned with closed-context samples, which assume general

To get the best results, package "my_prompt" as follows:

- my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
+ my_prompt = {{text_passage}} + "\n" + {{question/instruction}}


## Citation [optional]
@@ -94,7 +96,3 @@ Publication: April 6, 2023

Darren Oberst & llmware team

- Please reach out anytime if you are interested in this project and would like to participate and work with us!
-
-
-
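
Taken together, the loading, prompt-packaging, and wrapper steps in this diff compose into one inference flow. The following is a minimal sketch of that flow, not part of the commit itself: the sample passage, the question, and the generation settings (max_new_tokens, greedy decoding) are illustrative assumptions, and the backslashes in the README lines are markdown escapes, so the literal prompt text is "<human>:" and "<bot>:".

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-cerebras-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-cerebras-1.3b-0.1")

# Closed-context packaging: the text passage first, then the question/instruction.
# Passage and question here are illustrative placeholders, not from the model card.
text_passage = "The total amount of the invoice is $4,500 and it is due on June 30."
question = "What is the total amount of the invoice?"
my_prompt = text_passage + "\n" + question

# Wrap in the <human>/<bot> format the model was fine-tuned on.
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the tokens generated after the prompt (the answer following "<bot>:").
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)

Greedy decoding (do_sample=False) is a reasonable default for closed-context question answering, where the answer should be grounded in the supplied passage rather than sampled.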