Ezi Ozoani committed
Commit f6b562f • Parent: 3ffe470

possible how to location move

Files changed (4)
  1. .DS_Store +0 -0
  2. app.py +54 -7
  3. assets/.DS_Store +0 -0
  4. assets/hugging_face_earth.png +0 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
app.py CHANGED
@@ -95,7 +95,7 @@ were utilized to estimate the carbon impact.*
     st.warning('This is a warning')
     # Object notation
     st.subheader('🌲')
-    with st.expander("🌍 🌳"):
+    with st.expander("🌍"):
         st.markdown('''
 
     - **Hardware Type:** 8 16GB V100
@@ -278,16 +278,63 @@ GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGP
 
     # Try App
 
-    col2.subheader('Try App')
-    col2.code('''[To:do add code]
+    col2.header('Try App')
+    col2.code('''[To:do add integration with HF
     ''')
 
+    # How to Get Started
+
+    with col2.header('How to Get Started'):
+        col2.markdown('''
+    *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
+    ''')
+    with col2.expander(""):
+        st.markdown('''
+
+    Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
+
+    ```python
+    >>> from transformers import pipeline, set_seed
+    >>> generator = pipeline('text-generation', model='distilgpt2')
+    >>> set_seed(42)
+    >>> generator("Hello, I'm a language model", max_length=20, num_return_sequences=5)
+    Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
+    [{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
+    {'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
+    {'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
+    {'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
+    {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
+    ```
+
+
+    **Here is how to use this model to get the features of a given text in PyTorch**:
+
+    ```python
+    from transformers import GPT2Tokenizer, GPT2Model
+    tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
+    model = GPT2Model.from_pretrained('distilgpt2')
+    text = "Replace me by any text you'd like."
+    encoded_input = tokenizer(text, return_tensors='pt')
+    output = model(**encoded_input)
+    ```
+
+    **And in TensorFlow:**
+
+    ```python
+    from transformers import GPT2Tokenizer, TFGPT2Model
+    tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
+    model = TFGPT2Model.from_pretrained('distilgpt2')
+    text = "Replace me by any text you'd like."
+    encoded_input = tokenizer(text, return_tensors='tf')
+    output = model(encoded_input)
+    ```
+
+    ''')
+
     # Visuals
 
-    col2.subheader('Visuals')
-    col2.code('''
-    [temp]
-    ''')
+
+
 
 
 
assets/.DS_Store ADDED
Binary file (6.15 kB).
 
assets/hugging_face_earth.png ADDED
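
The file list reports app.py as +54 −7, a net change of +47 lines, and the two hunk headers in the diff (`@@ -95,7 +95,7 @@` and `@@ -278,16 +278,63 @@`) encode the same arithmetic: (7 − 7) + (63 − 16) = 47. As a minimal sketch of how that cross-check works (the `hunk_counts` helper is hypothetical, written here for illustration, not part of this repository):

```python
import re

def hunk_counts(header):
    """Parse a unified-diff hunk header like '@@ -95,7 +95,7 @@'
    and return (old_line_count, new_line_count)."""
    m = re.match(r'@@ -\d+,(\d+) \+\d+,(\d+) @@', header)
    old_len, new_len = map(int, m.groups())
    return old_len, new_len

# The two hunks in this commit's app.py diff:
hunks = ['@@ -95,7 +95,7 @@', '@@ -278,16 +278,63 @@']

# Net line change = sum over hunks of (new length - old length)
net = sum(new - old for old, new in map(hunk_counts, hunks))
print(net)  # → 47, matching the +54 −7 summary for app.py
```

Note that the hunk headers alone only give the net change; splitting it into +54 and −7 requires counting the `+` and `-` lines in the hunk bodies (1 removal and 1 addition in the first hunk, 6 removals and 53 additions in the second).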