amezasor committed on
Commit b848cbd
1 Parent(s): e0a466f

update after review

Files changed (1)
  1. README.md +33 -41
README.md CHANGED
@@ -2,9 +2,6 @@
  pipeline_tag: text-generation
  inference: false
  license: apache-2.0
- # datasets:
- # metrics:
- # - code_eval
  library_name: transformers
  tags:
  - language
@@ -205,39 +202,38 @@ model-index:
  ---

  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->

  # Granite-3.0-8B-Instruct

- ## Model Summary
- **Granite-3.0-8B-Instruct** is a lightweight and open-source 8B parameter model fine tuned from *Granite-3.0-8B-Base* on a combination of open-source and proprietary instruction data with a **permissively licensed**. This language model is designed to excel in instruction following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, funcion-calling, and more.
- <!-- The lightweight and open-source nature of this model makes it an excellent choice to serve as backbone of real-time applications such as chatbots and conversational agents. -->

  - **Developers:** IBM Research
- - **GitHub Repository:** [ibm-granite/granite-language-models](https://github.com/ibm-granite/granite-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- - **Paper:** [Granite Language Models](https://) <!-- TO DO: Update github repo link when it is ready -->
  - **Release Date**: October 21st, 2024
- - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

- ## Supported Languages
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)

- ## Usage
- ### Intended use
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

- ### Capabilities
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
- * Code related
- * Function-calling
  * Multilingual dialog use cases

- ### Generation
- This is a simple example of how to use **Granite-3.0-8B-Instruct** model.

  Install the following libraries:

@@ -274,13 +270,8 @@ output = tokenizer.batch_decode(output)
  print(output)
  ```

- <!-- TO DO: function-calling-example
- -->
-
- <!-- ['<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Almaden, San Jose, California<|end_of_text|>'] -->
-
- ## Model Architeture
- **Granite-3.0-8B-Instruct** is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embbeddings.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :-------- | :------| :------|
@@ -300,22 +291,23 @@ print(output)
  | # Active Parameters | 2.5B | **8.1B** | 400M | 800M |
  | # Training tokens | 12T | **12T** | 10T | 10T |

- <!-- TO DO: To be completed once the paper is ready, we may changed title to Supervised Finetuning -->
- ## Training Data
- Granite Language Instruct models are trained on a collection of publicly available datasets with non-restrictive license, as well as an IBM collection of synthetic datasets. We annotated and filtered these datasets to only include high-quality instances from each of them in our final mixture. This dataset selection is representative of the following domains:

- * English datasets: [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub), [OASST-OctoPack](https://huggingface.co/datasets/bigcode/oasst-octopack), [Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater), [SoftAge-Multiturn](https://huggingface.co/datasets/SoftAge-AI/multi-turn_dataset), [Glaive-RAG-v1](https://huggingface.co/datasets/glaiveai/RAG-v1), [EvolKit-20k](https://huggingface.co/datasets/arcee-ai/EvolKit-20k), [Magpie-Phi3-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-300K-Filtered).
- * Multilingual datasets: [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and IBM Synthetic datasets (e.g., Blue Multilingual, Daring Anteater Translated).
- * Code datasets: [Glaive Code Assistant V3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [SQL Create Context Instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), and [Self-OSS-Instruct-SC2](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Single and multi-turn IBM synthetic datasets, including a set of datasets generated via the evol-instruct method.
- * Math: [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [StackMathQA](https://huggingface.co/datasets/math-ai/StackMathQA), and [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- * Tools: [xlam-function-calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [Glaive Function Calling V2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [Hermes Function Calling V1](https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1), and IBM Synthetic API data.
- * Safety: [SimpleSafetyTests](https://huggingface.co/datasets/Bertievidgen/SimpleSafetyTests), [HarmBench Behaviors](https://github.com/centerforaisafety/HarmBench/blob/main/data/behavior_datasets/harmbench_behaviors_text_all.csv), [Strong Reject](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv), [AdvBench](https://huggingface.co/datasets/walledai/AdvBench), [MistralGuard](https://huggingface.co/datasets/natolambert/xstest-v2-copy), [Do-Not-Answer](https://huggingface.co/datasets/LibrAI/do-not-answer), and IBM Synthetic data for safety.

- <!-- CHECK: removed Vela, only talk about blue-vela-->
- ## Infrastructure
- We train the Granite Language models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
-
- <!-- TO DO: Check multilingual statement once the paper is ready -->
- ## Ethical Considerations and Limitations
- Granite instruct models are primarily finetuned using instruction-response pairs mostly in English, but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). As this model has been exposed to multilingual data, it can handle multilingual dialog use cases with a limited performance in non-English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to *[Granite-3.0-8B-Base](https://huggingface.co/ibm-granite/granite-3.0-8b-base)* model card.

  ---

  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->
+ <!-- ![image/png](granite-3_0-language-models_Group_1.png) -->

  # Granite-3.0-8B-Instruct

+ **Model Summary:**
+ Granite-3.0-8B-Instruct is an 8B parameter model finetuned from *Granite-3.0-8B-Base* using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

  - **Developers:** IBM Research
+ - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
+ - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf)
  - **Release Date**: October 21st, 2024
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

+ **Supported Languages:**
+ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages.

+ **Intended use:**
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

+ *Capabilities*
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
+ * Code related tasks
+ * Function-calling tasks
  * Multilingual dialog use cases

+ **Generation:**
+ This is a simple example of how to use the Granite-3.0-8B-Instruct model.

  Install the following libraries:

  print(output)
  ```
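
For reference, here is a minimal end-to-end sketch of the generation flow that ends in the `print(output)` call above. It assumes the standard Hugging Face `transformers` API and the `ibm-granite/granite-3.0-8b-instruct` checkpoint id; the user prompt is taken from the example quoted elsewhere in this card, and the exact snippet in the README may differ:

```python
# Sketch only: assumes `pip install torch transformers accelerate` has been run
# and that the checkpoint id below is correct for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-8b-instruct"  # assumed checkpoint id
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map=device,
)
model.eval()

# Build the structured chat prompt via the tokenizer's chat template.
chat = [
    {"role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location."},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output)
```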

+ **Model Architecture:**
+ Granite-3.0-8B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :-------- | :------| :------|
  | # Active Parameters | 2.5B | **8.1B** | 400M | 800M |
  | # Training tokens | 12T | **12T** | 10T | 10T |
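
To make the components named under Model Architecture concrete, the following is a small PyTorch sketch of an RMSNorm layer and a SwiGLU feed-forward block (grouped-query attention, RoPE, and the shared embeddings are omitted). It is an illustration only, not the model's actual implementation, and the dimensions are placeholder values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square norm: rescales by the RMS of the features, with no mean centering."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class SwiGLUMLP(nn.Module):
    """Feed-forward block with a SwiGLU gate: silu(W_gate x) * (W_up x), projected back down."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.up_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.down_proj = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Placeholder sizes for illustration only; they are not the 8B model's real dimensions.
x = torch.randn(1, 16, 4096)
block = nn.Sequential(RMSNorm(4096), SwiGLUMLP(4096, 12800))
print(block(x).shape)  # torch.Size([1, 16, 4096])
```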

+ **Training Data:**
+ Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. Please refer to the [Granite 3.0 Language Models technical report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf) for more details on the individual categories and datasets.

+ **Infrastructure:**
+ We train Granite 3.0 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

+ **Ethical Considerations and Limitations:**
+ Granite 3.0 Instruct Models are primarily finetuned using instruction-response pairs, mostly in English, but also on multilingual data covering the other eleven supported languages. Although this model can handle multilingual dialog use cases, its performance may not match that on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.
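
As an illustration of the few-shot suggestion above, the sketch below prepends two worked translation examples before the real request. It assumes the `model` and `tokenizer` objects from the earlier generation sketch, and the example sentences are invented for illustration:

```python
# Few-shot prompting sketch: show the model a couple of worked examples first.
# Reuses `model` and `tokenizer` from the generation sketch above (assumed setup).
few_shot_chat = [
    {"role": "user", "content": "Translate to German: The meeting starts at 9 am."},
    {"role": "assistant", "content": "Das Meeting beginnt um 9 Uhr."},
    {"role": "user", "content": "Translate to German: Please send the report by Friday."},
    {"role": "assistant", "content": "Bitte senden Sie den Bericht bis Freitag."},
    # The actual request follows the two worked examples above.
    {"role": "user", "content": "Translate to German: The invoice was paid last week."},
]
prompt = tokenizer.apply_chat_template(few_shot_chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```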
 
 
 
 
 
+ <!-- ## Citation
+ ```
+ @misc{granite-models,
+ author = {author 1, author2, ...},
+ title = {},
+ journal = {},
+ volume = {},
+ year = {2024},
+ url = {https://arxiv.org/abs/0000.00000},
+ }
+ ``` -->