amezasor committed on
Commit bbbf927
1 Parent(s): ab0c732

update after review

Files changed (1)
1. README.md +33 -34

README.md CHANGED
@@ -205,38 +205,38 @@ model-index:
  ---

  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->

  # Granite-3.0-3B-A800M-Instruct

- ## Model Summary
- **Granite-3.0-3B-A800M-Instruct** is a lightweight and open-source 3B parameter model fine tuned from *Granite-3.0-3B-A800M-Base-4K* on a combination of open-source and proprietary instruction data with a **permissively licensed**. This language model is designed to excel in instruction following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, funcion-calling, and more.

  - **Developers:** IBM Research
- - **GitHub Repository:** [ibm-granite/granite-language-models](https://github.com/ibm-granite/granite-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- - **Paper:** [Granite Language Models](https://) <!-- TO DO: Update github repo link when it is ready -->
  - **Release Date**: October 21st, 2024
- - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

- ## Supported Languages
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)

- ## Usage
- ### Intended use
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including bussiness applications.

- ### Capabilities
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
- * Code related
- * Function-calling
  * Multilingual dialog use cases

- ### Generation
- This is a simple example of how to use **Granite-3.0-3B-A800M-Instruct** model.

  Install the following libraries:

@@ -273,13 +273,8 @@ output = tokenizer.batch_decode(output)
  print(output)
  ```

- <!-- TO DO: function-calling-example
- -->
-
- <!-- ['<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Almaden, San Jose, California<|end_of_text|>'] -->
-
- ## Model Architeture
- **Granite-3.0-3B-A800M-Instruct** is based on a decoder-only sparse Mixture of Experts(MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :--------| :--------| :-------- |
@@ -299,19 +294,23 @@ print(output)
  | # Active Parameters | 2.5B | 8.1B | 400M | **800M** |
  | # Training tokens | 12T | 12T | 10T | **10T** |

- <!-- TO DO: To be completed once the paper is ready, we may changed title to Supervised Finetuning -->
- ## Training Data
- Granite Language Instruct models are trained on a collection of publicly available datasets with non-restrictive license, as well as an IBM collection of synthetic datasets. We annotated and filtered these datasets to only include high-quality instances from each of them in our final mixture. This dataset selection is representative of the following domains:

- * English datasets: [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub), [OASST-OctoPack](https://huggingface.co/datasets/bigcode/oasst-octopack), [Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater), [SoftAge-Multiturn](https://huggingface.co/datasets/SoftAge-AI/multi-turn_dataset), [Glaive-RAG-v1 ](https://huggingface.co/datasets/glaiveai/RAG-v1 ), [EvolKit-20k](https://huggingface.co/datasets/arcee-ai/EvolKit-20k ), [Magpie-Phi3-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-300K-Filtered).
- * Multilingual datasets: [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and IBM Synthetic datasets (e.g., Blue Multilingual, Daring Anteater Translated).
- * Code datasets: [Glaive Code Assistant V3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [SQL Create Context Instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), and [Self-OSS-Instruct-SC2](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Single and multi-turn IBM synthetic datasets, including a set of datasets generated via the evol-instruct method.
- * Math: [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [StackMathQA](https://huggingface.co/datasets/math-ai/StackMathQA ), and [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- * Tools: [xlam-function-calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [Glaive Function Calling V2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [Hermes Function Calling V1](https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1), and IBM Synthetic API data.
- * Safety: [SimpleSafetyTests](https://huggingface.co/datasets/Bertievidgen/SimpleSafetyTests), [HarmBench Behaviors](https://github.com/centerforaisafety/HarmBench/blob/main/data/behavior_datasets/harmbench_behaviors_text_all.csv), [Strong Reject](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv), [AdvBench](https://huggingface.co/datasets/walledai/AdvBench), [MistralGuard](https://huggingface.co/datasets/natolambert/xstest-v2-copy), [Do-Not-Answer](https://huggingface.co/datasets/LibrAI/do-not-answer), and IBM Synthetic data for safety.

- ## Infrastructure
- We train the Granite Language models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

- ## Ethical Considerations and Limitations
- Granite instruct models are primarily finetuned using instruction-response pairs mostly in English, but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). As this model has been exposed to multilingual data, it can handle multilingual dialog use cases with a limited performance in non-English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to *[Granite-3.0-3B-A800M-Base-4K](https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-base)* model card.
 
  ---

  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->
+ ![image/png](granite-3_0-language-models_Group_1.png)

  # Granite-3.0-3B-A800M-Instruct

+ **Model Summary:**
+ Granite-3.0-3B-A800M-Instruct is a 3B parameter model finetuned from *Granite-3.0-3B-A800M-Base-4K* using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

  - **Developers:** IBM Research
+ - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
+ - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf)
  - **Release Date**: October 21st, 2024
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

+ **Supported Languages:**
+ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages.

+ **Intended use:**
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

+ *Capabilities*
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
+ * Code related tasks
+ * Function-calling tasks
  * Multilingual dialog use cases

+ **Generation:**
+ This is a simple example of how to use the Granite-3.0-3B-A800M-Instruct model.

  Install the following libraries:

@@ -273,13 +273,8 @@ output = tokenizer.batch_decode(output)
  print(output)
  ```
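The code between `Install the following libraries:` and `print(output)` is elided by this diff (README lines 243-272 are unchanged and not shown). For orientation only, below is a minimal sketch of the flow that snippet follows, using the standard `transformers` chat-template API. The repository id, device handling, generation length, and prompt are assumptions (the prompt mirrors the example removed earlier in this diff); the README's own snippet remains authoritative.

```python
# Illustrative sketch only -- the README's own (elided) snippet is authoritative.
# The repo id, device choice, and prompt below are assumptions for this example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-3b-a800m-instruct"  # assumed Hugging Face repo id
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# Format a single-turn chat with the tokenizer's chat template.
chat = [{"role": "user", "content": "Please list one IBM Research laboratory located in the "
                                    "United States. You should only output its name and location."}]
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generate, then decode the full sequence, as in the README's `batch_decode` call.
output = model.generate(input_ids, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output)
```

With greedy defaults this should print something along the lines of the bracketed output shown in the comment removed earlier in this diff.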

+ **Model Architecture:**
+ Granite-3.0-3B-A800M-Instruct is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :--------| :--------| :-------- |
@@ -299,19 +294,23 @@ print(output)
  | # Active Parameters | 2.5B | 8.1B | 400M | **800M** |
  | # Training tokens | 12T | 12T | 10T | **10T** |
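To make the architecture terms above concrete, here is a rough, self-contained sketch of top-k expert routing with a Switch-style auxiliary load-balancing loss. This is not IBM's implementation (see the linked paper for that); the hidden size, expert count, and `top_k` are arbitrary assumptions, and "dropless" is reflected only in that no routed token is discarded for capacity.

```python
# Toy MoE layer: top-k routing + auxiliary load-balancing loss. Illustrative only,
# not the Granite 3.0 implementation; all sizes below are made-up assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.n_experts, self.top_k = n_experts, top_k
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)        # (tokens, experts)
        top_p, top_idx = probs.topk(self.top_k, dim=-1)  # each token keeps its top_k experts
        out = torch.zeros_like(x)
        for e in range(self.n_experts):
            tok, slot = (top_idx == e).nonzero(as_tuple=True)
            if tok.numel():  # "dropless": every routed token is processed, no capacity cut-off
                out[tok] += top_p[tok, slot].unsqueeze(-1) * self.experts[e](x[tok])
        # Switch-style auxiliary loss: encourages an even spread of tokens across experts.
        load = F.one_hot(top_idx, self.n_experts).float().mean(dim=(0, 1))  # realized assignment share
        importance = probs.mean(dim=0)                                      # average router probability
        aux_loss = self.n_experts * (load * importance).sum()
        return out, aux_loss

tokens = torch.randn(16, 64)
mixed, aux = ToyMoE()(tokens)  # add `aux` (scaled) to the training loss
```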

+ **Training Data:**
+ Overall, our SFT data largely comprises three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. Please refer to the [Granite 3.0 Language Models technical report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf) for more details on the individual categories and datasets.

+ **Infrastructure:**
+ We train Granite 3.0 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

+ **Ethical Considerations and Limitations:**
+ Granite 3.0 Instruct Models are primarily finetuned using instruction-response pairs, mostly in English, but also multilingual data covering eleven other languages. Although this model can handle multilingual dialog use cases, its performance on non-English tasks might not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.
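The limitations note above recommends few-shot prompting for non-English tasks. A hedged sketch of what that can look like with the chat template follows; the translation pairs are invented for illustration and the repository id is assumed, as in the generation sketch earlier.

```python
# Hypothetical few-shot prompt for a non-English task (German -> English translation).
# The example pairs are invented; the repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-3b-a800m-instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# A few worked examples before the real request can steer the model toward the task.
chat = [
    {"role": "user", "content": "Übersetze ins Englische: Guten Morgen."},
    {"role": "assistant", "content": "Good morning."},
    {"role": "user", "content": "Übersetze ins Englische: Wie spät ist es?"},
    {"role": "assistant", "content": "What time is it?"},
    {"role": "user", "content": "Übersetze ins Englische: Das Modell unterstützt zwölf Sprachen."},
]
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.batch_decode(model.generate(input_ids, max_new_tokens=50)))
```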

+ <!-- ## Citation
+ ```
+ @misc{granite-models,
+ author = {author 1, author2, ...},
+ title = {},
+ journal = {},
+ volume = {},
+ year = {2024},
+ url = {https://arxiv.org/abs/0000.00000},
+ }
+ ``` -->